  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
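For example, a query such as climat* + ("sea level"~2 | ocean) - model combines a wildcard, a phrase with a slop of 2, an OR group, and an exclusion.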
Found 291 result(s)
The GHO data repository is WHO's gateway to health-related statistics for its 194 Member States. It provides access to over 1000 indicators on priority health topics, including mortality and burden of disease, the Millennium Development Goals (child nutrition, child health, maternal and reproductive health, immunization, HIV/AIDS, tuberculosis, malaria, neglected diseases, water and sanitation), noncommunicable diseases and risk factors, epidemic-prone diseases, health systems, environmental health, violence and injuries, and equity, among others. In addition, the GHO provides online access to WHO's annual summary of health-related data for its Member States: the World Health Statistics.
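As an illustration of programmatic access to the indicators, the sketch below assumes the GHO OData API at the commonly documented base URL; the filter term is a placeholder and the endpoint should be verified against the current GHO documentation.

    # Minimal sketch (assumption: GHO OData API at this base URL).
    # Lists indicators whose name contains a placeholder search term.
    import requests

    resp = requests.get(
        "https://ghoapi.azureedge.net/api/Indicator",
        params={"$filter": "contains(IndicatorName, 'Malaria')"},  # placeholder term
        timeout=30,
    )
    resp.raise_for_status()
    for indicator in resp.json()["value"][:10]:
        print(indicator["IndicatorCode"], "-", indicator["IndicatorName"])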
The Met Office is the UK's National Weather Service. We have a long history of weather forecasting and have been working in the area of climate change for more than two decades. As a world leader in providing weather and climate services, we employ more than 1,800 people at 60 locations throughout the world. We are recognised as one of the world's most accurate forecasters, using more than 10 million weather observations a day, an advanced atmospheric model and a high-performance supercomputer to create 3,000 tailored forecasts and briefings a day. These are delivered to a huge range of customers, from the Government and businesses to the general public, the armed forces, and other organisations.
>>>!!!<<< caArray Retirement Announcement >>>!!!<<< The National Cancer Institute (NCI) Center for Biomedical Informatics and Information Technology (CBIIT) instance of the caArray database was retired on March 31st, 2015. All publicly accessible caArray data and annotations have been archived and remain available via FTP download https://wiki.nci.nih.gov/x/UYHeDQ and at GEO http://www.ncbi.nlm.nih.gov/geo/ . >>>!!!<<< While NCI will not be able to provide technical support for the caArray software after the retirement, the source code is available on GitHub https://github.com/NCIP/caarray , and we encourage continued community development. Molecular Analysis of Brain Neoplasia (Rembrandt fine-00037) gene expression data has been loaded into ArrayExpress: http://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-3073 >>>!!!<<< caArray is an open-source, web- and programmatically accessible microarray data management system that supports the annotation of microarray data using MAGE-TAB and web-based forms. Data and annotations may be kept private to the owner, shared with user-defined collaboration groups, or made public. The NCI instance of caArray hosts many cancer-related public datasets available for download.
The WOUDC processes, archives and publishes world ozone and UV data reported by over 400 stations operated by over 100 international agencies and universities. The World Ozone and Ultraviolet Radiation Data Centre (WOUDC) has two component parts: the World Ozone Data Centre (WODC) and the World Ultraviolet Radiation Data Centre (WUDC). These data are available online, with updates occurring every week; in addition to the online archive, data are published annually on CD-ROM, now DVD.
The project was set up to improve the infrastructure for text-based linguistic research and development by building a huge, automatically annotated German text corpus and the corresponding tools for corpus annotation and exploitation. DeReKo constitutes the largest linguistically motivated collection of contemporary German texts; it contains fictional, scientific and newspaper texts as well as several other text types, contains only licensed texts, is encoded with rich meta-textual information, is fully annotated morphosyntactically (three concurrent annotations), is continually expanded with a focus on size and stratification of data, may be analyzed free of charge via the query system COSMAS II, and serves as a 'primordial sample' from which users may draw specialized sub-samples (so-called 'virtual corpora') to represent the language domain they wish to investigate. !!! Access to data of Das Deutsche Referenzkorpus is also provided by: IDS Repository https://www.re3data.org/repository/r3d100010382 !!!
Weed Images is a project of the University of Georgia's Center for Invasive Species and Ecosystem Health and one of the four major parts of BugwoodImages. The focus is on damage caused by weeds. It provides an easily accessible archive of high-quality images for use in educational applications. In most cases, the images in this system were taken by, and loaned to us by, photographers other than ourselves. Most are in the realm of public-sector images. The photographs in this system are intended to be used.
The Astronomy data repository at Harvard is currently open to all scientific data from astronomical institutions worldwide. It incorporates the Astroinformatics of Galaxies and Quasars Dataverse. The Astronomy Dataverse is connected to the indexing services provided by the SAO/NASA Astrophysics Data System (ADS).
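As an illustration, the Dataverse software behind the repository exposes a Search API; the sketch below assumes the standard /api/search endpoint on dataverse.harvard.edu, and the query term and parameters are placeholders.

    # Minimal sketch (assumption: standard Dataverse Search API on this host).
    # Searches for datasets matching a placeholder query term.
    import requests

    resp = requests.get(
        "https://dataverse.harvard.edu/api/search",
        params={"q": "quasar", "type": "dataset", "per_page": 5},  # placeholder query
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["data"]["items"]:
        print(item.get("global_id"), "-", item.get("name"))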
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
EIDA, an initiative within ORFEUS, is a distributed data centre established to (a) securely archive seismic waveform data and related metadata, gathered by European research infrastructures, and (b) provide transparent access to the archives by the geosciences research communities. EIDA nodes are data centres which collect and archive data from seismic networks deploying broad-band sensors, short period sensors, accelerometers, infrasound sensors and other geophysical instruments. Networks contributing data to EIDA are listed in the ORFEUS EIDA networklist (http://www.orfeus-eu.org/data/eida/networks/). Data from the ORFEUS Data Center (ODC), hosted by KNMI, are available through EIDA. Technically, EIDA is based on an underlying architecture developed by GFZ to provide transparent access to all nodes' data. Data within the distributed archives are accessible via the ArcLink protocol (http://www.seiscomp3.org/wiki/doc/applications/arclink).
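The entry above describes access via the ArcLink protocol; as an illustration of programmatic retrieval from an EIDA node, the sketch below instead uses ObsPy's FDSN client, assuming the chosen node (GFZ in this example) also exposes standard FDSN web services. The network, station, channel and time window are placeholders.

    # Minimal sketch (assumption: the EIDA node exposes FDSN web services and
    # ObsPy is installed). Fetches 10 minutes of placeholder waveform data.
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("GFZ")  # one of the EIDA nodes known to ObsPy
    t0 = UTCDateTime("2023-01-01T00:00:00")  # placeholder time window
    stream = client.get_waveforms(network="GE", station="APE", location="*",
                                  channel="BHZ", starttime=t0, endtime=t0 + 600)
    print(stream)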
CLARIN.SI is the Slovenian node of the European CLARIN (Common Language Resources and Technology Infrastructure) Centers. The CLARIN.SI repository is hosted at the Jožef Stefan Institute and offers long-term preservation of deposited linguistic resources, along with their descriptive metadata. The integration of the repository with the CLARIN infrastructure gives the deposited resources wide exposure, so that they can be known, used and further developed beyond the lifetime of the projects in which they were produced. Among the resources currently available in the CLARIN.SI repository are the multilingual MULTEXT-East resources, the CC version of Slovenian reference corpus Gigafida, the morphological lexicon Sloleks, the IMP corpora and lexicons of historical Slovenian, as well as many other resources for a variety of languages. Furthermore, several REST-based web services are provided for different corpus-linguistic and NLP tasks.
!!! <<< this record is no longer maintained, please use https://www.re3data.org/repository/r3d100011876 or https://www.re3data.org/repository/r3d100011647 >>> !!! e!DAL stands for electronic Data Archive Library. It is a lightweight, open-source software framework for publishing and sharing research data. e!DAL was developed based on experience from decades of research data management and has grown into a general data archiving and publication infrastructure [https://doi.org/10.1186/1471-2105-15-214]. The first research data repository based on e!DAL is the "Plant Genomics and Phenomics Research Data Repository" [https://doi.org/10.1093/database/baw033].
The LINZ Data Service provides free online access to New Zealand's most up-to-date land and seabed data. The data can be searched, browsed and downloaded. The LINZ web services can also be integrated into other applications.
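As an illustration of integrating the LINZ web services into another application, the sketch below connects to an OGC WFS endpoint with OWSLib; the URL pattern and API-key placement are assumptions to be checked against the LINZ Data Service documentation, and YOUR_API_KEY is a placeholder.

    # Minimal sketch (assumption: LDS exposes an OGC WFS endpoint behind an
    # API key; URL pattern to be verified). Lists a few advertised layers.
    from owslib.wfs import WebFeatureService

    wfs = WebFeatureService(
        url="https://data.linz.govt.nz/services;key=YOUR_API_KEY/wfs",  # placeholder key
        version="2.0.0",
    )
    for layer_name in list(wfs.contents)[:5]:
        print(layer_name)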
The Climate Change Centre Austria - Data Centre provides the central national archive for climate data and information. The data made accessible includes observation and measurement data, scenario data, quantitative and qualitative data, as well as the measurement data and findings of research projects.
The CosmoSim database provides results from cosmological simulations performed within different projects: the MultiDark and Bolshoi projects, and the CLUES project. The CosmoSim webpage provides access to several cosmological simulations, with a separate database for each simulation. Simulations overview: https://www.cosmosim.org/cms/simulations/simulations-overview/ . CosmoSim is a contribution to the German Astrophysical Virtual Observatory.
The Deep Carbon Observatory (DCO) is a global community of multi-disciplinary scientists unlocking the inner secrets of Earth through investigations into life, energy, and the fundamentally unique chemistry of carbon. Deep Carbon Observatory Digital Object Registry (“DCO-VIVO”) is a centrally-managed digital object identification, object registration and metadata management service for the DCO. Digital object registration includes DCO-ID generation based on the global Handle System infrastructure and metadata collection using VIVO. Users will be able to deposit their data into the DCO Data Repository and have that data discoverable and accessible by others.
This is CSDB version 1, merged from the Bacterial (BCSDB) and Plant & Fungal (PFCSDB) Carbohydrate Structure Databases. The database aims to provide structural, bibliographic, taxonomic, NMR-spectroscopic and other information on glycan and glycoconjugate structures of prokaryotic, plant and fungal origin. The key points of this service are:
  • High coverage. The coverage for bacteria (up to 2016) and archaea (up to 2016) is above 80%. Similar coverage for plants and fungi is expected in the future. The database is close to complete up to 1998 for plants and up to 2006 for fungi.
  • Data quality. High data quality is achieved by manual curation using original publications, assisted by multiple automatic procedures for error control. Errors present in publications are reported and corrected when possible. Data from other databases are verified on import.
  • Detailed annotations. Structural data are supplied with extended bibliography, assigned NMR spectra, taxon identification including strains and serogroups, and other information if available in the original publication.
  • Services. CSDB serves as a platform for a number of computational services tuned for glycobiology, such as NMR simulation, automated structure elucidation, taxon clustering, 3D molecular modeling, statistical processing of data, etc.
  • Integration. CSDB is cross-linked to other glycoinformatics projects and to NCBI databases. The data are exportable in various formats, including the most widespread encoding schemes and records using the GlycoRDF ontology.
  • Free web access. Users can access the database for free via its web interface (see Help).
The main source of data is retrospective literature analysis. About 20% of data were imported from CCSD (Carbbank, University of Georgia, Athens; structures published before 1996) with subsequent manual curation and approval. The current coverage is displayed in red at the top of the left menu. The time lag between the publication of new data and their deposition into CSDB is ca. 1 year. In the scope of bacterial carbohydrates, CSDB covers nearly all structures of this origin published up to 2016. "Prokaryotic, plant and fungal" means that a glycan was found in organism(s) belonging to these taxonomic domains or was obtained by modification of those found in them. "Carbohydrate" means a structure composed of any residues linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds in which at least one residue is a sugar or its derivative.
The German National Library offers free access to its bibliographic data and several collections of digital objects. As the central access point for presenting, accessing and reusing digital resources, DNBLab allows users to access our data, object files and full texts. Access is available by download and through various interfaces.
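As an illustration of interface access, the sketch below runs an SRU query against the DNB's bibliographic data, assuming the commonly documented SRU endpoint; the query index and search term are placeholders and should be checked against the DNBLab documentation.

    # Minimal sketch (assumption: SRU endpoint and index as commonly documented).
    # Retrieves a few bibliographic records for a placeholder keyword.
    import requests

    resp = requests.get(
        "https://services.dnb.de/sru/dnb",
        params={
            "version": "1.1",
            "operation": "searchRetrieve",
            "query": "WOE=Klimawandel",  # placeholder keyword query
            "recordSchema": "oai_dc",
            "maximumRecords": "5",
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text[:500])  # raw SRU XML response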
The Royal Library of the Netherlands (Dutch: Koninklijke Bibliotheek, or KB) is the national library of the Netherlands. The KB collects everything that is published in and concerning the Netherlands, from medieval literature to today's publications. The e-Depot contains the Dutch National Library Collection of born-digital publications from, and about, the Netherlands, and international publications consisting of born-digital scholarly articles included in journals produced by publishers originally based in the Netherlands.
Constellation is a digital object identifier (DOI) based science network for supercomputing data. Constellation makes it possible for OLCF researchers to obtain DOIs for large data collections by tying them together with the associated resources and processes that went into the production of the data (e.g., jobs, collaborators, projects), using a scalable database. It also allows the conduct of science to be annotated with rich metadata, and enables the cataloging and publishing of the artifacts for open access, aiding scalable data discovery. OLCF users can use the DOI service to publish datasets even before the publication of the paper, and retain key data even after project expiration. From a center standpoint, DOIs enable the stewardship of data and better management of scratch and archival storage.
Ag Data Commons provides access to a wide variety of open data relevant to agricultural research. We are a centralized repository for data already on the web, as well as for new data being published for the first time. While compliance with the U.S. Federal public access and open data directives is important, we aim to surpass them. Our goal is to foster innovative data re-use, integration, and visualization to support bigger, better science and policy.
The Paris Astronomical Data Centre aims to provide VO access to its data collections, to participate in international standards development, and to implement VO-compliant simulation codes and data visualization and analysis software. The centre hosts high-level permanent activities for tools and data distribution in the form of reference services. These sustainable services are recognised at the national level as CNRS-labelled services. The various activities are organised as portals whose functions are to provide visibility and information on the projects and to encourage collaboration.
The ZFMK Biodiversity Data Center is aimed at hosting, archiving, publishing and distributing data from biodiversity research and zoological collections. The Biodiversity Data Center handles and curates data on:
  • the specimens of the institute's collections, including provenance, distribution, habitat, and taxonomic data;
  • observations, recordings and measurements from field research, monitoring and ecological inventories;
  • morphological measurements and descriptions of specimens;
  • genetic barcode libraries; and
  • genetic and molecular research data associated with specimens or environmental samples.
For this purpose, suitable software and hardware systems are operated and the required infrastructure is further developed. Core components of the software architecture are the DiversityWorkbench suite for managing all collection-related information, the digital asset management system easyDB for multimedia assets, and the description database Morph·D·Base for morphological data sets and character matrices.
This website is the public interface to the "Canadian Database of Geochemical Surveys". The database has two long-term goals. Firstly, it aims to catalogue all of the regional geochemical surveys that have been carried out across Canada, beginning in the 1950s. Secondly, it aims to make the raw data from those surveys available in a standardised format. Over 1,500 surveys have been catalogued. Approximately 500 are considered to be of long-term strategic value for mineral exploration and environmental baseline studies. Work is progressing on standardising the data for these 500 surveys. To date over 250 datasets have been converted.