Found 67 result(s)
The World Bank recognizes that transparency and accountability are essential to the development process and central to achieving the Bank’s mission to alleviate poverty. The Bank’s commitment to openness is also driven by a desire to foster public ownership, partnership and participation in development from a wide range of stakeholders. As a knowledge institution, the World Bank’s first step is to share its knowledge freely and openly.
The China Earthquake Data Center provides seismic, geomagnetic, geoelectric, terrain, and underground fluid change data. Access is restricted to the Seismological Bureau.
WorldClim is a set of global climate layers (climate grids) with a spatial resolution of about 1 square kilometer. The data can be used for mapping and spatial modeling in a GIS or with other computer programs.
Exposome-Explorer is the first database dedicated to biomarkers of exposure to environmental risk factors for diseases. It contains detailed information on the nature of biomarkers, populations and subjects where measured, samples analyzed, methods used for biomarker analyses, concentrations in biospecimens, correlations with external exposure measurements, and biological reproducibility over time.
>>>!!!<<< 2019-01: The Global Land Cover Facility has gone offline; its data are no longer accessible. >>>!!!<<< The Global Land Cover Facility (GLCF) provides earth science data and products to help everyone better understand global environmental systems. In particular, the GLCF develops and distributes remotely sensed satellite data and products that explain land cover from the local to global scales.
This interactive database provides complete access to statistics on seasonal cotton supply and use for each country and each region in the world, from 1920/21 to date. This project is part of ICAC’s efforts to improve the transparency of world cotton statistics.
We present the MUSE-Wide survey, a blind, 3D spectroscopic survey in the CANDELS/GOODS-S and CANDELS/COSMOS regions. Each MUSE-Wide pointing has a depth of 1 hour and hence targets more extreme and more luminous objects over 10 times the area of the MUSE-Deep fields (Bacon et al. 2017). The legacy value of MUSE-Wide lies in providing "spectroscopy of everything" without photometric pre-selection. We describe the data reduction, post-processing and PSF characterization of the first 44 CANDELS/GOODS-S MUSE-Wide pointings released with this publication. Using a 3D matched filtering approach we detected 1,602 emission line sources, including 479 Lyman-α (Lya) emitting galaxies with redshifts 2.9≲z≲6.3. We cross-match the emission line sources to existing photometric catalogs, finding almost complete agreement in redshifts and stellar masses for our low redshift (z < 1.5) emitters. At high redshift, we only find ~55% matches to photometric catalogs. We encounter a higher outlier rate and a systematic offset of Δz≃0.2 when comparing our MUSE redshifts with photometric redshifts. Cross-matching the emission line sources with X-ray catalogs from the Chandra Deep Field South, we find 127 matches, including 10 objects with no prior spectroscopic identification. Stacking X-ray images centered on our Lya emitters yielded no signal; the Lya population is not dominated by even low luminosity AGN. A total of 9,205 photometrically selected objects from the CANDELS survey lie in the MUSE-Wide footprint, for which we provide optimally extracted 1D spectra. We are able to determine the spectroscopic redshift of 98% of 772 photometrically selected galaxies brighter than 24th F775W magnitude. All the data in the first data release (datacubes, catalogs, extracted spectra, maps) are available at the website.
coastDat is a model-based databank developed mainly for the assessment of long-term changes in data-sparse regions. A sequence of numerical models is employed to reconstruct all aspects of marine climate (such as storms, waves, and surges) over many decades, relying only on large-scale information such as large-scale atmospheric conditions or bathymetry.
The ISSAID website gathers resources related to the systemic autoinflammatory diseases in order to facilitate contacts between interested physicians and researchers. The website provides support to share and rapidly disseminate information, thoughts, feelings and experiences to improve the quality of life of patients and families affected by systemic autoinflammatory diseases, and promote advances in the search for causes and cures.
The ENCODE Encyclopedia organizes the most salient analysis products into annotations, and provides tools to search and visualize them. The Encyclopedia has two levels of annotations: integrative-level annotations integrate multiple types of experimental data and ground-level annotations; ground-level annotations are derived directly from the experimental data, typically produced by uniform processing pipelines.
The ILO Department of Statistics is the United Nations' focal point for labour statistics. It develops international standards for better measurement of labour issues and enhanced international comparability; provides relevant, timely and comparable labour statistics; and helps Member States develop and improve their labour statistics.
>>>!!!<<< 2020-10-06: The data are being migrated to another system; the repository is therefore no longer available. This record is outdated. >>>!!!<<< Due to changes at the individual IGS analysis centers over the years, the resulting time series of global geodetic parameters are inhomogeneous and inconsistent. A geophysical interpretation of these long series and the realization of a high-accuracy global reference frame are therefore difficult and questionable. The GPS reprocessing project GPS-PDR (Potsdam Dresden Reprocessing), initiated by TU München and TU Dresden and continued by GFZ Potsdam and TU Dresden, provides selected products of a homogeneously reprocessed global GPS network, such as GPS satellite orbits and Earth rotation parameters.
The Brain Transcriptome Database (BrainTx) project aims to create an integrated platform to visualize and analyze our original transcriptome data and publicly accessible transcriptome data related to the genetics that underlie the development, function, and dysfunction stages and states of the brain.
The Portal aims to serve as a unique access point to timely, comprehensive migration statistics and reliable information about migration data globally. The site is designed to help policy makers, national statistics officers, journalists and the general public interested in the field of migration to navigate the increasingly complex landscape of international migration data, currently scattered across different organisations and agencies. Especially in critical times, such as those faced today, it is essential to ensure that responses to migration are based on sound facts and accurate analysis. By making the evidence about migration issues accessible and easy to understand, the Portal aims to contribute to a more informed public debate. The Portal was launched in December 2017 and is managed and developed by IOM’s Global Migration Data Analysis Centre (GMDAC), with the guidance of its Advisory Board, and was supported in its conception by the Economist Intelligence Unit (EIU). The Portal is supported financially by the Governments of Germany, the United States of America and the UK Department for International Development (DFID).
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
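Such queries are posed in SPARQL. The sketch below shows one way to build a query URL for DBpedia's public SPARQL endpoint; the endpoint URL, the `dbo:`/`dbr:` prefixes, and the example query are assumptions based on DBpedia's public service, not details stated in this entry.

```python
from urllib.parse import urlencode

# Public SPARQL endpoint (assumed; commonly https://dbpedia.org/sparql).
ENDPOINT = "https://dbpedia.org/sparql"

def build_query_url(sparql: str, fmt: str = "application/sparql-results+json") -> str:
    """Build a GET URL that submits a SPARQL query to the endpoint."""
    return ENDPOINT + "?" + urlencode({"query": sparql, "format": fmt})

# Hypothetical example: German cities with more than a million inhabitants.
# dbo: and dbr: are default prefixes at the DBpedia endpoint (assumed).
query = """
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:country dbr:Germany ;
        dbo:populationTotal ?population .
  FILTER (?population > 1000000)
}
"""

url = build_query_url(query)
print(url[:60])
```

Fetching that URL (e.g. with `urllib.request.urlopen`) would return the result set as JSON; the query itself is evaluated server-side against the extracted Wikipedia data.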
SCISAT, also known as the Atmospheric Chemistry Experiment (ACE), is a Canadian Space Agency small satellite mission for remote sensing of the Earth's atmosphere using solar occultation. The satellite was launched on 12 August 2003 and continues to function perfectly. The primary mission goal is to improve our understanding of the chemical and dynamical processes that control the distribution of ozone in the stratosphere and upper troposphere, particularly in the Arctic. The high precision and accuracy of solar occultation makes SCISAT useful for monitoring changes in atmospheric composition and the validation of other satellite instruments. The satellite carries two instruments. A high resolution (0.02 cm⁻¹) infrared Fourier transform spectrometer (FTS) operating from 2 to 13 microns (750-4400 cm⁻¹) is measuring the vertical distribution of trace gases, particles and temperature. This provides vertical profiles of atmospheric constituents including essentially all of the major species associated with ozone chemistry. Aerosols and clouds are monitored using the extinction of solar radiation at 1.02 and 0.525 microns as measured by two filtered imagers. The vertical resolution of the FTS is about 3-4 km from the cloud tops up to about 150 km. Peter Bernath of the University of Waterloo is the principal investigator. A dual optical spectrograph called MAESTRO (Measurement of Aerosol Extinction in the Stratosphere and Troposphere Retrieved by Occultation) covers the 400-1030 nm spectral region and measures primarily ozone, nitrogen dioxide and aerosol/cloud extinction. It has a vertical resolution of about 1-2 km. Tom McElroy of Environment and Climate Change Canada is the principal investigator. ACE data are freely available from the University of Waterloo website. SCISAT was designated an ESA Third Party Mission in 2005. ACE data are freely available through an ESA portal.
The Sloan Digital Sky Survey (SDSS) is one of the most ambitious and influential surveys in the history of astronomy. Over eight years of operations (SDSS-I, 2000-2005; SDSS-II, 2005-2008; SDSS-III, 2008-2014; SDSS-IV, 2013-ongoing), it obtained deep, multi-color images covering more than a quarter of the sky and created 3-dimensional maps containing more than 930,000 galaxies and more than 120,000 quasars. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), Max-Planck-Institut für Astronomie (MPIA Heidelberg), National Astronomical Observatory of China, New Mexico State University, New York University, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Portsmouth, University of Utah, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
mentha archives evidence collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. The aggregated data form an interactome which includes many organisms. mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles. However, no database alone has sufficient literature coverage to offer a complete resource to investigate "the interactome". mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns each interaction a reliability score that takes into account all the supporting evidence. mentha offers eight interactomes (Homo sapiens, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Escherichia coli K12, Mus musculus, Rattus norvegicus, Saccharomyces cerevisiae) plus a global network that comprises every organism, including those not mentioned. The website and the graphical application are designed to make the data stored in mentha accessible and analysable to all users. Source databases are: MINT, IntAct, DIP, MatrixDB and BioGRID.
RADAM portal is an interface to the network of RADAM (RADiation DAMage) Databases collecting data on interactions of ions, electrons, positrons and photons with biomolecular systems, on radiobiological effects and relevant phenomena occurring at different time, spatial and energy scales in irradiated targets during and after the irradiation. This networking system has been created by the Consortium of COST Action MP1002 (Nano-IBCT: Nano-scale insights into Ion Beam Cancer Therapy) during 2011-2014 using the Virtual Atomic and Molecular Data Center (VAMDC) standards.
The Polinsky Language Sciences Lab at Harvard University is a linguistics lab that examines questions of language structure and its effect on the ways in which people use and process language in real time. We engage in linguistic and interdisciplinary research projects ourselves; offer linguistic research capabilities for undergraduate and graduate students, faculty, and visitors; and build relationships with the linguistic communities in which we do our research. We are interested in a broad range of issues pertaining to syntax, interfaces, and cross-linguistic variation. We place a particular emphasis on novel experimental evidence that facilitates the construction of linguistic theory. We have a strong cross-linguistic focus, drawing upon English, Russian, Chinese, Korean, Mayan languages, Basque, Austronesian languages, languages of the Caucasus, and others. We believe that challenging existing theories with data from as broad a range of languages as possible is a crucial component of the successful development of linguistic theory. We investigate both fluent speakers and heritage speakers—those who grew up hearing or speaking a particular language but who are now more fluent in a different, societally dominant language. Heritage languages, a novel field of linguistic inquiry, are important because they provide new insights into processes of linguistic development and attrition in general, thus increasing our understanding of the human capacity to maintain and acquire language. Understanding language use and processing in real time and how children acquire language helps us improve language study and pedagogy, which in turn improves communication across the globe. Although our lab does not specialize in language acquisition, we have conducted some studies of acquisition of lesser-studied languages and heritage languages, with the purpose of comparing heritage speakers to adults.
The National Archives and Records Administration (NARA) is the nation's record keeper. Of all documents and materials created in the course of business conducted by the United States Federal government, only 1%-3% are so important for legal or historical reasons that they are kept by us forever. Those valuable records are preserved and are available to you, whether you want to see if they contain clues about your family's history, need to prove a veteran's military service, or are researching an historical topic that interests you.

combines crowd sourcing and authoritative sources to enrich and provide data for protected areas around the world. Data are provided in partnership with the World Database on Protected Areas (WDPA). The data include the location, designation type, status year, and size of the protected areas, as well as species information.
The Square Kilometre Array (SKA) is a radio telescope with around one million square metres of collecting area, designed to study the Universe with unprecedented speed and sensitivity. The SKA is not a single telescope, but a collection of various types of antennas, called an array, to be spread over long distances. The SKA will be used to answer fundamental questions of science and about the laws of nature, such as: how did the Universe, and the stars and galaxies contained in it, form and evolve? Was Einstein’s theory of relativity correct? What is the nature of ‘dark matter’ and ‘dark energy’? What is the origin of cosmic magnetism? Is there life somewhere else in the Universe?