  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) implies priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
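The operators above can be sketched as small string-building helpers. This is a minimal illustration only: the helper names and the example terms are invented for demonstration, and the exact whitespace rules of the live search syntax may differ.

```python
# Hypothetical helpers that compose query strings using the operators
# listed above. Names and example terms are illustrative, not part of
# the registry's actual API.

def combine_and(*terms):
    """Join terms with '+', the AND operator (the default)."""
    return " + ".join(terms)

def combine_or(*terms):
    """Join terms with '|', the OR operator."""
    return " | ".join(terms)

def exclude(term):
    """Prefix a term with '-', the NOT operator."""
    return "-" + term

def phrase(words, slop=None):
    """Quote a phrase; an optional ~N slop lets words sit N positions apart."""
    q = '"%s"' % words
    return q + "~%d" % slop if slop is not None else q

def fuzzy(word, distance):
    """~N after a word matches terms within N character edits."""
    return "%s~%d" % (word, distance)

# A wildcard term, a fuzzy term, and an excluded phrase, ANDed together:
query = combine_and("oceanograph*", fuzzy("seismic", 1), exclude(phrase("model output")))
print(query)  # oceanograph* + seismic~1 + -"model output"
```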
Found 33 result(s)
In February 1986 the NIST measurements were communicated to appropriate astronomers for use in ground-based testing and calibration programs for the GHRS, and in 1990 the NIST group published the new wavelengths for about 3000 lines in the Supplement Series of the Astrophysical Journal. The full report on the NIST measurements in the form of a complete and detailed atlas of the platinum/neon spectrum presented in this special issue of the Journal of Research of NIST will be highly useful to a wide range of scientists.
The European Monitoring and Evaluation Programme (EMEP) is a scientifically based and policy driven programme under the Convention on Long-range Transboundary Air Pollution (CLRTAP) for international co-operation to solve transboundary air pollution problems.
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
SeaBASS is the publicly shared archive of in situ oceanographic and atmospheric data maintained by the NASA Ocean Biology Processing Group (OBPG). High-quality in situ measurements are a prerequisite for satellite data product validation, algorithm development, and many climate-related inquiries, so the OBPG maintains a local repository of in situ oceanographic and atmospheric data to support its regular scientific analyses. The SeaWiFS Project originally developed this system, SeaBASS, to catalog radiometric and phytoplankton pigment data used in its calibration and validation activities. To facilitate the assembly of a global data set, SeaBASS was expanded with oceanographic and atmospheric data collected by participants in the SIMBIOS Program, under NASA Research Announcements NRA-96 and NRA-99, which has aided considerably in minimizing spatial bias and maximizing data acquisition rates. Archived data include measurements of apparent and inherent optical properties, phytoplankton pigment concentrations, and other related oceanographic and atmospheric data, such as water temperature, salinity, stimulated fluorescence, and aerosol optical thickness. Data are collected using a number of different instrument packages, such as profilers, buoys, and hand-held instruments, from a variety of manufacturers, on platforms including ships and moorings.
HyperLeda is an information system for astronomy: it consists of a database and tools to process those data according to the user's requirements. The scientific goal that motivates the development of HyperLeda is the study of the physics and evolution of galaxies. LEDA was created more than 20 years ago, in 1983, and became HyperLeda after merging with Hypercat in 2000.
NOAA's National Centers for Environmental Information (NCEI) are responsible for hosting and providing public access to one of the most significant archives of environmental data on Earth, with over 20 petabytes of comprehensive atmospheric, coastal, oceanic, and geophysical data. NCEI headquarters are located in Asheville, North Carolina. Most employees work at the four main locations, though NCEI also has employees strategically located throughout the United States. The main locations are the Cooperative Institute for Climate and Satellites–North Carolina (CICS-NC) in Asheville, North Carolina; the Cooperative Institute for Research in Environmental Sciences (CIRES) in Boulder, Colorado; the Cooperative Institute for Climate and Satellites–Maryland (CICS-MD) in Silver Spring, Maryland; and Stennis Space Center, Mississippi.
The Brown Digital Repository (BDR) is a place to gather, index, store, preserve, and make available digital assets produced via the scholarly, instructional, research, and administrative activities at Brown.
GLOBE (Global Collaboration Engine) is an online collaborative environment that enables land change researchers to share, compare and integrate local and regional studies with global data to assess the global relevance of their work.
Socrata’s cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
CODEX is a database of NGS mouse and human experiments. Although the main focus of CODEX is haematopoietic and embryonic systems, the database includes a large variety of cell types. In addition to the publicly available data, CODEX also includes a private site hosting unpublished data. CODEX provides access to processed and curated NGS experiments. To use CODEX: (i) select a specialized repository (HAEMCODE or ESCODE) or choose the whole compendium (CODEX), then (ii) filter by organism, and (iii) choose how to explore the database.
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts – which were formerly sent based only on event magnitude and location, or population exposure to shaking – now will also be generated based on the estimated range of fatalities and economic losses. PAGER uses these earthquake parameters to calculate estimates of ground shaking by using the methodology and software developed for ShakeMaps. ShakeMap sites provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. These maps are used by federal, state, and local organizations, both public and private, for post-earthquake response and recovery, public and scientific information, as well as for preparedness exercises and disaster planning.
!! OFFLINE !! A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website over the past five years, and the resource has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints.
The Project officially started with a meeting in October 2002 and was expected to take about three years.
The repository is no longer available. <<<!!!<<< TOXNET's ITER has been migrated to the NCBI Bookshelf. >>>!!!>>>
The UTM Data Service is responsible for managing spatial data acquired during oceanographic cruises. Its purpose is, on the one hand, to disseminate which data exist and where, how, and when they were acquired and, on the other hand, to provide access to as much of the interoperable data as possible so that they can be used and reused. For this purpose, the UTM maintains a Spatial Data Infrastructure at the national level that consists, on the one hand, of an Oceanographic Cruises Catalog covering more than 500 cruises carried out since 1991, with links to documentation associated with each cruise, navigation maps, and datasets, and, on the other hand, of a Geoportal that collects information from the datasets and allows users to create maps from different data layers. At the international level, the UTM is a National Oceanographic Data Center (NODC) of the distributed European marine data infrastructure SeaDataNet, to which the UTM provides metadata published in the Cruise Summary Report catalog and in the Common Data Index data catalog, as well as public data to be shared.
The Network for the Detection of Atmospheric Composition Change (NDACC), a major contributor to the worldwide atmospheric research effort, consists of a set of globally distributed research stations providing consistent, standardized, long-term measurements of atmospheric trace gases, particles, spectral UV radiation reaching the Earth's surface, and physical parameters, organized around a set of measurement priorities.
The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership based at the NASA Goddard Space Flight Center in Greenbelt, Maryland and a component of the National Space Weather Program. The CCMC provides, to the international research community, access to modern space science simulations. In addition, the CCMC supports the transition to space weather operations of modern space research models.
The Coriolis Data Centre handles operational oceanography measurements made in situ, complementing the measurement of the ocean surface made using instruments aboard satellites. This work is realised through the establishment of permanent networks with data collected by ships or autonomous systems that are either fixed or drifting. This data can be used to construct a snapshot of water mass structure and current intensity.
PeanutBase is a peanut community resource providing genetic, genomic, gene function, and germplasm data to support peanut breeding and molecular research. This includes molecular markers, genetic maps, QTL data, genome assemblies, germplasm records, and traits. Data is curated from literature and submitted directly by researchers. Funding for PeanutBase is provided by the Peanut Foundation with in-kind contributions from the USDA-ARS.
The CPTAC Data Portal is the centralized repository for the dissemination of proteomic data collected by the Proteome Characterization Centers (PCCs) for the CPTAC program. The portal also hosts analyses of the mass spectrometry data (mapping of spectra to peptide sequences and protein identification) from the PCCs and from a CPTAC-sponsored common data analysis pipeline (CDAP).