
Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) set the priority of operations
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
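A few illustrative queries combining the operators above (the search terms are hypothetical, not taken from the registry):

```text
climat*                      wildcard: matches climate, climatology, ...
"marine data"                exact phrase
marine + fisheries           both terms must match (AND)
marine | fisheries           either term matches (OR)
census -india                excludes results containing "india"
(ocean | marine) + data      parentheses set priority
turbulense~1                 fuzzy match within edit distance 1
"data archive"~2             phrase match allowing a slop of 2
```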
Found 15 result(s)
The Comprehensive Epidemiologic Data Resource (CEDR) is the Department of Energy's (DOE) electronic database comprising health studies of DOE contract workers and environmental studies of areas surrounding DOE facilities. DOE recognizes the benefits of data sharing and supports the public's right to know about worker and community health risks. CEDR provides independent researchers and the public with access to de-identified data collected since the Department's early production years. Current CEDR holdings include more than 80 studies of over 1 million workers at 31 DOE sites. Access to these data is at no cost to the user. Most of CEDR's holdings are derived from epidemiologic studies of DOE workers at many large nuclear weapons plants, such as Hanford, Los Alamos, the Oak Ridge reservation, Savannah River Site, and Rocky Flats. These studies primarily use death certificate information to identify excess deaths and patterns of disease among workers to determine what factors contribute to the risk of developing cancer and other illnesses. In addition, many of these studies have radiation exposure measurements on individual workers. CEDR is supported by the Oak Ridge Institute for Science and Education (ORISE) in Oak Ridge, Tennessee. Now a mature system in routine operational use, CEDR's modern internet-based systems respond to thousands of requests to its web server daily. With about 1,500 Internet sites pointing to CEDR's web site, CEDR is a national user facility, with a large audience for data that are not available elsewhere.
WFCC Global Catalogue of Microorganisms (GCM) is expected to be a robust, reliable and user-friendly system to help culture collections to manage, disseminate and share the information related to their holdings. It also provides a uniform interface for the scientific and industrial communities to access the comprehensive microbial resource information.
The Norwegian Marine Data Centre (NMD) at the Institute of Marine Research was established as a national data centre dedicated to the professional processing and long-term storage of marine environmental and fisheries data and the production of data products. The Institute of Marine Research continuously collects large amounts of data from all Norwegian seas. Data are collected using vessels, observation buoys, manual measurements, and gliders, amongst others. NMD maintains the largest collection of marine environmental and fisheries data in Norway.
VAMDC aims to be an interoperable e-infrastructure that provides the international research community with access to a broad range of atomic and molecular (A&M) data compiled within a set of A&M databases accessible through the provision of this portal and of user software. Furthermore VAMDC aims to provide A&M data providers and compilers with a large dissemination platform for their work. VAMDC infrastructure was established to provide a service to a wide international research community and has been developed in conjunction with consultations and advice from the A&M user community.
The Arctic Data archive System (ADS) collects observation data and modeling products obtained by various Japanese research projects and gives researchers access to the results. By centrally managing a wide variety of Arctic observation data, we promote the use of data across multiple disciplines. Researchers use these integrated databases to clarify the mechanisms of environmental change in the atmosphere, ocean, land surface, and cryosphere. ADS is also expected to provide opportunities for collaboration between modelers and field scientists.
The Neuroscience Information Framework is a dynamic inventory of Web-based neuroscience resources: data, materials, and tools accessible via any computer connected to the Internet. An initiative of the NIH Blueprint for Neuroscience Research, NIF advances neuroscience research by enabling discovery and access to public research data and tools worldwide through an open source, networked environment.
The CLARIN Centre at the University of Copenhagen, Denmark, hosts and manages a data repository (CLARIN-DK-UCPH Repository), which is part of a research infrastructure for humanities and social sciences financed by the University of Copenhagen, and a part of the national infrastructure collaboration DIGHUMLAB in Denmark. The CLARIN-DK-UCPH Repository provides easy and sustainable access for scholars in the humanities and social sciences to digital language data (in written, spoken, video or multimodal form) and provides advanced tools for discovering, exploring, exploiting, annotating, and analyzing data. CLARIN-DK also shares knowledge on Danish language technology and resources and is the Danish node in the European Research Infrastructure Consortium, CLARIN ERIC.
The Indian Census is the largest single source of a variety of statistical information on different characteristics of the people of India. With a history of more than 130 years, this reliable, time-tested exercise has been bringing out a veritable wealth of statistics every 10 years, beginning from 1872 when the first census was conducted in India non-synchronously in different parts. To scholars and researchers in demography, economics, anthropology, sociology, statistics and many other disciplines, the Indian Census has been a fascinating source of data. The rich diversity of the people of India is truly brought out by the decennial census, which has become one of the tools to understand and study India. The responsibility of conducting the decennial Census rests with the Office of the Registrar General and Census Commissioner, India under the Ministry of Home Affairs, Government of India. It may be of historical interest that though the population census of India is a major administrative function, the Census Organisation was set up on an ad-hoc basis for each Census till the 1951 Census. The Census Act was enacted in 1948 to provide for the scheme of conducting the population census with duties and responsibilities of census officers. The Government of India decided in May 1949 to initiate steps for developing systematic collection of statistics on the size of population, its growth, etc., and established an organisation in the Ministry of Home Affairs under the Registrar General and ex-Officio Census Commissioner, India. This organisation was made responsible for generating data on population statistics, including Vital Statistics and the Census. Later, this office was also entrusted with the responsibility of implementation of the Registration of Births and Deaths Act, 1969 in the country.
NeuroMorpho.Org is a centrally curated inventory of digitally reconstructed neurons associated with peer-reviewed publications. It contains contributions from over 80 laboratories worldwide and is continuously updated as new morphological reconstructions are collected, published, and shared. To date, NeuroMorpho.Org is the largest collection of publicly accessible 3D neuronal reconstructions and associated metadata, which can be used for detailed single-cell simulations.
This website is a portal that enables access to multi-Terabyte turbulence databases. The data reside on several nodes and disks on our database cluster computer and are stored in small 3D subcubes. Positions are indexed using a Z-curve for efficient access.
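The Z-curve (Morton order) mentioned above maps a subcube's 3D coordinates to a single integer by interleaving the bits of each coordinate, so subcubes that are close in space tend to get nearby index values. A minimal sketch in Python, for illustration only (not the database's actual implementation):

```python
def morton3d(x: int, y: int, z: int, bits: int = 21) -> int:
    """Interleave the bits of (x, y, z) into one Z-curve (Morton) index.

    Bit i of x, y, and z lands at index bit positions
    3*i, 3*i + 1, and 3*i + 2 respectively.
    """
    index = 0
    for i in range(bits):
        index |= ((x >> i) & 1) << (3 * i)
        index |= ((y >> i) & 1) << (3 * i + 1)
        index |= ((z >> i) & 1) << (3 * i + 2)
    return index

# Neighbouring subcubes map to nearby indices:
# morton3d(0, 0, 0) == 0, morton3d(1, 0, 0) == 1, morton3d(1, 1, 1) == 7
```

Storing subcubes sorted by such an index keeps spatially adjacent data close together on disk, which is the usual motivation for Z-curve layouts.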
LONI’s Image and Data Archive (IDA) is a secure data archiving system. The IDA uses a robust infrastructure to provide researchers with a flexible and simple interface for de-identifying, searching, retrieving, converting, and disseminating their biomedical data. With thousands of investigators across the globe and more than 21 million data downloads to date, the IDA guarantees reliability with a fault-tolerant network comprising multiple switches, routers, and Internet connections to prevent system failure.
CottonGen is a new cotton community genomics, genetics and breeding database being developed to enable basic, translational and applied research in cotton. It is being built using the open-source Tripal database infrastructure. CottonGen consolidates and expands the data from CottonDB and the Cotton Marker Database, providing enhanced tools for easy querying, visualizing and downloading research data.
DIAMM (the Digital Image Archive of Medieval Music) is a leading resource for the study of medieval manuscripts. We present images and metadata for thousands of manuscripts on this website. We also provide a home for scholarly resources and editions, undertake digital restoration of damaged manuscripts and documents, publish high-quality facsimiles, and offer our expertise as consultants.
DLESE is the Digital Library for Earth System Education, a geoscience community resource that supports teaching and learning about the Earth system. It is funded by the National Science Foundation and is being built by a community of educators, students, and scientists to support Earth system education at all levels and in both formal and informal settings. Resources in DLESE include lesson plans, scientific data, visualizations, interactive computer models, and virtual field trips - in short, any web-accessible teaching or learning material. Many of these resources are organized in collections, or groups of related resources that reflect a coherent, focused theme. In many ways, digital collections are analogous to collections in traditional bricks-and-mortar libraries.
The Antimicrobial Peptide Database (APD) was originally created by a graduate student, Zhe Wang, as his master's thesis in the laboratory of Dr. Guangshun Wang. The project was initiated in 2002 and the first version of the database was opened to the public in August 2003. It contained 525 peptide entries, which could be searched in multiple ways, including APD ID, peptide name, amino acid sequence, original location, PDB ID, structure, methods for structural determination, peptide length, charge, hydrophobic content, and antibacterial, antifungal, antiviral, anticancer, and hemolytic activity. Some results of this bioinformatics tool were reported in the 2004 database paper. The peptide data stored in the APD were gleaned manually from the literature (PubMed, PDB, Google, and Swiss-Prot) over more than a decade.