Found 12 result(s)
The National Cancer Data Base (NCDB), a joint program of the Commission on Cancer (CoC) of the American College of Surgeons (ACoS) and the American Cancer Society (ACS), is a nationwide oncology outcomes database for more than 1,500 Commission-accredited cancer programs in the United States and Puerto Rico. Some 70 percent of all newly diagnosed cases of cancer in the United States are captured at the institutional level and reported to the NCDB. The NCDB, begun in 1989, now contains approximately 29 million records from hospital cancer registries across the United States. Data on all types of cancer are tracked and analyzed. These data are used to explore trends in cancer care, to create regional and state benchmarks for participating hospitals, and to serve as the basis for quality improvement.
OAGR captures and catalogues ancient human genome and microbiome data, including raw sequence and processed data, along with metadata about their provenance and production. Included datasets are generated from ancient samples studied at the Australian Centre for Ancient DNA, University of Adelaide, in collaboration with other research groups. Datasets and collections in OAGR are open data resources made freely available in a reusable form, using open file formats and licensed with minimal restrictions on reuse. Digital object identifiers (DOIs) are minted for included datasets and collections to facilitate persistent identification and citation.
The project brings together national key players providing environmentally related biological data and services to develop the 'German Federation for Biological Data' (GFBio). The overall goal is to provide a sustainable, service-oriented, national data infrastructure facilitating data sharing and stimulating data-intensive science in the fields of biological and environmental research.
The objective of this Research Coordination Network project is to develop an international network of researchers who use genetic methodologies to study the ecology and evolution of marine organisms in the Indo-Pacific to share data, ideas and methods. DIPnet was created to advance genetic diversity research in the Indo-Pacific by aggregating population genetic metadata into a searchable database (GeOME).
The Abacus Dataverse Network is the research data repository of the British Columbia Research Libraries' Data Services, a collaboration involving the Data Libraries at Simon Fraser University (SFU), the University of British Columbia (UBC), the University of Northern British Columbia (UNBC) and the University of Victoria (UVic).
The HMAP Data Pages are a research resource comprising information derived largely from historical records relating to fishing catches and effort in selected spatial and temporal contexts. The History of Marine Animal Populations (HMAP), the historical component of the Census of Marine Life, aimed to improve our understanding of ecosystem dynamics, specifically with regard to long-term changes in stock abundance, the ecological impact of large-scale harvesting by man, and the role of marine resources in the historical development of human society. HMAP data are also accessible through the Ocean Biogeographic Information System (OBIS).
This interface provides access to several types of data related to the Chesapeake Bay. Bay Program databases can be queried based upon user-defined inputs such as geographic region and date range. Each query results in a downloadable, tab- or comma-delimited text file that can be imported to any program (e.g., SAS, Excel, Access) for further analysis. Comments regarding the interface are encouraged. Questions in reference to the data should be addressed to the contact provided on subsequent pages.
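Because each query returns a tab- or comma-delimited text file, the results can also be processed directly in code rather than in a spreadsheet. The sketch below parses a small tab-delimited sample with Python's standard csv module; the column names and values are made up for illustration and do not reflect the actual Bay Program schema.

```python
import csv
import io

# Hypothetical excerpt of a tab-delimited query result file.
# Column names here are illustrative, not the real Bay Program fields.
sample = (
    "Station\tDate\tParameter\tValue\n"
    "CB1.1\t2020-06-01\tSalinity\t12.4\n"
    "CB1.1\t2020-06-02\tSalinity\t12.9\n"
)

# DictReader maps each row to a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))

# Simple downstream analysis: mean of one parameter.
values = [float(r["Value"]) for r in rows if r["Parameter"] == "Salinity"]
mean_salinity = sum(values) / len(values)
```

For a real download, `io.StringIO(sample)` would be replaced by an open file handle, and a comma-delimited export would use `delimiter=","` instead.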
The CDAWeb data system enables improved display and coordinated analysis of multi-instrument, multi-mission databases of the kind whose analysis is critical to meeting the science objectives of the ISTP program and the InterAgency Consultative Group (IACG) Solar-Terrestrial Science Initiative. The system combines the client-server user interface technology of the World Wide Web with a powerful set of customized IDL routines to leverage the data format standards (CDF) and implementation guidelines adopted by ISTP and the IACG. The system can be used with any collection of data granules following the extended set of ISTP/IACG standards. CDAWeb is used both to support coordinated analysis of public and proprietary data and to provide better functional access to specific public data, such as the ISTP-precursor CDAW 9 database formatted to the ISTP/IACG standards. Many data sets are available through the Coordinated Data Analysis Web (CDAWeb) service, and the data coverage continues to grow. These are largely, but not exclusively, magnetospheric data and nearby solar wind data of the ISTP era (1992-present) at time resolutions of approximately a minute. The CDAWeb service provides graphical browsing, data subsetting, screen listings, and file creation and download (ASCII or CDF). Holdings include public data from current (1992-present) space physics missions (including Cluster, IMAGE, ISTP, FAST, IMP-8, SAMPEX and others) as well as from missions before 1992 (including IMP-8, ISIS 1/2, Alouette 2, Hawkeye and others). CDAWeb is part of the Space Physics Data Facility.
The GDR is the submission point for all data collected from researchers funded by the U.S. Department of Energy's Geothermal Technologies Office. It was established to receive, manage and make available all geothermal-relevant data generated from projects funded by the DOE Geothermal Technologies Office. This includes data from GTO-funded projects associated with any portion of the geothermal project life-cycle (exploration, development, operation), as well as data produced by GTO-funded research.
Codex Sinaiticus is one of the most important books in the world. Handwritten well over 1600 years ago, the manuscript contains the Christian Bible in Greek, including the oldest complete copy of the New Testament. The Codex Sinaiticus Project is an international collaboration to reunite the entire manuscript in digital form and make it accessible to a global audience for the first time. Drawing on the expertise of leading scholars, conservators and curators, the Project gives everyone the opportunity to connect directly with this famous manuscript.
The Allele Frequency Net Database (AFND) is a public database which contains frequency information for several immune genes, such as Human Leukocyte Antigens (HLA), Killer-cell Immunoglobulin-like Receptors (KIR), Major histocompatibility complex class I chain-related (MIC) genes, and a number of cytokine gene polymorphisms. AFND provides a central source, freely available to all, for the storage of allele frequencies from different polymorphic regions of the human genome. Users can contribute the results of their work into one common database and can perform searches on information already available. Data are currently collected in allele, haplotype and genotype format; the success of the website depends on users contributing their data.
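Since AFND stores data in allele, haplotype and genotype formats, it may help to recall how allele frequencies relate to genotype counts at a biallelic locus. The sketch below uses made-up counts for illustration; it is not AFND data or an AFND API.

```python
# Hypothetical genotype counts for 100 individuals at one biallelic locus.
genotype_counts = {"AA": 60, "AB": 30, "BB": 10}

# Each individual carries two alleles.
total_alleles = 2 * sum(genotype_counts.values())

# Homozygotes contribute two copies of their allele,
# heterozygotes contribute one copy of each.
freq_a = (2 * genotype_counts["AA"] + genotype_counts["AB"]) / total_alleles
freq_b = (2 * genotype_counts["BB"] + genotype_counts["AB"]) / total_alleles
```

The two frequencies necessarily sum to 1, which is a useful sanity check when converting between genotype and allele formats.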
ORTOLANG is an EQUIPEX project accepted in February 2012 in the framework of the French Investissements d'Avenir (Investments for the Future) programme. Its aim is to construct a network infrastructure including a repository of language data (corpora, lexicons, dictionaries, etc.) and readily available, well-documented tools for its processing. Expected outcomes comprise: promoting research on the analysis, modelling and automatic processing of the French language to the highest international levels thanks to effective resource pooling; facilitating the use and transfer of resources and tools developed within public laboratories to industrial partners, notably SMEs, which often cannot develop such resources and tools for language processing given the cost of investment; and promoting the French language and the regional languages of France by sharing the expertise acquired by public laboratories. ORTOLANG is a language-resource service complementary to that offered by Huma-Num (a very large research infrastructure). ORTOLANG gives access to SLDR for speech resources and to CNRTL for text resources.