  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
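The operators above can be combined freely. As a sketch, the following Python snippet lists some hypothetical example queries (the query strings and the `balanced` helper are illustrative only, not part of the search service itself) and checks that quotes and parentheses pair up:

```python
def balanced(query: str) -> bool:
    """Sanity-check a query: quotes must pair up and parentheses must nest."""
    depth = 0
    in_quote = False
    for ch in query:
        if ch == '"':
            in_quote = not in_quote
        elif not in_quote:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth < 0:          # closing paren with no opener
                    return False
    return depth == 0 and not in_quote

# Hypothetical queries, one per operator described above:
examples = [
    'genom*',                  # wildcard: genome, genomics, ...
    '"climate change"',        # exact phrase
    'protein + structure',     # AND (also the default)
    'mouse | rat',             # OR
    'cancer - review',         # NOT
    '(marine | ocean) + data', # parentheses set precedence
    'proteome~2',              # fuzzy term, edit distance 2
    '"gene expression"~3',     # phrase with slop 3
]

for q in examples:
    assert balanced(q)
```

Note that parentheses inside a quoted phrase are treated as literal text, matching the usual behaviour of phrase searches.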
Found 324 result(s)
WFCC Global Catalogue of Microorganisms (GCM) is expected to be a robust, reliable and user-friendly system to help culture collections to manage, disseminate and share the information related to their holdings. It also provides a uniform interface for the scientific and industrial communities to access the comprehensive microbial resource information.
Copernicus is a European system for monitoring the Earth. Copernicus consists of a complex set of systems which collect data from multiple sources: earth observation satellites and in situ sensors such as ground stations, airborne and sea-borne sensors. It processes these data and provides users with reliable and up-to-date information through a set of services related to environmental and security issues. The services address six thematic areas: land monitoring, marine monitoring, atmosphere monitoring, climate change, emergency management and security. The main users of Copernicus services are policymakers and public authorities who need the information to develop environmental legislation and policies or to take critical decisions in the event of an emergency, such as a natural disaster or a humanitarian crisis. Based on the Copernicus services and on the data collected through the Sentinels and the contributing missions, many value-added services can be tailored to specific public or commercial needs, resulting in new business opportunities. In fact, several economic studies have already demonstrated a huge potential for job creation, innovation and growth.
Peptidome was a public repository that archived tandem mass spectrometry peptide and protein identification data generated by the scientific community. Due to budgetary constraints, NCBI has discontinued the Peptidome Repository; it is now offline and in archival mode. All existing data and metadata files will continue to be made available from the Peptidome FTP site indefinitely. Those files are named according to their Peptidome accession number, allowing cited data to be identified and downloaded. All of the Peptidome studies have also been made publicly available at the PRoteomics IDEntifications (PRIDE) database, and a map of Peptidome to PRIDE accessions is available. If you have any specific questions, please feel free to contact us.
The Structure database provides three-dimensional structures of macromolecules for a variety of research purposes and allows the user to retrieve structures for specific molecule types as well as structures for genes and proteins of interest. Structure comprises three main databases: the Molecular Modeling Database; Conserved Domains and Protein Classification; and the BioSystems Database. Structure also links to the PubChem databases to connect biological activity data to the macromolecular structures. Users can locate structural templates for proteins and interactively view structures and sequence data to closely examine sequence-structure relationships.
The IMSR is a searchable online database of mouse strains, stocks, and mutant ES cell lines available worldwide, including inbred, mutant, and genetically engineered strains. The goal of the IMSR is to assist the international scientific community in locating and obtaining mouse resources for research. Note that the data content found in the IMSR is as supplied by strain repository holders. For each strain or cell line listed in the IMSR, users can obtain information about: where that resource is available (Repository Site); what state(s) the resource is available as (e.g. live, cryopreserved embryo or germplasm, ES cells); links to descriptive information about a strain or ES cell line; links to mutant alleles carried by a strain or ES cell line; links for ordering a strain or ES cell line from a Repository; and links for contacting the Repository to send a query.
The project brings together national key players providing environmentally related biological data and services to develop the 'German Federation for Biological Data' (GFBio). The overall goal is to provide a sustainable, service-oriented, national data infrastructure facilitating data sharing and stimulating data-intensive science in the fields of biological and environmental research.
The UniProt Knowledgebase (UniProtKB) is the central hub for the collection of functional information on proteins, with accurate, consistent and rich annotation. In addition to capturing the core data mandatory for each UniProtKB entry (mainly, the amino acid sequence, protein name or description, taxonomic data and citation information), as much annotation information as possible is added. This includes widely accepted biological ontologies, classifications and cross-references, and clear indications of the quality of annotation in the form of evidence attribution of experimental and computational data. The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc). The UniProt Metagenomic and Environmental Sequences (UniMES) database is a repository specifically developed for metagenomic and environmental data. The UniProt Knowledgebase is an expertly and richly curated protein database consisting of two sections, UniProtKB/Swiss-Prot and UniProtKB/TrEMBL.
The Ningaloo Atlas was created in response to the need for more comprehensive and accessible information on environmental and socio-economic data on the greater Ningaloo region. As such, the Ningaloo Atlas is a web portal to not only access and share information, but to celebrate and promote the biodiversity, heritage, value, and way of life of the greater Ningaloo region.
Groundbreaking biomedical research requires access to cutting-edge scientific resources; however, such resources are often invisible beyond the laboratories or universities where they were developed. eagle-i is a discovery platform that helps biomedical scientists find previously invisible, but highly valuable, resources.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is the vehicle through which GoMRI is fulfilling this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico Ecosystem.
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
The Ministry for the Environment, Land and Sea has promoted the project "Environment 2010", which strongly supports the National Strategy for Biodiversity. The crux of the system is the National Biodiversity Network (NNB), a network of Centers of Excellence (CoE) and National Focal Points (FP), accredited at the international and national level for the management and sharing of information about biodiversity data.
The Data Catalogue is a service that allows University of Liverpool researchers to create records of information about their finalised research data, and save those data in a secure online environment. The Data Catalogue provides a good means of making that data available in a structured way, in a form that can be discovered by both general search engines and academic search tools. There are two types of record that can be created in the Data Catalogue: A discovery-only record – in these cases, the research data may be held somewhere else but a record is provided to help people find it. A record is created that alerts users to the existence of the data, and provides a link to where those data are held. A discovery and data record – in these cases, a record is created to help people discover the data exist, and the data themselves are deposited into the Data Catalogue. This process creates a unique Digital Object Identifier (DOI) which can be used in citations to the data.
The objective of this Research Coordination Network project is to develop an international network of researchers who use genetic methodologies to study the ecology and evolution of marine organisms in the Indo-Pacific, and to share data, ideas and methods. The tropical Indian and Pacific Oceans encompass the largest biogeographic region on the planet, the Indo-Pacific.
FungiDB belongs to the EuPathDB family of databases and is an integrated genomic and functional genomic database for the kingdom Fungi. FungiDB was first released in early 2011 as a collaborative project between EuPathDB and the group of Jason Stajich (University of California, Riverside). At the end of 2015, FungiDB was integrated into the EuPathDB bioinformatic resource center. FungiDB integrates whole genome sequence and annotation and also includes experimental and environmental isolate sequence data. The database includes comparative genomics, analysis of gene expression, and supplemental bioinformatics analyses and a web interface for data-mining.
The National Sleep Research Resource (NSRR) offers free web access to large collections of de-identified physiological signals and clinical data elements collected in well-characterized research cohorts and clinical trials.
The JenAge Ageing Factor Database AgeFactDB is aimed at the collection and integration of ageing phenotype and lifespan data. Ageing factors are genes, chemical compounds or other factors such as dietary restriction, for example. In a first step ageing-related data are primarily taken from existing databases. In addition, new ageing-related information is included both by manual and automatic information extraction from the scientific literature. Based on a homology analysis, AgeFactDB also includes genes that are homologous to known ageing-related genes. These homologs are considered as candidate or putative ageing-related genes.
The Abacus Dataverse Network is the research data repository of the British Columbia Research Libraries' Data Services, a collaboration involving the Data Libraries at Simon Fraser University (SFU), the University of British Columbia (UBC), the University of Northern British Columbia (UNBC) and the University of Victoria (UVic).
Edinburgh DataShare is an online digital repository of multi-disciplinary research datasets produced at the University of Edinburgh, hosted by the Data Library in Information Services. Edinburgh University researchers who have produced research data associated with an existing or forthcoming publication, or which has potential use for other researchers, are invited to upload their dataset for sharing and safekeeping. A persistent identifier and suggested citation will be provided.
PubChem comprises three databases. 1. PubChem BioAssay: The PubChem BioAssay Database contains bioactivity screens of chemical substances described in PubChem Substance. It provides searchable descriptions of each bioassay, including descriptions of the conditions and readouts specific to that screening procedure. 2. PubChem Compound: The PubChem Compound Database contains validated chemical depiction information provided to describe substances in PubChem Substance. Structures stored within PubChem Compound are pre-clustered and cross-referenced by identity and similarity groups. 3. PubChem Substance: The PubChem Substance Database contains descriptions of samples, from a variety of sources, and links to biological screening results that are available in PubChem BioAssay. If the chemical contents of a sample are known, the description includes links to PubChem Compound.
SSDA Dataverse is one of the archiving opportunities of SSDA; data can also be archived by SSDA itself, or by ICPSR, the UCLA Library, or the California Digital Library. The Social Science Data Archives serves the UCLA campus as an archive of faculty and graduate student survey research. We provide long-term storage of data files and documentation. We ensure that the data are usable in the future by migrating files to new operating systems. We follow government standards and archival best practices. The mission of the Social Science Data Archive has been and continues to be to provide a foundation for social science research, with faculty support throughout an entire research project involving original data collection or the reuse of publicly available studies. Data Archive staff and researchers work as partners throughout all stages of the research process, beginning when a hypothesis or area of study is being developed, during grant and funding activities, while data collection and/or analysis is ongoing, and finally in long-term preservation of research results. Our role is to provide a collaborative environment where the focus is on understanding the nature and scope of the research approach and management of research output throughout the entire life cycle of the project. Instructional support, especially support that links research with instruction, is also a mainstay of operations.
The Comprehensive Epidemiologic Data Resource (CEDR) is the Department of Energy's (DOE) electronic database comprised of health studies of DOE contract workers and environmental studies of areas surrounding DOE facilities. DOE recognizes the benefits of data sharing and supports the public's right to know about worker and community health risks. CEDR provides independent researchers and the public with access to de-identified data collected since the Department's early production years. Current CEDR holdings include more than 80 studies of over 1 million workers at 31 DOE sites. Access to these data is at no cost to the user. Most of CEDR's holdings are derived from epidemiologic studies of DOE workers at many large nuclear weapons plants, such as Hanford, Los Alamos, the Oak Ridge reservation, Savannah River Site, and Rocky Flats. These studies primarily use death certificate information to identify excess deaths and patterns of disease among workers to determine what factors contribute to the risk of developing cancer and other illnesses. In addition, many of these studies have radiation exposure measurements on individual workers. CEDR is supported by the Oak Ridge Institute for Science and Education (ORISE) in Oak Ridge, Tennessee. Now a mature system in routine operational use, CEDR's modern internet-based systems respond to thousands of requests to its web server daily. With about 1,500 Internet sites pointing to CEDR's web site, CEDR is a national user facility, with a large audience for data that are not available elsewhere.
Our knowledge of the many life-forms on Earth - of animals, plants, fungi, protists and bacteria - is scattered around the world in books, journals, databases, websites, specimen collections, and in the minds of people everywhere. Imagine what it would mean if this information could be gathered together and made available to everyone – anywhere – at a moment’s notice. This dream is becoming a reality through the Encyclopedia of Life.