
Search syntax (illustrated with examples below):
  • * at the end of a keyword allows wildcard searches
  • " double quotes enclose phrase searches
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
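Combining these operators, a few illustrative queries (invented for this example, not drawn from the registry) might look like:

  cancer* +genomics               matches any term starting with "cancer" AND the term "genomics"
  "gene expression" | proteomics  matches the exact phrase OR the single keyword
  plankton -virus                 matches "plankton" records that do not mention "virus"
  genomic~1                       matches terms within edit distance 1 of "genomic" (e.g. "genomics")
  "data repository"~3             matches the phrase with up to 3 words of slop between its terms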
Found 199 result(s)
The CancerData site is an effort of the Medical Informatics and Knowledge Engineering team (MIKE for short) of Maastro Clinic, Maastricht, The Netherlands. Our activities in the field of medical image analysis and data modelling are visible in a number of projects we are running. CancerData offers several datasets, grouped into collections, which can be public or private. You can search for public datasets in the NBIA (National Biomedical Imaging Archive) image archives without logging in.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. GRIIDC is the vehicle through which GoMRI is fulfilling this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico Ecosystem.
The PLANKTON*NET data provider at the Alfred Wegener Institute for Polar and Marine Research is an open access repository for plankton-related information. It covers all types of phytoplankton and zooplankton from marine and freshwater areas. PLANKTON*NET's greatest strength is its comprehensiveness: for the different taxa, image information as well as taxonomic descriptions can be archived. PLANKTON*NET also contains a glossary with accompanying images to illustrate the term definitions. PLANKTON*NET therefore presents a vital tool for the preservation of historic data sets as well as the archival of current research results. Because interoperability with international biodiversity data providers (e.g. GBIF) is one of our aims, the architecture behind the new planktonnet@awi repository is observation-centric and allows for multiple assignment of assets (images, references, animations, etc.) to any given observation. In addition, images can be grouped in sets and/or assigned tags to satisfy user-specific needs. Sets (and their images) of relevance to the scientific community and/or general public have been assigned a persistent digital object identifier (DOI) for the purpose of long-term preservation (e.g. the set "Plankton*Net celebrates 50 years of Roman Treaties", handle: 10013/de.awi.planktonnet.set.495).
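As a rough illustration of such an observation-centric model, here is a minimal Python sketch; every class and field name is an assumption made for this example, not the actual planktonnet@awi schema:

  from dataclasses import dataclass, field
  from typing import List, Optional

  @dataclass
  class Asset:
      """An image, reference, animation, or other media item."""
      kind: str
      uri: str

  @dataclass
  class Observation:
      """One observation of a taxon; any number of assets may be attached."""
      taxon: str
      assets: List[Asset] = field(default_factory=list)

  @dataclass
  class ImageSet:
      """A tagged grouping of images; sets of lasting relevance receive a DOI."""
      title: str
      tags: List[str] = field(default_factory=list)
      images: List[Asset] = field(default_factory=list)
      doi: Optional[str] = None  # e.g. handle 10013/de.awi.planktonnet.set.495

  # The same asset can be assigned to more than one observation:
  img = Asset(kind="image", uri="https://example.org/ceratium.jpg")  # placeholder URI
  obs_a = Observation(taxon="Ceratium tripos", assets=[img])
  obs_b = Observation(taxon="Ceratium macroceros", assets=[img])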
INDEPTH is a global network of research centres that conduct longitudinal health and demographic evaluation of populations in low- and middle-income countries (LMICs). INDEPTH aims to strengthen global capacity for Health and Demographic Surveillance Systems (HDSSs), and to mount multi-site research to guide health priorities and policies in LMICs, based on up-to-date scientific evidence. The data collected by the INDEPTH Network members constitute a valuable resource of population and health data for LMICs. This repository aims to make well-documented, anonymised longitudinal microdata from these centres available to data users.
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflow, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data, which is generated for a wide range of purposes from disease tracking and animal breeding to medical diagnosis and treatment.
The National Cancer Data Base (NCDB), a joint program of the Commission on Cancer (CoC) of the American College of Surgeons (ACoS) and the American Cancer Society (ACS), is a nationwide oncology outcomes database for more than 1,500 Commission-accredited cancer programs in the United States and Puerto Rico. Some 70 percent of all newly diagnosed cases of cancer in the United States are captured at the institutional level and reported to the NCDB. The NCDB, begun in 1989, now contains approximately 29 million records from hospital cancer registries across the United States. Data on all types of cancer are tracked and analyzed. These data are used to explore trends in cancer care, to create regional and state benchmarks for participating hospitals, and to serve as the basis for quality improvement.
The data in the U of M’s Clinical Data Repository comes from the electronic health records (EHRs) of more than 2 million patients seen at 8 hospitals and more than 40 clinics. For each patient, data is available regarding the patient's demographics (age, gender, language, etc.), medical history, problem list, allergies, immunizations, outpatient vitals, diagnoses, procedures, medications, lab tests, visit locations, providers, provider specialties, and more.
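To make that field list concrete, a per-patient record along these lines could be sketched as follows; every name here is an assumption for illustration, not the repository's actual schema:

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class Demographics:
      age: int
      gender: str
      language: str

  @dataclass
  class PatientRecord:
      """One patient, drawn from EHRs across the 8 hospitals and 40+ clinics."""
      demographics: Demographics
      medical_history: List[str] = field(default_factory=list)
      problem_list: List[str] = field(default_factory=list)
      allergies: List[str] = field(default_factory=list)
      immunizations: List[str] = field(default_factory=list)
      outpatient_vitals: List[str] = field(default_factory=list)
      diagnoses: List[str] = field(default_factory=list)
      procedures: List[str] = field(default_factory=list)
      medications: List[str] = field(default_factory=list)
      lab_tests: List[str] = field(default_factory=list)
      visit_locations: List[str] = field(default_factory=list)
      providers: List[str] = field(default_factory=list)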
The JenAge Ageing Factor Database AgeFactDB is aimed at the collection and integration of ageing phenotype and lifespan data. Ageing factors are genes, chemical compounds, or other factors such as dietary restriction. In a first step, ageing-related data are taken primarily from existing databases. In addition, new ageing-related information is included both by manual and automatic information extraction from the scientific literature. Based on a homology analysis, AgeFactDB also includes genes that are homologous to known ageing-related genes. These homologs are considered candidate or putative ageing-related genes.
The Silkworm Pathogen Database (SilkPathDB) is a comprehensive resource for studying the pathogens of the silkworm, including microsporidia, fungi, bacteria and viruses. SilkPathDB provides access not only to genomic data, including functional annotation of genes and gene products, but also to extensive biological information such as gene expression data and the corresponding research. SilkPathDB will help with research on pathogens of the silkworm as well as of other lepidopteran insects.
The FREEBIRD website aims to facilitate data sharing in the area of injury and emergency research in a timely and responsible manner. It has been launched by providing open access to anonymised data on over 30,000 injured patients (the CRASH-1 and CRASH-2 trials).
The RAMEDIS system is a platform independent, web-based information system for rare metabolic diseases based on filed case reports. It was developed in close cooperation with clinical partners to allow them to collect information on rare metabolic diseases with extensive details, e.g. about occurring symptoms, laboratory findings, therapy and molecular data.
The CARMEN pilot project seeks to create a virtual laboratory for experimental neurophysiology, enabling the sharing and collaborative exploitation of data, analysis code and expertise. This study by the DCC contributes to an understanding of the data curation requirements of the eScience community, through its extended observation of the CARMEN neurophysiology community’s specification and selection of solutions for the organisation, access and curation of digital research output.
METLIN represents the largest collection of MS/MS data, with the database generated at multiple collision energies and in positive and negative ionization modes. The data are generated on multiple instrument types, including SCIEX, Agilent, Bruker and Waters QTOF mass spectrometers.
The Brain Biodiversity Bank refers to the repository of images of and information about brain specimens contained in the collections associated with the National Museum of Health and Medicine at the Armed Forces Institute of Pathology in Washington, DC. These collections include, besides the Michigan State University Collection, the Welker Collection from the University of Wisconsin, the Yakovlev-Haleem Collection from Harvard University, the Meyer Collection from the Johns Hopkins University, the Huber-Crosby and Crosby-Lauer Collections from the University of Michigan, and the C.U. Ariëns Kappers brain collection from Amsterdam, the Netherlands. The site introduces online atlases of the brains of humans, sheep, dolphins, and other animals, and serves as a world resource for illustrations of whole brains and stained sections from a great variety of mammals.
caNanoLab is a data sharing portal designed to facilitate information sharing in the biomedical nanotechnology research community to expedite and validate the use of nanotechnology in biomedicine. caNanoLab provides support for the annotation of nanomaterials with characterizations resulting from physico-chemical and in vitro assays and the sharing of these characterizations and associated nanotechnology protocols in a secure fashion.
The IMEx consortium is an international collaboration between a group of major public interaction data providers who have agreed to: share curation effort; develop and work to a single set of curation rules when capturing data both from directly deposited interaction data and from publications in peer-reviewed journals; capture full details of an interaction in a "deep" curation model; perform a complete curation of all protein-protein interactions experimentally demonstrated within a publication; make these interactions available in a single search interface on a common website; provide the data in standards-compliant download formats; and make all IMEx records freely accessible under the Creative Commons Attribution License.
IntAct provides a freely available, open source database system and analysis tools for molecular interaction data. All interactions are derived from literature curation or direct user submissions and are freely available.
The institutional repository collects, disseminates and preserves in digital form the intellectual output that results from the academic and research activity of the Universitat Pompeu Fabra (UPF). Its purpose is to increase the impact of research done at the UPF and to preserve its intellectual memory.
The Swiss Institute of Bioinformatics (SIB) coordinates research and education in bioinformatics throughout Switzerland and provides bioinformatics services to the national and international research community. ExPASy gives access to numerous repositories and databases of the SIB, for example arrayMap, MetaNetX, SWISS-MODEL and World-2DPAGE, among many others.
CBS offers comprehensive public databases of DNA and protein sequences, macromolecular structures, and gene and protein expression levels, established to optimise scientific exploitation of the explosion of data within biology. Unlike many other groups in the field of biomolecular informatics, the Center for Biological Sequence Analysis directs its research primarily towards topics related to the elucidation of the functional aspects of complex biological mechanisms. Among contemporary bioinformatics concerns are the reliable computational interpretation of a wide range of experimental data and the detailed understanding of the molecular apparatus behind cellular mechanisms of sequence information. By exploiting available experimental data and evidence in the design of algorithms, sequence correlations and other features of biological significance can be inferred. In addition to its computational research, the center also has experimental efforts in gene expression analysis using DNA chips and in data generation relating to the physical and structural properties of DNA. In the last decade, the Center for Biological Sequence Analysis has produced a large number of computational methods, which are offered to others via WWW servers.
dbEST is a division of GenBank that contains sequence data and other information on "single-pass" cDNA sequences, or "Expressed Sequence Tags", from a number of organisms. Expressed Sequence Tags (ESTs) are short (usually about 300-500 bp), single-pass sequence reads from mRNA (cDNA). Typically they are produced in large batches. They represent a snapshot of genes expressed in a given tissue and/or at a given developmental stage. They are tags (some coding, others not) of expression for a given cDNA library. Most EST projects develop large numbers of sequences. These are commonly submitted to GenBank and dbEST as batches of dozens to thousands of entries, with a great deal of redundancy in the citation, submitter and library information. To improve the efficiency of the submission process for this type of data, we have designed a special streamlined submission process and data format. dbEST also includes sequences that are longer than the traditional ESTs, or are produced as single sequences or in small batches. Among these sequences are products of differential display experiments and RACE experiments. The thing that these sequences have in common with traditional ESTs, regardless of length, quality, or quantity, is that there is little information that can be annotated in the record. If a sequence is later characterized and annotated with biological features such as a coding region, 5'UTR, or 3'UTR, it should be submitted through the regular GenBank submissions procedure (via BankIt or Sequin), even if part of the sequence is already in dbEST. dbEST is reserved for single-pass reads. Assembled sequences should not be submitted to dbEST. GenBank will accept assembled EST submissions for the forthcoming TSA (Transcriptome Shotgun Assembly) division. The individual reads which make up the assembly should be submitted to dbEST, the Trace archive or the Short Read Archive (SRA) prior to the submission of the assemblies.
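The submission rules in the last few sentences amount to a simple routing decision. A hedged sketch (a hypothetical Python function written for illustration, not an NCBI tool):

  def route_transcript_submission(is_assembly: bool, is_annotated: bool) -> str:
      """Pick the NCBI destination for a transcript sequence, following the
      dbEST guidelines summarized above (simplified)."""
      if is_annotated:
          # Sequences characterized with biological features (coding region,
          # 5'UTR, 3'UTR) go through the regular GenBank procedure
          # (via BankIt or Sequin), even if part of the sequence is in dbEST.
          return "GenBank (BankIt/Sequin)"
      if is_assembly:
          # Assembled ESTs belong in the TSA division; the individual reads
          # should first be submitted to dbEST, the Trace Archive, or SRA.
          return "TSA"
      # Single-pass reads with little annotatable information go to dbEST.
      return "dbEST"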