Search syntax (illustrated in the sketch below):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND operation (the default)
  • | represents an OR operation
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
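As a quick illustration, the Python sketch below builds query strings that combine these operators. Only the operator semantics come from the list above; the search terms themselves are invented.

    # Example query strings combining the operators above; the search
    # terms are made up for illustration.
    queries = [
        "genom*",                            # wildcard: genome, genomics, ...
        '"electronic health record"',        # exact phrase
        "covid + sequence",                  # AND (also the default)
        "bacterial | viral",                 # OR
        "genome - browser",                  # NOT: genome but not browser
        "(covid | influenza) + repository",  # parentheses set precedence
        "metagenome~2",                      # word within edit distance 2
        '"genome data"~3',                   # phrase with slop up to 3
    ]
    for q in queries:
        print(q)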
Found 23 result(s)
The N3C Data Enclave is a secure portal containing a very large, harmonized set of COVID-19 clinical electronic health record (EHR) data. The data can be accessed through a secure cloud enclave hosted by NCATS and cannot be downloaded due to regulatory controls. Broad access is available to investigators at institutions that have signed a Data Use Agreement and submit a Data Use Request. The N3C is a unique open, reproducible, transparent, collaborative team-science initiative to leverage sensitive clinical data to expedite COVID-19 discoveries and improve health outcomes.
Database and knowledgebase of authenticated microbial genomics data with full data provenance to physical materials held within American Type Culture Collection's (ATCC) biorepository and culture collections. Data includes whole-genome sequencing data for bacterial, viral, and fungal strains at ATCC, their genome assemblies, metadata, drug susceptibility data, and more. All data is freely available for non-commercial, research use only (RUO) applications via the web portal interface or via a REST API. The goal is to provide the research community with provenance information and authentication between the biological source materials and the reference genome assemblies derived from them.
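Since programmatic access via a REST API is mentioned, here is a minimal Python sketch of what a query might look like. The base URL, resource path, query parameter, and bearer-token authentication are illustrative assumptions, not values documented here; consult the ATCC Genome Portal API documentation for the real interface.

    import requests

    # Hypothetical base URL, path, and auth scheme -- placeholders only;
    # see the ATCC Genome Portal API docs for the real interface.
    BASE_URL = "https://genomes.atcc.org/api"      # assumed
    API_KEY = "your-api-key"                       # assumed; issued on sign-up

    resp = requests.get(
        f"{BASE_URL}/genomes",                     # assumed resource path
        params={"search": "Escherichia coli"},     # assumed parameter name
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    for record in resp.json():                     # response shape assumed
        print(record)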
LSHTM Data Compass is a curated digital repository of research outputs that have been produced by staff and students at the London School of Hygiene & Tropical Medicine and their collaborators. It is used to share outputs intended for reuse, including: qualitative and quantitative data, software code and scripts, search strategies, and data collection tools.
The Bacterial and Viral Bioinformatics Resource Center (BV-BRC) is an information system designed to support research on bacterial and viral infectious diseases. BV-BRC combines two long-running BRCs: PATRIC, the bacterial system, and IRD/ViPR, the viral systems.
The MG-RAST server is an open-source system for annotation and comparative analysis of metagenomes. Users can upload raw sequence data in FASTA format; the sequences will be normalized and processed, and summaries automatically generated. The server provides several methods to access the different data types, including phylogenetic and metabolic reconstructions, and the ability to compare the metabolism and annotations of one or more metagenomes and genomes. In addition, the server offers a comprehensive search capability. Access to the data is password protected, and all data generated by the automated pipeline is available for download in a variety of common formats. MG-RAST has become an unofficial repository for metagenomic data, providing a means to make your data public so that it is available for download and viewing of the analysis without registration, as well as a static link that you can use in publications. MG-RAST also requires that you include experimental metadata about your sample when it is made public, which increases its usefulness to the community.
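MG-RAST's data types are also exposed through its public REST API. A minimal Python sketch of fetching metadata for one public metagenome follows; the resource path and verbosity parameter follow the published MG-RAST API as I understand it, but verify them against the current documentation, and the accession is only an example.

    import requests

    BASE_URL = "https://api.mg-rast.org"    # public MG-RAST REST API
    metagenome_id = "mgm4447943.3"          # example public accession

    # Public data requires no authentication; 'verbosity' selects how much
    # detail is returned (check the API docs for currently valid values).
    resp = requests.get(f"{BASE_URL}/metagenome/{metagenome_id}",
                        params={"verbosity": "metadata"}, timeout=60)
    resp.raise_for_status()
    record = resp.json()
    print(record.get("name"), record.get("created"))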
The GISAID Initiative promotes the international sharing of all influenza virus sequences, related clinical and epidemiological data associated with human viruses, and geographical as well as species-specific data associated with avian and other animal viruses, to help researchers understand how the viruses evolve, spread, and potentially become pandemics. GISAID does so by overcoming the disincentives, hurdles, and restrictions that discouraged or prevented the sharing of influenza data prior to formal publication. The Initiative ensures that open access to data in GISAID is provided free of charge to everyone, provided individuals identify themselves and agree to uphold the GISAID sharing mechanism governed through its Database Access Agreement. GISAID calls on all users to uphold basic scientific etiquette: acknowledge the originating laboratories that provide the specimens and the submitting laboratories that generate the sequence data, ensure fair exploitation of results derived from the data, and agree that no restrictions shall be attached to data submitted to GISAID, so as to promote collaboration among researchers on the basis of open data sharing and respect for all rights and interests.
BEI Resources was established by the National Institute of Allergy and Infectious Diseases (NIAID) to provide reagents, tools and information for studying Category A, B, and C priority pathogens, emerging infectious disease agents, non-pathogenic microbes and other microbiological materials of relevance to the research community. BEI Resources acquires, authenticates, and produces reagents that scientists need to carry out basic research and develop improved diagnostic tests, vaccines, and therapies. By centralizing these functions within BEI Resources, access to and use of these materials in the scientific community is monitored and quality control of the reagents is assured.
ArrayExpress is one of the major international repositories for high-throughput functional genomics data from both microarray and high-throughput sequencing studies, many of which are supported by peer-reviewed publications. Data sets are submitted directly to ArrayExpress and curated by a team of specialist biological curators. Until 2018, datasets from the NCBI Gene Expression Omnibus database were also imported on a weekly basis. Data is collected to MIAME and MINSEQE standards.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich’s Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open-source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Data and Specimen Hub (DASH) is a centralized resource that allows researchers to share and access de-identified data from studies funded by NICHD. DASH also serves as a portal for requesting biospecimens from selected DASH studies.
CEEHRC represents a multi-stage funding commitment by the Canadian Institutes of Health Research (CIHR) and multiple Canadian and international partners. The overall aim is to position Canada at the forefront of international efforts to translate new discoveries in the field of epigenetics into improved human health. The two sites will focus on sequencing human reference epigenomes and developing new technologies and protocols; they will also serve as platforms for other CEEHRC funding initiatives, such as catalyst and team grants. The complementary reference epigenome mapping efforts of the two sites will focus on a range of common human diseases. The Vancouver group will focus on the role of epigenetics in the development of cancer, including lymphoma and cancers of the ovary, colon, breast, and thyroid. The Montreal team will focus on autoimmune/inflammatory, cardio-metabolic, and neuropsychiatric diseases, using studies of identical twins as well as animal models of human disease.
The Immunology Database and Analysis Portal (ImmPort) archives clinical study and trial data generated by NIAID/DAIT-funded investigators. Data types housed in ImmPort include subject assessments (i.e., medical history, concomitant medications, and adverse events) as well as mechanistic assay data such as flow cytometry, ELISA, and ELISPOT. You won't need an ImmPort account to search for compelling studies or peruse study demographics, interventions, and mechanistic assays. But why stop there? What you really want to do is download the study and look at each experiment in detail, including individual ELISA results and flow cytometry files. Perhaps you want to take those flow cytometry files for a test drive using FLOCK in the ImmPort flow cytometry module. To download all that interesting data you will need to register for ImmPort access.
The MMRRC is the premier national public repository system for mutant mice in the United States. Funded by the NIH continuously since 1999, the MMRRC archives and distributes scientifically valuable spontaneous and induced mutant mouse strains and ES cell lines for use by the biomedical research community. The MMRRC consists of a national network of breeding and distribution repositories and an Informatics Coordination and Service Center located at four major academic centers across the nation. The MMRRC is committed to upholding the highest standards of experimental design and quality control to optimize the reproducibility of research studies using mutant mice.
Synapse is an open source software platform that clinical and biological data scientists can use to carry out, track, and communicate their research in real time. Synapse enables co-location of scientific content (data, code, results) and narrative descriptions of that work.
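For illustration, a minimal sketch of retrieving content with the synapseclient Python package; the Synapse ID and the personal access token below are placeholders, and credential setup is assumed to follow the Synapse documentation.

    import synapseclient  # pip install synapseclient

    # Authenticate with a personal access token (placeholder below);
    # see the Synapse docs for credential configuration options.
    syn = synapseclient.Synapse()
    syn.login(authToken="your-personal-access-token")

    # Fetch an entity (file/dataset) by its Synapse ID; 'syn123' is a
    # placeholder, not a real dataset.
    entity = syn.get("syn123")
    print(entity.name, entity.path)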
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or just forgetting where you put the damn thing.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contribution. Assign citable, contributor credit to any research material - tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired - as it is developed or after publication of reports.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow. A structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.
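OSF content is also reachable programmatically through its public JSON API. A minimal Python sketch of listing public projects follows; the endpoint and response shape follow the OSF v2 API as documented, but verify against the current docs before relying on them.

    import requests

    # Anonymous requests to the OSF v2 API return public projects ("nodes").
    resp = requests.get("https://api.osf.io/v2/nodes/",
                        params={"page[size]": 5}, timeout=30)
    resp.raise_for_status()
    for node in resp.json()["data"]:
        print(node["id"], node["attributes"]["title"])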
Patient Reported Outcomes Following Initial treatment and Long term Evaluation of Survivorship (PROFILES) is a registry for the study of the physical and psychosocial impact of cancer and its treatment in a dynamic, growing population-based cohort of both short- and long-term cancer survivors. Researchers from the Netherlands Comprehensive Cancer Centre and Tilburg University in Tilburg, The Netherlands, work together with medical specialists from national hospitals to set up different PROFILES studies, collect the necessary data, and present the results in scientific journals and at (inter)national conferences.
The DNA Bank Network was established in spring 2007 and was funded until 2011 by the German Research Foundation (DFG). The network was initiated by GBIF Germany (Global Biodiversity Information Facility). It offers a worldwide unique concept: DNA bank databases of all partners are linked and accessible via a central web portal, providing DNA samples of complementary collections (microorganisms, protists, plants, algae, fungi and animals). The DNA Bank Network was one of the founders of the Global Genome Biodiversity Network (GGBN) and is fully merged with GGBN today. GGBN agreed on using the data model proposed by the DNA Bank Network. The Botanic Garden and Botanical Museum Berlin-Dahlem (BGBM) hosts the technical secretariat of GGBN and its virtual infrastructure. The main focus of the DNA Bank Network is to enhance taxonomic, systematic, genetic, conservation and evolutionary studies by providing:
  • high-quality, long-term storage of DNA material on which molecular studies have been performed, so that results can be verified, extended, and complemented;
  • complete online documentation of each sample, including the provenance of the original material, the place of voucher deposit, information about DNA quality and extraction methodology, digital images of vouchers, and links to published molecular data where available.
The UCSC Genome Browser is an interactive website offering access to genome sequence data from a variety of vertebrate and invertebrate species and major model organisms, integrated with a large collection of aligned annotations. The Browser is a graphical viewer optimized to support fast interactive performance and is an open-source, web-based tool suite built on top of a MySQL database for rapid visualization, examination, and querying of the data at many levels.
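Because the underlying MySQL database is mentioned, here is a minimal Python sketch of querying UCSC's public read-only MySQL server. The host and user follow UCSC's published public-server instructions; the database and table names are examples only, so verify them against the current documentation before use.

    import pymysql  # third-party driver: pip install pymysql

    # UCSC's documented public read-only server: host genome-mysql.soe.ucsc.edu,
    # user "genome", no password. Database/table below are examples (hg38 /
    # knownGene); schemas differ between assemblies, so verify first.
    conn = pymysql.connect(host="genome-mysql.soe.ucsc.edu",
                           user="genome", database="hg38")
    try:
        with conn.cursor() as cur:
            # Count annotated transcripts on chromosome 1 in the knownGene track.
            cur.execute("SELECT COUNT(*) FROM knownGene WHERE chrom = %s",
                        ("chr1",))
            print(cur.fetchone()[0])
    finally:
        conn.close()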