  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
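The operators above can be combined into a single query string. As a sketch, here is a small hypothetical Python helper that assembles queries using this syntax; the operator characters follow the list above, but the function name and parameters are illustrative, not part of any real client library.

```python
# Illustrative query-string builder for the search syntax described above.
# The helper and its parameter names are hypothetical; only the operator
# characters (+, |, -, ", *, ~N) come from the documented syntax.
def build_query(*, all_of=(), any_of=(), none_of=(), phrase=None, fuzzy=None):
    """Combine terms with the +, |, and - operators; optionally add a
    quoted phrase and a fuzzy term (term, edit_distance)."""
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')          # " quotes search a phrase
    parts.extend(f"+{t}" for t in all_of)    # + is an AND search (default)
    if any_of:
        parts.append("(" + " | ".join(any_of) + ")")  # | is OR, ( ) group
    parts.extend(f"-{t}" for t in none_of)   # - is a NOT operation
    if fuzzy:
        term, distance = fuzzy
        parts.append(f"{term}~{distance}")   # ~N sets the edit distance
    return " ".join(parts)

print(build_query(phrase="sequence archive", all_of=["genome*"],
                  any_of=["zebrafish", "medaka"], none_of=["protein"]))
```

A trailing `*` in `genome*` triggers the wildcard search described in the first bullet; everything else composes the same way.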
Found 154 result(s)
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. Its purpose is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials to computer malfunction, changing personnel, or simply forgetting where you put them.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contribution. Assign citable contributor credit for any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, either as it is developed or after publication of reports. Find public projects here.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points in its lifecycle, such as manuscript submission or the onset of data collection. Discover public registrations here.
  • Manage scientific workflow. A structured, flexible system can bring efficiency gains to the workflow and clarity to project objectives.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm; frequently requested lines are kept alive, as are a selection of wildtype strains. Holdings include several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab of the Sanger Centre, Hinxton, UK, lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, and transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. Through association with the ComPlat platform, we can also support chemical screens and offer libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
The Avian Knowledge Network (AKN) is an international network of governmental and non-governmental institutions and individuals linking avian conservation, monitoring and science through efficient data management and coordinated development of useful solutions using best-science practices based on the data.
This is a database for vegetation data from West Africa, i.e. phytosociological and dendrometric relevés as well as floristic inventories. The West African Vegetation Database has been developed in the framework of the projects “SUN - Sustainable Use of Natural Vegetation in West Africa” and “Biodiversity Transect Analysis in Africa” (BIOTA, https://www.biota-africa.org/).
Contains data on patients who have been tested for COVID-19 (whether positive or negative) in participating health institutions in Brazil. This initiative makes available three kinds of pseudonymized data: demographics (gender, year of birth, and region of residency), clinical exams, and laboratory exams. Additional hospitalization information, such as data on transfers and outcomes, is provided when available. Clinical, lab, and hospitalization information is not limited to COVID-19 data but covers all health events for these individuals starting November 1, 2019, to allow for comorbidity studies. Data are deposited periodically, so that health information for a given individual is continuously updated with each new version upload.
BioSimulations is a web application for sharing and re-using biomodels, simulations, and visualizations of simulation results. BioSimulations supports a wide range of modeling frameworks (e.g., kinetic, constraint-based, and logical modeling), model formats (e.g., BNGL, CellML, SBML), and simulation tools (e.g., COPASI, libRoadRunner/tellurium, NFSim, VCell). BioSimulations aims to help researchers discover published models that might be useful for their research and quickly try them via a simple web-based interface.
The NCBI Nucleotide database collects sequences from such sources as GenBank, RefSeq, TPA, and PDB. Sequences collected relate to genome, gene, and transcript sequence data, and provide a foundation for research related to the biomedical field.
The European Vitis Database has been maintained since 2007 by the Julius-Kühn-Institut to ensure the long-term and efficient use of grape genetic resources.
The China National GeneBank database (CNGBdb) is a unified platform for biological big data sharing and application services. CNGBdb has now integrated a large amount of internal and external biological data from resources such as CNGB, NCBI, and the EBI. There are several sub-databases in CNGBdb, including literature, variation, gene, genome, protein, sequence, organism, project, sample, experiment, run, and assembly. Based on underlying big data and cloud computing technologies, it provides various data services, including archiving, analysis, knowledge search, and management authorization of biological data. CNGBdb adopts data structures and standards of international omics, health, and medicine, such as the International Nucleotide Sequence Database Collaboration (INSDC), the Global Alliance for Genomics and Health (GA4GH), the Global Genome Biodiversity Network (GGBN), and the American College of Medical Genetics and Genomics (ACMG), and constructs standardized data and structures with wide compatibility. All public data and services provided by CNGBdb are freely available to all users worldwide. CNGB Sequence Archive (CNSA) is the multi-omics data repository of CNGBdb: a convenient and efficient archiving system for life-science data that provides archiving services for raw sequencing reads and further analyzed results. CNSA follows the international data standards for omics data and supports online and batch submission of multiple data types such as Project, Sample, Experiment/Run, Assembly, Variation, Metabolism, Single cell, and Sequence. Moreover, CNSA has achieved the correlation of sample entities, sample information, and analyzed data on some projects. Its data submission service can be used as a supplement to the literature publishing process to support early data sharing.
WFCC Global Catalogue of Microorganisms (GCM) is expected to be a robust, reliable and user-friendly system to help culture collections to manage, disseminate and share the information related to their holdings. It also provides a uniform interface for the scientific and industrial communities to access the comprehensive microbial resource information.
The Deep Blue Data repository is a means for University of Michigan researchers to make their research data openly accessible to anyone in the world, provided they meet collections criteria. Submitted data sets undergo a curation review by librarians to support discovery, understanding, and reuse of the data.
A database for plant breeders and researchers to combine, visualize, and interrogate the wealth of phenotype and genotype data generated by the Triticeae Coordinated Agricultural Project (TCAP).
<<<!!!<<< The NCI CBIIT instance of the NBIA application was retired in March 2022. All data in the application has been transferred to The Cancer Image Archive https://www.re3data.org/repository/r3d100011559 and is available via the Access the Data > Search Radiology Portal menu item. The NBIA software is now maintained on GitHub, and can be built and deployed with the latest improvements and fixes that have been completed for TCIA. >>>!!!>>>
The Connectome Coordination Facility (CCF) houses and distributes public research data for a series of studies that focus on the connections within the human brain, known as Human Connectome Projects. The CCF was chartered to help coordinate myriad research projects, harmonize their data, and facilitate the dissemination of results.
Welcome to the National Yang Ming Chiao Tung University Dataverse research data knowledge management website, where you can learn how to obtain, upload, cite and explore research data in the National Yang Ming Chiao Tung University Dataverse.
NeuGRID is a secure data archiving and HPC processing system. The neuGRID platform uses a robust infrastructure to provide researchers with a simple interface for analysing, searching, retrieving and disseminating their biomedical data. With hundreds of investigators across the globe and more than 10 million downloadable attributes, neuGRID aims to become a widespread resource for brain analyses. The neuGRID platform guarantees reliability with a fault-tolerant network to prevent system failure.
<<<!!!<<< The repository is no longer available >>>!!!>>> Data is archived at ChemSpider https://www.chemspider.com/Search.aspx?dsn=UsefulChem and https://www.chemspider.com/Search.aspx?dsn=Usefulchem (Bradley Lab group); see more information under 'Remarks' on the Standards tab.
The Genome Warehouse (GWH) is a public repository housing genome-scale data for a wide range of species and delivering a series of web services for genome data submission, storage, release and sharing.
DDBJ Sequence Read Archive (DRA) is the public archive of high-throughput sequencing data. DRA stores raw sequencing data and alignment information to enhance reproducibility and facilitate new discoveries through data analysis. DRA is a member of the International Nucleotide Sequence Database Collaboration (INSDC) and archives data in close collaboration with the NCBI Sequence Read Archive (SRA) and the EBI Sequence Read Archive (ERA).
DataDOI is an institutional research data repository managed by the University of Tartu Library. DataDOI gathers research data from all fields and stands for encouraging open science and the FAIR (Findable, Accessible, Interoperable, Reusable) principles. DataDOI is made for long-term preservation of research data. Each dataset is given a DOI (Digital Object Identifier) through the DataCite Estonia Consortium.
>>>!!!<<< Sorry, we are no longer in operation >>>!!!<<< The Beta Cell Biology Consortium (BCBC) was a team science initiative established by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). It was initially funded in 2001 (RFA DK-01-014) and competitively renewed in 2005 (RFAs DK-01-17, DK-01-18) and in 2009 (RFA DK-09-011). Funding for the BCBC came to an end on August 1, 2015, and with it so did our ability to maintain active websites. One of the many goals of the BCBC was to develop and maintain databases of useful research resources. A total of 813 different scientific resources were generated and submitted by BCBC investigators over the 14 years it existed. Information pertaining to 495 selected resources, judged to be the most scientifically useful, has been converted into a static catalog, as shown below. In addition, the metadata for these 495 resources have been transferred to dkNET in the form of RDF descriptors, and all genomics data have been deposited to either ArrayExpress or GEO. Please direct questions or comments to the NIDDK Division of Diabetes, Endocrinology & Metabolic Diseases (DEM).