
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
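As an illustration of how these operators combine, the sketch below builds a few example query strings. The search terms, the endpoint URL, and the parameter name are hypothetical and only demonstrate the syntax listed above.

```python
from urllib.parse import urlencode

# Example query strings using the operators described above.
# The search terms themselves are made up for illustration.
queries = [
    'genom*',                           # wildcard: genome, genomics, ...
    '"gene expression"',                # exact phrase
    'cancer + genome',                  # AND (also the default)
    'proteome | proteomics',            # OR
    'sequence - assembly',              # NOT
    '(zebrafish | mouse) + phenotype',  # parentheses set precedence
    'crystalography~2',                 # fuzzy term, edit distance 2
    '"protein interaction"~3',          # phrase with slop 3
]

# Hypothetical endpoint and parameter name, shown only to illustrate URL-encoding.
base = "https://example.org/search"
for q in queries:
    print(f"{base}?{urlencode({'query': q})}")
```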
Found 36 result(s)
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries and linked data. It has been developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples, the repository can hold more than 50,000 entries, which can be searched through SPARQL endpoints and pre-indexed key fields. The Chempound architecture is general and adaptable to other fields of data-rich science. The Chempound software is hosted at http://bitbucket.org/chempound and is available under the Apache License, Version 2.0.
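Since the entry mentions SPARQL endpoints over the RDF data, here is a minimal query sketch. The endpoint URL is a placeholder (no public Chempound endpoint is given above), and the generic triple pattern only illustrates the mechanism.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint: substitute the SPARQL endpoint of an actual
# Chempound instance; none is specified in the description above.
endpoint = "https://example.org/chempound/sparql"

sparql = SPARQLWrapper(endpoint)
sparql.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```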
PubChem comprises three databases. 1. PubChem BioAssay: The PubChem BioAssay Database contains bioactivity screens of chemical substances described in PubChem Substance. It provides searchable descriptions of each bioassay, including descriptions of the conditions and readouts specific to that screening procedure. 2. PubChem Compound: The PubChem Compound Database contains validated chemical depiction information provided to describe substances in PubChem Substance. Structures stored within PubChem Compound are pre-clustered and cross-referenced by identity and similarity groups. 3. PubChem Substance: The PubChem Substance Database contains descriptions of samples, from a variety of sources, and links to biological screening results that are available in PubChem BioAssay. If the chemical contents of a sample are known, the description includes links to PubChem Compound.
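The three databases are also reachable programmatically. The sketch below assumes PubChem's PUG REST interface and its commonly documented URL layout, which is not described in the entry above, so verify the paths against the current PubChem documentation.

```python
import requests

# Sketch only: assumes PubChem's PUG REST layout
# (https://pubchem.ncbi.nlm.nih.gov/rest/pug); verify against current docs.
BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"
cid = 2244  # example compound identifier (CID), used here for illustration

url = f"{BASE}/compound/cid/{cid}/property/MolecularFormula,MolecularWeight/JSON"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

# The property table holds one record per requested CID.
props = resp.json()["PropertyTable"]["Properties"][0]
print(props["MolecularFormula"], props["MolecularWeight"])
```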
dbEST is a division of GenBank that contains sequence data and other information on "single-pass" cDNA sequences, or "Expressed Sequence Tags", from a number of organisms. Expressed Sequence Tags (ESTs) are short (usually about 300-500 bp), single-pass sequence reads from mRNA (cDNA). Typically they are produced in large batches. They represent a snapshot of genes expressed in a given tissue and/or at a given developmental stage. They are tags (some coding, others not) of expression for a given cDNA library. Most EST projects develop large numbers of sequences. These are commonly submitted to GenBank and dbEST as batches of dozens to thousands of entries, with a great deal of redundancy in the citation, submitter and library information. To improve the efficiency of the submission process for this type of data, we have designed a special streamlined submission process and data format. dbEST also includes sequences that are longer than the traditional ESTs, or are produced as single sequences or in small batches. Among these sequences are products of differential display experiments and RACE experiments. The thing that these sequences have in common with traditional ESTs, regardless of length, quality, or quantity, is that there is little information that can be annotated in the record. If a sequence is later characterized and annotated with biological features such as a coding region, 5'UTR, or 3'UTR, it should be submitted through the regular GenBank submissions procedure (via BankIt or Sequin), even if part of the sequence is already in dbEST. dbEST is reserved for single-pass reads. Assembled sequences should not be submitted to dbEST. GenBank will accept assembled EST submissions for the forthcoming TSA (Transcriptome Shotgun Assembly) division. The individual reads which make up the assembly should be submitted to dbEST, the Trace archive or the Short Read Archive (SRA) prior to the submission of the assemblies.
STRING is a database of known and predicted protein interactions. The interactions include direct (physical) and indirect (functional) associations; they are derived from four sources: genomic context, high-throughput experiments, (conserved) co-expression, and previous knowledge. STRING quantitatively integrates interaction data from these sources for a large number of organisms, and transfers information between these organisms where applicable.
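STRING can also be queried programmatically. The entry above does not describe the API, so the endpoint layout below (output format and method in the path, protein identifiers and NCBI taxon as parameters) is an assumption based on STRING's documented REST-style interface and should be checked at string-db.org.

```python
import requests

# Sketch: assumes STRING's REST-style API layout
# (https://string-db.org/api/<format>/<method>); confirm against current docs.
params = {"identifiers": "TP53", "species": 9606}  # human TP53, NCBI taxon 9606
resp = requests.get("https://string-db.org/api/tsv/network", params=params, timeout=30)
resp.raise_for_status()

# Print the header and the first few interaction rows of the TSV response.
for line in resp.text.splitlines()[:5]:
    print(line)
```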
ArachnoServer is a manually curated database containing information on the sequence, three-dimensional structure, and biological activity of protein toxins derived from spider venom. Spiders are the largest group of venomous animals and they are predicted to contain by far the largest number of pharmacologically active peptide toxins (Escoubas et al., 2006). ArachnoServer has been custom-built so that a wide range of biological scientists, including neuroscientists, pharmacologists, and toxinologists, can readily access key data relevant to their discipline without being overwhelmed by extraneous information.
Note: This MultiDark application is now integrated into CosmoSim (https://www.cosmosim.org/); all data and much more are available there. The old MultiDark server is no longer available. The MultiDark database provides results from cosmological simulations performed within the MultiDark project. This database can be queried by entering SQL statements directly into the Query Form. Access to that form, and thus to the public and private databases, is password protected.
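To illustrate the kind of SQL statement the Query Form accepts, here is a sketch. The table and column names (MDR1.FOF, snapnum, mass, np) are assumptions about the CosmoSim schema, not taken from the entry above, and the statement is meant to be pasted into the Query Form after logging in.

```python
# Sketch of an SQL statement that could be pasted into the CosmoSim Query Form.
# Table and column names are assumptions for illustration; check the CosmoSim
# database documentation for the actual schema.
query = """
SELECT fofId, x, y, z, np, mass
FROM MDR1.FOF
WHERE snapnum = 85
  AND mass > 1.0e14
ORDER BY mass DESC
LIMIT 10
"""
print(query)
```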
MetaCrop is a database containing manually curated, highly detailed information about metabolic pathways in crop plants, including location information, transport processes and reaction kinetics, and it allows automatic export of this information for the creation of detailed metabolic models.
M-CSA (Mechanism and Catalytic Site Atlas) is a database of enzyme reaction mechanisms. It provides annotation on the protein, catalytic residues, cofactors, and the reaction mechanisms of hundreds of enzymes. There are two kinds of entries in M-CSA. 'Detailed mechanism' entries are more complete and show the individual chemical steps of the mechanism as schemes with electron flow arrows. 'Catalytic Site' entries annotate the catalytic residues necessary for the reaction, but do not show the mechanism. M-CSA represents a unified resource that combines the data in both MACiE and the CSA.
Protectedplanet.net combines crowdsourced and authoritative sources to enrich and provide data for protected areas around the world. Data are provided in partnership with the World Database on Protected Areas (WDPA). The data include the location, designation type, status year, and size of the protected areas, as well as species information.
Note: The Cancer Genomics Hub mission is now completed. The Cancer Genomics Hub was established in August 2011 to provide a repository for The Cancer Genome Atlas, the childhood cancer initiative Therapeutically Applicable Research to Generate Effective Treatments, and the Cancer Genome Characterization Initiative. CGHub rapidly grew to be the largest database of cancer genomes in the world, storing more than 2.5 petabytes of data and serving downloads of nearly 3 petabytes per month. As the central repository for the foundational genome files, CGHub streamlined team science efforts as data became as easy to obtain as downloading from a hard drive. The convenient access to Big Data, and the collaborations that CGHub made possible, are now essential to cancer research. That work continues at the NCI's Genomic Data Commons, where all files previously stored at CGHub can be found: https://gdc.nci.nih.gov/ The Cancer Genomics Hub (CGHub) is a secure repository for storing, cataloging, and accessing cancer genome sequences, alignments, and mutation information from The Cancer Genome Atlas (TCGA) consortium and related projects. Access to CGHub data: all researchers using CGHub must meet the access and use criteria established by the National Institutes of Health (NIH) to ensure the privacy, security, and integrity of participant data. CGHub also hosts some publicly available data, in particular data from the Cancer Cell Line Encyclopedia. All metadata is publicly available, and the catalog of metadata and associated BAMs can be explored using the CGHub Data Browser.
The public MorpheusML model repository collects, curates, documents and tests computational models for multi-scale and multicellular biological systems. Models must be encoded in the model description language MorpheusML. Subsections of the repository distinguish published models from contributed non-published and example models. New models are simulated in Morpheus or Artistoo independently of the authors and the results are compared to published results; successful reproduction is documented on the model's webpage. Models in this repository are included in the CI and test pipelines for each release of the model simulator Morpheus to check and guarantee reproducibility of results across future simulator updates. The model's webpage provides a History link to all past model versions and edits, which are automatically tracked via Git. Each model is registered with a unique and persistent ID of the format M..... The model description page (including the biological context and key results of that model), the model's XML file, the associated paper, and all further files (often simulation result videos) connected with that model can be retrieved via a persistent URL of the format https://identifiers.org/morpheus/M..... For technical details on the citable ModelID, see https://registry.identifiers.org/registry/morpheus; for the model definition standard MorpheusML, see https://doi.org/10.25504/FAIRsharing.78b6a6; for the model simulator Morpheus, see https://morpheus.gitlab.io; for the model simulator Artistoo, see https://artistoo.net/converter.html.
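As a small sketch of retrieving a model via its persistent URL: the model ID below ("M0000") is a hypothetical placeholder standing in for a real ID of the M..... format, and identifiers.org is simply expected to redirect to the model's landing page.

```python
import requests

# "M0000" is a hypothetical placeholder; real IDs follow the M..... format
# described above and are listed in the repository itself.
model_id = "M0000"
url = f"https://identifiers.org/morpheus/{model_id}"

# identifiers.org resolves the persistent identifier by redirecting to the
# model's landing page in the MorpheusML repository.
resp = requests.get(url, allow_redirects=True, timeout=30)
resp.raise_for_status()
print("Resolved to:", resp.url)
```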
BRENDA is the main collection of enzyme functional data available to the scientific community worldwide. The enzymes are classified according to the Enzyme Commission list of enzymes. It is available free of charge via the internet (http://www.brenda-enzymes.org/) and as an in-house database for commercial users (requests to our distributor Biobase). Some 5,000 "different" enzymes are covered; frequently, enzymes with very different properties are included under the same EC number. BRENDA includes biochemical and molecular information on classification, nomenclature, reaction, specificity, functional parameters, occurrence, enzyme structure, application, engineering, stability, disease, isolation, and preparation. The database also provides additional information on ligands, which function as natural or in vitro substrates/products, inhibitors, activating compounds, cofactors, bound metals, and other attributes.
ArrayExpress is one of the major international repositories for high-throughput functional genomics data from both microarray and high-throughput sequencing studies, many of which are supported by peer-reviewed publications. Data sets are submitted directly to ArrayExpress and curated by a team of specialist biological curators. In the past (until 2018) datasets from the NCBI Gene Expression Omnibus database were imported on a weekly basis. Data is collected to MIAME and MINSEQE standards.
ZFIN serves as the zebrafish model organism database. The long term goals for ZFIN are a) to be the community database resource for the laboratory use of zebrafish, b) to develop and support integrated zebrafish genetic, genomic and developmental information, c) to maintain the definitive reference data sets of zebrafish research information, d) to link this information extensively to corresponding data in other model organism and human databases, e) to facilitate the use of zebrafish as a model for human biology and f) to serve the needs of the research community. ZIRC is the Zebrafish International Resource Center, an independent NIH-funded facility providing a wide range of zebrafish lines, probes and health services. ZFIN works closely with ZIRC to connect our genetic data with available probes and fish lines.
DDBJ (DNA Data Bank of Japan) is the sole nucleotide sequence data bank in Asia that is officially certified to collect nucleotide sequences from researchers and to issue internationally recognized accession numbers to data submitters. Since the collected data are exchanged with EMBL-Bank/EBI (European Bioinformatics Institute) and GenBank/NCBI (National Center for Biotechnology Information) on a daily basis, the three data banks share virtually the same data at any given time. The virtually unified database is called INSD (International Nucleotide Sequence Database). DDBJ collects sequence data mainly from Japanese researchers, but also accepts data and issues accession numbers to researchers in any other country.
The National Cryosphere Desert Data Center (NCDC) is supported by the Cold and Arid Regions Environmental and Engineering Research Institute of the Chinese Academy of Sciences (CAS) and was jointly established in cooperation with the Xinjiang Institute of Ecology and Geography (CAS), the Institute of Mountain Hazards and Environment (CAS and Ministry of Water Resources), the Qinghai Institute of Salt Lakes (CAS), the Institute of Plateau Biology in Qinghai (CAS), and other units. The supporting units of the center have formed a research and support system of seven research laboratories and three research systems, with a focus on glaciers, permafrost, deserts, atmosphere, water and soil, ecology, environment, resources, engineering, and sustainable development in cold and arid regions.
The MMRRC is the premier national public repository system for mutant mice in the United States. Funded by the NIH continuously since 1999, the MMRRC archives and distributes scientifically valuable spontaneous and induced mutant mouse strains and ES cell lines for use by the biomedical research community. The MMRRC consists of a national network of breeding and distribution repositories and an Informatics Coordination and Service Center located at 4 major academic centers across the nation. The MMRRC is committed to upholding the highest standards of experimental design and quality control to optimize the reproducibility of research studies using mutant mice.
The main goal of the CLUES-project is to provide constrained simulations of the local universe designed to be used as a numerical laboratory of the current paradigm. The simulations will be used for unprecedented analysis of the complex dark matter and gasdynamical processes which govern the formation of galaxies. The predictions of these experiments can be easily compared with the detailed observations of our galactic neighborhood. Some of the CLUES data is now publicly available via the CosmoSim database (https://www.cosmosim.org/). This includes AHF halo catalogues from the Box 64, WMAP3 resimulations of the Local Group at 4096³ particle resolution.
OpenKIM is an online suite of open source tools for molecular simulation of materials. These tools help to make molecular simulation more accessible and more reliable. Within OpenKIM, you will find an online resource for standardized testing and long-term warehousing of interatomic models and data, and an application programming interface (API) standard for coupling atomistic simulation codes and interatomic potential subroutines.
Brain Analysis Library of Spatial maps and Atlases (BALSA) is a database for hosting and sharing neuroimaging and neuroanatomical datasets for human and primate species. BALSA houses curated, user-created Study datasets (extensively analyzed neuroimaging data associated with published figures) and Reference datasets (data mapped to brain atlas surfaces and volumes in human and nonhuman primates as a general resource, e.g., published cortical parcellations).