  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
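For illustration, a few made-up example queries combining these operators (the search terms are hypothetical, not drawn from the catalog):

    genom* + "gene expression"~2
    (ocean | marine) - model
    metabolomics~1 + repository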
Found 21 result(s)
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries, and linked data. It has been developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples, the repository can hold more than 50,000 entries, which can be searched through SPARQL endpoints and pre-indexed key fields. The Chempound architecture is general and adaptable to other fields of data-rich science. The Chempound software is hosted at http://bitbucket.org/chempound and is available under the Apache License, Version 2.0.
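As a rough sketch of what querying such a SPARQL endpoint might look like from Python (the endpoint URL below is a placeholder, not an actual Chempound address; the query uses only standard SPARQL, nothing Chempound-specific):

    # Minimal sketch: fetch a handful of triples from a SPARQL endpoint.
    # The endpoint URL is hypothetical; substitute a real instance.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/chempound/sparql")  # placeholder endpoint
    sparql.setQuery("""
        SELECT ?s ?p ?o
        WHERE { ?s ?p ?o }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    results = sparql.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])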
The Plant Metabolic Network (PMN) provides a broad network of plant metabolic pathway databases that contain curated information from the literature and computational analyses about the genes, enzymes, compounds, reactions, and pathways involved in primary and secondary metabolism in plants. The PMN currently houses one multi-species reference database called PlantCyc and 22 species/taxon-specific databases.
The Structure database provides three-dimensional structures of macromolecules for a variety of research purposes and allows the user to retrieve structures for specific molecule types as well as structures for genes and proteins of interest. Structure comprises three main databases: the Molecular Modeling Database; Conserved Domains and Protein Classification; and the BioSystems Database. Structure also links to the PubChem databases to connect biological activity data to the macromolecular structures. Users can locate structural templates for proteins and interactively view structures and sequence data to closely examine sequence-structure relationships.
The GOES Space Environment Monitor archive is an important component of the National Space Weather Program, an interagency program to provide timely and reliable space environment observations and forecasts. GOES satellites carry an onboard Space Environment Monitor subsystem that measures X-rays, energetic particles, and the magnetic field at the spacecraft.
PubChem comprises three databases. 1. PubChem BioAssay: The PubChem BioAssay Database contains bioactivity screens of chemical substances described in PubChem Substance. It provides searchable descriptions of each bioassay, including descriptions of the conditions and readouts specific to that screening procedure. 2. PubChem Compound: The PubChem Compound Database contains validated chemical depiction information provided to describe substances in PubChem Substance. Structures stored within PubChem Compound are pre-clustered and cross-referenced by identity and similarity groups. 3. PubChem Substance: The PubChem Substance Database contains descriptions of samples, from a variety of sources, and links to biological screening results available in PubChem BioAssay. If the chemical contents of a sample are known, the description includes links to PubChem Compound.
A place where researchers can publicly store and share unthresholded statistical maps, parcellations, and atlases produced by MRI and PET studies.
The Saccharomyces Genome Database (SGD) provides comprehensive integrated biological information for the budding yeast Saccharomyces cerevisiae along with search and analysis tools to explore these data, enabling the discovery of functional relationships between sequence and gene products in fungi and higher organisms.
Note: The Cancer Genomics Hub mission is now complete. The Cancer Genomics Hub was established in August 2011 to provide a repository for The Cancer Genome Atlas, the childhood cancer initiative Therapeutically Applicable Research to Generate Effective Treatments, and the Cancer Genome Characterization Initiative. CGHub rapidly grew to be the largest database of cancer genomes in the world, storing more than 2.5 petabytes of data and serving downloads of nearly 3 petabytes per month. As the central repository for the foundational genome files, CGHub streamlined team science efforts as data became as easy to obtain as downloading from a hard drive. The convenient access to Big Data, and the collaborations that CGHub made possible, are now essential to cancer research. That work continues at the NCI's Genomic Data Commons, where all files previously stored at CGHub can be found: https://gdc.nci.nih.gov/

The Cancer Genomics Hub (CGHub) was a secure repository for storing, cataloging, and accessing cancer genome sequences, alignments, and mutation information from The Cancer Genome Atlas (TCGA) consortium and related projects. Access to CGHub data: all researchers using CGHub had to meet the access and use criteria established by the National Institutes of Health (NIH) to ensure the privacy, security, and integrity of participant data. CGHub also hosted some publicly available data, in particular data from the Cancer Cell Line Encyclopedia. All metadata was publicly available, and the catalog of metadata and associated BAMs could be explored using the CGHub Data Browser.
The NSF-supported Critical Zone Observatories (CZO) program serves the international scientific community through research, infrastructure, data, and models. We focus on how components of the Critical Zone interact, shape Earth's surface, and support life. ARCHIVED CONTENT: In December 2020, the CZO program was succeeded by the Critical Zone Collaborative Network (CZ Net): https://criticalzone.org/
The Argo observational network consists of a fleet of 3000+ profiling autonomous floats deployed by about a dozen teams worldwide. WHOI has built about 10% of the global fleet. The mission lifetime of each float is about 4 years. During a typical mission, each float reports a profile of the upper ocean every 10 days. The sensors onboard record fundamental physical properties of the ocean: temperature and conductivity (a measure of salinity) as a function of pressure. The depth range of the observed profile depends on the local stratification and the float's mechanical ability to adjust its buoyancy. The majority of Argo floats report profiles between 1 and 2 km depth. At each surfacing, measurements of temperature and salinity are relayed back to shore via satellite. Telemetry is usually received every 10 days, but floats at high latitudes that are iced over accumulate their data and transmit the entire record the next time satellite contact is established. With current battery technology, the best-performing floats last 6+ years and record over 200 profiles.
ArrayExpress is one of the major international repositories for high-throughput functional genomics data from both microarray and high-throughput sequencing studies, many of which are supported by peer-reviewed publications. Data sets are submitted directly to ArrayExpress and curated by a team of specialist biological curators. Until 2018, datasets from the NCBI Gene Expression Omnibus database were imported on a weekly basis. Data are collected to MIAME and MINSEQE standards.
The NCEAS Data Repository contains information about the research data sets collected and collated as part of NCEAS' funded activities. Information in the NCEAS Data Repository is concurrently available through the Knowledge Network for Biocomplexity (KNB), an international data repository. A number of the data sets were synthesized from multiple data sources that originated from the efforts of many contributors, while others originated from a single source. Datasets can be found in the KNB repository at https://knb.ecoinformatics.org/data (creator=NCEAS).
The University of Waterloo Dataverse is a data repository for research outputs of our faculty, students, and staff. Files are held in a secure environment on Canadian servers. Researchers can choose to make content available to the public, to specific individuals, or to keep it private.
LINCS Data Portal provides access to LINCS data from various sources. The program has six Data and Signature Generation Centers: Drug Toxicity Signature Generation Center, HMS LINCS Center, LINCS Center for Transcriptomics, LINCS Proteomic Characterization Center for Signaling and Epigenetics, MEP LINCS Center, and NeuroLINCS Center.
The Astromaterials Data System (AstroMat) is a data infrastructure to store, curate, and provide access to laboratory data acquired on samples curated in the Astromaterials Collection of the Johnson Space Center. AstroMat is developed and operated at the Lamont-Doherty Earth Observatory of Columbia University and funded by NASA.
The CCHDO provides access to standard, well-described datasets from reference-quality repeat hydrography expeditions. It curates high-quality, full-water-column Conductivity-Temperature-Depth (CTD), hydrographic, carbon, and tracer data from over 2,500 cruises from ~30 countries. It is the official data center for CTD and water sample profile data from the Global Ocean Ship-Based Hydrographic Investigations Program (GO-SHIP), as well as for WOCE, US Hydro, and other high-quality repeat hydrography lines (e.g. SOCCOM, HOT, BATS, CARINA).
The eyeGENE® Research Resource is open for approved research studies; application details are available from the program. Researchers and clinicians are actively developing gene-based therapies to treat ophthalmic genetic diseases that were once considered untreatable.
Ag Data Commons provides access to a wide variety of open data relevant to agricultural research. We are a centralized repository for data already on the web, as well as for new data being published for the first time. While compliance with the U.S. Federal public access and open data directives is important, we aim to surpass them. Our goal is to foster innovative data re-use, integration, and visualization to support bigger, better science and policy.