Filter options: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority (grouping of terms)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
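The operator list above suggests a fairly standard keyword query grammar. The strings below are illustrative combinations only; the exact behaviour of the search backend is an assumption, so adjust them to the search form you are actually using.

```python
# Example search strings built from the operators described above.
# Illustrative only: the precise query grammar of the backend is assumed
# from the operator list, not confirmed.

example_queries = [
    'genom*',                              # wildcard: genome, genomics, ...
    '"mass spectrometry"',                 # phrase search
    'proteomics + human',                  # AND (also the default)
    'earthquake | seismology',             # OR
    'sequencing - cancer',                 # NOT: exclude matches on "cancer"
    '(microarray | sequencing) + human',   # parentheses set priority
    'protemics~2',                         # fuzzy term, edit distance 2
    '"gravity field model"~3',             # phrase with a slop of 3
]

for query in example_queries:
    print(query)
```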
Found 22 result(s)
The Deep Blue Data repository is a means for University of Michigan researchers to make their research data openly accessible to anyone in the world, provided the data meet the collection criteria. Submitted data sets undergo a curation review by librarians to support discovery, understanding, and reuse of the data.
The Comprehensive Epidemiologic Data Resource (CEDR) is the U.S. Department of Energy (DOE) electronic database comprising health studies of DOE contract workers and environmental studies of areas surrounding DOE facilities. DOE recognizes the benefits of data sharing and supports the public's right to know about worker and community health risks. CEDR provides independent researchers and educators with access to de-identified data collected since the Department's early production years. Current CEDR holdings include more than 76 studies of over 1 million workers at 31 DOE sites. Access to these data is at no cost to the user.
The VDC is a public, web-based search engine for accessing worldwide earthquake strong ground motion data. While the primary focus of the VDC is on data of engineering interest, it is also an interactive resource for scientific researchers as well as government and emergency response professionals.
The Norwegian Marine Data Centre (NMD) at the Institute of Marine Research was established as a national data centre dedicated to the professional processing and long-term storage of marine environmental and fisheries data and the production of data products. The Institute of Marine Research continuously collects large amounts of data from all Norwegian seas. Data are collected using vessels, observation buoys, gliders, and manual measurements, among other platforms. NMD maintains the largest collection of marine environmental and fisheries data in Norway.
PeptideAtlas validates expressed proteins to provide eukaryotic genome data and to advance biological discoveries in humans. It accepts proteomic data from high-throughput processes and encourages data submission.
TopFIND is a protein-centric database for the annotation of protein termini, currently in its third version. Non-canonical protein termini can result from multiple different biological processes, including pre-translational processes such as alternative splicing and alternative translation initiation, or post-translational protein processing by proteases that cleave proteins as part of protein maturation or as a regulatory modification. Accordingly, evidence for protein termini in TopFIND is inferred from other databases such as ENSEMBL transcripts, TISdb for alternative translation initiation, MEROPS for protein cleavage by proteases, and UniProt for canonical and protein isoform start sites.
Scripps Institution of Oceanography (SIO) Explorer includes five federated collections: SIO Cruises, SIO Historic Photographs, the Seamounts, Marine Geological Samples, and the Educator's Collection, all part of the US National Science Digital Library (NSDL). Each collection represents a unique resource of irreplaceable scientific research. The effort is a collaboration among researchers at Scripps, computer scientists from the San Diego Supercomputer Center (SDSC), and archivists and librarians from the UCSD Libraries. In 2005 SIOExplorer was extended to the Woods Hole Oceanographic Institution through the Multi-Institution Scalable Digital Archiving project, funded through the joint NSF/Library of Congress digital archiving and preservation program, creating a harvesting methodology and a prototype collection of cruises, Alvin submersible dives, and Jason ROV lowerings.
MTD is focused on mammalian transcriptomes; the current version contains data from humans, mice, rats, and pigs. Its core features allow browsing genes by their neighboring genomic coordinates or joint KEGG pathway and provide expression information on exons, transcripts, and genes by integrating them into a genome browser. MTD also introduces a novel nomenclature for each transcript that reflects its genomic position and transcriptional features.
GSA is a data repository specialized in archiving raw sequence reads. It supports data generated on a variety of sequencing platforms, ranging from Sanger sequencing machines to single-cell sequencing machines, and provides data storage and sharing services free of charge to worldwide scientific communities. In addition to raw sequencing data, GSA also accommodates secondary analysis files in acceptable formats (such as BAM and VCF). Its user-friendly web interfaces simplify data entry, and submitted data are organized into two parts, Metadata and File: the former is further divided into BioProject, BioSample, Experiment, and Run, while the latter contains the raw sequence reads.
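A minimal sketch of how the submission hierarchy described above might be modeled in code is shown below; the class names, fields, and example accessions are illustrative assumptions, not GSA's actual schema or accession formats.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the GSA organization described above:
# Metadata (BioProject > BioSample > Experiment > Run) plus raw files.
# All names, fields, and accessions are assumptions for illustration only.

@dataclass
class Run:
    accession: str
    files: List[str] = field(default_factory=list)  # e.g. FASTQ, BAM, VCF

@dataclass
class Experiment:
    accession: str
    platform: str
    runs: List[Run] = field(default_factory=list)

@dataclass
class BioSample:
    accession: str
    organism: str
    experiments: List[Experiment] = field(default_factory=list)

@dataclass
class BioProject:
    accession: str
    title: str
    samples: List[BioSample] = field(default_factory=list)

project = BioProject(
    accession="PRJ_EXAMPLE",
    title="Example resequencing project",
    samples=[BioSample(
        accession="SAM_EXAMPLE",
        organism="Homo sapiens",
        experiments=[Experiment(
            accession="EXP_EXAMPLE",
            platform="Illumina NovaSeq 6000",
            runs=[Run(accession="RUN_EXAMPLE", files=["sample_R1.fastq.gz"])],
        )],
    )],
)
print(project.samples[0].experiments[0].runs[0].files)
```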
ArrayExpress is one of the major international repositories for high-throughput functional genomics data from both microarray and high-throughput sequencing studies, many of which are supported by peer-reviewed publications. Data sets are submitted directly to ArrayExpress and curated by a team of specialist biological curators. In the past (until 2018) datasets from the NCBI Gene Expression Omnibus database were imported on a weekly basis. Data is collected to MIAME and MINSEQE standards.
Pennsieve is a cloud-based scientific data management platform focused on integrating complex datasets, fostering collaboration, and publishing scientific data according to the FAIR principles of data sharing. The platform enables individual labs, consortia, or inter-institutional projects to manage, share, and curate data in a secure cloud-based environment and to integrate complex metadata associated with scientific files into a high-quality interconnected data ecosystem. It serves as the backend for a number of public repositories, including the NIH SPARC Portal and Pennsieve Discover. It supports flexible metadata schemas and a large number of scientific file formats and modalities.
Earthdata, powered by EOSDIS (Earth Observing System Data and Information System), is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from satellites, aircraft, field measurements, and various other programs. EOSDIS uses the metadata and service discovery tool Earthdata Search (https://search.earthdata.nasa.gov/search). The capabilities of EOSDIS constituting the EOSDIS Science Operations are managed by NASA's Earth Science Data and Information System (ESDIS) Project. These capabilities include the generation of higher-level (Level 1-4) science data products for several satellite missions and the archiving and distribution of data products from Earth observation satellite missions as well as aircraft and field measurement campaigns. The EOSDIS science operations are performed within a distributed system of many interconnected nodes: Science Investigator-led Processing Systems (SIPS) and distributed, discipline-specific Earth science Distributed Active Archive Centers (DAACs) with specific responsibilities for the production, archiving, and distribution of Earth science data products. The DAACs serve a large and diverse user community by providing capabilities to search and access science data products and specialized services.
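As a sketch of programmatic discovery against these holdings, the snippet below queries the Common Metadata Repository (CMR) search endpoint that backs Earthdata Search; the endpoint URL, parameters, and response fields are assumptions based on the public CMR API and should be checked against current EOSDIS documentation.

```python
import requests

# Sketch: query NASA's Common Metadata Repository (CMR), the catalogue
# behind Earthdata Search, for collections matching a keyword.
# Endpoint and response layout are assumptions; verify against the
# current EOSDIS/CMR documentation before relying on them.
CMR_COLLECTIONS = "https://cmr.earthdata.nasa.gov/search/collections.json"

resp = requests.get(
    CMR_COLLECTIONS,
    params={"keyword": "sea ice concentration", "page_size": 5},
    timeout=30,
)
resp.raise_for_status()

for entry in resp.json().get("feed", {}).get("entry", []):
    # Each entry describes one collection held by an EOSDIS DAAC.
    print(entry.get("dataset_id"), "-", entry.get("data_center"))
```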
IMGT/GENE-DB is the IMGT genome database for IG and TR genes from human, mouse and other vertebrates. IMGT/GENE-DB provides a full characterization of the genes and of their alleles: IMGT gene name and definition, chromosomal localization, number of alleles, and for each allele, the IMGT allele functionality, and the IMGT reference sequences and other sequences from the literature. IMGT/GENE-DB allele reference sequences are available in FASTA format (nucleotide and amino acid sequences with IMGT gaps according to the IMGT unique numbering, or without gaps).
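To illustrate working with the FASTA downloads mentioned above, here is a small plain-Python sketch that reads FASTA records and strips gap characters; the local filename 'imgt_alleles.fasta', the '.' gap convention of the IMGT unique numbering, and the pipe-delimited header layout are assumptions made for illustration.

```python
def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)

# 'imgt_alleles.fasta' is a placeholder name for a locally downloaded
# IMGT/GENE-DB allele reference file; gaps from the IMGT unique numbering
# are assumed to be '.' characters and are removed here.
for header, seq in read_fasta("imgt_alleles.fasta"):
    ungapped = seq.replace(".", "")
    # Pipe-delimited headers with the allele name in the second field are
    # an assumption; fall back to the full header otherwise.
    name = header.split("|")[1] if "|" in header else header
    print(name, len(ungapped))
```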
IMGT/mAb-DB provides a unique, expert-curated resource on monoclonal antibodies (mAbs) with diagnostic or therapeutic indications, fusion proteins for immune applications (FPIA), composite proteins for clinical applications (CPCA), and related proteins of the immune system (RPI) with clinical indications.
CottonGen is a new cotton community genomics, genetics and breeding database being developed to enable basic, translational and applied research in cotton. It is being built using the open-source Tripal database infrastructure. CottonGen consolidates and expands the data from CottonDB and the Cotton Marker Database, providing enhanced tools for easy querying, visualizing and downloading research data.
The Rolling Deck to Repository (R2R) Program provides a comprehensive shore-side data management program for a suite of routine underway geophysical, water column, and atmospheric sensor data collected on vessels of the academic research fleet. R2R also ensures data are submitted to the NOAA National Centers for Environmental Information for long-term preservation.
The NSIDC Distributed Active Archive Center (DAAC) processes, archives, documents, and distributes data from NASA's past and current Earth Observing System (EOS) satellites and field measurement programs. The NSIDC DAAC focuses on the study of the cryosphere. The NSIDC DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) Data Centers.
The International Centre for Global Earth Models (ICGEM) collects and distributes historical and current global gravity field models of the Earth and offers a calculation service for derived quantities. In particular, its tasks include: collecting and archiving all existing global gravity field models; providing a web interface for accessing the models; web-based visualization of the models, their differences, and their variation in time; a web-based service for calculating different functionals of the models; and a website with tutorials on spherical harmonics and the theory behind the calculation service. Since 2016, ICGEM has also provided a Digital Object Identifier (DOI) for each model's data set (the coefficients).
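For context on what the distributed coefficients represent, global gravity field models are conventionally written as a spherical harmonic expansion of the gravitational potential (standard textbook form, not a statement of ICGEM's exact conventions):

```latex
V(r,\varphi,\lambda) = \frac{GM}{r}\sum_{n=0}^{N_{\max}}\left(\frac{R}{r}\right)^{n}
\sum_{m=0}^{n}\bar{P}_{nm}(\sin\varphi)\left[\bar{C}_{nm}\cos(m\lambda)+\bar{S}_{nm}\sin(m\lambda)\right]
```

Here GM is the geocentric gravitational constant, R the reference radius, P_nm the fully normalised associated Legendre functions, and C_nm, S_nm the model coefficients; functionals such as geoid heights or gravity anomalies are derived from this expansion.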
GNPS is a web-based mass spectrometry ecosystem that aims to be an open-access knowledge base for community-wide organization and sharing of raw, processed, or identified tandem mass spectrometry (MS/MS) data. GNPS aids in identification and discovery throughout the entire life cycle of data, from initial data acquisition and analysis to post-publication.