  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
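The operators above can be combined into query strings. A minimal sketch in Python, URL-encoding a few sample queries before use in a GET request (the operator syntax follows the list above; no specific search endpoint is assumed):

```python
from urllib.parse import quote_plus

# Sample queries using the operators listed above (Elasticsearch-style
# query-string syntax). URL-encode each one before placing it in a GET
# query string.
queries = [
    'genom*',                       # wildcard: genome, genomics, ...
    '"climate data"',               # exact phrase
    'protein +interaction',         # AND (the default)
    'glacier | ice',                # OR
    'repository -software',         # NOT
    '(genomic | proteomic) +data',  # parentheses set precedence
    'protien~1',                    # fuzzy word, edit distance 1
    '"data repository"~2',          # phrase with slop 2
]

for q in queries:
    print(quote_plus(q))
```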
Found 36 result(s)
The Deep Blue Data repository is a means for University of Michigan researchers to make their research data openly accessible to anyone in the world, provided they meet collections criteria. Submitted data sets undergo a curation review by librarians to support discovery, understanding, and reuse of the data.
The Open Archive for Miscellaneous Data (OMIX) database is a data repository developed and maintained by the National Genomics Data Center (NGDC). The database specializes in descriptions of biological studies, including genomic, proteomic, and metabolomic studies, as well as data that do not fit the structured archives of other NGDC databases. It accepts various types of studies described via a simple format and enables researchers to upload supplementary information and link to it from a publication.
Brain Image Library (BIL) is an NIH-funded public resource serving the neuroscience community by providing a persistent centralized repository for brain microscopy data. Data scope of the BIL archive includes whole brain microscopy image datasets and their accompanying secondary data such as neuron morphologies, targeted microscope-enabled experiments including connectivity between cells and spatial transcriptomics, and other historical collections of value to the community. The BIL Analysis Ecosystem provides an integrated computational and visualization system to explore, visualize, and access BIL data without having to download it.
The mission of the GO Consortium is to develop a comprehensive, computational model of biological systems, ranging from the molecular to the organism level, across the multiplicity of species in the tree of life. The Gene Ontology (GO) knowledgebase is the world’s largest source of information on the functions of genes. This knowledge is both human-readable and machine-readable, and is a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research.
A community platform to share data, publish data with a DOI, and get citations, advancing spinal cord injury research through the sharing of data from basic and clinical research.
STRING is a database of known and predicted protein interactions. The interactions include direct (physical) and indirect (functional) associations; they are derived from four sources: genomic context, high-throughput experiments, (conserved) co-expression, and previous knowledge. STRING quantitatively integrates interaction data from these sources for a large number of organisms, and transfers information between these organisms where applicable.
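STRING also exposes its interaction data through a public REST API. A hedged sketch of building a request URL for its "network" method (the path layout and parameter names follow STRING's published API documentation, but verify the current version at string-db.org before relying on them):

```python
from urllib.parse import urlencode

def string_network_url(identifiers, species=9606, fmt="tsv"):
    """Build a request URL for STRING's 'network' API method.

    identifiers: list of protein names or IDs; species: NCBI taxon id
    (9606 = human); fmt: response format such as "tsv" or "json".
    """
    base = f"https://string-db.org/api/{fmt}/network"
    params = {
        # STRING expects multiple identifiers separated by a carriage
        # return, which urlencode() percent-encodes as %0D.
        "identifiers": "\r".join(identifiers),
        "species": species,
    }
    return f"{base}?{urlencode(params)}"

print(string_network_url(["TP53", "EGFR"]))
```

Fetching the URL (e.g. with urllib or requests) returns the interaction network in the requested format; only URL construction is shown here.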
SSHADE is an interoperable Solid Spectroscopy database infrastructure (www.sshade.eu) providing spectral and photometric data obtained by various spectroscopic techniques over the whole electromagnetic spectrum from gamma to radio wavelengths, through X, UV, Vis, IR, and mm ranges. The measured samples include ices, minerals, rocks, organic and carbonaceous materials, and also liquids. They are either synthesized in the laboratory, natural terrestrial analogs collected or measured in the field, or extraterrestrial samples collected on Earth or on planetary bodies: (micro-)meteorites, IDPs, lunar soils, and more. SSHADE contains a set of specialized databases from various research groups, mostly from Europe. It was developed under the H2020 European programs "Europlanet 2020 RI" and now "Europlanet 2024 RI" with the help of OSUG, CNRS/INSU, IPAG, and CNES. It is hosted by the OSUG data center / Université Grenoble Alpes, France. It can also be searched through the Virtual European Solar and Planetary Access (VESPA) virtual observatory.
The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc).
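UniProt entries can be retrieved programmatically via its REST API at rest.uniprot.org. A hedged sketch of constructing entry URLs (the URL pattern follows UniProt's documented REST interface; the accession used is illustrative):

```python
def uniprotkb_url(accession: str, fmt: str = "fasta") -> str:
    """Return the REST URL for a UniProtKB entry.

    fmt selects the response format, e.g. "fasta", "json", or "txt",
    per the UniProt REST documentation.
    """
    return f"https://rest.uniprot.org/uniprotkb/{accession}.{fmt}"

print(uniprotkb_url("P04637"))          # human p53, FASTA sequence
print(uniprotkb_url("P04637", "json"))  # same entry, full JSON record
```

A GET request to either URL returns the entry; the same pattern works for UniRef and UniParc under their respective path segments.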
The information in the Mitelman Database of Chromosome Aberrations and Gene Fusions in Cancer relates cytogenetic changes and their genomic consequences, in particular gene fusions, to tumor characteristics, based either on individual cases or associations. All the data have been manually culled from the literature by Felix Mitelman in collaboration with Bertil Johansson and Fredrik Mertens.
The information system Graffiti in Germany (INGRID) is a cooperation project between the linguistics department at the University of Paderborn and the art history department at the Karlsruhe Institute of Technology (KIT). As part of the joint project, graffiti image collections are compiled, stored in an image database, and made available for scientific use. At present, more than 100,000 graffiti from the years 1983 to 2018 are recorded from major German cities, including Cologne, Mannheim, and Munich.
The Perovskite Database Project aims to make all perovskite device data, both past and future, available in a form adhering to the FAIR data principles, i.e. findable, accessible, interoperable, and reusable.
A research data repository for the education and developmental sciences.
The Energy Data Centre holds information relating to energy research, with a focus on the UK. It provides a research data catalogue, information on publications resulting from the UK Energy Research Centre and the Energy Technologies Institute, and details of energy-research-related grants.
GENCODE is a scientific project in genome research and part of the ENCODE (ENCyclopedia Of DNA Elements) scale-up project. The GENCODE consortium was initially formed as part of the pilot phase of the ENCODE project to identify and map all protein-coding genes within the ENCODE regions (approx. 1% of the human genome). Given the initial success of the project, GENCODE now aims to build an “Encyclopedia of genes and gene variants” by identifying all gene features in the human and mouse genomes using a combination of computational analysis, manual annotation, and experimental validation, and annotating all evidence-based gene features in the entire human genome at high accuracy.
MTD is focused on mammalian transcriptomes; the current version contains data from humans, mice, rats, and pigs. Among its core features, MTD allows genes to be browsed by neighboring genomic coordinates or shared KEGG pathway, and provides expression information on exons, transcripts, and genes by integrating them into a genome browser. Its developers also created a novel nomenclature for each transcript that reflects its genomic position and transcriptional features.
The mission of the World Data Center for Climate (WDCC) is to provide central support for the German and European climate research community. The WDCC is a member of the ISC's World Data System. Emphasis is on the development and implementation of best-practice methods for Earth system data management. Data for and from climate research are collected, stored, and disseminated. The WDCC is restricted to data products. It cooperates with thematically related data centres in, e.g., earth observation, meteorology, oceanography, paleoclimatology, and environmental sciences. The services of the WDCC are also available to external users at cost price. A special service for the direct integration of research data into scientific publications has been developed. The editorial process at the WDCC ensures the quality of metadata and research data in collaboration with the data producers. A citation code and a digital object identifier (DOI) are provided and registered together with citation information at the DOI registration agency DataCite.
The Agricultural and Environmental Data Archive (AEDA) is the direct result of a project managed by the Freshwater Biological Association in partnership with the Centre for e-Research at King's College London, and funded by the Department for Environment, Food & Rural Affairs (Defra). The project, called the DTC Archive Project because it was initially related to the Demonstration Test Catchments platform developed by Defra, ran from January 2011 until December 2014. The archive was also designed to hold data from the GHG R&D Platform (www.ghgplatform.org.uk). After the DTC Archive Project was completed, the finished archive was renamed AEDA to reflect its broader remit to archive data from any and all agricultural and environmental research activities.
The National Genomics Data Center (NGDC), part of the China National Center for Bioinformation (CNCB), advances life & health sciences by providing open access to a suite of resources, with the aim to translate big data into big discoveries and support worldwide activities in both academia and industry.
A harmonized, indexed, searchable large-scale human functional genomics (FG) data collection with extensive metadata. It provides a scalable, unified way to access massive FG and annotation data collections curated from large-scale genomic studies, with direct integration (via API) into custom and high-throughput genetic and genomic analysis workflows.
SESAR, the System for Earth Sample Registration, is a global registry for specimens (rocks, sediments, minerals, fossils, fluids, gas) and related sampling features from our natural environment. SESAR's objective is to overcome the problem of ambiguous sample naming in the Earth sciences. SESAR maintains a database of sample records that are contributed by its users. Each sample registered with SESAR is assigned an International Geo Sample Number (IGSN) to ensure its globally unique identification.
Codex Sinaiticus is one of the most important books in the world. Handwritten well over 1600 years ago, the manuscript contains the Christian Bible in Greek, including the oldest complete copy of the New Testament. The Codex Sinaiticus Project is an international collaboration to reunite the entire manuscript in digital form and make it accessible to a global audience for the first time. Drawing on the expertise of leading scholars, conservators and curators, the Project gives everyone the opportunity to connect directly with this famous manuscript.
The CLARIN Centre at the University of Copenhagen, Denmark, hosts and manages a data repository (CLARIN-DK-UCPH Repository), which is part of a research infrastructure for humanities and social sciences financed by the University of Copenhagen. The CLARIN-DK-UCPH Repository provides easy and sustainable access for scholars in the humanities and social sciences to digital language data (in written, spoken, video or multimodal form) and provides advanced tools for discovering, exploring, exploiting, annotating, and analyzing data. CLARIN-DK also shares knowledge on Danish language technology and resources and is the Danish node in the European Research Infrastructure Consortium, CLARIN ERIC.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
Provides quick, uncluttered access to information about Heliophysics research data that have been described with SPASE resource descriptions.