Filter

  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
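To make the operators above concrete, here is a small sketch that builds one example query string per operator. The operator syntax is taken from the help text above; the search URL is a hypothetical placeholder, not a documented endpoint of this registry.

```python
# Illustrative only: one example query string per search operator listed above.
# search_url is a hypothetical placeholder, not a documented endpoint.
from urllib.parse import urlencode

examples = {
    "wildcard":    'climat*',                  # matches climate, climatology, ...
    "phrase":      '"air-sea flux"',           # exact phrase
    "and":         'genomics + human',         # both terms (AND is the default)
    "or":          'glacier | ice',            # either term
    "not":         'census - agriculture',     # exclude a term
    "grouping":    '(epitope | antibody) + human',
    "fuzziness":   'genomcs~1',                # tolerate edit distance 1 on a word
    "phrase slop": '"functional genomics"~2',  # allow up to 2 intervening words
}

search_url = "https://example.org/search"      # placeholder
for name, query in examples.items():
    print(f"{name:>12}  {search_url}?{urlencode({'query': query})}")
```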
Found 128 result(s)
MEMENTO aims to become a valuable tool for identifying regions of the world ocean that should be targeted in future work to improve the quality of air-sea flux estimates.
The mission of the World Data Center for Climate (WDCC) is to provide central support for the German and European climate research community. The WDCC is a member of the ISC's World Data System. Emphasis is on the development and implementation of best-practice methods for Earth system data management. Data for and from climate research are collected, stored and disseminated. The WDCC is restricted to data products. Cooperations exist with thematically related data centres in, e.g., earth observation, meteorology, oceanography, paleoclimate and environmental sciences. The services of the WDCC are also available to external users at cost price. A special service for the direct integration of research data into scientific publications has been developed. The editorial process at the WDCC ensures the quality of metadata and research data in collaboration with the data producers. A citation code and a digital object identifier (DOI) are provided and registered together with citation information at the DOI registration agency DataCite.
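Because WDCC datasets are registered with DataCite, their citation metadata can be retrieved programmatically. A minimal sketch is shown below, assuming the public DataCite REST API; the DOI used is a placeholder rather than a real WDCC record.

```python
# Minimal sketch: fetch citation metadata for a DataCite-registered DOI.
# The DOI below is a placeholder; substitute the DOI of an actual WDCC dataset.
import requests

doi = "10.0000/example-wdcc-dataset"  # placeholder, not a real identifier
resp = requests.get(f"https://api.datacite.org/dois/{doi}",
                    headers={"Accept": "application/vnd.api+json"})

if resp.ok:
    attrs = resp.json()["data"]["attributes"]
    # Field names follow DataCite's JSON:API schema; .get() keeps this tolerant.
    print(attrs.get("titles"), attrs.get("publisher"), attrs.get("publicationYear"))
else:
    print(f"Lookup failed: HTTP {resp.status_code}")
```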
The NCBI Short Genetic Variations database, commonly known as dbSNP, catalogs short variations in nucleotide sequences from a wide range of organisms. These variations include single nucleotide variations, short nucleotide insertions and deletions, short tandem repeats and microsatellites. Short genetic variations may be common, thus representing true polymorphisms, or they may be rare. Some rare human entries have additional information associated with them, including disease associations, genotype information and allele origin, as some variations are somatic rather than germline events. ***NCBI will phase out support for non-human organism data in dbSNP and dbVar beginning on September 1, 2017***
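dbSNP records can also be queried programmatically. The sketch below assumes NCBI's standard E-utilities esummary endpoint and uses an arbitrary reference SNP number purely for illustration; field names in the returned summary may differ between releases.

```python
# Minimal sketch: retrieve a dbSNP record summary via NCBI E-utilities (esummary).
# db=snp takes the numeric part of an rs identifier (e.g. rs7412 -> "7412").
import requests

ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
rs_number = "7412"  # example rs ID, chosen only for illustration

resp = requests.get(ESUMMARY, params={"db": "snp", "id": rs_number, "retmode": "json"})
resp.raise_for_status()

record = resp.json()["result"][rs_number]
# Print whichever summary fields are present; exact keys may vary.
for key in ("snp_class", "chrpos", "clinical_significance"):
    print(key, "->", record.get(key))
```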
The Agricultural and Environmental Data Archive (AEDA) is the direct result of a project managed by the Freshwater Biological Association in partnership with the Centre for e-Research at King's College London, and funded by the Department for Environment, Food & Rural Affairs (Defra). This project ran from January 2011 until December 2014 and was called the DTC Archive Project, because it was initially related to the Demonstration Test Catchments platform developed by Defra. The archive was also designed to hold data from the GHG R&D Platform (www.ghgplatform.org.uk). After the DTC Archive Project was completed, the finished archive was renamed AEDA to reflect its broader remit to archive data from any and all agricultural and environmental research activities.
GeneWeaver combines cross-species data and gene entity integration, scalable hierarchical analysis of user data with a community-built and curated data archive of gene sets and gene networks, and tools for data-driven comparison of user-defined biological, behavioral and disease concepts. GeneWeaver allows users to integrate gene sets across species, tissue and experimental platform. It differs from conventional gene set over-representation analysis tools in that it allows users to evaluate intersections among all combinations of a collection of gene sets, including, but not limited to, annotations to controlled vocabularies. There are numerous applications of this approach. Sets can be stored, shared and compared privately, among user-defined groups of investigators, and across all users.
The Indian Census is the largest single source of a variety of statistical information on different characteristics of the people of India. With a history of more than 130 years, this reliable, time-tested exercise has been bringing out a veritable wealth of statistics every 10 years, beginning from 1872 when the first census was conducted in India non-synchronously in different parts. To scholars and researchers in demography, economics, anthropology, sociology, statistics and many other disciplines, the Indian Census has been a fascinating source of data. The rich diversity of the people of India is truly brought out by the decennial census, which has become one of the tools to understand and study India. The responsibility of conducting the decennial census rests with the Office of the Registrar General and Census Commissioner, India, under the Ministry of Home Affairs, Government of India. It may be of historical interest that, although the population census of India is a major administrative function, the Census Organisation was set up on an ad hoc basis for each census until the 1951 Census. The Census Act was enacted in 1948 to provide for the scheme of conducting the population census along with the duties and responsibilities of census officers. The Government of India decided in May 1949 to initiate steps for developing the systematic collection of statistics on the size of the population, its growth, etc., and established an organisation in the Ministry of Home Affairs under the Registrar General and ex-officio Census Commissioner, India. This organisation was made responsible for generating data on population statistics, including vital statistics and the census. Later, this office was also entrusted with the responsibility of implementing the Registration of Births and Deaths Act, 1969 in the country.
The National Genomics Data Center (NGDC), part of the China National Center for Bioinformation (CNCB), advances life & health sciences by providing open access to a suite of resources, with the aim to translate big data into big discoveries and support worldwide activities in both academia and industry.
Provides quick, uncluttered access to information about Heliophysics research data that have been described with SPASE resource descriptions.
The Northwest Territories (NWT) Species Infobase is a searchable catalogue of referenced information on NWT species. It includes the information on habitat, distribution, population numbers, trends and threats used to rank the general status of NWT species.
A harmonized, indexed, and searchable collection of large-scale human functional genomics (FG) data with extensive metadata. It provides a scalable, unified way to easily access massive FG and annotation data collections curated from large-scale genomic studies, with direct integration (API) into custom and high-throughput genetic and genomic analysis workflows.
GESIS preserves (mainly quantitative) social research data to make it available to the scientific research community. The data is described in a standardized way, secured for the long term, provided with a permanent identifier (DOI), and can be easily found and reused through browser-optimized catalogs (https://search.gesis.org/).
As 3D and reality capture strategies for heritage documentation become more widespread and available, there has emerged a growing need to assist with guiding and facilitating accessibility to data, while maintaining scientific rigor, cultural and ethical sensitivity, discoverability, and archival standards. In response to these areas of need, the Open Heritage 3D Alliance (OHA) has developed as an advisory group governing the Open Heritage 3D initiative. This collaborative advisory group comprises some of the earliest adopters of 3D heritage documentation technologies and offers first-hand guidance on best practices in data management, sharing, and dissemination approaches for 3D cultural heritage projects. The founding members of the OHA consist of experts and organizational leaders from CyArk, Historic Environment Scotland, and the University of South Florida Libraries, who together maintain significant repositories of legacy and ongoing 3D research and documentation projects. These groups offer unique insight into best practices for 3D data capture and sharing, and have also come together around concerns about standards, formats, approach, ethics, and archive commitment. Together, the OHA has begun the journey to provide open access to cultural heritage 3D data, while maintaining integrity, security, and standards relating to discoverable dissemination. The OHA will work to provide democratized access to primary heritage 3D data submitted by donors and organizations, and will help to facilitate an operational platform, archive, and organization of resources into the future.
MicrosporidiaDB belongs to the EuPathDB family of databases and is an integrated genomic and functional genomic database for the phylum Microsporidia. In its first iteration (released in early 2010), MicrosporidiaDB contains the genomes of two Encephalitozoon species (see below). MicrosporidiaDB integrates whole genome sequence and annotation and will rapidly expand to include experimental data and environmental isolate sequences provided by community researchers. The database includes supplemental bioinformatics analyses and a web interface for data-mining.
CryptoDB is an integrated genomic and functional genomic database for the parasite Cryptosporidium and other related genera. CryptoDB integrates whole genome sequence and annotation along with experimental data and environmental isolate sequences provided by community researchers. The database includes supplemental bioinformatics analyses and a web interface for data-mining.
SESAR, the System for Earth Sample Registration, is a global registry for specimens (rocks, sediments, minerals, fossils, fluids, gas) and related sampling features from our natural environment. SESAR's objective is to overcome the problem of ambiguous sample naming in the Earth sciences. SESAR maintains a database of sample records that are contributed by its users. Each sample registered with SESAR is assigned an International Geo Sample Number (IGSN) to ensure its globally unique identification.
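As a rough illustration of how a registered sample can be looked up, the sketch below resolves an IGSN through the igsn.org resolver and reports where the redirect lands. Both the resolver URL pattern and the IGSN value are assumptions for illustration; consult SESAR's documentation for its supported lookup services.

```python
# Illustration only: resolve an IGSN to its landing page via the igsn.org
# resolver (assumed URL pattern) and report the final redirect target.
# The IGSN below is a placeholder, not a verified sample identifier.
import requests

igsn = "ABC123456"  # placeholder IGSN
resp = requests.get(f"https://igsn.org/{igsn}", allow_redirects=True, timeout=30)

print("Resolved to:", resp.url)
print("HTTP status:", resp.status_code)
```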
NeuroMorpho.Org is a centrally curated inventory of digitally reconstructed neurons associated with peer-reviewed publications. It contains contributions from over 80 laboratories worldwide and is continuously updated as new morphological reconstructions are collected, published, and shared. To date, NeuroMorpho.Org is the largest collection of publicly accessible 3D neuronal reconstructions and associated metadata, which can be used for detailed single-cell simulations.
ArrayExpress is one of the major international repositories for high-throughput functional genomics data from both microarray and high-throughput sequencing studies, many of which are supported by peer-reviewed publications. Data sets are submitted directly to ArrayExpress and curated by a team of specialist biological curators. In the past (until 2018) datasets from the NCBI Gene Expression Omnibus database were imported on a weekly basis. Data is collected to MIAME and MINSEQE standards.
Codex Sinaiticus is one of the most important books in the world. Handwritten well over 1600 years ago, the manuscript contains the Christian Bible in Greek, including the oldest complete copy of the New Testament. The Codex Sinaiticus Project is an international collaboration to reunite the entire manuscript in digital form and make it accessible to a global audience for the first time. Drawing on the expertise of leading scholars, conservators and curators, the Project gives everyone the opportunity to connect directly with this famous manuscript.
The Neuroscience Information Framework is a dynamic index of data, materials, and tools. Please note, we do not accept direct data deposits, but if you wish to make your data repository or database available through our search, please contact us. An initiative of the NIH Blueprint for Neuroscience Research, NIF advances neuroscience research by enabling discovery and access to public research data and tools worldwide through an open source, networked environment.
The CLARIN Centre at the University of Copenhagen, Denmark, hosts and manages a data repository (CLARIN-DK-UCPH Repository), which is part of a research infrastructure for humanities and social sciences financed by the University of Copenhagen. The CLARIN-DK-UCPH Repository provides easy and sustainable access for scholars in the humanities and social sciences to digital language data (in written, spoken, video or multimodal form) and provides advanced tools for discovering, exploring, exploiting, annotating, and analyzing data. CLARIN-DK also shares knowledge on Danish language technology and resources and is the Danish node in the European Research Infrastructure Consortium, CLARIN ERIC.
The Electron Microscopy Data Bank (EMDB) is a public repository for electron microscopy density maps of macromolecular complexes and subcellular structures. It covers a variety of techniques, including single-particle analysis, electron tomography, and electron (2D) crystallography.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
NatureServe and its network of member programs are a leading source for reliable scientific information about species and ecosystems of the Western Hemisphere. This site serves as a portal for accessing several types of publicly available biodiversity data.
IEDB offers easy searching of experimental data characterizing antibody and T cell epitopes studied in humans, non-human primates, and other animal species. Epitopes involved in infectious disease, allergy, autoimmunity, and transplant are included. The IEDB also hosts tools to assist in the prediction and analysis of B cell and T cell epitopes.