  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
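These operators can be combined in a single query. A few illustrative examples (the search terms are hypothetical, not actual index keywords):

  • climat* — wildcard: matches climate, climatology, etc.
  • "research data" +repository — exact phrase plus a required term
  • (geology | geophysics) -marine — OR grouping combined with NOT
  • oceanografy~1 — fuzzy match within edit distance 1 (finds "oceanography")
  • "data term long"~2 — phrase match allowing a slop of 2 word positions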
Found 43 result(s)
The Carleton University Data Repository Dataverse is the research data repository for Carleton University. It is managed by Data Services in the MacOdrum Library. The repository also houses the MacOdrum Library Dataverse Collection, which contains numerous public opinion polls.
IsoArcH is an open-access isotope web database for bioarchaeological samples from prehistoric and historical periods all over the world. With 40,000+ isotope-related data points obtained on 13,000+ specimens (i.e., humans, animals, plants and organic residues) from 500+ archaeological sites, IsoArcH is now one of the world's largest repositories for isotopic data and metadata from archaeological contexts. IsoArcH enables big-data initiatives and also highlights research gaps in certain regions or time periods. Among other things, it supports the creation of sound baselines, the undertaking of multi-scale analyses, and the realization of extensive studies and syntheses on various research issues such as paleodiet, food production, resource management, migration, paleoclimate and paleoenvironmental change.
Galaxies, made up of billions of stars like our Sun, are the beacons that light up the structure of even the most distant regions in space. Not all galaxies are alike, however. They come in very different shapes and have very different properties; they may be large or small, old or young, red or blue, regular or confused, luminous or faint, dusty or gas-poor, rotating or static, round or disky, and they live either in splendid isolation or in clusters. In other words, the universe contains a very colourful and diverse zoo of galaxies. For almost a century, astronomers have been discussing how galaxies should be classified and how they relate to each other in an attempt to attack the big question of how galaxies form. Galaxy Zoo (Lintott et al. 2008, 2011) pioneered a novel method for performing large-scale visual classifications of survey datasets. This webpage allows anyone to download the resulting GZ classifications of galaxies in the project.
Genomic Expression Archive (GEA) is a public database of functional genomics data such as gene expression, epigenetics and genotyping SNP arrays. Both microarray- and sequence-based data are accepted in the MAGE-TAB format, in compliance with the MIAME and MINSEQE guidelines, respectively. GEA issues accession numbers: E-GEAD-n for experiments and A-GEAD-n for array designs. Data exchange between GEA and EBI ArrayExpress is planned.
UCLA Library is adopting Dataverse, the open source web application designed for sharing, preserving and using research data. UCLA Dataverse will allow data, text, software, scripts, data visualizations, etc., created from research projects at UCLA to be made publicly available, widely discoverable, linkable and, ultimately, reusable.
The International Human Epigenome Consortium (IHEC) makes available comprehensive sets of reference epigenomes relevant to health and disease. The IHEC Data Portal can be used to view, search and download the data already released by the different IHEC-associated projects.
University of Alberta Dataverse is a service provided by the University of Alberta Library to help researchers publish, analyze, distribute, and preserve data and datasets. It is open for University of Alberta-affiliated researchers to deposit data.
SHARE - Stations at High Altitude for Research on the Environment - is an integrated project for environmental monitoring and research in the mountain areas of Europe, Asia, Africa and South America. It responds to the call by international and intergovernmental institutions for improved environmental research and policies for adaptation to the effects of climate change.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
The mission of the World Data Center for Climate (WDCC) is to provide central support for the German and European climate research community. The WDCC is a member of the ISC's World Data System. Emphasis is on the development and implementation of best-practice methods for Earth system data management. Data for and from climate research are collected, stored and disseminated. The WDCC is restricted to data products. Cooperation exists with thematically related data centres in, e.g., Earth observation, meteorology, oceanography, paleoclimatology and the environmental sciences. The services of the WDCC are also available to external users at cost price. A special service for the direct integration of research data into scientific publications has been developed. The editorial process at the WDCC ensures the quality of metadata and research data in collaboration with the data producers. A citation code and a digital identifier (DOI) are provided and registered, together with citation information, at the DOI registration agency DataCite.
The Bremen Core Repository (BCR) holds International Ocean Discovery Program (IODP), Integrated Ocean Drilling Program (IODP), Ocean Drilling Program (ODP) and Deep Sea Drilling Project (DSDP) cores from the Atlantic Ocean, the Mediterranean and Black Seas, and the Arctic Ocean. It is operated at the University of Bremen within the framework of the German participation in IODP and is one of three IODP repositories (besides the Gulf Coast Repository (GCR) in College Station, TX, and the Kochi Core Center (KCC), Japan). One of the scientific goals of IODP is research on the deep biosphere and the subseafloor ocean. IODP has deep-frozen microbiological samples from the subseafloor available for interested researchers and will continue to collect and preserve geomicrobiology samples for future research.
The Biodiversity Research Program (PPBio) was created in 2004 with the aims of furthering biodiversity studies in Brazil, decentralizing scientific production from already-developed academic centers, integrating research activities and disseminating results across a variety of purposes, including environmental management and education. PPBio contributes its data to the DataONE network as a member node: https://search.dataone.org/#profile/PPBIO
The Analytical Geomagnetic Data Center of the Trans-Regional INTERMAGNET Segment is operated by the Geophysical Center of the Russian Academy of Sciences (GC RAS). Geomagnetic data are transmitted from observatories and stations located in Russia and neighbouring countries. The Center also provides access to spaceborne data products. The MAGNUS hardware-software system underlies the operation of the Center. Its particular feature is the automated real-time recognition of artificial (anthropogenic) disturbances in incoming data. Based on a fuzzy-logic approach, this quality-control service assists the data experts who manually prepare definitive magnetograms from preliminary records. The MAGNUS system also performs on-the-fly multi-criteria estimation of geomagnetic activity using several indicators and provides online tools for modelling electromagnetic parameters in near-Earth space. The collected geomagnetic data are stored in a relational database management system; the database holds both 1-minute and 1-second data, together with the results of anthropogenic and natural disturbance recognition.
STRENDA DB is a storage and search platform, supported by the Beilstein-Institut, that incorporates the STRENDA Guidelines in a user-friendly, web-based system. If you are an author preparing a manuscript containing functional enzymology data, STRENDA DB provides the means to ensure that your data sets are complete and valid before you submit them as part of a publication to a journal. Data entered in the STRENDA DB submission form are automatically checked for compliance with the STRENDA Guidelines; users receive warnings when necessary information is missing.
The Australian National University collects and publishes metadata about research data held by ANU and, in four discipline areas (Earth sciences, astronomy, phenomics and digital humanities), develops pipelines and tools that enable the publication of research data using a common and repeatable approach. Aims and outcomes: to identify and describe research data held at ANU; to develop a consistent approach to publishing metadata on the University's data holdings; to identify and curate significant orphan datasets that might otherwise be lost or inadvertently destroyed; and to foster a culture of data sharing and data re-use.
The projects include airborne, ground-based and ocean measurements, social science surveys, satellite data use, modelling studies and value-added product development. The BAOBAB data portal therefore provides access to a large amount and variety of data: 250 local observation datasets collected since 1850 by operational networks, long-term monitoring research networks and intensive scientific campaigns; 1,350 outputs of a socio-economic questionnaire; 60 operational satellite products and several research products; and 10 output sets of meteorological and ocean operational models plus 15 sets of research simulations. Data documentation complies with international metadata standards, and data are delivered in standard formats. The data request interface takes full advantage of the database's relational structure and lets users build multi-criteria requests (period, area, property…).
EMSC collects real-time parametric data (source parameters and phase picks) provided by 65 seismological networks of the Euro-Mediterranean region. These data reach the EMSC either by email or via QWIDS (Quake Watch Information Distribution System, developed by ISTI). The collected data are automatically archived in a database, made available via an autoDRM, and displayed on the web site. They are also automatically merged to produce automatic locations, which are sent to several seismological institutes for rapid moment tensor determination.
The ProteomeXchange consortium has been set up to provide a single point of submission of MS proteomics data to the main existing proteomics repositories, and to encourage data exchange between them for optimal data dissemination. Current members accepting submissions are the PRIDE PRoteomics IDEntifications database at the European Bioinformatics Institute, focusing mainly on shotgun mass spectrometry proteomics data, and PeptideAtlas/PASSEL, focusing on SRM/MRM datasets.
The main goal of the ECCAD project is to provide scientific and policy users with datasets of surface emissions of atmospheric compounds, and ancillary data, i.e. data required to estimate or quantify surface emissions. The supply of ancillary data - such as maps of population density, maps of fire spots, burnt areas and land cover - can help improve and encourage the development of new emissions datasets. ECCAD offers:
  • access to global and regional emission inventories and ancillary data in a standardized format
  • quick visualization of emission and ancillary data
  • rationalization of the use of input data in algorithms or emission models
  • analysis and comparison of emissions datasets and ancillary data
  • tools for the evaluation of emissions and ancillary data
ECCAD is a dynamic and interactive database that provides up-to-date datasets, including data used within ongoing projects. Users are welcome to add their own datasets, or to have their regional masks included in order to use ECCAD tools.
The Abacus Data Network is a data repository collaboration involving Libraries at Simon Fraser University (SFU), the University of British Columbia (UBC), the University of Northern British Columbia (UNBC) and the University of Victoria (UVic).
Provided by the University Libraries, KiltHub is the comprehensive institutional repository and research collaboration platform for research data and scholarly outputs produced by members of Carnegie Mellon University and their collaborators. KiltHub collects, preserves, and provides stable, long-term global open access to a wide range of research data and scholarly outputs created by faculty, staff, and student members of Carnegie Mellon University in the course of their research and teaching.
The Language Archive at the Max Planck Institute in Nijmegen provides a unique record of how people around the world use language in everyday life. It focuses on collecting spoken and signed language materials in audio and video form along with transcriptions, analyses, annotations and other types of relevant material (e.g. photos, accompanying notes).