
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (example queries below)
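
For illustration, a few hypothetical queries that combine the operators listed above (the search terms are invented; only the operator syntax comes from the list):

    ocean*                      wildcard: matches ocean, oceans, oceanography, ...
    "marine data"               exact phrase
    ocean + data                both terms must match (the default)
    ocean | climate             either term may match
    ocean - model               excludes results containing "model"
    (ocean | climate) + data    parentheses group the OR before the AND
    oceon~1                     fuzzy match within edit distance 1 (catches the typo)
    "data archive"~2            phrase match allowing a slop of 2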
Found 24 result(s)
This repository is no longer available. The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development, by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for ZIP and TAR containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
The Global Hydrology Resource Center (GHRC) provides both historical and current Earth science data, information, and products from satellite, airborne, and surface-based instruments. GHRC acquires basic data streams and produces derived products from many instruments spread across a variety of instrument platforms.
SODHA is the federal Belgian data archive for social sciences and the digital humanities. SODHA is a new service of the State Archives of Belgium and acts as the Belgian service provider for the Consortium of European Social Science Data Archives (CESSDA).
The Information Marketplace for Policy and Analysis of Cyber-risk & Trust (IMPACT) program supports global cyber risk research & development by coordinating, enhancing and developing real world data, analytics and information sharing capabilities, tools, models, and methodologies. In order to accelerate solutions around cyber risk issues and infrastructure security, IMPACT makes these data sharing components broadly available as national and international resources to support the three-way partnership among cyber security researchers, technology developers and policymakers in academia, industry and the government.
Merritt is a curation repository for the preservation of and access to the digital research data of the ten-campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity audit, complete version history, a REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, and curatorially-defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
The National High Energy Physics Science Data Center (NHEPSDC) is a repository for high-energy physics. In 2019, it was designated as a national-level scientific data center by the Ministry of Science and Technology of China (MOST). NHEPSDC is constructed and operated by the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences (CAS). NHEPSDC consists of a main data center in Beijing, a branch center in the Guangdong-Hong Kong-Macao Greater Bay Area, and a branch center in the Huairou District of Beijing. The mission of NHEPSDC is to provide services for data collection, archiving, long-term preservation, access and sharing, software tools, and data analysis. These services are aimed mainly at high-energy physics and related scientific research activities. The data collected can be roughly divided into two categories: raw data from large scientific facilities, and data generated from general scientific and technological projects (usually supported by government funding), hereafter referred to as generic data. More than 70 people now work at NHEPSDC: 18 in high-energy physics, 17 in computer science, 15 in software engineering, 20 in data management, plus other operations engineers. NHEPSDC is equipped with a hierarchical storage system, high-performance computing power, high-bandwidth domestic and international network links, and a professional service support system. Over the past three years, the average data increment has been about 10 PB per year. By integrating these data resources with the IT environment, NHEPSDC provides users with a state-of-the-art data-processing platform for scientific research; more than 400 PB of data are accessed every year, across more than 10 million visits.
Country
GEOMAR Helmholtz Centre for Ocean Research Kiel is one of the leading marine science institutions in Europe. GEOMAR investigates the chemical, physical, biological, and geological processes in the oceans, as well as their interactions with the seafloor and the atmosphere. OceanRep is an open access digital collection containing the research output of GEOMAR staff and students. Included are journal articles, conference papers, book chapters, theses and more, with full text where available. Research data are linked to the publication entries.
GFZ Data Services is a repository for research data and scientific software across the Earth System Sciences, hosted at GFZ. The curated data are archived, persistently accessible and published with a digital object identifier (DOI). They range from large dynamic datasets from global monitoring networks with real-time acquisition, to international services in geodesy and geophysics, to the full suite of small and highly heterogeneous datasets collected by individual researchers or small teams ("long-tail data"). In addition to DOI registration and data archiving itself, the GFZ Data Services team offers comprehensive consultation by domain scientists and IT specialists. Among others, GFZ Data Services is the data publisher for the IAG Services ICGEM, IGETS and ISG (IAG = Int. Association for Geodesy; ICGEM = Int. Center for Global Earth Models; IGETS = Int. Geodynamics and Earth Tide Service; ISG = Int. Service for the Geoid), the World Stress Map, INTERMAGNET, GEOFON, the Geophysical Instrument Pool Potsdam GIPP, TERENO, EnMAP Flight Campaigns, and the Potsdam Institute for Climate Impact Research PIK, serves as the Specialised Information Service for Solid Earth Geosciences (FID GEO), and hosts the GFZ Catalogue for the International Generic Sample Number IGSN.
This website aggregates several services that provide access to data of the INTEGRAL Mission. ESA's INTErnational Gamma-Ray Astrophysics Laboratory detects some of the most energetic radiation that comes from space. It is the most sensitive gamma-ray observatory ever launched. INTEGRAL is an ESA mission in cooperation with Russia and the United States.
CLARIN-LV is a national node of CLARIN ERIC (Common Language Resources and Technology Infrastructure). The mission of the repository is to ensure the availability and long-term preservation of language resources. The data stored in the repository are being actively used and cited in scientific publications.
GESIS preserves (mainly quantitative) social research data to make it available to the scientific research community. The data is described in a standardized way, secured for the long term, provided with a permanent identifier (DOI), and can be easily found and reused through browser-optimized catalogs (https://search.gesis.org/).
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access and the sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
The University of Reading Research Data Archive (the Archive) is a multidisciplinary online service for the registration, preservation and publication of research datasets produced or collected at the University of Reading.
In collaboration with other centres in the Text+ consortium and in the CLARIN infrastructure, CLARIND-UdS enables eHumanities by providing a service for hosting and processing language resources (notably corpora) for members of the research community. The CLARIND-UdS centre thus contributes to reducing the fragmentation of language resources by assisting members of the research community in preparing language materials in such a way that easy discovery is ensured, interchange is facilitated and preservation is enabled: materials are enriched with meta-information, transformed into sustainable formats, and hosted. We have an explicit mission to archive language resources, especially multilingual corpora (parallel, comparable) and corpora of specific registers, collected both by associated researchers and by researchers not affiliated with us.
PubData is Leuphana's institutional research data repository for the long-term preservation, documentation and publication of research data from scientific projects. PubData is maintained by Leuphana's Media and Information Centre (MIZ) and is free of charge. The service is primarily aimed at Leuphana employees and additionally at researchers from cooperation partners contractually associated with Leuphana.
The Radboud Data Repository (RDR) is an institutional repository for archiving and sharing of data collected, processed, or analyzed by researchers working at or affiliated with the Radboud University (Nijmegen, the Netherlands). The repository allows safe long-term (at least 10 years) storage of large datasets. The RDR promotes findability of datasets by providing a DOI and rich metadata fields and allows researchers to easily manage data access.
Synapse is an open source software platform that clinical and biological data scientists can use to carry out, track, and communicate their research in real time. Synapse enables co-location of scientific content (data, code, results) and narrative descriptions of that work.
BOARD (Bicocca Open Archive Research Data) is the institutional data repository of the University of Milano-Bicocca. BOARD is an open, free-to-use research data repository, which enables members of the University of Milano-Bicocca to make their research data publicly available. By depositing their research data in BOARD, researchers can make their research data citable, share their data privately or publicly, ensure long-term storage for their data, keep access to all versions, and link their articles to their data.
The main goal of the CLUES project is to provide constrained simulations of the local universe designed to be used as a numerical laboratory of the current paradigm. The simulations will be used for unprecedented analysis of the complex dark matter and gasdynamical processes which govern the formation of galaxies. The predictions of these experiments can be easily compared with the detailed observations of our galactic neighborhood. Some of the CLUES data is now publicly available via the CosmoSim database (https://www.cosmosim.org/). This includes AHF halo catalogues from the Box 64, WMAP3 resimulations of the Local Group with 4096³ particle resolution.
The PhenoGen website shares experimental data with a worldwide community of investigators and provides a flexible, integrated, multi-resolution repository of neuroscience transcriptomic and genetic data for collaborative research on genomic disorders. The main development focus is on providing Hybrid Rat Diversity Panel (HRDP) transcriptomic data (sequencing, genome coverage, reconstructed totalRNA/smallRNA transcriptomes, quantification of the transcriptome, eQTLs, and WGCNA) and on integrating additional tools to provide a platform for visualization and analysis of HRDP transcriptome data.
The LOVD portal provides the LOVD software and access to a list of worldwide LOVD applications through the Locus Specific Database list and the List of Public LOVD installations. LOVD installations that have opted to be included in the global LOVD listing are covered by the overall LOVD querying service, which is based on an API.
California Digital Library (CDL) seeks to be a catalyst for deeply collaborative solutions providing a rich, intuitive and seamless environment for publishing, sharing and preserving our scholars’ increasingly diverse outputs, as well as for acquiring and accessing information critical to the University of California’s scholarly enterprise. University of California Curation Center (UC3) is the digital curation program within CDL. The mission of UC3 is to provide transformative preservation, curation, and research data management systems, services, and initiatives that sustain and promote open scholarship.