
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (word-position tolerance)
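The operators above can be combined into a single query string. A few illustrative examples (the search terms themselves are hypothetical, chosen only to show each operator):

```python
# Illustrative query strings for the search syntax described above.
queries = {
    "wildcard": "climat*",                # matches climate, climatology, ...
    "phrase": '"research data"',          # exact phrase
    "and": "ocean + data",                # both terms (AND is the default)
    "or": "genomics | proteomics",        # either term
    "not": "repository - software",       # excludes "software"
    "grouping": "(marine | ocean) + data",
    "fuzzy": "archve~1",                  # within edit distance 1 of "archve"
    "slop": '"data repository"~2',        # phrase words within 2 positions
}

for name, q in queries.items():
    print(f"{name:>8}: {q}")
```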
Found 54 result(s)
LEPR is a database of results of published experimental studies involving liquid-solid phase equilibria relevant to natural magmatic systems. TraceDs is a database of experimental studies involving trace element distribution between liquid, solid and fluid phases.
Bitbucket is a web-based version control repository hosting service owned by Atlassian, for source code and development projects that use either Mercurial or Git revision control systems.
The OPA Division deals with the development of models and methods for interdisciplinary research on marine operational forecasting, on the interactions between coastal areas and the open ocean, on the development of services and applications for all maritime economy sectors, including transport, security and management of coastal areas and marine resources, in the context of climate change adaptation problems.
Note: this repository is no longer available. The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
The Spiral Digital Repository is the Imperial College London institutional open access repository. This system allows you, as an author, to make your research documents open access without incurring additional publication costs. When you self-archive a research document in Spiral it becomes free for anyone to read. You can upload copies of your publications to Spiral using Symplectic Elements. All deposited content becomes searchable online.
The Global Hydrology Resource Center (GHRC) provides both historical and current Earth science data, information, and products from satellite, airborne, and surface-based instruments. GHRC acquires basic data streams and produces derived products from many instruments spread across a variety of instrument platforms.
FDAT is a research data repository hosted by the University of Tübingen, designed to facilitate long-term archiving and publication of research data. Managed by the Information, Communication and Media Center (IKM), it primarily caters to the humanities and social sciences, while welcoming researchers from all scientific disciplines at the university. Committed to high-quality data management, FDAT emphasizes the importance of adhering to the FAIR Data Principles, promoting findability, accessibility, interoperability, and reusability of the research data it contains.
The CLARIN-D Centre CEDIFOR provides a repository for long-term storage of resources and meta-data. Resources hosted in the repository stem from research of members as well as associated research projects of CEDIFOR. This includes software and web-services as well as corpora of text, lexicons, images and other data.
Merritt is a curation repository for the preservation of and access to the digital research data of the ten campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity audit, complete version history, REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, and curatorially-defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially-designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially-designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
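Since Merritt-minted DOIs are registered with DataCite, any of them can be resolved through the standard doi.org resolver, which redirects to the landing page registered for the identifier. A minimal sketch (the DOI value below is hypothetical, for illustration only):

```python
from urllib.parse import quote

def doi_url(doi: str) -> str:
    """Build the doi.org resolver URL for a DOI (e.g. one minted via
    EZID/DataCite). The resolver redirects to the registered landing page."""
    return "https://doi.org/" + quote(doi, safe="/")

# Hypothetical DOI for illustration only:
print(doi_url("10.5060/D2XXXX"))  # https://doi.org/10.5060/D2XXXX
```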
Rodare is the institutional research data repository at HZDR (Helmholtz-Zentrum Dresden-Rossendorf). Rodare allows HZDR researchers to upload their research software and data and enrich them with metadata to make them findable, accessible, interoperable and reusable (FAIR). By publishing all associated research software and data via Rodare, research reproducibility can be improved. Uploads receive a Digital Object Identifier (DOI) and can be harvested via an OAI-PMH interface.
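OAI-PMH harvesting is a simple HTTP + XML protocol, so records can be fetched with plain GET requests. A minimal sketch of building such requests — the verbs and parameters are defined by the OAI-PMH 2.0 standard, but the base URL below is an assumption for illustration (check the repository's own documentation for the actual endpoint):

```python
from urllib.parse import urlencode

# Assumed OAI-PMH endpoint, for illustration only:
BASE = "https://rodare.hzdr.de/oai2d"

def oai_request(verb: str, **params: str) -> str:
    """Build an OAI-PMH request URL for the given protocol verb."""
    return BASE + "?" + urlencode({"verb": verb, **params})

# Standard protocol verbs (OAI-PMH 2.0, not repository-specific):
print(oai_request("Identify"))
print(oai_request("ListRecords", metadataPrefix="oai_dc"))
```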
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
The range of CIRAD's research has given rise to numerous datasets and databases associating various types of data: primary (collected), secondary (analysed, aggregated, used for scientific articles, etc), qualitative and quantitative. These "collections" of research data are used for comparisons, to study processes and analyse change. They include: genetics and genomics data, data generated by trials and measurements (using laboratory instruments), data generated by modelling (interpolations, predictive models), long-term observation data (remote sensing, observatories, etc), data from surveys, cohorts, interviews with players.
This website aggregates several services that provide access to data of the INTEGRAL Mission. ESA's INTErnational Gamma-Ray Astrophysics Laboratory is detecting some of the most energetic radiation that comes from space. It is the most sensitive gamma-ray observatory ever launched. INTEGRAL is an ESA mission in cooperation with Russia and the United States.
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing the worm's behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
CLARIN-LV is a national node of Clarin ERIC (Common Language Resources and Technology Infrastructure). The mission of the repository is to ensure the availability and long-term preservation of language resources. The data stored in the repository are being actively used and cited in scientific publications.
OLOS is a Swiss-based data management portal tailored for researchers and institutions. Powerful yet easy to use, OLOS works with most tools and formats across all scientific disciplines to help researchers safely manage, publish and preserve their data. The solution was developed as part of a larger project focusing on Data Life Cycle Management (dlcm.ch) that aims to develop various services for research data management. Thanks to its highly modular architecture, OLOS can be adapted both to small institutions that need a "turnkey" solution and to larger ones that can rely on OLOS to complement what they have already implemented. OLOS is compatible with all formats in use in the different scientific disciplines and is based on modern technology that interconnects with researchers' environments (such as Electronic Laboratory Notebooks or Laboratory Information Management Systems).
ISTA Research Explorer is an online digital repository of multi-disciplinary research datasets as well as publications produced at IST Austria, hosted by the Library. ISTA researchers who have produced research data associated with an existing or forthcoming publication, or which has potential use for other researchers, are invited to upload their dataset for sharing and safekeeping. A persistent identifier and suggested citation will be provided.
UltraViolet is part of a suite of repositories at New York University that provide a home for research materials, operated as a partnership of the Division of Libraries and NYU IT's Research and Instruction Technology. UltraViolet provides faculty, students, and researchers within our university community with a place to deposit scholarly materials for open access and long-term preservation. UltraViolet also houses some NYU Libraries collections, including proprietary data collections.
DISE (District Information System for Education) is a statistical system developed for the collection, computerization, analysis and use of educational and allied data for planning, management, monitoring and feedback. It is an initiative of the Department of Educational Management Information System (EMIS) of NUEPA for developing and strengthening the educational management information system in India. Data are constantly collected and disseminated, coordinated from the district level through the state level up to the national level. DISE provides information on vital parameters relating to students, teachers and infrastructure at all levels of education in India. Presently DISE has three modules: U-DISE, DISE, and SEMIS. DISE also provides several other derivative statistical products, such as District Report Cards, State Report Cards, School Report Cards, Flash Statistics, Analytical Reports, and Rural/Urban Statistics.
Yareta is a repository service built on digital solutions for archiving, preserving and sharing research data that enable researchers and institutions of any disciplines to share and showcase their research results. The solution was developed as part of a larger project focusing on Data Life Cycle Management (dlcm.ch) that aims to develop various services for research data management. Thanks to its highly modular architecture, Yareta can be adapted both to small institutions that need a "turnkey" solution and to larger ones that can rely on Yareta to complement what they have already implemented. Yareta is compatible with all formats in use in the different scientific disciplines and is based on modern technology that interconnects with researchers' environments (such as Electronic Laboratory Notebooks or Laboratory Information Management Systems).
sciencedata.dk is a research data store provided by DTU, the Technical University of Denmark, specifically aimed at researchers and scientists at Danish academic institutions. The service is intended for working with and sharing active research data as well as for safekeeping of large datasets, and allows private sharing as well as sharing via links / persistent URLs. The data can be accessed and manipulated via a web interface, synchronization clients, file transfer clients or the command line. The service is built on and with open-source software from the ground up: FreeBSD, ZFS, Apache, PHP, ownCloud/Nextcloud. DTU is actively engaged in community efforts on developing research-specific functionality for data stores. The servers are attached directly to the 10-gigabit backbone of "Forskningsnettet" (the National Research and Education Network of Denmark), so upload and download speeds from Danish academic institutions are in principle comparable to those of an external USB hard drive.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest senses. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly-distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.