

Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
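As an illustration of how the operators above combine (these are made-up example queries, not queries drawn from this results page):

```
oceanograph* +"research data" -biology     wildcard, required phrase, exclusion
(proteomics | genomics) +repository        grouped OR, combined with AND
"data management system"~3                 phrase match allowing a slop of 3
archeology~2                               fuzzy match with edit distance 2
```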
Found 20 result(s)
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is the vehicle through which GoMRI is fulfilling this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico Ecosystem.
The DOE Data Explorer (DDE) is an information tool to help you locate DOE's collections of data and non-text information and, at the same time, retrieve individual datasets within some of those collections. It includes collection citations prepared by the Office of Scientific and Technical Information, as well as citations for individual datasets submitted from DOE Data Centers and other organizations.
The UC San Diego Library Digital Collections website gathers two categories of content managed by the Library: library collections (including digitized versions of selected collections covering topics such as art, film, music, history and anthropology) and research data collections (including research data generated by UC San Diego researchers).
Edinburgh DataShare is an online digital repository of multi-disciplinary research datasets produced at the University of Edinburgh, hosted by the Data Library in Information Services. Edinburgh University researchers who have produced research data associated with an existing or forthcoming publication, or which has potential use for other researchers, are invited to upload their dataset for sharing and safekeeping. A persistent identifier and suggested citation will be provided.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for zip- and tar-containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
The UWA Research Repository contains research publications, research datasets and theses created by researchers and postgraduates affiliated with UWA. It is managed by the University Library and provides access to research datasets held at the University of Western Australia. The information about each dataset has been provided by UWA research groups. Dataset metadata is harvested into Research Data Australia (RDA).
PeptideAtlas validates expressed proteins to provide eukaryotic genome data and supplies data that advance biological discoveries in humans. It accepts proteomic data from high-throughput processes and encourages data submission.
Edmond is the institutional repository of the Max Planck Society for public research data. It enables Max Planck scientists to create citable scientific assets by describing, enriching, sharing, exposing, linking, publishing and archiving research data of all kinds. A unique feature of Edmond is its dedicated metadata management, which supports a non-restrictive metadata schema definition, as simple as you like or as complex as your parameters require. Furthermore, all objects within Edmond have a unique identifier and can therefore be clearly referenced in publications or reused in other contexts.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, documents or images. It serves as the backbone of data curation and, for most of its content, is a "dark archive" without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publishing data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. A direct upload into the ETH Data Archive has to be considered in the following cases: (1) upload and registration of software code according to ETH transfer's requirements for Software Disclosure; (2) a substantial number of files has to be submitted regularly for long-term archiving and/or publishing and browser-based upload is not an option, in which case the ETH Data Archive may offer automated data and metadata transfers from source applications (e.g. from a LIMS) via API; (3) files for a project on a local computer have to be collected and metadata added before upload, for which the local file editor docuteam packer is provided. Docuteam packer allows the depositor to structure, describe and organise data for an upload into the ETH Data Archive and to decide when submission is due.
Science3D is an Open Access project to archive and curate scientific data and make them available to everyone interested in scientific endeavours. Science3D focusses mainly on 3D tomography data from biological samples, simply because these objects make it comparably easy to understand the concepts and techniques. The data come primarily from the imaging beamlines of the Helmholtz Center Geesthacht (HZG), which make use of the uniquely bright and coherent X-rays of the Petra3 synchrotron. Petra3, like many other photon and neutron sources in Europe and worldwide, is a fantastic instrument to investigate the microscopic detail of matter and organisms. The experiments at photon science beamlines hence provide unique insights into all kinds of scientific fields, ranging from medical applications to plasma physics. The success of these experiments demands enormous effort from the scientists and considerable investment.
Apollo (previously DSpace@Cambridge) is the University of Cambridge’s institutional repository, preserving and providing access to content created by members of the University. The repository stores a range of content and provides different levels of access, but its primary focus is on providing open access to the University’s research publications.
The Protein Data Bank (PDB) is an archive of experimentally determined three-dimensional structures of biological macromolecules that serves a global community of researchers, educators, and students. The data contained in the archive include atomic coordinates, crystallographic structure factors and NMR experimental data. Aside from coordinates, each deposition also includes the names of molecules, primary and secondary structure information, sequence database references, where appropriate, and ligand and biological assembly information, details about data collection and structure solution, and bibliographic citations. The Worldwide Protein Data Bank (wwPDB) consists of organizations that act as deposition, data processing and distribution centers for PDB data. Members are: RCSB PDB (USA), PDBe (Europe) and PDBj (Japan), and BMRB (USA). The wwPDB's mission is to maintain a single PDB archive of macromolecular structural data that is freely and publicly available to the global community.
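The atomic coordinates mentioned above are distributed in fixed-column records. As a minimal sketch (not an official wwPDB tool), assuming the legacy PDB ATOM record layout with x, y and z in columns 31-38, 39-46 and 47-54, coordinates can be read like this:

```python
# Minimal sketch: parse atomic coordinates from legacy PDB ATOM/HETATM records.
# Column slices follow the fixed-width legacy PDB format; this is an
# illustration, not an official wwPDB parser.

def parse_atom_records(lines):
    """Yield (atom_name, residue_name, x, y, z) for each ATOM/HETATM record."""
    for line in lines:
        if line.startswith(("ATOM", "HETATM")):
            yield (
                line[12:16].strip(),  # atom name (cols 13-16)
                line[17:20].strip(),  # residue name (cols 18-20)
                float(line[30:38]),   # x coordinate in angstroms (cols 31-38)
                float(line[38:46]),   # y coordinate (cols 39-46)
                float(line[46:54]),   # z coordinate (cols 47-54)
            )

# Example record in the legacy fixed-width layout:
sample = [
    "ATOM      1  N   THR A   1      17.047  14.099   3.625  1.00 13.79           N",
]
print(list(parse_atom_records(sample)))
```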
Chapman University Digital Commons is an open access digital repository and publication platform designed to collect, store, index, and provide access to the scholarly and creative output of Chapman University faculty, students, staff, and affiliates. In it are faculty research papers and books, data sets, outstanding student work, audiovisual materials, images, special collections, and more, all created by members of or owned by Chapman University. The datasets are listed in a separate collection.
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies: move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where you put them.
  • Share and find materials: with a click, make study materials public so that other researchers can find, use and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contribution: assign citable contributor credit to any research material, including tools, analysis scripts, methods, measures and data.
  • Increase transparency: make as much of the scientific workflow public as desired, as it is developed or after publication of reports.
  • Registration: registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow: a structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.
The KADoNiS (Karlsruhe Astrophysical Database of Nucleosynthesis in Stars) project is an online database of cross sections relevant to s-process and p-process nucleosynthesis. Recently, the p-process part of the KADoNiS database has been extended and now includes almost all available experimental data from reactions in or close to the respective Gamow window.