  • * at the end of a keyword enables wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
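This operator set follows the common query-string conventions used by full-text search engines such as Elasticsearch, so queries can also be composed programmatically. Below is a minimal sketch in Python; the helper functions (`quote_phrase`, `fuzzy`, `build_query`) are illustrative names, not part of any official client library:

```python
def quote_phrase(phrase: str) -> str:
    """Wrap a multi-word phrase in double quotes for exact matching."""
    return f'"{phrase}"'

def fuzzy(word: str, distance: int = 1) -> str:
    """Append ~N to a word to allow up to N character edits (fuzziness)."""
    return f"{word}~{distance}"

def build_query(*parts: str) -> str:
    """Join query parts with spaces; AND (+) is the default operator."""
    return " ".join(parts)

# Wildcard: matches "archive", "archiving", "archival", ...
q1 = build_query("archiv*")
# Phrase with slop: the words may be up to 2 positions apart
q2 = quote_phrase("research data") + "~2"
# OR two terms, exclude a third, group with parentheses
q3 = build_query("(proteome | proteomics)", "-plant")
```

For example, `q2` evaluates to `"research data"~2` and `q3` to `(proteome | proteomics) -plant`, matching the operator descriptions above.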
Found 21 results
The DARIAH-DE Repository is a digital long-term archive for research data from the humanities and cultural studies. Each object described and stored in the DARIAH-DE Repository receives a unique and lasting Persistent Identifier (DOI), with which it is permanently referenced, cited, and kept available over the long term. In addition, the DARIAH-DE Repository enables the sustainable and secure archiving of data collections. It is open not only to research projects associated with DARIAH-DE, but also to individual researchers and research projects that want to store their research data persistently, make it referenceable, archive it for the long term, and make it available to third parties. The main focus is simple, user-oriented access to long-term storage of research data. To ensure its long-term sustainability, the DARIAH-DE Repository is operated by the Humanities Data Centre.
The PAIN Repository is a recently funded NIH initiative, which has two components: an archive for already collected imaging data (Archived Repository), and a repository for structural and functional brain images and metadata acquired prospectively using standardized acquisition parameters (Standardized Repository) in healthy control subjects and patients with different types of chronic pain. The PAIN Repository provides the infrastructure for storage of standardized resting state functional, diffusion tensor imaging and structural brain imaging data and associated biological, physiological and behavioral metadata from multiple scanning sites, and provides tools to facilitate analysis of the resulting comprehensive data sets.
PORTULAN CLARIN is the Research Infrastructure for the Science and Technology of Language. It belongs to the Portuguese National Roadmap of Research Infrastructures of Strategic Relevance and is part of the international research infrastructure CLARIN ERIC.
The ICSSR Data Service is the culmination of a Memorandum of Understanding (MoU) between the Indian Council of Social Science Research (ICSSR) and the Ministry of Statistics and Programme Implementation (MoSPI). The MoU provides for setting up the “ICSSR Data Service: Social Science Data Repository” and hosting the NSS and ASI datasets generated by MoSPI. Under the initiative, social science research institutes, NGOs, individuals, and others engaged in social science research are also being approached to deposit their research datasets in the repository of the ICSSR Data Service. The ICSSR Data Service includes social science and statistical datasets from various national-level surveys on debt and investment, domestic tourism, enterprises, employment and unemployment, housing conditions, household consumer expenditure, health care, etc. It aims to facilitate the sharing, preservation, accessibility, and reuse of social science research data collected from the entire social science community in India and abroad. The Information and Library Network (INFLIBNET) Centre, Gandhinagar, has been assigned the task of setting up the data repository.
The Polar Rock Repository is a national facility constructed adjacent to Scott Hall, home of the Byrd Polar Research Center, The Ohio State University. It is supported by the National Science Foundation's Office of Polar Programs. The repository houses rock samples from Antarctica, the Arctic, southern South America and South Africa. The polar rock collection and database includes field notes, photos, maps, cores, powder and mineral residues, thin sections, as well as microfossil mounts, microslides and residues. Rock samples may be borrowed for research by university scientists from anywhere in the world. Samples may also be borrowed for educational or museum use in the United States.
This database will provide a central location for scientists to browse uniquely observed proteoforms and to contribute their own datasets. Top-down proteomics is a method of protein identification that uses an ion trapping mass spectrometer to store an isolated protein ion for mass measurement and tandem mass spectrometry analysis.
The TextGrid Repository is a digital preservation archive for human sciences research data. It offers an extensive searchable and adaptable corpus of XML/TEI encoded texts, pictures and databases. Amongst the continuously growing corpus is the Digital Library of TextGrid, which consists of works of more than 600 authors of German fiction (prose, verse and drama), as well as nonfiction from the beginning of the printing press to the early 20th century. The files are saved in different output formats (XML, ePub, PDF), published and made searchable. Different tools e.g. viewing or quantitative text-analysis tools can be used for visualization or to further research the text. The TextGrid Repository is part of the virtual research environment TextGrid, which besides offering digital preservation also offers open-source software for collaborative creations and publications of e.g. digital editions that are based on XML/TEI.
SoyBase is a professionally curated repository for genetics, genomics and related data resources for soybean. It contains current genetic, physical and genomic sequence maps integrated with qualitative and quantitative traits. SoyBase includes annotated "Williams 82" genomic sequence and associated data mining tools. The repository maintains controlled vocabularies for soybean growth, development, and traits that are linked to more general plant ontologies.
The International Maize and Wheat Improvement Center (CIMMYT) provides a free, open access repository of research software, studies, and datasets produced and developed by CIMMYT scientists as well as the results of the Seeds of Discovery project, which makes available genetic profiles of wheat and maize, two of mankind's three major cereal crops.
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a publicly accessible earth science data repository created to curate, publicly serve (publish), and archive digital data and information from biological, chemical and biogeochemical research conducted in coastal, marine, great lakes and laboratory environments. The BCO-DMO repository works closely with investigators funded through the NSF OCE Division’s Biological and Chemical Sections and the Division of Polar Programs Antarctic Organisms & Ecosystems. The office provides services that span the full data life cycle, from data management planning support and DOI creation, to archive with appropriate national facilities.
The Open Exoplanet Catalogue is a catalogue of all discovered extra-solar planets. It is a new kind of astronomical database. It is decentralized and completely open. We welcome contributions and corrections from both professional astronomers and the general public.
Surrey Research Insight (SRI) is an open access resource that hosts, preserves and disseminates the full text of scholarly papers produced by members of the University of Surrey. Its main purpose is to help Surrey authors make their research more widely known; their ideas and findings readily accessible; and their papers more frequently read and cited. Surrey Research Insight (formerly Surrey Scholarship Online) was developed in line with the Open Access Initiative, promoting free access to scholarship for the benefit of authors and scholars. It is one of many open access repositories around the world that operate on agreed standards to ensure wide and timely dissemination of research.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
GeoCommons is the public community of GeoIQ users who are building an open repository of data and maps for the world. The GeoIQ platform includes a large number of features that empower you to easily access, visualize and analyze your data. The GeoIQ platform powers the growing GeoCommons community of over 25,000 members actively creating and sharing hundreds of thousands of datasets and maps across the world. With GeoCommons, anyone can contribute and share open data, easily build shareable maps and collaborate with others.
The GTN-P database is an object-related database open to a diverse range of data. Because of the complexity of the PAGE21 project, the data provided in the GTN-P management system are extremely diverse, ranging from active-layer thickness measurements taken once per year to flux measurements taken every second, and everything in between. The data fall into two broad categories. Quantitative data are all data that can be measured numerically; they comprise all in situ measurements, i.e. permafrost temperatures and active-layer thickness (mechanical probing, frost/thaw tubes, soil temperature profiles). Qualitative data (knowledge products) are observations not based on measurements, such as observations on soils, vegetation, relief, etc.
The Global Proteome Machine (GPM) is a protein identification database. This data repository allows users to post and compare results. GPM's data is provided by contributors like The Informatics Factory, University of Michigan, and Pacific Northwestern National Laboratories. The GPM searchable databases are: GPMDB, pSYT, SNAP, MRM, PEPTIDE and HOT.
---<<< This repository is no longer available. This record is out-dated >>>--- The ONS Challenge contains open solubility data: experiments with raw data from different scientists and institutions. It is part of the Open Notebook Science wiki community and is ideally suited for community-wide collaborative research projects involving mathematical modeling and computer simulation, as it allows researchers to document model development step by step, link model predictions to the experiments that test the model, and in turn use feedback from experiments to evolve the model. Because the laboratory notebooks are public, the evolution of a model can be followed in its totality by the interested reader. Researchers from laboratories around the world can follow the progress of the research day to day, borrow models at various stages of development, comment or advise on model developments, discuss experiments, ask questions, provide feedback, or otherwise contribute to the progress of science in any manner possible.
ModelDB is a curated database of published models in the broad domain of computational neuroscience. It addresses the need for access to such models in order to evaluate their validity and extend their use. It can handle computational models expressed in any textual form, including procedural or declarative languages (e.g. C++, XML dialects) and source code written for any simulation environment. The model source code doesn't even have to reside inside ModelDB; it just has to be available from some publicly accessible online repository or WWW site.