  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
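For illustration, a few sample queries combining these operators (the search terms are hypothetical, not actual registry entries):
  • climat* matches climate, climatology, etc.
  • "research data" +repository requires both the exact phrase and the keyword
  • ocean | marine matches records containing either term
  • data -medical excludes records containing "medical"
  • (ocean | marine) +temperatur~1 combines grouping with an edit distance of 1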
Found 255 result(s)
The Tromsø Repository of Language and Linguistics (TROLLing) is a FAIR-aligned repository of linguistic data and statistical code. The archive is open access, which means that all information is available to everyone. All data are accompanied by searchable metadata that identify the researchers, the languages and linguistic phenomena involved, the statistical methods applied, and scholarly publications based on the data (where relevant). Linguists worldwide are invited to deposit data and statistical code used in their linguistic research. TROLLing is a special collection within DataverseNO (http://doi.org/10.17616/R3TV17) and a C Centre within CLARIN (Common Language Resources and Technology Infrastructure, a networked federation of European data repositories; http://www.clarin.eu/), and it is harvested by the CLARIN Virtual Language Observatory (VLO; https://vlo.clarin.eu/).
Under the World Climate Research Programme (WCRP), the Working Group on Coupled Modelling (WGCM) established the Coupled Model Intercomparison Project (CMIP) as a standard experimental protocol for studying the output of coupled atmosphere-ocean general circulation models (AOGCMs). CMIP provides a community-based infrastructure in support of climate model diagnosis, validation, intercomparison, documentation and data access. This framework enables a diverse community of scientists to analyze GCMs in a systematic fashion, a process which serves to facilitate model improvement. Virtually the entire international climate modeling community has participated in this project since its inception in 1995. The Program for Climate Model Diagnosis and Intercomparison (PCMDI) archives much of the CMIP data and provides other support for CMIP. We are now beginning the process towards the IPCC Fifth Assessment Report and with it the CMIP5 intercomparison activity. The CMIP5 (CMIP Phase 5) experiment design has been finalized with the following suites of experiments: (I) decadal hindcast and prediction simulations, (II) "long-term" simulations, and (III) "atmosphere-only" (prescribed SST) simulations for especially computationally demanding models. The new ESGF peer-to-peer (P2P) enterprise system (http://pcmdi9.llnl.gov) is now the official site for CMIP5 model output. The old gateway (http://pcmdi3.llnl.gov) is deprecated and now shut down permanently.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. GRIIDC is the vehicle through which GoMRI fulfills this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico ecosystem.
<<<!!!<<< This repository is no longer available. SPECTRa (Submission, Preservation and Exposure of Chemistry Teaching and Research Data) was a collaboration between Cambridge University and Imperial College to research issues in the deposition of chemistry data in Open Access digital repositories. Funded by the JISC (Joint Information Systems Committee) under its Digital Repositories programme, it ran from October 2005 to March 2007. Requirements for and attitudes towards data archiving and open access publication were discovered by interview and survey. This led to the development of a set of Open Source software tools for packaging and submitting X-ray crystallography, NMR spectra and computational chemistry data to DSpace digital repositories. This collection will hold reports, presentations and papers published from the project: https://www.repository.cam.ac.uk/handle/1810/183858 >>>!!!>>>
The main focus of tambora.org is Historical Climatology. Years of meticulous work in this field in research groups around the world have resulted in large data collections on climatic parameters such as temperature, precipitation, storms, floods, etc. with different regional, temporal and thematic foci. tambora.org enables researchers to collaboratively interpret the information derived from historical sources. It provides a database for original text quotations together with bibliographic references and the extracted places, dates and coded climate and environmental information.
CORE (Commons Open Repository Exchange) is a full-text, interdisciplinary, non-profit social repository designed to increase the impact of work in the Humanities. It is a library-quality repository for sharing, discovering, retrieving, and archiving digital work. CORE provides Humanities Commons members with a permanent, open access storage facility for their scholarly output, facilitating maximum discoverability and encouraging peer feedback.
The online digital research data repository of multi-disciplinary research datasets produced at the University of Nottingham, hosted by Information Services and managed and curated by Libraries, Research & Learning Resources. University of Nottingham researchers who have produced research data associated with an existing or forthcoming publication, or which has potential use for other researchers, are invited to upload their dataset.
The Atmospheric Science Data Center (ASDC) at NASA Langley Research Center is responsible for processing, archiving, and distribution of NASA Earth science data in the areas of radiation budget, clouds, aerosols, and tropospheric chemistry. The ASDC specializes in atmospheric data important to understanding the causes and processes of global climate change and the consequences of human activities on the climate.
The Social Science Data Archive is still active and maintained as part of the UCLA Library Data Science Center. SSDA Dataverse is one of several archiving options offered by SSDA; data can also be archived by SSDA itself, by ICPSR, by the UCLA Library, or by the California Digital Library. The Social Science Data Archive serves the UCLA campus as an archive of faculty and graduate student survey research. We provide long-term storage of data files and documentation. We ensure that the data remain usable in the future by migrating files to new operating systems. We follow government standards and archival best practices. The mission of the Social Science Data Archive has been and continues to be to provide a foundation for social science research, with faculty support throughout an entire research project involving original data collection or the reuse of publicly available studies. Data Archive staff and researchers work as partners throughout all stages of the research process, beginning when a hypothesis or area of study is being developed, during grant and funding activities, while data collection and/or analysis is ongoing, and finally in long-term preservation of research results. Our role is to provide a collaborative environment where the focus is on understanding the nature and scope of the research approach and on management of research output throughout the entire life cycle of the project. Instructional support, especially support that links research with instruction, is also a mainstay of operations.
The aim of the EPPO Global Database is to provide, in a single portal, all pest-specific information that has been produced or collected by EPPO. The full database is available via the Internet, but when no Internet connection is available, a subset of the database called ‘EPPO GD Desktop’ can be run as standalone software (now replacing PQR).
The Astromaterials Data System (AstroMat) is a data infrastructure to store, curate, and provide access to laboratory data acquired on samples curated in the Astromaterials Collection of the Johnson Space Center. AstroMat is developed and operated at the Lamont-Doherty Earth Observatory of Columbia University and funded by NASA.
A domain-specific repository for the Life Sciences, covering the health and medical sciences as well as the green life sciences. The repository services are primarily aimed at the Netherlands, but not exclusively.
NAKALA is a repository dedicated to SSH research data in France. Given its generalist and multi-disciplinary nature, all types of data are accepted, although certain formats are recommended to ensure long-term data preservation. It has been developed and is hosted by Huma-Num, the French national research infrastructure for digital humanities.
The NCI National Research Data Collection is Australia’s largest collection of research data, encompassing more than 10 PB of nationally and internationally significant datasets.
The Research Data Center Qualiservice provides services for archiving and reusing qualitative research data from the social sciences. We advise and accompany research projects in the process of long-term data archiving and data sharing. Data curation is conducted by experts for the social sciences. We also provide research data and relevant context information for reuse in scientific research and teaching. Internationally interoperable metadata ensure that data sets are searchable and findable. Persistent identifiers (DOI) ensure that data and study contexts are citable. Qualiservice was accredited by the German Data Forum (RatSWD) in 2019 and adheres to its quality assurance criteria. Qualiservice is committed to the German Research Foundation’s (DFG) Guidelines for Safeguarding Good Scientific Practice and takes into account the FAIR Guiding Principles for scientific data management and stewardship as well as the OECD Principles and Guidelines for Access to Research Data from Public Funding. Qualiservice coordinates the networking and further development of scientific infrastructures for archiving and secondary use of qualitative data from social research within the framework of the National Research Data Infrastructure.
The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms. It is used by students, educators, and researchers all over the world as a primary source of machine learning data sets. As an indication of the impact of the archive, it has been cited over 1000 times.
The Marine Data Archive (MDA) is an online repository specifically developed to independently archive data files in a fully documented manner. The MDA can serve individuals, consortia, working groups and institutes to manage data files and file versions for a specific context (project, report, analysis, monitoring campaign), as a personal or institutional archive or back-up system and as an open repository for data publication.
This hub supports the geospatial modeling, data analysis, and visualization needs of the broad research and education communities through hosting of groups, datasets, tools, training materials, and educational content.
The DesignSafe Data Depot Repository (DDR) is the platform for curation and publication of datasets generated in the course of natural hazards research. The DDR is an open access data repository that enables data producers to safely store, share, organize, and describe research data, towards permanent publication, distribution, and impact evaluation. The DDR allows data consumers to discover, search for, access, and reuse published data in an effort to accelerate research discovery. It is a component of the DesignSafe cyberinfrastructure, which represents a comprehensive research environment that provides cloud-based tools to manage, analyze, curate, and publish critical data for research to understand the impacts of natural hazards. DesignSafe is part of the NSF-supported Natural Hazards Engineering Research Infrastructure (NHERI), and aligns with its mission to provide the natural hazards research community with open access, shared-use scholarship, education, and community resources aimed at supporting civil and social infrastructure prior to, during, and following natural disasters. It serves a broad national and international audience of natural hazard researchers (both engineers and social scientists), students, practitioners, policy makers, as well as the general public. It has been in operation since 2016, and also provides access to legacy data dating from about 2005. These legacy data were generated as part of the NSF-supported Network for Earthquake Engineering Simulation (NEES), a predecessor to NHERI. Legacy data and metadata belonging to NEES were transferred to the DDR for continuous preservation and access.
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices. Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or just forgetting where you put the damn thing. Share and find materials. With a click, make study materials public so that other researchers can find, use and cite them. Find materials by other researchers to avoid reinventing something that already exists. Detail individual contribution. Assign citable, contributor credit to any research material - tools, analysis scripts, methods, measures, data. Increase transparency. Make as much of the scientific workflow public as desired - as it is developed or after publication of reports. Find public projects here. Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle such as manuscript submission or at the onset of data collection. Discover public registrations here. Manage scientific workflow. A structured, flexible system can provide efficiency gain to workflow and clarity to project objectives, as pictured.
ScholarSphere is an institutional repository managed by Penn State University Libraries. Anyone with a Penn State Access ID can deposit materials relating to the University’s teaching, learning, and research mission to ScholarSphere. All types of scholarly materials, including publications, instructional materials, creative works, and research data are accepted. ScholarSphere supports Penn State’s commitment to open access and open science. Researchers at Penn State can use ScholarSphere to satisfy open access and data availability requirements from funding agencies and publishers.
The National Tibetan Plateau/Third Pole Environment Data Center (TPDC) is one of the first group of 20 national data centers approved by the Ministry of Science and Technology of China in 2019. It possesses the most comprehensive scientific data on the Tibetan Plateau and surrounding regions of any data center in China. TPDC provides online and offline data download services according to the TPDC Data Sharing Protocol, in both Chinese and English (https://data.tpdc.ac.cn/). There are more than 2400 datasets, covering geography, atmospheric science, cryospheric science, hydrology, ecology, geology, geophysics, natural resource science, social economy, and other fields, and more than 30000 registered users. TPDC complies with the principle of “Findable, Accessible, Interoperable, and Reusable (FAIR)” and has adopted a series of measures to protect intellectual property by giving credit to data providers. Digital Object Identifiers (DOI) are used for scientific data access, tracking, and citation. The Creative Commons 4.0 protocol is used for data re-distribution and re-use. Data users are required to cite the datasets and provide due acknowledgement, giving credit to data authors in the same way as journal papers. The data citation reference is provided on the TPDC landing page of each dataset.