The search supports the following query syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for phrase searches
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
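To illustrate how these operators combine, the sketch below lists a few hypothetical query strings with their intended meaning. These are generic examples of Lucene-style syntax matching the rules above, not queries taken from the registry itself.

    # Illustrative query strings for the search syntax listed above.
    # The queries are hypothetical examples; they only demonstrate how the
    # operators combine and are not drawn from the registry's own content.
    examples = {
        "neuro*": "wildcard: matches neuroscience, neuroimaging, ...",
        '"research data"': "phrase search for the exact phrase",
        "neuroscience + imaging": "AND search (the default)",
        "ecology | biodiversity": "OR search",
        "data - restricted": "NOT: exclude results containing 'restricted'",
        "(ecology | biodiversity) + repository": "parentheses set precedence",
        "templat~1": "fuzzy term match with an edit distance of 1",
        '"open data"~2': "phrase match allowing a slop of two words",
    }

    for query, meaning in examples.items():
        print(f"{query:42} {meaning}")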
Found 40 result(s)
The Health Atlas is an alliance of medical ontologists, medical systems biologists, and clinical trial groups formed to design and implement a multi-functional, quality-assured atlas. It provides models, data, and metadata on specific use cases from medical research projects at the partner institutions.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for ZIP and TAR containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
The German Neuroinformatics Node's data infrastructure (GIN) services provide a platform for comprehensive and reproducible management and sharing of neuroscience data. Building on well-established versioning technology, GIN offers the power of a web-based repository management service combined with distributed file storage. The service addresses the full range of research data workflows, from data analysis on a local workstation to remote collaboration and data publication.
Neuroimaging Tools and Resources Collaboratory (NITRC) is a free one-stop environment for researchers who need resources such as neuroimaging analysis software, publicly available data sets, and computing power. Since its debut in 2007, NITRC has helped the neuroscience community make further discoveries by reusing software and data that, before NITRC, were routinely lost or disregarded. NITRC provides free access to data and pay-per-use, cloud-based access to computing power, enabling worldwide scientific collaboration with minimal startup effort and cost. With NITRC and its components, the Resources Registry (NITRC-R), the Image Repository (NITRC-IR), and the Computational Environment (NITRC-CE), a researcher can obtain pilot or proof-of-concept data to validate a hypothesis for a few dollars.
Swedish National Data Service (SND) is a research data infrastructure designed to assist researchers in preserving, maintaining, and disseminating research data in a secure and sustainable manner. The SND Search function makes it easy to find, use, and cite research data from a variety of scientific disciplines. Together with an extensive network of almost 40 Swedish higher education institutions and other research organisations, SND works for increased access to research data, nationally as well as internationally.
The Austrian NeuroCloud (ANC) is a FAIR-enabling platform for sustainable research data management in Cognitive Neuroscience. Most of the hosted research data is restricted; the publicly available datasets can be browsed at https://data.anc.plus.ac.at/explore. The ANC offers tools and services to archive, manage, and share neurocognitive data flexibly and according to community standards. Scientists have full control over what they share (e.g., full original datasets or data derivatives), how they share it (by choosing from a selection of licensing models), and with whom (e.g., by using the ANC’s adjustable User Agreement templates). The ANC provides persistent DOIs for data releases and operates in accordance with the European GDPR. Moreover, the ANC fully supports the mission of the EOSC and is committed to the EU’s open science policy, legal standards, and best open science practices. Accordingly, the ANC aspires to facilitate FAIR data operations along the entire data lifecycle, actively supporting the ongoing shift in research culture towards increased transparency, data reusability, and result reproducibility.
DataverseNO is a curated, FAIR-aligned national generic repository for open research data from all academic disciplines. DataverseNO is committed to keeping published data accessible and (re)usable over the long term. The repository is owned and operated by UiT The Arctic University of Norway. DataverseNO accepts submissions primarily from researchers at Norwegian research institutions. Datasets in DataverseNO are grouped into institutional collections as well as special collections. The technical infrastructure of the repository is based on the open source application Dataverse (https://dataverse.org), which is developed by an international developer and user community led by Harvard University.
Reference anatomies of the brain and corresponding atlases play a central role in experimental neuroimaging workflows and are the foundation for reporting standardized results. The choice of such references (i.e., templates) and atlases is one relevant source of methodological variability across studies, which has recently been brought to attention as an important challenge to reproducibility in neuroscience. TemplateFlow is a publicly available framework for human and nonhuman brain models. The framework combines an open database with software for access, management, and vetting, allowing scientists to distribute their resources under FAIR (findable, accessible, interoperable, reusable) principles. TemplateFlow supports a multifaceted insight into brains across species, and enables multiverse analyses testing whether results generalize across standard references, scales, and, in the long term, species, thereby contributing to increasing the reliability of neuroimaging results.
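As a hedged illustration of the programmatic access this description mentions, the sketch below uses the templateflow Python client. The package name, the api.templates and api.get calls, and the template identifier MNI152NLin2009cAsym are assumptions based on the project's public documentation rather than details taken from this entry, and may differ between versions.

    # Sketch: querying the TemplateFlow archive with its Python client
    # (assumes `pip install templateflow`; names follow the public docs
    # and are not guaranteed by this registry entry).
    from templateflow import api

    # List the template identifiers currently available in the archive.
    print(api.templates())

    # Fetch the 2 mm T1-weighted reference image for one standard space;
    # the file is downloaded on first use and cached locally.
    t1w = api.get("MNI152NLin2009cAsym", resolution=2, suffix="T1w", desc=None)
    print(t1w)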
MDM-Portal (Medical Data Models) is a metadata registry for creating, analyzing, sharing, and reusing medical forms, serving as an infrastructure for academic (non-commercial) medical research. It contains forms in the system-independent CDISC Operational Data Model (ODM) format with more than 500,000 data elements. The Portal provides numerous core data sets, common data elements or data standards, code lists, and value sets. This enables researchers to view, discuss, download, and export forms in the most common technical formats, such as PDF, CSV, Excel, SQL, SPSS, and R.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) that facilitates open science in the neuroscience community. CONP simplifies global researcher access to, and sharing of, datasets and tools. The portal mirrors the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
Sikt archives research data on people and society to make sure the data can be shared and made available for reuse. We continuously enrich our data collections to provide a richer basis for research. Sikt’s main focus is quantitative data matrices on individuals, organisations, and administrative, political, and geographical actors. The archive specialises in survey data, which undergoes extensive curation at the variable level, with detailed metadata produced and published in Norwegian and English.
Funded by the National Science Foundation (NSF) and proudly operated by Battelle, the National Ecological Observatory Network (NEON) program provides open, continental-scale data across the United States that characterize and quantify complex, rapidly changing ecological processes. The Observatory’s comprehensive design supports greater understanding of ecological change and enables forecasting of future ecological conditions. NEON collects and processes data from field sites located across the continental U.S., Puerto Rico, and Hawaii over a 30-year timeframe. NEON provides free and open data that characterize plants, animals, soil, nutrients, freshwater, and the atmosphere. These data may be combined with external datasets or data collected by individual researchers to support the study of continental-scale ecological change.
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press: GigaScience and GigaByte (both online, open-access journals). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit of work (article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich’s Research Collection, which is the primary repository for members of the university and the first point of contact for the publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest senses. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly-distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
San Raffaele Open Research Data Repository (ORDR) is an institutional platform that allows research data to be safely stored, preserved, and shared. ORDR is endowed with the essential characteristics of a trusted repository, as it ensures: a) open or restricted access to contents, with persistent unique identifiers to enable referencing and citation; b) a comprehensive set of metadata fields to enable discovery and reuse; c) provisions to safeguard the integrity, authenticity, and long-term preservation of deposited data.