
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
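For illustration, the operators above can be combined into query strings. The following sketch pairs each operator with a hypothetical search term (the terms are examples only, not drawn from the results below):

```python
# Example query strings using the search operators described above.
# The search terms themselves are hypothetical illustrations.
queries = {
    "wildcard": "neuro*",                        # matches neuroscience, neuroimaging, ...
    "phrase": '"electronic health record"',      # exact phrase match
    "and": "covid + genomics",                   # both terms required (AND is the default)
    "or": "imaging | spectroscopy",              # either term matches
    "not": "cancer - cell",                      # cancer but not cell
    "grouped": "(covid | influenza) + sequence", # parentheses set priority
    "fuzzy": "metabolomics~2",                   # words within edit distance 2
    "slop": '"data archive"~3',                  # phrase terms within slop 3
}

for name, query in queries.items():
    print(f"{name:>8}: {query}")
```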
Found 37 result(s)
The N3C Data Enclave is a secure portal containing a very large and extensive set of harmonized COVID-19 clinical electronic health record (EHR) data. The data can be accessed through a secure cloud Enclave hosted by NCATS and cannot be downloaded due to regulatory controls. Broad access is available to investigators at institutions that sign a Data Use Agreement and submit a Data Use Request. The N3C is a unique open, reproducible, transparent, collaborative team-science initiative that leverages sensitive clinical data to expedite COVID-19 discoveries and improve health outcomes.
The National Human Brain Bank for Development and Function was originally established in 2012 by the Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences, as a public-interest institution dedicated to the preservation and research of human brain tissues, building on the volunteer donor station of Peking Union Medical College. In 2019, it was officially recognised by the Ministry of Science and Technology as a national science and technology resource platform: the National Human Brain Bank for Development and Function. Since its establishment, the Concordia Brain Bank has accepted and preserved more than 270 whole-brain tissue samples. While conducting its own research on the standardisation of brain banks, on neuropathology, and on various histologies related to human brain ageing and dementia, it has also developed and published the Standardised Operational Protocol for Human Brain Tissue Banks in China, adopted by more than ten universities in China, and has provided valuable human brain tissue samples to a number of research groups at our own and other institutions in China, strongly supporting brain science and brain disease research in the country. As a national resource platform, we will continue to aim to support and lead brain science research in China and to make positive contributions to maintaining brain health and defeating brain diseases.
The PAIN Repository is a recently funded NIH initiative, which has two components: an archive for already collected imaging data (Archived Repository), and a repository for structural and functional brain images and metadata acquired prospectively using standardized acquisition parameters (Standardized Repository) in healthy control subjects and patients with different types of chronic pain. The PAIN Repository provides the infrastructure for storage of standardized resting state functional, diffusion tensor imaging and structural brain imaging data and associated biological, physiological and behavioral metadata from multiple scanning sites, and provides tools to facilitate analysis of the resulting comprehensive data sets.
METLIN is the largest collection of MS/MS data, with the database generated at multiple collision energies and in both positive and negative ionization modes. The data are generated on multiple instrument types, including SCIEX, Agilent, Bruker, and Waters QTOF mass spectrometers.
All ADNI data are shared without embargo through the LONI Image and Data Archive (IDA), a secure research data repository. Interested scientists may obtain access to ADNI imaging, clinical, genomic, and biomarker data for the purposes of scientific investigation, teaching, or planning clinical research studies. "The Alzheimer’s Disease Neuroimaging Initiative (ADNI) unites researchers with study data as they work to define the progression of Alzheimer’s disease (AD). ADNI researchers collect, validate and utilize data, including MRI and PET images, genetics, cognitive tests, CSF and blood biomarkers as predictors of the disease. Study resources and data from the North American ADNI study are available through this website, including Alzheimer’s disease patients, mild cognitive impairment subjects, and elderly controls."
Project Achilles is a systematic effort aimed at identifying and cataloging genetic vulnerabilities across hundreds of genomically characterized cancer cell lines. The project uses genome-wide genetic perturbation reagents (shRNAs or Cas9/sgRNAs) to silence or knock-out individual genes and identify those genes that affect cell survival. Large-scale functional screening of cancer cell lines provides a complementary approach to those studies that aim to characterize the molecular alterations (e.g. mutations, copy number alterations) of primary tumors, such as The Cancer Genome Atlas (TCGA). The overall goal of the project is to identify cancer genetic dependencies and link them to molecular characteristics in order to prioritize targets for therapeutic development and identify the patient population that might benefit from such targets. Project Achilles data is hosted on the Cancer Dependency Map Portal (DepMap) where it has been harmonized with our genomics and cellular models data. You can access the latest and all past datasets here: https://depmap.org/portal/download/all/
Explore, search, and download data and metadata from your experiments and from public Open Data. The ESRF data repository is intended to store and archive data from photon science experiments performed at the ESRF, as well as digital material such as documents and scientific results that need a DOI and long-term preservation. Data are made public after an embargo period of at most 3 years.
The Virtual Research Environment (VRE) is an open-source data management platform that enables medical researchers to store, process and share data in compliance with the European Union (EU) General Data Protection Regulation (GDPR). The VRE addresses the present lack of digital research data infrastructures fulfilling the need for (a) data protection for sensitive data, (b) the capability to process complex data such as radiologic imaging, (c) flexibility for creating one's own processing workflows, and (d) access to high-performance computing. The platform promotes FAIR data principles and reduces barriers to biomedical research and innovation. The VRE offers a web portal with graphical and command-line interfaces, segregated data zones and organizational measures for lawful data onboarding, isolated computing environments where large teams can collaboratively process sensitive data privately, analytics workbench tools for processing, analyzing, and visualizing large datasets, automated ingestion of hospital data sources, project-specific data warehouses for structured storage and retrieval, graph databases to capture and query ontology-based metadata, provenance tracking, version control, and support for automated data extraction and indexing. The VRE is based on a modular and extendable state-of-the-art cloud computing framework and a RESTful API, supported by open developer meetings, hackathons, and comprehensive documentation for users, developers, and administrators. With its concerted technical and organizational measures, the VRE can be adopted by other research communities and thus facilitates the development of a co-evolving, interoperable platform ecosystem with an active research community.
The GISAID Initiative promotes the international sharing of all influenza virus sequences, related clinical and epidemiological data associated with human viruses, and geographical as well as species-specific data associated with avian and other animal viruses, to help researchers understand how the viruses evolve, spread, and potentially become pandemics. GISAID does so by overcoming the disincentives, hurdles, and restrictions that discouraged or prevented the sharing of influenza data prior to formal publication. The Initiative ensures that open access to data in GISAID is provided free of charge to everyone, provided individuals identify themselves and agree to uphold the GISAID sharing mechanism, governed through its Database Access Agreement. GISAID calls on all users to uphold scientific etiquette by acknowledging the originating laboratories providing the specimens and the submitting laboratories generating the sequence data, by ensuring fair exploitation of results derived from the data, and by agreeing that no restrictions shall be attached to data submitted to GISAID, so as to promote collaboration among researchers on the basis of open data sharing and respect for all rights and interests.
It is a platform, designed and developed by the National Informatics Centre (NIC), Government of India, that supports the Open Data initiative of Surat Municipal Corporation and is intended to publish government datasets for public use. The portal was created under the Software as a Service (SaaS) model of the Open Government Data (OGD) Platform, and thus provides avenues for reusing the city's datasets from different perspectives. The portal has numerous modules: (a) a Data Management System (DMS) through which departments contribute data catalogs, made available on the front-end website after a due approval process through a defined workflow; (b) a Content Management System (CMS) for managing and updating the functionalities and content types of the Open Government Data Portal of Surat City; (c) Visitor Relationship Management (VRM) for collating and disseminating viewer feedback on the data catalogs; and (d) a Communities module through which community users with common interests can interact and share their enthusiasm and views.
In 2003, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) at NIH established Data, Biosample, and Genetic Repositories to increase the impact of current and previously funded NIDDK studies by making their data and biospecimens available to the broader scientific community. These Repositories enable scientists not involved in the original study to test new hypotheses without any new data or biospecimen collection, and they provide the opportunity to pool data across several studies to increase the power of statistical analyses. In addition, most NIDDK-funded studies are collecting genetic biospecimens and carrying out high-throughput genotyping making it possible for other scientists to use Repository resources to match genotypes to phenotypes and to perform informative genetic analyses.
The Common Cold Project began in 2011 with the aim of creating, documenting, and archiving a database that combines final research data from 5 prospective viral-challenge studies that were conducted over the preceding 25 years: the British Cold Study (BCS); the three Pittsburgh Cold Studies (PCS1, PCS2, and PCS3); and the Pittsburgh Mind-Body Center Cold Study (PMBC). These unique studies assessed predictor (and hypothesized mediating) variables in healthy adults aged 18 to 55 years, experimentally exposed them to a virus that causes the common cold, and then monitored them for development of infection and signs and symptoms of illness.
The Central Neuroimaging Data Archive (CNDA) allows complex imaging data to be shared with investigators around the world through a simple web portal. The CNDA is an imaging informatics platform that provides secure data management services for Washington University investigators, including sharing of source DICOM imaging data with external investigators through a web portal, cnda.wustl.edu. The CNDA's services include automated archiving of imaging studies from all of the University's research scanners, automated quality control and image processing routines, and secure web-based access to acquired and post-processed data for data sharing, in compliance with NIH data sharing guidelines. The CNDA is currently accepting datasets only from Washington University affiliated investigators. Through this platform, the data is available for broad sharing with researchers both internal and external to Washington University. The CNDA overlaps with data in oasis-brains.org (https://www.re3data.org/repository/r3d100012182), but the CNDA is a larger data set.
The ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, and images. It serves as the backbone of data curation and, for most of its content, is a "dark archive" without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publishing data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open-source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
sciencedata.dk is a research data store provided by DTU, the Technical University of Denmark, aimed specifically at researchers and scientists at Danish academic institutions. The service is intended for working with and sharing active research data as well as for the safekeeping of large datasets. The data can be accessed and manipulated via a web interface, synchronization clients, file transfer clients, or the command line. The service is built from the ground up on open-source software: FreeBSD, ZFS, Apache, PHP, ownCloud/Nextcloud. DTU is actively engaged in community efforts to develop research-specific functionality for data stores. The servers are attached directly to the 10-gigabit backbone of "Forskningsnettet" (the national research and education network of Denmark), so upload and download speeds from Danish academic institutions are in principle comparable to those of an external USB hard drive. The store supports both private sharing and sharing via links / persistent URLs.
The Population Health Research Data Repository housed at MCHP is a comprehensive collection of administrative, registry, survey, and other data primarily relating to residents of Manitoba. It was developed to describe and explain patterns of health care and profiles of health and illness, facilitating inter-sectoral research in areas such as health care, education, and social services.
LONI's Image and Data Archive (IDA) is a secure data archiving system. The IDA uses a robust infrastructure to provide researchers with a flexible and simple interface for de-identifying, searching, retrieving, converting, and disseminating their biomedical data. With thousands of investigators across the globe and more than 21 million data downloads to date, the IDA ensures reliability with a fault-tolerant network comprising multiple switches, routers, and Internet connections to prevent system failure.
The team has established the CardiacAI Data Repository, which brings large amounts of Australian healthcare data together in a secure environment, with strict conditions for the use of these data and an appropriate level of oversight of research activities. The CardiacAI Data Repository collects de-identified EMR data about cardiovascular patients admitted to a group of urban and regional hospitals in NSW and links these with state-wide hospital and emergency department visit and mortality data, as well as mobile-health remote monitoring data.