  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (see the example queries below)
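The snippet below is a minimal sketch of how the operators listed above combine into query strings. The search terms themselves are made-up examples, not fields or records from this registry, and the exact matching behaviour depends on the search engine behind the portal.

    # Illustrative query strings built from the operators listed above.
    example_queries = [
        'genom*',                         # wildcard: genome, genomics, ...
        '"clinical trials"',              # exact phrase
        'cancer + imaging',               # AND (also the default between terms)
        'proteomics | metabolomics',      # OR
        'imaging - MRI',                  # NOT: imaging but not MRI
        '(cotton | maize) + statistics',  # parentheses set precedence
        'genomic~2',                      # fuzzy match within edit distance 2
        '"health survey"~3',              # phrase with a slop of up to 3 words
    ]
    for q in example_queries:
        print(q)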
Found 31 result(s)
The Scholarly Database (SDB) at Indiana University aims to serve researchers and practitioners interested in the analysis, modeling, and visualization of large-scale scholarly datasets. The online interface provides access to six datasets: MEDLINE papers, registered Clinical Trials, U.S. Patent and Trademark Office patents (USPTO), National Science Foundation (NSF) funding, National Institutes of Health (NIH) funding, and National Endowment for the Humanities funding – over 26 million records in total.
This interactive database provides complete access to statistics on seasonal cotton supply and use for each country and each region in the world, from 1920/21 to date. This project is part of ICAC’s efforts to improve the transparency of world cotton statistics.
The Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) is a nation-wide health initiative to improve the diagnosis and monitoring of children and adolescents with medical concerns. Our main objective is to establish a comprehensive database of reference intervals for blood test results in children and adolescents. CALIPER is designed to fill the gaps that currently exist in accurately interpreting blood test results with the ultimate goal of improving the care of children at SickKids and other children’s hospitals around the world.
One of the world’s largest banks of biological, psychosocial and clinical data on people suffering from mental health problems. The Signature center systematically collects biological, psychosocial and clinical indicators from patients admitted to the psychiatric emergency department at four points throughout their journey in the hospital: upon arrival at the emergency room (state of crisis), at the end of their hospital stay, and at the beginning and end of outpatient treatment. For all hospital clients who agree to participate, blood specimens are collected to measure metabolic, genetic, toxic and infectious biomarkers, saliva samples are collected to measure sex hormones, and hair samples are collected to measure stress hormones. Questionnaires have been selected to cover important dimensional aspects of mental illness such as Behaviour and Cognition (Psychosis, Depression, Anxiety, Impulsiveness, Aggression, Suicide, Addiction, Sleep), Socio-demographic Profile (Spiritual beliefs, Social functioning, Childhood experiences, Demographic, Family background) and Medical Data (Medication, Diagnosis, Long-term health, RAMQ data). As of May 2016, there were more than 1,150 participants, with 400 enrolled in the longitudinal follow-up.
The PAIN Repository is a recently funded NIH initiative, which has two components: an archive for already collected imaging data (Archived Repository), and a repository for structural and functional brain images and metadata acquired prospectively using standardized acquisition parameters (Standardized Repository) in healthy control subjects and patients with different types of chronic pain. The PAIN Repository provides the infrastructure for storage of standardized resting state functional, diffusion tensor imaging and structural brain imaging data and associated biological, physiological and behavioral metadata from multiple scanning sites, and provides tools to facilitate analysis of the resulting comprehensive data sets.
METLIN is the largest collection of MS/MS data, with spectra generated at multiple collision energies and in both positive and negative ionization modes. The data are generated on multiple instrument types, including SCIEX, Agilent, Bruker and Waters QTOF mass spectrometers.
ForestPlots.net is a web-accessible secure repository for forest plot inventories in South America, Africa and Asia. The database includes plot geographical information; location, taxonomic information and diameter measurements of trees inside each plot; and participants in plot establishment and re-measurement, including principal investigators, field assistants, students.
The N3C Data Enclave is a secure portal containing a very large and extensive set of harmonized COVID-19 clinical electronic health record (EHR) data. The data can be accessed through a secure cloud Enclave hosted by NCATS and cannot be downloaded due to regulatory controls. Broad access is available to investigators at institutions that sign a Data Use Agreement and submit Data Use Requests. The N3C is a unique open, reproducible, transparent, collaborative team science initiative to leverage sensitive clinical data to expedite COVID-19 discoveries and improve health outcomes.
The Virtual Research Environment (VRE) is an open-source data management platform that enables medical researchers to store, process and share data in compliance with the European Union (EU) General Data Protection Regulation (GDPR). The VRE addresses the present lack of digital research data infrastructures fulfilling the need for (a) data protection for sensitive data, (b) the capability to process complex data such as radiologic imaging, (c) flexibility to create custom processing workflows, and (d) access to high-performance computing. The platform promotes FAIR data principles and reduces barriers to biomedical research and innovation. The VRE offers a web portal with graphical and command-line interfaces; segregated data zones and organizational measures for lawful data onboarding; isolated computing environments where large teams can collaboratively process sensitive data privately; analytics workbench tools for processing, analyzing, and visualizing large datasets; automated ingestion of hospital data sources; project-specific data warehouses for structured storage and retrieval; graph databases to capture and query ontology-based metadata; provenance tracking; version control; and support for automated data extraction and indexing. The VRE is based on a modular and extendable state-of-the-art cloud computing framework and a RESTful API, and is supported by open developer meetings, hackathons, and comprehensive documentation for users, developers, and administrators. With its concerted technical and organizational measures, the VRE can be adopted by other research communities, facilitating the development of a co-evolving, interoperable platform ecosystem with an active research community.
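The entry above only states that the VRE exposes a RESTful API; it does not document the endpoints. The sketch below shows the general shape of such a call with Python's requests library, where the base URL, endpoint path and bearer-token scheme are hypothetical placeholders rather than the actual VRE API.

    import requests

    BASE_URL = "https://vre.example.org/api"    # hypothetical host
    TOKEN = "..."                               # token obtained from the platform

    # Hypothetical endpoint listing the datasets of a project.
    resp = requests.get(
        f"{BASE_URL}/projects/my-project/datasets",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for dataset in resp.json():   # assumes the endpoint returns a JSON list
        print(dataset)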
The CERIC Data Portal allows users to consult and manage data related to experiments carried out at CERIC (Central European Research Infrastructure Consortium) partner facilities. The data made available include scientific datasets collected during experiments, experiment proposals, samples used, and any related publications. Users can search for data based on related metadata (both their own data and other people's public data).
The Africa Health Research Institute (AHRI) has published its updated analytical datasets for 2016. The datasets cover socio-economic, education and employment information for individuals and households in AHRI’s population research area in rural northern KwaZulu-Natal. The datasets also include details on the migration patterns of the individuals and households who migrated into and out of the surveillance area as well as data on probable causes of death for individuals who passed away. Data collection for the 2016 individual interviews – which involves a dried blood spot sample being taken – is still in progress, and therefore datasets on HIV status and General Health only go up to 2015 for now. Over the past 16 years researchers have developed an extensive longitudinal database of demographic, social, economic, clinical and laboratory information about people over the age of 15 living in the AHRI population research area. During this time researchers have followed more than 160 000 people, of which 92 000 are still in the programme.
The Central Neuroimaging Data Archive (CNDA) allows complex imaging data to be shared with investigators around the world through a simple web portal. The CNDA is an imaging informatics platform that provides secure data management services for Washington University investigators, including sharing of source DICOM imaging data with external investigators through a web portal, cnda.wustl.edu. The CNDA’s services include automated archiving of imaging studies from all of the University’s research scanners, automated quality control and image processing routines, and secure web-based access to acquired and post-processed data for data sharing, in compliance with NIH data sharing guidelines. The CNDA is currently accepting datasets only from Washington University affiliated investigators. Through this platform, the data is available for broad sharing with researchers both internal and external to Washington University. The CNDA overlaps with data in oasis-brains.org (https://www.re3data.org/repository/r3d100012182), but the CNDA is a larger data set.
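Since the CNDA shares source DICOM imaging data, a minimal sketch of inspecting a retrieved DICOM file may be useful; it assumes the pydicom package and a placeholder file name, neither of which is prescribed by the CNDA itself.

    import pydicom

    # Placeholder file name for a DICOM object downloaded from an imaging archive.
    ds = pydicom.dcmread("example_scan.dcm")
    print(ds.Modality)             # e.g. "MR"
    print(ds.StudyDate)            # acquisition date
    print(ds.pixel_array.shape)    # image dimensions (requires numpy)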
From April 2020 to March 2023, the Covid-19 Immunity Task Force (CITF) supported 120 studies to generate knowledge about immunity to SARS-CoV-2. The subjects addressed by these studies include the extent of SARS-CoV-2 infection in Canada, the nature of immunity, vaccine effectiveness and safety, and the need for booster shots among different communities and priority populations in Canada. The CITF Databank was developed to further enhance the impact of CITF-funded studies by allowing additional research using the data they collected. The CITF Databank centralizes and harmonizes individual-level data from CITF-funded studies that have met all ethical requirements to deposit data in the CITF Databank and have completed a data sharing agreement. The CITF Databank is an internationally unique resource for sharing epidemiological and laboratory data from studies about SARS-CoV-2 immunity in different populations. The types of research that are possible with data from the CITF Databank include observational epidemiological studies, mathematical modelling research, and comparative evaluation of surveillance and laboratory methods.
The ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich’s Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
sciencedata.dk is a research data store provided by DTU, the Technical University of Denmark, specifically aimed at researchers and scientists at Danish academic institutions. The service is intended for working with and sharing active research data as well as for the safekeeping of large datasets. The data can be accessed and manipulated via a web interface, synchronization clients, file transfer clients or the command line. The service is built on and with open-source software from the ground up: FreeBSD, ZFS, Apache, PHP, ownCloud/Nextcloud. DTU is actively engaged in community efforts on developing research-specific functionality for data stores. The servers are attached directly to the 10-gigabit backbone of "Forskningsnettet" (the National Research and Education Network of Denmark), so upload and download speeds from Danish academic institutions are in principle comparable to those of an external USB hard drive. The store allows private sharing as well as sharing via links / persistent URLs.
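Because the service is built on ownCloud/Nextcloud, file-transfer clients typically talk to it over WebDAV. The sketch below shows an upload with Python's requests library; the WebDAV path and credential scheme are assumptions, so the actual endpoint should be taken from the sciencedata.dk documentation.

    import requests

    # Hypothetical WebDAV destination path on the store.
    webdav_url = "https://sciencedata.dk/files/results.csv"

    with open("results.csv", "rb") as fh:
        resp = requests.put(webdav_url, data=fh, auth=("username", "password"))
    resp.raise_for_status()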
INDI was formed as a next-generation effort of the 1000 Functional Connectomes Project (FCP). INDI aims to provide a model for the broader imaging community while simultaneously creating a public dataset capable of dwarfing those that most groups could obtain individually.
The Survey of Health, Ageing and Retirement in Europe (SHARE) is a multidisciplinary and cross-national panel database of micro data on health, socio-economic status and social and family networks of more than 140,000 individuals (approximately 530,000 interviews) aged 50 or over from 28 European countries and Israel.
The Cancer Cell Line Encyclopedia (CCLE) project is a collaboration between the Broad Institute and the Novartis Institutes for Biomedical Research and its Genomics Institute of the Novartis Research Foundation to conduct a detailed genetic and pharmacologic characterization of a large panel of human cancer models, to develop integrated computational analyses that link distinct pharmacologic vulnerabilities to genomic patterns, and to translate cell line integrative genomics into cancer patient stratification. The CCLE provides public access to genomic data, analysis and visualization for about 1,000 cell lines.
The FREEBIRD website aims to facilitate data sharing in the area of injury and emergency research in a timely and responsible manner. It has been launched by providing open access to anonymised data on over 30,000 injured patients (the CRASH-1 and CRASH-2 trials).
Project Tycho is a repository for global health data, particularly disease surveillance data. Project Tycho currently includes data for 92 notifiable disease conditions in the US, and up to three dengue-related conditions for 99 countries. Project Tycho has compiled data from reputable sources such as the US Centers for Disease Control and Prevention, the World Health Organization, and national health agencies around the world. Project Tycho datasets are highly standardized and have rich metadata to improve access, interoperability, and reuse of global health data for research and innovation.
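Because Project Tycho datasets are standardized tables, a downloaded file can be summarized directly with pandas. In the sketch below, the file name and the column names (PeriodStartDate, ConditionName, CountValue) are assumptions based on the standardized layout and should be checked against the actual file.

    import pandas as pd

    # Placeholder file name for a dataset downloaded from Project Tycho.
    df = pd.read_csv("tycho_dataset.csv", parse_dates=["PeriodStartDate"])

    # Sum reported case counts per year and condition.
    yearly = (
        df.assign(Year=df["PeriodStartDate"].dt.year)
          .groupby(["Year", "ConditionName"])["CountValue"]
          .sum()
    )
    print(yearly.head())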