Filter: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
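For illustration, a few hypothetical queries combining the operators above (the search terms are invented examples, not actual registry keywords):
  • genom* +cancer matches records containing a word starting with "genom" AND the word "cancer"
  • "data sharing" | repositor* matches the exact phrase "data sharing" OR any word starting with "repositor"
  • (malaria | diabetes) -clinical matches records mentioning malaria or diabetes but not the word "clinical"
  • surveilance~1 matches "surveillance" despite the misspelling, because an edit distance of 1 is allowed
  • "birth cohort"~2 matches the phrase with up to two intervening words (slop 2)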
Found 224 result(s)
The WFCC Global Catalogue of Microorganisms (GCM) aims to be a robust, reliable and user-friendly system that helps culture collections manage, disseminate and share information related to their holdings. It also provides a uniform interface through which the scientific and industrial communities can access comprehensive microbial resource information.
The CancerData site is an effort of the Medical Informatics and Knowledge Engineering team (MIKE for short) of Maastro Clinic, Maastricht, The Netherlands. Our activities in the field of medical image analysis and data modelling are visible in a number of projects we are running. CancerData offers several datasets, grouped into collections that can be public or private. You can search for public datasets in the NBIA (National Biomedical Imaging Archive) image archives without logging in.
The ICD serves as the international standard for diagnostic classification for general epidemiological purposes, many health management purposes, and clinical use. Its resources include the analysis of the general health situation of different population groups; monitoring of the incidence and prevalence of diseases in relation to the characteristics of the individuals affected; reimbursement; resource allocation; quality; and guidelines. The records provide the basis for the compilation of national mortality and morbidity statistics, and enable the storage and retrieval of diagnostic information for clinical, epidemiological and quality purposes.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. GRIIDC is the vehicle through which GoMRI is fulfilling this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico Ecosystem.
Research Data Centres offer secure access to detailed microdata from Statistics Canada's surveys, to Canadian census data, and to an increasing number of administrative data sets. The search engine was designed to help you find more easily which dataset among all the surveys available in the RDCs best suits your research needs.
The PLANKTON*NET data provider at the Alfred Wegener Institute for Polar and Marine Research is an open access repository for plankton-related information. It covers all types of phytoplankton and zooplankton from marine and freshwater areas. PLANKTON*NET's greatest strength is its comprehensiveness: for the different taxa, both image information and taxonomic descriptions can be archived. PLANKTON*NET also contains a glossary with accompanying images to illustrate the term definitions. PLANKTON*NET therefore presents a vital tool for the preservation of historic data sets as well as the archival of current research results. Because interoperability with international biodiversity data providers (e.g. GBIF) is one of our aims, the architecture behind the new planktonnet@awi repository is observation centric and allows for multiple assignment of assets (images, references, animations, etc.) to any given observation. In addition, images can be grouped in sets and/or assigned tags to satisfy user-specific needs. Sets (and respective images) of relevance to the scientific community and/or general public have been assigned a persistent digital object identifier (DOI) for the purpose of long-term preservation (e.g. the set "Plankton*Net celebrates 50 years of Roman Treaties", handle: 10013/de.awi.planktonnet.set.495).
INDEPTH is a global network of research centres that conduct longitudinal health and demographic evaluation of populations in low- and middle-income countries (LMICs). INDEPTH aims to strengthen global capacity for Health and Demographic Surveillance Systems (HDSSs) and to mount multi-site research to guide health priorities and policies in LMICs, based on up-to-date scientific evidence. The data collected by the INDEPTH Network members constitute a valuable resource of population and health data for LMICs. This repository aims to make well-documented, anonymised longitudinal microdata from these centres available to data users.
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously-published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
The Cognitive Function and Ageing Studies (CFAS) are population-based studies of individuals aged 65 years and over living in the community, including institutions; CFAS is the only large multi-centre population-based study in the UK that has reached sufficient maturity. There are three main studies within the CFAS group. MRC CFAS, the original study, began in 1989, with three of its sites providing a parent subset for comparison two decades later with CFAS II (2008 onwards). Subsequently, another CFAS study, CFAS Wales, began in 2011.
Exposures in the period from conception to early childhood - including fetal growth, cell division, and organ functioning - may have long-lasting impact on health and disease susceptibility. To investigate these issues, the Danish National Birth Cohort (Better health in generations) was established. A large cohort of pregnant women with long-term follow-up of the offspring was the obvious choice, because many of the exposures of interest cannot be reconstructed with sufficient validity back in time. The study needed to be large, and the aim was to recruit 100,000 women early in pregnancy and to continue follow-up for decades. Exposure information was collected by computer-assisted telephone interviews with the women twice during pregnancy and when their children were six and 18 months old. Participants were also asked to fill in a self-administered food frequency questionnaire in mid-pregnancy. Furthermore, a biological bank has been set up with blood taken from the mother twice during pregnancy and blood from the umbilical cord taken shortly after birth.
The Diabetes Study of Northern California (DISTANCE) conducts epidemiological and health services research in diabetes among a large, multiethnic cohort of patients in a large, integrated health care delivery system.
Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but can't spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
The objective of this Research Coordination Network project is to develop an international network of researchers who use genetic methodologies to study the ecology and evolution of marine organisms in the Indo-Pacific, so that they can share data, ideas and methods. The tropical Indian and Pacific Oceans encompass the largest biogeographic region on the planet, the Indo-Pacific.
The PAIN Repository is a recently funded NIH initiative, which has two components: an archive for already collected imaging data (Archived Repository), and a repository for structural and functional brain images and metadata acquired prospectively using standardized acquisition parameters (Standardized Repository) in healthy control subjects and patients with different types of chronic pain. The PAIN Repository provides the infrastructure for storage of standardized resting state functional, diffusion tensor imaging and structural brain imaging data and associated biological, physiological and behavioral metadata from multiple scanning sites, and provides tools to facilitate analysis of the resulting comprehensive data sets.
The WorldWide Antimalarial Resistance Network (WWARN) is a collaborative platform generating innovative resources and reliable evidence to inform the malaria community on the factors affecting the efficacy of antimalarial medicines. Access to data is provided through diverse Tools and Resources: WWARN Explorer, Molecular Surveyor K13 Methodology, Molecular Surveyor pfmdr1 & pfcrt, Molecular Surveyor dhfr & dhps.
The Breast Cancer Surveillance Consortium (BCSC) is a research resource for studies designed to assess the delivery and quality of breast cancer screening and related patient outcomes in the United States. The BCSC is a collaborative network of seven mammography registries with linkages to tumor and/or pathology registries. The network is supported by a central Statistical Coordinating Center.
The National Sleep Research Resource (NSRR) offers free web access to large collections of de-identified physiological signals and clinical data elements collected in well-characterized research cohorts and clinical trials.
The National Cancer Data Base (NCDB), a joint program of the Commission on Cancer (CoC) of the American College of Surgeons (ACoS) and the American Cancer Society (ACS), is a nationwide oncology outcomes database for more than 1,500 Commission-accredited cancer programs in the United States and Puerto Rico. Some 70 percent of all newly diagnosed cases of cancer in the United States are captured at the institutional level and reported to the NCDB. The NCDB, begun in 1989, now contains approximately 29 million records from hospital cancer registries across the United States. Data on all types of cancer are tracked and analyzed. These data are used to explore trends in cancer care, to create regional and state benchmarks for participating hospitals, and to serve as the basis for quality improvement.
The data in the U of M’s Clinical Data Repository comes from the electronic health records (EHRs) of more than 2 million patients seen at 8 hospitals and more than 40 clinics. For each patient, data is available regarding the patient's demographics (age, gender, language, etc.), medical history, problem list, allergies, immunizations, outpatient vitals, diagnoses, procedures, medications, lab tests, visit locations, providers, provider specialties, and more.
The taxonomically broad EST database TBestDB serves as a repository for EST data from a wide range of eukaryotes, many of which have not previously been thoroughly investigated. Most of the data contained in TBestDB has been generated by the labs of the Protist EST Program (PEP), a large interdisciplinary research project involving six universities across Canada. PEP aims at the exploration of the diversity of eukaryotic genomes in a systematic, comprehensive and integrated way. The focus is on unicellular microbial eukaryotes, known as protists. Protistan eukaryotes comprise more than a dozen major lineages that, together, encompass more evolutionary, ecological and probably biochemical diversity than the multicellular kingdoms of animals, plants and fungi combined. PEP is a unique endeavor in that it is the first phylogenetically broad genomic investigation of protists.
The FREEBIRD website aims to facilitate data sharing in the area of injury and emergency research in a timely and responsible manner. It has been launched by providing open access to anonymised data on over 30,000 injured patients (the CRASH-1 and CRASH-2 trials).
The RAMEDIS system is a platform-independent, web-based information system for rare metabolic diseases based on filed case reports. It was developed in close cooperation with clinical partners to allow them to collect information on rare metabolic diseases in extensive detail, e.g. about occurring symptoms, laboratory findings, therapy and molecular data.
The CARMEN pilot project seeks to create a virtual laboratory for experimental neurophysiology, enabling the sharing and collaborative exploitation of data, analysis code and expertise. This study by the DCC contributes to an understanding of the data curation requirements of the eScience community, through its extended observation of the CARMEN neurophysiology community’s specification and selection of solutions for the organisation, access and curation of digital research output.
GESDB is a platform for sharing simulation data and discussing simulation techniques for human genetic studies. The database contains simulation scripts, simulated data, and documentation from published manuscripts. The forum provides a platform for Q&A about the simulated data and for exchanging simulation ideas. GESDB aims to promote transparency and efficiency in simulation studies for human genetics.
SSDA Dataverse is one of the archiving options offered by the Social Science Data Archives (SSDA); data can also be archived by SSDA itself (http://dataarchives.ss.ucla.edu/index.html), by ICPSR, by the UCLA Library, or by the California Digital Library. The Social Science Data Archives serves the UCLA campus as an archive of faculty and graduate student survey research. We provide long-term storage of data files and documentation, and we ensure that the data are usable in the future by migrating files to new operating systems. We follow government standards and archival best practices. The mission of the Social Science Data Archive has been and continues to be to provide a foundation for social science research, with faculty support throughout an entire research project involving original data collection or the reuse of publicly available studies. Data Archive staff and researchers work as partners throughout all stages of the research process: beginning when a hypothesis or area of study is being developed, during grant and funding activities, while data collection and/or analysis is ongoing, and finally in the long-term preservation of research results. Our role is to provide a collaborative environment where the focus is on understanding the nature and scope of the research approach and on managing research output throughout the entire life cycle of the project. Instructional support, especially support that links research with instruction, is also a mainstay of operations.