  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) can be used to group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
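For example, the following illustrative queries (hypothetical, not drawn from the registry itself) show how these operators combine:
  climat* +ocean                  keywords beginning with "climat" that must also match "ocean"
  "machine learning" | genomics   the exact phrase, or the single term "genomics"
  biology -marine                 matches "biology" while excluding "marine"
  genomics~1                      tolerates an edit distance of one (e.g., a one-character typo)
  "data repository"~2             allows up to two words of slop between the phrase terms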
Found 105 result(s)
The CDAWeb data system enables improved display and coordinated analysis of multi-instrument, multi-mission databases of the kind whose analysis is critical to meeting the science objectives of the ISTP program and the InterAgency Consultative Group (IACG) Solar-Terrestrial Science Initiative. The system combines the client-server user interface technology of the World Wide Web with a powerful set of customized IDL routines to leverage the data format standards (CDF) and implementation guidelines adopted by ISTP and the IACG. The system can be used with any collection of data granules that follows the extended set of ISTP/IACG standards. CDAWeb is used both to support coordinated analysis of public and proprietary data and to provide better functional access to specific public data, such as the ISTP-precursor CDAW 9 database that is formatted to the ISTP/IACG standards. Many data sets are available through the Coordinated Data Analysis Web (CDAWeb) service, and the data coverage continues to grow. These are largely, but not exclusively, magnetospheric data and nearby solar wind data of the ISTP era (1992-present) at time resolutions of approximately a minute. The CDAWeb service provides graphical browsing, data subsetting, screen listings, file creation, and downloads (ASCII or CDF). Holdings include public data from current (1992-present) space physics missions (including Cluster, IMAGE, ISTP, FAST, IMP-8, SAMPEX and others) and from missions before 1992 (including IMP-8, ISIS 1/2, Alouette 2, Hawkeye and others), i.e., public data from all current and past space physics missions. CDAWeb is part of the "Space Physics Data Facility" (https://www.re3data.org/repository/r3d100010168).
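As a minimal, hedged sketch of working with a CDF file downloaded from CDAWeb (this assumes the third-party Python package cdflib; the file name is a placeholder, and "Epoch" is only the conventional ISTP name for the time variable, not guaranteed for every data set):

import cdflib  # third-party CDF reader (assumption: installed via pip install cdflib)

# Open a granule downloaded from CDAWeb; the file name is a hypothetical placeholder.
cdf = cdflib.CDF("example_cdaweb_granule.cdf")

# Read one variable by name. "Epoch" is the conventional ISTP/IACG time variable,
# but the actual variable names depend on the particular data set.
epoch = cdf.varget("Epoch")
print(len(epoch), "time records")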
A genome database for the genus Piroplasma. PiroplasmaDB is one of the pathogen databases housed under the NIAID-funded EuPathDB Bioinformatics Resource Center (BRC) umbrella.
The CEBS database houses data of interest to environmental health scientists. CEBS is a public resource and has received data deposited by academic, industrial, and governmental laboratories. CEBS is designed to display data in the context of biology and study design, and to permit data integration across studies for novel meta-analysis.
Patient-derived tumor xenograft (PDX) mouse models are an important oncology research platform for studying tumor evolution, drug response, and personalised medicine approaches. We have expanded to organoids and cell lines and are now called CancerModels.Org.
The CCHDO provides access to standard, well-described datasets from reference-quality repeat hydrography expeditions. It curates high-quality, full water column Conductivity-Temperature-Depth (CTD), hydrographic, carbon, and tracer data from over 2,500 cruises across roughly 30 countries. It is the official data center for CTD and water sample profile data from the Global Ocean Ship-Based Hydrographic Investigations Program (GO-SHIP), as well as for WOCE, US Hydro, and other high-quality repeat hydrography lines (e.g., SOCCOM, HOT, BATS, CARINA).
The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms. It is used by students, educators, and researchers all over the world as a primary source of machine learning data sets. As an indication of the impact of the archive, it has been cited over 1000 times.
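As a hedged illustration of how such data sets are commonly consumed, the sketch below loads the well-known Iris data file into pandas; treat the exact archive URL and column layout as assumptions based on the commonly cited form of that file:

import pandas as pd

# Assumed location and column order of the Iris data file on the UCI archive;
# other data sets use different paths and layouts.
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
columns = ["sepal_length", "sepal_width", "petal_length", "petal_width", "class"]

iris = pd.read_csv(url, header=None, names=columns)  # the raw file has no header row
print(iris["class"].value_counts())                  # expect three species, 50 rows each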
Earth System Research Laboratory (ESRL) Global Monitoring Division (GMD) provides data relating to climate change forcing and models, ozone depletion and recovery, and baseline air quality. Data are freely available so that the public, policy makers, and scientists can stay current with long-term atmospheric trends.
Brain Analysis Library of Spatial maps and Atlases (BALSA) is a database for hosting and sharing neuroimaging and neuroanatomical datasets for human and primate species. BALSA houses curated, user-created Study datasets (extensively analyzed neuroimaging data associated with published figures) and Reference datasets (data mapped to brain atlas surfaces and volumes in human and nonhuman primates as a general resource, e.g., published cortical parcellations).
The Lamont-Doherty Core Repository (LDCR) contains one of the world's most important collections of scientific samples from the deep sea. Sediment cores from every major ocean and sea are archived at the Core Repository. The collection contains approximately 72,000 meters of core composed of 9,700 piston cores; 7,000 trigger weight cores; and 2,000 other cores such as box, kasten, and large-diameter gravity cores. We also hold 4,000 dredge and grab samples, including a large collection of manganese nodules, many of which were recovered by submersibles. Over 100,000 residues are stored and are available for sampling where core material is expended. In addition to physical samples, a database of the Lamont core collection has been maintained for nearly 50 years and contains information on the geographic location of each collection site, core length, mineralogy and paleontology, lithology, and structure, and, more recently, the full text of megascopic descriptions.
The Materials Data Facility (MDF) is a set of data services built specifically to support materials science researchers. MDF consists of two synergistic services: data publication and data discovery (in development). The production-ready data publication service offers a scalable repository where materials scientists can publish, preserve, and share research data. The repository provides a focal point for the materials community, enabling publication and discovery of materials data of all sizes.
Merged with https://www.re3data.org/repository/r3d100012653. RDoCdb is an informatics platform for sharing human subjects data generated by investigators as part of the NIMH's Research Domain Criteria initiative and for supporting that initiative's aims. It also accepts and shares appropriate mental health data from other sources.
CottonGen is a new cotton community genomics, genetics and breeding database being developed to enable basic, translational and applied research in cotton. It is being built using the open-source Tripal database infrastructure. CottonGen consolidates and expands the data from CottonDB and the Cotton Marker Database, providing enhanced tools for easy querying, visualizing and downloading research data.
The Deep Carbon Observatory (DCO) is a global community of multi-disciplinary scientists unlocking the inner secrets of Earth through investigations into life, energy, and the fundamentally unique chemistry of carbon. Deep Carbon Observatory Digital Object Registry ("DCO-VIVO") is a centrally-managed digital object identification, object registration and metadata management service for the DCO. Digital object registration includes DCO-ID generation based on the global Handle System infrastructure and metadata collection using VIVO. Users will be able to deposit their data into the DCO Data Repository and have that data discoverable and accessible by others.
The Rolling Deck to Repository (R2R) Program provides a comprehensive shore-side data management program for a suite of routine underway geophysical, water column, and atmospheric sensor data collected on vessels of the academic research fleet. R2R also ensures data are submitted to the NOAA National Centers for Environmental Information for long-term preservation.
TrichDB provides integrated genomic resources for the eukaryotic protist pathogen Trichomonas vaginalis.
The National Trauma Data Bank® (NTDB) is the largest aggregation of trauma registry data ever assembled. The goal of the NTDB is to inform the medical community, the public, and decision makers about a wide variety of issues that characterize the current state of care for injured persons. Registry data collected by the NTDB are compiled annually and disseminated in the form of hospital benchmark reports, data quality reports, and research data sets that can be used by researchers. To gain access to NTDB data, researchers must submit requests through an online application process.
XNAT CENTRAL is a publicly accessible data-sharing portal at Washington University Medical School using XNAT software. XNAT provides neuroimaging data through a web interface and a customizable open source platform. XNAT facilitates data uploads and downloads for data sharing, processing, and organization. NOTICE: Central XNAT will be decommissioned on October 15, 2023. New project creation is no longer permitted.
AmphibiaWeb is an online system enabling any user to search and retrieve information relating to amphibian biology and conservation. This site was motivated by the global declines of amphibians, the study of which has been hindered by the lack of multidisciplinary studies and a lack of coordination in monitoring, field studies, and lab studies. We hope AmphibiaWeb will encourage a shared vision to collaboratively face the challenge of global amphibian declines and the conservation of remaining amphibians and their habitats.
The overall vision for the SPARC Portal is to accelerate autonomic neuroscience research and device development by providing access to digital resources that can be shared, cited, visualized, computed, and used for virtual experimentation.