
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
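The operators above can be combined into a single query string; a hypothetical example (the search terms are illustrative only, not taken from this page):

```
(ocean* | "marine geology") +data -medical
```

Here `ocean*` matches any term starting with "ocean", the quoted phrase is matched verbatim, `|` joins the alternatives, `+data` requires "data", and `-medical` excludes results containing "medical". Similarly, `geophisics~1` would match "geophysics" within an edit distance of 1, and `"data repository"~2` would tolerate up to two intervening words (slop).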
Found 36 result(s)
The Scholarly Database (SDB) at Indiana University aims to serve researchers and practitioners interested in the analysis, modeling, and visualization of large-scale scholarly datasets. The online interface provides access to six datasets: MEDLINE papers, registered Clinical Trials, U.S. Patent and Trademark Office patents (USPTO), National Science Foundation (NSF) funding, National Institutes of Health (NIH) funding, and National Endowment for the Humanities funding – over 26 million records in total.
(Note: intrepidbio.com has expired.) Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but can’t spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
Brain Image Library (BIL) is an NIH-funded public resource serving the neuroscience community by providing a persistent centralized repository for brain microscopy data. Data scope of the BIL archive includes whole brain microscopy image datasets and their accompanying secondary data such as neuron morphologies, targeted microscope-enabled experiments including connectivity between cells and spatial transcriptomics, and other historical collections of value to the community. The BIL Analysis Ecosystem provides an integrated computational and visualization system to explore, visualize, and access BIL data without having to download it.
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR’s use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening access to these data.
(Note: this repository is no longer available.) The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation, and development by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
The Plant Metabolic Network (PMN) provides a broad network of plant metabolic pathway databases that contain curated information from the literature and computational analyses about the genes, enzymes, compounds, reactions, and pathways involved in primary and secondary metabolism in plants. The PMN currently houses one multi-species reference database called PlantCyc and 22 species/taxon-specific databases.
The CancerData site is an effort of the Medical Informatics and Knowledge Engineering team (MIKE for short) of Maastro Clinic, Maastricht, The Netherlands. Our activities in the field of medical image analysis and data modelling are visible in a number of projects we are running. CancerData offers several datasets, grouped in collections, which can be public or private. You can search for public datasets in the NBIA (National Biomedical Imaging Archive) image archives without logging in.
(Note: as of 20 March 2017, SumsDB was offline; for more information and an archive, see http://brainvis.wustl.edu/sumsdb/) SumsDB (the Surface Management System DataBase) is a repository of brain-mapping data (surfaces & volumes; structural & functional data) from many laboratories.
(Note: on June 1, 2020, the Academic Seismic Portal repositories at UTIG were merged into a single collection hosted at Lamont-Doherty Earth Observatory. Content here was removed July 1, 2020. Visit the Academic Seismic Portal @LDEO: https://www.marine-geo.org/collections/#!/collection/Seismic#summary (https://www.re3data.org/repository/r3d100010644))
THIN is a medical data collection scheme that collects anonymised patient data from its members through the healthcare software Vision. The UK Primary Care database contains longitudinal patient records for approximately 6% of the UK Population. The anonymised data collection, which goes back to 1994, is nationally representative of the UK population.
LIAG's Geophysics Information System (FIS GP) stores and provides geophysical measurements and evaluations from LIAG and its partners. The architecture of the overall system divides it into a universal part (superstructure) and several subsystems dedicated to geophysical methods (borehole geophysics, gravimetry, magnetics, 1D/2D geoelectrics, underground temperatures, seismics, VSP, helicopter geophysics, and rock physics). Further subsystems are planned.
The Research Data Centre (Forschungsdatenzentrum, FDZ) at the Institute for Educational Quality Improvement (Institut zur Qualitätsentwicklung im Bildungswesen, IQB) archives and documents data sets resulting from national and international assessment studies (such as DESI, PIRLS, PISA, IQB-Bildungstrends). Moreover, the FDZ makes these data sets available for re-analysis and secondary analysis. Members of the scientific community can apply for access to the data sets archived at the FDZ.
The Tropical Data Hub (TDH) Research Data repository makes data collections and datasets generated by James Cook University researchers searchable and accessible. This increases their visibility and facilitates sharing and collaboration both within JCU and externally. Services provided include archival storage, access controls (open access preferred), metadata review and DOI minting.
EMSC collects real-time parametric data (source parameters and phase picks) provided by 65 seismological networks of the Euro-Med region. These data are provided to the EMSC either by email or via QWIDS (Quake Watch Information Distribution System, developed by ISTI). The collected data are automatically archived in a database, made available via an autoDRM, and displayed on the web site. They are also automatically merged to produce automatic locations, which are sent to several seismological institutes to enable quick moment tensor determination.
(Note: all user content from this site has been deleted. Visit the SeedMeLab project (https://seedmelab.org/) as a new option for data hosting.) SeedMe is the result of a decade of onerous experience in preparing and sharing visualization results from supercomputing simulations with many researchers at different geographic locations using different operating systems. It has been a labor-intensive process, unsupported by useful tools and procedures for sharing information. SeedMe provides secure and easy-to-use functionality for efficiently and conveniently sharing results, aiming to create transformative impact across many scientific domains.
The NCEAS Data Repository contains information about the research data sets collected and collated as part of NCEAS' funded activities. Information in the NCEAS Data Repository is concurrently available through the Knowledge Network for Biocomplexity (KNB), an international data repository. A number of the data sets were synthesized from multiple data sources that originated from the efforts of many contributors, while others originated from a single source. Datasets can be found in the KNB repository (https://knb.ecoinformatics.org/data, creator=NCEAS).
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich’s Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest senses. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly-distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
Nuclear Data Services contains atomic, molecular, and nuclear data sets for the development and maintenance of nuclear technologies. It includes energy-dependent reaction probabilities (cross sections), the energy and angular distributions of reaction products for many combinations of target and projectile, and the atomic and nuclear properties of excited states and their radioactive decay data. Its main concern is providing the data required to design a modern nuclear reactor for electricity production. Approximately 11.5 million nuclear data points have been measured and compiled into computerized form.
Under the World Climate Research Programme (WCRP), the Working Group on Coupled Modelling (WGCM) established the Coupled Model Intercomparison Project (CMIP) as a standard experimental protocol for studying the output of coupled atmosphere-ocean general circulation models (AOGCMs). CMIP provides a community-based infrastructure in support of climate model diagnosis, validation, intercomparison, documentation, and data access. This framework enables a diverse community of scientists to analyze GCMs in a systematic fashion, a process which serves to facilitate model improvement. Virtually the entire international climate modeling community has participated in this project since its inception in 1995. The Program for Climate Model Diagnosis and Intercomparison (PCMDI) archives much of the CMIP data and provides other support for CMIP. We are now beginning the process towards the IPCC Fifth Assessment Report and with it the CMIP5 intercomparison activity. The CMIP5 (CMIP Phase 5) experiment design has been finalized with the following suites of experiments: (I) decadal hindcast and prediction simulations; (II) "long-term" simulations; and (III) "atmosphere-only" (prescribed SST) simulations for especially computationally demanding models. The new ESGF peer-to-peer (P2P) enterprise system (http://pcmdi9.llnl.gov) is now the official site for CMIP5 model output. The old gateway (http://pcmdi3.llnl.gov) is deprecated and now shut down permanently.