Available filters: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate priority (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (see the examples below)
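The operators above follow a simple query-string grammar. The illustrative queries below use made-up search terms and are meant only to show how the operators combine.

```python
# Illustrative query strings for the search syntax above.
# The search terms themselves are made-up examples.
example_queries = [
    'geo*',                      # wildcard: matches geology, geoscience, ...
    '"particle physics"',        # quoted phrase search
    'climate + ocean',           # AND (also the default between terms)
    'climate | ocean',           # OR
    'climate - model',           # NOT: results with "climate" but without "model"
    '(climate | ocean) + data',  # parentheses set priority
    'seismolgy~2',               # fuzzy term: tolerates an edit distance of 2
    '"earthquake data"~3',       # phrase with a slop of up to 3 positions
]

for q in example_queries:
    print(q)
```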
Found 199 result(s)
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries and linked data. It has been developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples, the repository can hold >50k entries, which can be searched via SPARQL endpoints and pre-indexed key fields. The Chempound architecture is general and adaptable to other fields of data-rich science. The Chempound software is hosted at http://bitbucket.org/chempound and is available under the Apache License, Version 2.0.
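As a rough illustration of how such an RDF store can be queried through a SPARQL endpoint, the sketch below uses the SPARQLWrapper Python library. The endpoint URL and the title predicate are placeholders for illustration only, not Chempound's actual deployment or data model.

```python
# Minimal sketch of querying an RDF repository via a SPARQL endpoint.
# The endpoint URL and vocabulary are hypothetical placeholders, not
# Chempound's actual deployment or schema.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/chempound/sparql")  # placeholder endpoint
sparql.setQuery("""
    SELECT ?entry ?title
    WHERE { ?entry <http://purl.org/dc/terms/title> ?title . }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for b in results["results"]["bindings"]:
    print(b["entry"]["value"], "-", b["title"]["value"])
```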
The RRUFF Project is creating a complete set of high-quality spectral data from well-characterized minerals and is developing the technology to share this information with the world. The collected data provide a standard for mineralogists, geoscientists, gemologists and the general public for the identification of minerals, both on Earth and in planetary exploration. Electron microprobe analysis is used to determine the chemistry of each mineral.
The Ningaloo Atlas was created in response to the need for more comprehensive and accessible environmental and socio-economic information on the greater Ningaloo region. As such, the Ningaloo Atlas is a web portal not only for accessing and sharing information, but also for celebrating and promoting the biodiversity, heritage, value, and way of life of the greater Ningaloo region.
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
>>>!!!<<< The repository is no longer available. >>>!!!<<< Here you will find a collection of atomic microstructures that have been built by the atomic modeling community. Feel free to download any of these and use them in your own scientific explorations. The focus of this cyberinfrastructure is to advance the field of atomic-scale modeling of materials by acting as a forum for disseminating new atomistic-scale methodologies, educating non-experts and the next generation of computational materials scientists, and serving as a bridge between the atomistic and complementary (electronic structure, mesoscale) modeling communities.
The OpenMadrigal project seeks to develop and support an on-line database for geospace data. The project has been led by MIT Haystack Observatory since 1980, but now has active support from Jicamarca Observatory and other community members. Madrigal is a robust, web-based system capable of managing and serving archival and real-time data, in a variety of formats, from a wide range of ground-based instruments. Madrigal is installed at a number of sites around the world. Data at each Madrigal site is locally controlled and can be updated at any time, but the metadata shared between Madrigal sites allows all Madrigal sites to be searched at once from any one of them. Data is local; metadata is shared.
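The "data is local; metadata is shared" arrangement can be sketched in a few lines of Python. This is only a conceptual illustration of the federation pattern described above; the class names, fields and paths are invented and do not correspond to the Madrigal software or its API.

```python
# Conceptual illustration of the "data is local; metadata is shared" pattern.
# Class names, fields and paths are invented for this sketch; they are not
# part of the Madrigal software or its API.
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    experiments: dict = field(default_factory=dict)  # experiment id -> local file paths

    def shared_metadata(self):
        # Only lightweight descriptions leave the site; the data files stay local.
        return [{"site": self.name, "experiment": eid} for eid in self.experiments]

def search_all_sites(sites, predicate):
    """Search the pooled metadata of every site from any single site."""
    return [m for s in sites for m in s.shared_metadata() if predicate(m)]

jicamarca = Site("Jicamarca", {"isr_2021_001": ["/local/isr_2021_001.hdf5"]})
haystack = Site("Haystack", {"misa_2022_007": ["/local/misa_2022_007.hdf5"]})

print(search_all_sites([jicamarca, haystack],
                       lambda m: m["experiment"].startswith("isr")))
```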
In the digital collections, you can view the digitized prints from the holdings of the ULB Düsseldorf free of charge. In its special collections, the ULB brings together rare, valuable and unique holdings that are kept as ensembles. Deposits, unpublished works, donations, acquisitions of rare books, etc. were and are an important source of the library's constant growth. These treasures and specialties - beyond their academic value - also contribute substantially to the profile of the ULB.
The Durham High Energy Physics Database (HEPData), formerly the Durham HEPData Project, has been built up over the past four decades as a unique open-access repository for scattering data from experimental particle physics. It currently comprises the data points from plots and tables related to several thousand publications, including those from the Large Hadron Collider (LHC). For more than 25 years the project has compiled the Reactions Database, containing what can be loosely described as cross sections from HEP scattering experiments. The data comprise total and differential cross sections, structure functions, fragmentation functions, distributions of jet measures, polarisations, etc., from a wide range of interactions. The new HEPData site (hepdata.net) offers new functionality for data providers and data consumers, as well as a submission interface. HEPData is operated by CERN and the IPPP at Durham University and is based on the Invenio digital library framework.
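For programmatic access, HEPData records can be retrieved in machine-readable form. The sketch below assumes that the record endpoint on hepdata.net accepts a JSON format parameter; the INSPIRE record ID is a placeholder, so treat this as an illustration rather than a verified recipe.

```python
# Sketch of fetching a HEPData record in machine-readable form.
# Assumes the record endpoint accepts "format=json"; the INSPIRE
# record ID is a placeholder.
import requests

record_id = "ins1234567"  # placeholder INSPIRE ID
resp = requests.get(f"https://www.hepdata.net/record/{record_id}",
                    params={"format": "json"}, timeout=30)
resp.raise_for_status()
record = resp.json()
print(sorted(record.keys()))  # inspect the structure before relying on specific fields
```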
We present the MUSE-Wide survey, a blind, 3D spectroscopic survey in the CANDELS/GOODS-S and CANDELS/COSMOS regions. Each MUSE-Wide pointing has a depth of 1 hour and hence targets more extreme and more luminous objects over 10 times the area of the MUSE-Deep fields (Bacon et al. 2017). The legacy value of MUSE-Wide lies in providing "spectroscopy of everything" without photometric pre-selection. We describe the data reduction, post-processing and PSF characterization of the first 44 CANDELS/GOODS-S MUSE-Wide pointings released with this publication. Using a 3D matched-filtering approach, we detected 1,602 emission line sources, including 479 Lyman-α (Lya) emitting galaxies with redshifts 2.9≲z≲6.3. We cross-match the emission line sources to existing photometric catalogs, finding almost complete agreement in redshifts and stellar masses for our low-redshift (z < 1.5) emitters. At high redshift, we only find ~55% matches to photometric catalogs. We encounter a higher outlier rate and a systematic offset of Δz≃0.2 when comparing our MUSE redshifts with photometric redshifts. Cross-matching the emission line sources with X-ray catalogs from the Chandra Deep Field South, we find 127 matches, including 10 objects with no prior spectroscopic identification. Stacking X-ray images centered on our Lya emitters yielded no signal; the Lya population is not dominated by even low-luminosity AGN. A total of 9,205 photometrically selected objects from the CANDELS survey lie in the MUSE-Wide footprint, for which we provide optimally extracted 1D spectra. We are able to determine the spectroscopic redshift of 98% of the 772 photometrically selected galaxies brighter than 24th magnitude in the F775W band. All the data in the first data release - datacubes, catalogs, extracted spectra, maps - are available at the website.
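Positional cross-matching of the kind described above (emission-line sources against photometric or X-ray catalogues) is commonly done with astropy's SkyCoord. The sketch below uses invented coordinates and a 1-arcsecond tolerance; it is not the MUSE-Wide catalogues or the matching criteria of the paper.

```python
# Generic positional cross-match with astropy, as an illustration of the
# catalogue matching described above. Coordinates and the 1" tolerance are
# made up; they are not the MUSE-Wide catalogues or the authors' criteria.
import astropy.units as u
from astropy.coordinates import SkyCoord

emitters = SkyCoord(ra=[53.12, 53.15] * u.deg, dec=[-27.80, -27.78] * u.deg)
photcat = SkyCoord(ra=[53.1201, 53.20] * u.deg, dec=[-27.8001, -27.70] * u.deg)

idx, sep2d, _ = emitters.match_to_catalog_sky(photcat)
matched = sep2d < 1.0 * u.arcsec  # keep only pairs closer than 1 arcsecond
for i, (j, ok) in enumerate(zip(idx, matched)):
    print(f"emitter {i} -> catalogue {j}" if ok else f"emitter {i}: no match")
```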
Yoda publishes research data on behalf of researchers who are affiliated with Utrecht University, its research institutes and consortia for which it acts as a coordinating body. Data packages are not limited to a particular field of research or license. Yoda publishes data packages via DataCite. To find data publications, use https://public.yoda.uu.nl/ or the DataCite search engine: https://search.datacite.org/repositories/delft.uu
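Because the packages are registered with DataCite, they can also be found through DataCite's public REST API (api.datacite.org). In the sketch below, the free-text query string and page size are only examples.

```python
# Sketch of searching DataCite-registered data packages via the public
# DataCite REST API. The query string and page size are only examples.
import requests

resp = requests.get("https://api.datacite.org/dois",
                    params={"query": "Utrecht University Yoda", "page[size]": 5},
                    timeout=30)
resp.raise_for_status()
for item in resp.json().get("data", []):
    attrs = item["attributes"]
    titles = attrs.get("titles") or []
    print(attrs["doi"], "-", titles[0]["title"] if titles else "(no title)")
```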
TIB’s core task is to provide science and industry with both elementary and highly technical specialist and research information. TIB has globally unique collections in the subject areas of science and technology, as well as architecture, chemistry, computer science, mathematics and physics. Besides textual materials, the library’s collections also include knowledge objects such as research data, 3D models and audiovisual media. The TIB has assumed responsibility for the long-term preservation and availability of the digital materials it collects and documents, as well as their interpretability for use by different target groups. To this end, it has created the necessary infrastructure and guarantees the permanent provision of both material and human resources. Search for research data at: https://www.tib.eu/en/search-discover/research-data
GEOFON seeks to facilitate cooperation in seismological research and earthquake and tsunami hazard mitigation by providing rapid transnational access to seismological data and source parameters of large earthquakes, and by keeping these data accessible in the long term. It pursues these aims by operating and maintaining a global network of permanent broadband stations in cooperation with local partners, facilitating real-time access to data from this network and those of many partner networks and plate boundary observatories, and providing a permanent and secure archive for seismological data. It also archives and makes accessible data from temporary experiments carried out by scientists at German universities and institutions, thereby fostering cooperation, encouraging the full exploitation of all acquired data, and serving as the permanent archive for the Geophysical Instrument Pool at Potsdam (GIPP). It also organises the exchange of real-time and archived data with partner institutions and international centres.
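Waveform data from networks like this are typically served through standard FDSN web services. The ObsPy sketch below assumes the GEOFON archive is reachable via ObsPy's "GFZ" FDSN client key; the network/station codes and time window are placeholder examples, not a recommended request.

```python
# Minimal sketch of requesting waveform data via FDSN web services with ObsPy.
# Assumes the GEOFON archive is reachable through ObsPy's "GFZ" client key;
# the station code and time window are arbitrary examples.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("GFZ")
t0 = UTCDateTime("2024-01-01T00:00:00")
stream = client.get_waveforms(network="GE", station="APE", location="*",
                              channel="BHZ", starttime=t0, endtime=t0 + 3600)
print(stream)  # one or more traces of broadband vertical-component data
```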
Sharing and preserving data are central to protecting the integrity of science. DataHub, a Research Computing endeavor, provides tools and services to meet scientific data challenges at Pacific Northwest National Laboratory (PNNL). DataHub helps researchers address the full data life cycle for their institutional projects and provides a path to creating findable, accessible, interoperable, and reusable (FAIR) data products. Although open science data is a crucial focus of DataHub’s core services, we are interested in working with evidence-based data throughout the PNNL research community.
NIAID’s TB Portals Program is a multi-national collaboration for TB data sharing and analysis to advance TB research. A global consortium of clinicians, scientists, and IT professionals from 40 sites in 16 countries throughout eastern Europe, Asia, and sub-Saharan Africa, the TB Portals Program provides a web-based, open-access repository of multi-domain TB data and tools for its analysis. Researchers can find linked socioeconomic/geographic, clinical, laboratory, radiological, and genomic data from over 7,500 international published TB patient cases, with an emphasis on drug-resistant tuberculosis.
OBIS strives to document the diversity, distribution and abundance of life in the ocean. Created by the Census of Marine Life, OBIS is now part of the Intergovernmental Oceanographic Commission (IOC) of UNESCO, under its International Oceanographic Data and Information Exchange (IODE) programme.
The NORPERM permafrost database provides information on ground temperatures from boreholes and from the near-surface using miniloggers (MTDs). The database was established during the International Polar Year as one of the main goals of the project TSP Norway - "A Contribution to the Thermal State of Permafrost in Norway and Svalbard".
Research data from the University of Pretoria. This data repository facilitates the publishing and sharing of, and collaboration on, academic research data, allowing UP to manage and in some cases showcase its data to the wider research community. Previously, UPSpace (https://repository.up.ac.za/) was used for both datasets and research outputs; the UP Research Data Repository is now dedicated to datasets.
The Department of Energy (DOE) Joint Genome Institute (JGI) is a national user facility with massive-scale DNA sequencing and analysis capabilities dedicated to advancing genomics for bioenergy and environmental applications. Beyond generating tens of trillions of DNA bases annually, the Institute develops and maintains data management systems and specialized analytical capabilities to manage and interpret complex genomic data sets, and to enable an expanding community of users around the world to analyze these data in different contexts over the web. The JGI Genome Portal provides a unified access point to all JGI genomic databases and analytical tools. A user can find all DOE JGI sequencing projects and their status, search for and download assemblies and annotations of sequenced genomes, and interactively explore those genomes and compare them with other sequenced microbes, fungi, plants or metagenomes using specialized systems tailored to each particular class of organisms. Databases: Genomes OnLine Database (GOLD), Integrated Microbial Genomes (IMG), MycoCosm, Phytozome
>>>!!!<<< This repository is no longer available. >>>!!!<<< The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
For datasets big and small: store your research data online. Quickly and easily upload files of any type, and we will host your research data for you. Your experimental research data will have a permanent home on the web that you can refer to.
The European Monitoring and Evaluation Programme (EMEP) is a scientifically based and policy-driven programme under the Convention on Long-range Transboundary Air Pollution (CLRTAP) for international co-operation to solve transboundary air pollution problems.
ICES is an intergovernmental organization whose main objective is to increase the scientific knowledge of the marine environment and its living resources and to use this knowledge to provide unbiased, non-political advice to competent authorities.
HyperLeda is an information system for astronomy: it consists of a database and tools to process those data according to the user's requirements. The scientific goal that motivates the development of HyperLeda is the study of the physics and evolution of galaxies. LEDA was created more than 20 years ago, in 1983, and became HyperLeda after merging with Hypercat in 2000.