  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
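The search syntax above can be illustrated with a few example query strings. The `build_query` helper and the example terms below are hypothetical, for illustration only; they are not part of the registry's actual interface.

```python
# Illustrative examples of the search syntax described above.
# build_query() is a hypothetical helper, not part of the site's API.

def build_query(*terms: str, operator: str = "+") -> str:
    """Join terms with '+' (AND, the default) or '|' (OR)."""
    return f" {operator} ".join(terms)

wildcard = "genom*"              # matches genome, genomics, genomes, ...
phrase   = '"earthquake hazard"' # exact phrase
fuzzy    = "seismolgy~1"         # within edit distance 1 -> seismology
slop     = '"data archive"~2'    # phrase terms up to 2 positions apart
grouped  = "( seismology | geophysics ) + -commercial"  # OR, then exclude

query = build_query(wildcard, grouped)
```

Combining terms with `+` narrows the result set, while `|` widens it; the grouping parentheses let the two be mixed in one query.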
Found 162 result(s)
GEOFON seeks to facilitate cooperation in seismological research and in earthquake and tsunami hazard mitigation by providing rapid transnational access to seismological data and source parameters of large earthquakes, and by keeping these data accessible in the long term. It pursues these aims by operating and maintaining a global network of permanent broadband stations in cooperation with local partners, facilitating real-time access to data from this network and from many partner networks and plate-boundary observatories, and providing a permanent and secure archive for seismological data. It also archives and makes accessible data from temporary experiments carried out by scientists at German universities and institutions, thereby fostering cooperation, encouraging the full exploitation of all acquired data, and serving as the permanent archive for the Geophysical Instrument Pool at Potsdam (GIPP). In addition, it organises the exchange of real-time and archived data with partner institutions and international centres.
The OpenMadrigal project develops and supports an online database for geospace data. The project has been led by MIT Haystack Observatory since 1980 and now has active support from Jicamarca Observatory and other community members. Madrigal is a robust, web-based system capable of managing and serving archival and real-time data, in a variety of formats, from a wide range of ground-based instruments. Madrigal is installed at a number of sites around the world. Data at each Madrigal site are locally controlled and can be updated at any time, but metadata shared between Madrigal sites allows all sites to be searched at once from any one of them. In short: data are local; metadata are shared.
The Host-Pathogen Interaction Database (HPIDB) is a public genomics resource devoted to understanding molecular interactions between key organisms and the pathogens to which they are susceptible. It integrates experimental protein-protein interactions (PPIs) from various databases into a single database.
OLOS is a Swiss-based data management portal tailored for researchers and institutions. Powerful yet easy to use, OLOS works with most tools and formats across all scientific disciplines to help researchers safely manage, publish and preserve their data. It was developed as part of a larger project on Data Life Cycle Management (dlcm.ch) that aims to develop various services for research data management. Thanks to its highly modular architecture, OLOS can be adapted both to small institutions that need a "turnkey" solution and to larger ones that can rely on OLOS to complement what they have already implemented. It is based on modern technology that interconnects with researchers' environments, such as Electronic Laboratory Notebooks and Laboratory Information Management Systems.
The German Neuroinformatics Node's data infrastructure (GIN) provides a platform for comprehensive and reproducible management and sharing of neuroscience data. Building on well-established versioning technology, GIN combines the power of a web-based repository management service with distributed file storage. The service addresses the full range of research data workflows, from data analysis on a local workstation to remote collaboration and data publication.
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries and linked data. It was developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples the repository can hold more than 50,000 entries, which can be searched via SPARQL endpoints and through pre-indexing of key fields. The Chempound architecture is general and adaptable to other fields of data-rich science. The Chempound software is hosted at http://bitbucket.org/chempound and is available under the Apache License, Version 2.0.
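A repository exposed through a SPARQL endpoint can be queried over HTTP. The sketch below is illustrative only: the endpoint URL is a placeholder, and the use of the `dcterms:title` predicate is an assumption, not documented Chempound schema.

```python
import urllib.parse

# Hypothetical query against a Chempound-style SPARQL endpoint.
# ENDPOINT and the dcterms:title predicate are illustrative assumptions.
ENDPOINT = "http://repo.example.org/sparql"  # placeholder endpoint

sparql = """\
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?entry ?title
WHERE {
  ?entry dcterms:title ?title .
  FILTER regex(?title, "crystallograph", "i")
}
LIMIT 10"""

# SPARQL-over-HTTP: the query travels as a URL-encoded 'query' parameter.
url = ENDPOINT + "?" + urllib.parse.urlencode({"query": sparql})
```

The resulting URL could then be fetched with any HTTP client; results typically come back as SPARQL XML or JSON, depending on the Accept header sent.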
As a member of SWE-CLARIN, the Humanities Lab provides tools and expertise related to language archiving, corpus and (meta)data management, with a continued emphasis on multimodal corpora, many of which contain Swedish resources, but also other (often endangered) languages, multilingual or learner corpora. As a CLARIN K-centre we provide advice on multimodal and sensor-based methods, including EEG, eye-tracking, articulography, virtual reality, motion capture and audio-visual recording. Current work targets automatic data retrieval from multimodal data sets, as well as the linking of measurement data (e.g. EEG, fMRI) or geo-demographic data (GIS, GPS) to language data (audio, video, text, annotations). We also assist various projects with speech and language technology matters. A primary resource in the Lab is the Humanities Lab corpus server, which hosts a varied set of multimodal language corpora with standardised metadata and linked layers of annotations and other resources.
The HEASARC is a multi-mission astronomy archive for the EUV, X-ray and gamma-ray wave bands. Because EUV, X-rays and gamma rays cannot reach the Earth's surface, the telescopes and sensors must be placed on spacecraft. The HEASARC now holds data from 25 observatories covering over 30 years of X-ray, extreme-ultraviolet and gamma-ray astronomy. Data and software from many of the older missions were restored by the HEASARC staff. Examples of these archived missions include ASCA, BeppoSAX, Chandra, Compton GRO, HEAO 1, Einstein Observatory (HEAO 2), EUVE, EXOSAT, HETE-2, INTEGRAL, ROSAT, Rossi XTE, Suzaku, Swift, and XMM-Newton.
MyTardis began at Monash University to solve the problem of users needing to store large datasets and share them with collaborators online. Its particular focus is integration with scientific instruments, instrument facilities and research-lab file storage. Our belief is that the less effort a researcher has to expend to store data safely, the more likely they are to do so. This approach has flourished, with MyTardis capturing data from areas such as protein crystallography, electron microscopy, medical imaging and proteomics, and with deployments at Australian institutions such as the University of Queensland, RMIT, the University of Sydney and the Australian Synchrotron. Data access is via https://www.massive.org.au/ and https://store.erc.monash.edu.au/experiment/view/104/; see 'remarks'.
The European Space Agency's (ESA) X-ray Multi-Mirror Mission (XMM-Newton) was launched by an Ariane 504 on 10 December 1999. XMM-Newton is ESA's second cornerstone of the Horizon 2000 Science Programme. It carries three high-throughput X-ray telescopes with an unprecedented effective area, and an optical monitor, the first flown on an X-ray observatory. The large collecting area and the ability to make long uninterrupted exposures provide highly sensitive observations.
Notice: This site is going away on April 1, 2021. General access to the site has been disabled and community users will see an error upon login. Socrata's cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
MycoCosm is the DOE JGI's web-based fungal genomics resource, which integrates fungal genomics data and analytical tools for fungal biologists. It provides navigation through sequenced genomes, genome analysis in the context of comparative genomics, and a genome-centric view. MycoCosm promotes user community participation in data submission, annotation and analysis.
The Information Technology Center of the Staatliche Naturwissenschaftliche Sammlungen Bayerns (SNSB) is the institutional repository for scientific data of the SNSB. Our research focuses mainly on past and present bio- and geodiversity and the evolution of animals and plants. The center's major tasks focus on the management of bio- and geodiversity data using different kinds of information-technology infrastructure. The facility guarantees sustainable curation, storage, archiving and provision of such data.
The Bavarian Natural History Collections (Staatliche Naturwissenschaftliche Sammlungen Bayerns, SNSB) are a research institution for natural history in Bavaria. They encompass five State Collections (zoology, botany, paleontology and geology, mineralogy, anthropology and paleoanatomy), the Botanical Garden Munich-Nymphenburg and eight museums with public exhibitions in Munich, Bamberg, Bayreuth, Eichstätt and Nördlingen. To support this research the SNSB maintain large scientific collections (almost 35,000,000 specimens); see "joint projects".
MetaCrop is a database of manually curated, highly detailed information about metabolic pathways in crop plants, including location information, transport processes and reaction kinetics. It allows automatic export of this information for the creation of detailed metabolic models.
The WashU Research Data repository accepts any publishable research data set, including textual, tabular, geospatial, imagery, computer code, or 3D data files, from researchers affiliated with Washington University in St. Louis. Datasets include metadata and are curated and assigned a DOI to align with FAIR data principles.
LinkedEarth is an EarthCube-funded project aiming to better organize and share Earth Science data, especially paleoclimate data. LinkedEarth facilitates the work of scientists by empowering them to curate their own data and to build new tools centered around those data.
CDAAC is responsible for processing the science data received from COSMIC. Data are processed shortly after receipt: approximately eighty percent of radio occultation profiles are delivered to operational weather centers within 3 hours of observation. A more accurate post-processed product is also produced, within 8 weeks of observation.
On February 24, 2000, Terra began collecting what will ultimately become a new, 15-year global data set on which to base scientific investigations about our complex home planet. Together with the entire fleet of EOS spacecraft, Terra is helping scientists unravel the mysteries of climate and environmental change. Terra's data collection instruments include: Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and the Earth's Radiant Energy System (CERES), Multi-angle Imaging Spectro-Radiometer (MISR), Moderate-resolution Imaging Spectroradiometer (MODIS), and Measurement of Pollution in the Troposphere (MOPITT).
Notice: We are working on a new version of the ALFRED web interface. The current web interface will not be available from December 15th, 2023, and there will be a period during which no public web interface is available for viewing ALFRED data. The expected date for deployment of the new ALFRED web interface with minimum functions is March 1st, 2024.

ALFRED is a free, web-accessible, curated compilation of allele frequency data on DNA sequence polymorphisms in anthropologically defined human populations. ALFRED is distinct from databases such as dbSNP, which catalogs sequence variation.
VertNet is an NSF-funded collaborative project that makes biodiversity data freely available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Above all, VertNet is the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.