Filter
  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Data access
  • Data access restrictions
  • Database access
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotation marks can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) set the priority of operations
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (see the example queries below)
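The operators above can be combined in a single query. Below is a minimal Python sketch listing a few hypothetical queries that illustrate each operator; the search terms themselves are invented for illustration, and only the operator behaviour follows the tips above.

# Hypothetical example queries illustrating the search operators above.
# The terms are invented; only the operator syntax follows the search tips.
example_queries = [
    'genom*',                   # wildcard: matches genome, genomics, genomic, ...
    '"earthquake data"',        # quoted phrase search
    'marine + ocean',           # explicit AND (also the default behaviour)
    'marine | satellite',       # OR search
    'seismology -engineering',  # NOT: exclude results containing "engineering"
    '(marine | ocean) + data',  # parentheses set priority
    'seismolgy~1',              # fuzzy term: matches within an edit distance of 1
    '"data center"~2',          # phrase search with a slop of 2
]

for query in example_queries:
    print(query)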
Found 16 result(s)
<<<!!!<<< intrepidbio.com has expired. >>>!!!>>> Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
The Marine Data Portal is a product of the “Underway” data initiative of the German Marine Research Alliance (Deutsche Allianz Meeresforschung, DAM) and is supported by the marine science centers AWI, GEOMAR, and Hereon of the Helmholtz Association. The initiative aims to improve and standardize systematic data collection and data evaluation for expeditions with German research vessels and for marine observation. It supports scientists in their data management duties and fosters (data) science through FAIR and open access to marine research data. AWI, GEOMAR, and Hereon are developing this marine data hub (Marehub) to build a decentralized data infrastructure for processing, long-term archiving, and dissemination of marine observation data, model data, and data products. The Marine Data Portal provides user-friendly, centralized access to marine research data, reports, and publications from a wide range of data repositories and libraries in the context of German marine research and its international collaboration. It is developed by scientists for scientists in order to facilitate Findability and Access of marine research data for Reuse, and it supports machine-readable, data-driven science. Please note that the quality of the data may vary depending on the purpose for which it was originally collected.
The Museum is committed to open access and open science, and has launched the Data Portal to make its research and collections datasets available online. It allows anyone to explore, download and reuse the data for their own research. Our natural history collection is one of the most important in the world, documenting 4.5 billion years of life, the Earth and the solar system. Almost all animal, plant, mineral and fossil groups are represented. These datasets will increase exponentially. Under the Museum's ambitious digital collections programme we aim to have 20 million specimens digitised in the next five years.
GLOBE (Global Collaboration Engine) is an online collaborative environment that enables land change researchers to share, compare and integrate local and regional studies with global data to assess the global relevance of their work.
The "Flora of Bavaria" initiative with its data portal (14 million occurrence data) and Wiki representation is primarily a citizen science project. Efforts to describe and monitor the flora of Bavaria have been ongoing for 100 years. The goal of these efforts is to record all vascular plants, including newcomers, and to document threatened or former local occurrences. Being geographically largest state of Germany with a broad range of habitats, Bavaria has a special responsibility for documenting and maintaining its plant diversity . About 85% of all German vascular plant species occur in Bavaria, and in addition it has about 50 endemic taxa, only known from Bavaria (most of them occur in the Alps). The Wiki is collaboration of volunteers and local and regional Bavarian botanical societies. Everybody is welcome to contribute, especially with photos or reports of local changes in the flora. The Flora of Bavaria project is providing access to a research data repository for occurrence data powered by the Diversity Workbench database framework.
This database contains references to publications that include numerical data, general information, comments, and reviews on atomic line broadening and shifts, and is part of the collection of the NIST Atomic Spectroscopy Data Center https://www.nist.gov/pml/quantum-measurement/atomic-spectroscopy/atomic-spectroscopy-data-center-contacts.
The Southern California Earthquake Data Center (SCEDC) operates at the Seismological Laboratory at Caltech and is the primary archive of seismological data for southern California. The 1932-to-present Caltech/USGS catalog maintained by the SCEDC is the most complete archive of seismic data for any region in the United States. Our mission is to maintain an easily accessible, well-organized, high-quality, searchable archive for research in seismology and earthquake engineering.
The project brings together national key players providing environmentally related biological data and services to develop the 'German Federation for Biological Data' (GFBio). The overall goal is to provide a sustainable, service-oriented, national data infrastructure that facilitates data sharing and stimulates data-intensive science in the fields of biological and environmental research.
Argo is an international programme using autonomous floats to collect temperature, salinity, and current data in the ice-free oceans. It is teamed with the Jason ocean satellite series. Argo will soon reach its target of 3000 floats delivering data within 24 hours to researchers and operational centres worldwide. 23 countries contribute floats to Argo, and many others help with float deployments. Argo has revolutionized the collection of information from inside the oceans. The Argo project is organized into regional and national centres, with a Project Office, an Information Centre (AIC), and two Global Data Centres (GDACs), one in the United States and one in France. Each DAC regularly submits all its new files to both the USGODAE and Coriolis GDACs. The whole Argo data set is available in real time and delayed mode from the GDACs. The internet addresses are: https://nrlgodae1.nrlmry.navy.mil/ and http://www.argodatamgt.org
HYdrological cycle in the Mediterranean EXperiment (HyMeX). Considering the science and societal issues motivating HyMeX, the programme aims to improve our understanding of the water cycle, with emphasis on extreme events, by monitoring and modelling the Mediterranean atmosphere-land-ocean coupled system, its variability from the event scale to the seasonal and interannual scales, and its characteristics over one decade (2010-2020) in the context of global change, and to assess the social and economic vulnerability to extreme events and the capacity for adaptation. The multidisciplinary research and the database developed within HyMeX should contribute to improving observational and modelling systems, especially for coupled systems, better predicting extreme events, simulating the long-term water cycle more accurately, and providing guidelines for adaptation measures, especially in the context of global change.
This portal application brings together the data collected and published via OGC web services from the individual observatories and provides public access to the data. It therefore serves as a database node that provides scientists and decision makers with reliable and easily accessible data and data products.
EOL’s platforms and instruments collect large and often unique data sets that must be validated, archived and made available to the research community. The goal of EOL data services is to advance science through delivering high-quality project data and metadata in ways that are as transparent, secure, and easily accessible as possible - today and into the future. By adhering to accepted standards in data formats and data services, EOL provides infrastructure to facilitate discovery and direct access to data and software from state-of-the-art commercial and locally-developed applications. EOL’s data services are committed to the highest standard of data stewardship from collection to validation to archival.
SAGE is a data and research platform that enables the secondary use of data related to child and youth development, health, and well-being. It currently contains research data, and at a later stage we aim to also house administrative and community service delivery data. Technical infrastructure and governance processes are in place to ensure ethical use and the privacy of participants. This dataverse provides metadata for the various data holdings available in SAGE (Secondary Analysis to Generate Evidence), a research data repository based in Edmonton, Alberta, and an initiative of PolicyWise for Children & Families. In general, SAGE contains data holdings too sensitive for open access. Each study lists a security level that indicates the procedure required to access the data.
<<<!!!<<< This repository is no longer available. >>>!!!>>> The Deep Carbon Observatory (DCO) is a global community of multi-disciplinary scientists unlocking the inner secrets of Earth through investigations into life, energy, and the fundamentally unique chemistry of carbon. The Deep Carbon Observatory Digital Object Registry (“DCO-VIVO”) is a centrally managed digital object identification, object registration, and metadata management service for the DCO. Digital object registration includes DCO-ID generation based on the global Handle System infrastructure and metadata collection using VIVO. Users will be able to deposit their data into the DCO Data Repository and have that data discoverable and accessible by others.