
Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) denote grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
Found 29 result(s)
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
NIAID’s TB Portals Program is a multi-national collaboration for TB data sharing and analysis to advance TB research. As a global consortium of clinicians, scientists, and IT professionals from 40 sites in 16 countries throughout eastern Europe, Asia, and sub-Saharan Africa, the TB Portals Program is a web-based, open-access repository of multi-domain TB data and tools for its analysis. Researchers can find linked socioeconomic/geographic, clinical, laboratory, radiological, and genomic data from over 7,500 international published TB patient cases with an emphasis on drug-resistant tuberculosis.
Notice: This site is going away on April 1, 2021; general access to the site has been disabled, and community users will see an error upon login. Socrata's cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
As a research data hub for social and economic history, Emporion enables the free and standards-compliant publication of time series, historical statistical and panel data, georeferenced vector data, text mining analyses and data papers. Emporion is also open to contributions from the fields of business and environmental history and the history of technology. Emporion's supporting institutions are the DFG Priority Programme 1859 'Experience and Expectations. Historical Foundations of Economic Behavior' and the Gesellschaft für Sozial- und Wirtschaftsgeschichte in conjunction with the Staatsbibliothek zu Berlin – Preußischer Kulturbesitz.
LinkedEarth is an EarthCube-funded project aiming to better organize and share Earth Science data, especially paleoclimate data. LinkedEarth facilitates the work of scientists by empowering them to curate their own data and to build new tools centered around those.
Notice: A new version of the ALFRED web interface is in development. The current web interface will not be available from December 15th, 2023, and there will be a period when no public web interface is available for viewing ALFRED data. The expected deployment date for the new ALFRED web interface, with minimum functions, is March 1st, 2024. ALFRED is a free, web-accessible, curated compilation of allele frequency data on DNA sequence polymorphisms in anthropologically defined human populations. ALFRED is distinct from such databases as dbSNP, which catalogs sequence variation.
VertNet is an NSF-funded collaborative project that makes biodiversity data freely available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Above all, VertNet is the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
HIstome: The Histone Infobase is a database of human histones, their post-translational modifications and modifying enzymes. HIstome is a combined effort of researchers from two institutions, Advanced Center for Treatment, Research and Education in Cancer (ACTREC), Navi Mumbai and Center of Excellence in Epigenetics, Indian Institute of Science Education and Research (IISER), Pune.
Swedish National Data Service (SND) is a research data infrastructure designed to assist researchers in preserving, maintaining, and disseminating research data in a secure and sustainable manner. The SND Search function makes it easy to find, use, and cite research data from a variety of scientific disciplines. Together with an extensive network of almost 40 Swedish higher education institutions and other research organisations, SND works for increased access to research data, nationally as well as internationally.
HAL is a multidisciplinary open archive that allows research results to be shared in open access, whether published or not. It is at the service of researchers affiliated with academic institutions, whether public or private. In France, HAL is the national archive chosen by the French scientific and academic community for the open dissemination of its research results. The archive is also accessible to researchers affiliated with foreign academic institutions, whether public or private.
TreeGenes is a genomic, phenotypic, and environmental data resource for forest tree species. The TreeGenes database and Dendrome project provide custom informatics tools to manage the flood of information. The database contains several curated modules that support the storage of data and provide the foundation for web-based searches and visualization tools. GMOD GUI tools such as CMAP for genetic maps and GBrowse for genome and transcriptome assemblies are implemented here. A sample tracking system, known as the Forest Tree Genetic Stock Center, sits at the forefront of most large-scale projects. Barcode identifiers assigned to the trees during sample collection are maintained in the database to identify an individual through DNA extraction, resequencing, genotyping, and phenotyping. DiversiTree, a user-friendly desktop-style interface, queries the TreeGenes database and is designed for bulk retrieval of resequencing data. CartograTree combines geo-referenced individuals with relevant ecological and trait databases in a user-friendly map-based interface.

The Conifer Genome Network (CGN) is a virtual nexus for researchers working in conifer genomics. The CGN web site is maintained by the Dendrome Project at the University of California, Davis.
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press: GigaScience and GigaByte (both online, open-access journals). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit of work (article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.
Kenya Open Data offers visualizations tools, data downloads, and easy access for software developers. Kenya Open Data provides core government development, demographic, statistical and expenditure data available for researchers, policymakers, developers and the general public. Kenya is the first developing country to have an open government data portal, the first in sub-Saharan Africa and second on the continent after Morocco. The initiative has been widely acclaimed globally as one of the most significant steps Kenya has made to improve governance and implement the new Constitution’s provisions on access to information.
The Sequence Read Archive stores raw sequencing data from such sequencing platforms as the Roche 454 GS System, the Illumina Genome Analyzer, the Applied Biosystems SOLiD System, the Helicos Heliscope, and Complete Genomics. It archives the sequencing data associated with RNA-Seq, ChIP-Seq, genomic and transcriptomic assemblies, and 16S ribosomal RNA data.
GWAS Central (previously the Human Genome Variation database of Genotype-to-Phenotype information) is a database of summary level findings from genetic association studies, both large and small. We actively gather datasets from public domain projects, and encourage direct data submission from the community.
The CPTAC Data Portal is the centralized repository for the dissemination of proteomic data collected by the Proteome Characterization Centers (PCCs) for the CPTAC program. The portal also hosts analyses of the mass spectrometry data (mapping of spectra to peptide sequences and protein identification) from the PCCs and from a CPTAC-sponsored common data analysis pipeline (CDAP).
The UTM Data Centre is responsible for managing spatial data acquired during oceanographic cruises on board CSIC research vessels (RV Sarmiento de Gamboa, RV García del Cid) and RV Hespérides. The aim is, on the one hand, to disseminate which data exist and where, how, and when they have been acquired, and on the other hand, to provide access to as much of the interoperable data as possible, following the FAIR principles, so that they can be used and reused. For this purpose, the UTM maintains a national-level Spatial Data Infrastructure consisting of several services:
  • Oceanographic Cruise and Data Catalogue: metadata from more than 600 cruises carried out since 1991, with links to documentation associated with each cruise, navigation maps, and datasets
  • Geoportal: a geospatial data mapping interface
  • Underway Plot & QC: visualization, quality control, and conversion to standard format of meteorological data and of surface-water temperature and salinity
At the international level, the UTM is a National Oceanographic Data Centre (NODC) of the Distributed European Marine Data Infrastructure SeaDataNet, to which it provides metadata published in the Cruise Summary Report Catalog and in the Common Data Index Catalog, as well as public data to be shared.
OpenKIM is an online suite of open source tools for molecular simulation of materials. These tools help to make molecular simulation more accessible and more reliable. Within OpenKIM, you will find an online resource for standardized testing and long-term warehousing of interatomic models and data, and an application programming interface (API) standard for coupling atomistic simulation codes and interatomic potential subroutines.
DataStream is an open access platform for sharing information on freshwater health. It currently allows users to access, visualize, and download full water quality datasets collected by Indigenous Nations, community groups, researchers and governments throughout five regional hubs: Atlantic Canada, the Great Lakes and Saint Lawrence region, the Lake Winnipeg Basin, the Mackenzie River Basin and the Pacific region. DataStream was developed by The Gordon Foundation and is carried out in collaboration with regional monitoring networks.
The Atmospheric Science Data Center (ASDC) at NASA Langley Research Center is responsible for processing, archiving, and distribution of NASA Earth science data in the areas of radiation budget, clouds, aerosols, and tropospheric chemistry. The ASDC specializes in atmospheric data important to understanding the causes and processes of global climate change and the consequences of human activities on the climate.
Ag Data Commons provides access to a wide variety of open data relevant to agricultural research. We are a centralized repository for data already on the web, as well as for new data being published for the first time. While compliance with the U.S. Federal public access and open data directives is important, we aim to surpass them. Our goal is to foster innovative data re-use, integration, and visualization to support bigger, better science and policy.
The Ocean Data and Information System provides information on physical, chemical, biological, and geological parameters of oceans and coasts on spatial and temporal domains that is vital for both research and operational oceanography. In-situ and remote sensing data are included. The Ocean Information Bank is supported by the data received from Ocean Observing Systems in the Indian Ocean (both the in-situ platforms and satellites) as well as by a chain of Marine Data Centres. Ocean and coastal measurements are available. Data products are accessible through various portals on the site and are largely organized by data type (in situ or remote sensing) and then by parameter.
Within WASCAL, a large number of heterogeneous data are collected. These data come mainly from research activities initiated within WASCAL (Core Research Program, Graduate School Program), from the hydrological-meteorological, remote sensing, biodiversity, and socio-economic observation networks within WASCAL, and from the activities of the WASCAL Competence Center in Ouagadougou, Burkina Faso.