Filter
  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (priority)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
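As a quick illustration of the operators listed above, here is a minimal Python sketch that composes such query strings. The helper functions (quote, fuzzy, and_terms, or_terms) are purely illustrative and not part of any official client; only the operator syntax comes from the list above.

# Minimal sketch: composing search queries with the documented operators.
def quote(phrase: str) -> str:
    """Wrap a multi-word phrase in double quotes for exact-phrase search."""
    return f'"{phrase}"'

def fuzzy(term: str, n: int) -> str:
    """Append ~N to a word (edit distance) or to a quoted phrase (slop)."""
    return f"{term}~{n}"

def and_terms(*terms: str) -> str:
    """Combine terms with +, the default AND operator."""
    return " + ".join(terms)

def or_terms(*terms: str) -> str:
    """Combine terms with |, the OR operator."""
    return " | ".join(terms)

# Example queries:
print(and_terms("earthquake*", quote("northern California")))  # earthquake* + "northern California"
print(or_terms("ozone", fuzzy("stratosphere", 1)))              # ozone | stratosphere~1
print(f'({or_terms("genomics", "proteomics")}) + -human')       # (genomics | proteomics) + -human
print(fuzzy(quote("air pollution"), 3))                         # "air pollution"~3 (phrase slop)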
Found 57 result(s)
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
The Ozone Mapping and Profiler Suite (OMPS) measures the ozone layer in our upper atmosphere, tracking the status of global ozone distributions, including the ‘ozone hole.’ It also monitors ozone levels in the troposphere, the lowest layer of our atmosphere. OMPS extends the 40-year record of ozone layer measurements while also providing improved vertical resolution compared to previous operational instruments. Closer to the ground, OMPS’s measurements of harmful ozone improve air quality monitoring and, when combined with cloud predictions, help to create the Ultraviolet Index, a guide to safe levels of sunlight exposure. OMPS has two sensors, both new designs, composed of three advanced hyperspectral imaging spectrometers: a downward-looking nadir mapper, a nadir profiler, and a limb profiler. The OMPS suite currently flies on board the Suomi NPP spacecraft and is scheduled to fly on the JPSS-2 satellite mission. NASA will provide the OMPS-Limb profiler.
EBAS is a database hosting observation data of atmospheric chemical composition and physical properties. EBAS hosts data submitted by data originators in support of a number of national and international programs ranging from monitoring activities to research projects. EBAS is developed and operated by the Norwegian Institute for Air Research (NILU). We hope the information found on the website is self-explanatory, and we would particularly ask you to consider the text found in the data disclaimer and in the “info” pages associated with the filter criteria.
The CMU Multi-Modal Activity Database (CMU-MMAC) contains multimodal measures of human activity from subjects performing the tasks involved in cooking and food preparation. The database was collected in Carnegie Mellon's Motion Capture Lab. A kitchen was built, and to date twenty-five subjects have been recorded cooking five different recipes: brownies, pizza, sandwich, salad, and scrambled eggs.
A data repository for the storage and sharing of Adaptive Immune Receptor Repertoire data, and the primary public repository for the iReceptor Platform and Scientific Gateway. Further URL for the repository: http://www.ireceptor.org
The European Monitoring and Evaluation Programme (EMEP) is a scientifically based and policy-driven programme under the Convention on Long-range Transboundary Air Pollution (CLRTAP) for international co-operation to solve transboundary air pollution problems.
The Metropolitan Travel Survey Archive (MTSA) includes travel surveys from numerous public agencies across the United States. The Transportation Secure Data Center has archived these surveys to ensure their continued public availability. The survey data have been converted to a standard file format and cleansed to remove personally identifiable information, including any detailed spatial data regarding individual trips.
The Structure database provides three-dimensional structures of macromolecules for a variety of research purposes and allows the user to retrieve structures for specific molecule types as well as structures for genes and proteins of interest. Structure comprises three main databases: the Molecular Modeling Database, Conserved Domains and Protein Classification, and the BioSystems Database. Structure also links to the PubChem databases to connect biological activity data to the macromolecular structures. Users can locate structural templates for proteins and interactively view structures and sequence data to closely examine sequence-structure relationships.
Copernicus is a European system for monitoring the Earth. Copernicus consists of a complex set of systems which collect data from multiple sources: earth observation satellites and in situ sensors such as ground stations, airborne and sea-borne sensors. It processes these data and provides users with reliable and up-to-date information through a set of services related to environmental and security issues. The services address six thematic areas: land monitoring, marine monitoring, atmosphere monitoring, climate change, emergency management and security. The main users of Copernicus services are policymakers and public authorities who need the information to develop environmental legislation and policies or to take critical decisions in the event of an emergency, such as a natural disaster or a humanitarian crisis. Based on the Copernicus services and on the data collected through the Sentinels and the contributing missions, many value-added services can be tailored to specific public or commercial needs, resulting in new business opportunities. In fact, several economic studies have already demonstrated a huge potential for job creation, innovation and growth.
EMPIAR, the Electron Microscopy Public Image Archive, is a public resource for raw, 2D electron microscopy images. Here, you can browse, upload, download and reprocess the thousands of raw, 2D images used to build a 3D structure. The purpose of EMPIAR is to provide easy access to state-of-the-art raw data to facilitate methods development and validation, which will lead to better 3D structures. It complements the Electron Microscopy Data Bank (EMDB), where 3D images are stored, and uses the fault-tolerant Aspera platform for data transfers.
The Cooperative Association for Internet Data Analysis (CAIDA) is a collaborative undertaking among organizations in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is an independent analysis and research group with a particular focus on: collection, curation, analysis, visualization, and dissemination of sets of the best available Internet data; providing macroscopic insight into the behavior of Internet infrastructure worldwide; improving the integrity of the field of Internet science; improving the integrity of operational Internet measurement and management; and informing science, technology, and communications public policies.
The Precipitation Processing System (PPS) evolved from the Tropical Rainfall Measuring Mission (TRMM) Science Data and Information System (TSDIS). The purpose of the PPS is to process, analyze and archive data from the Global Precipitation Measurement (GPM) mission, partner satellites and the TRMM mission. The PPS also supports TRMM by providing validation products from TRMM ground radar sites. All GPM, TRMM and partner public data products are available to the science community and the general public from the TRMM/GPM FTP Data Archive. Please note that you need to register to be able to access this data. Registered users can also search for GPM, partner and TRMM data, order custom subsets and set up subscriptions using our PPS Data Products Ordering Interface (STORM).
GLOBE (Global Collaboration Engine) is an online collaborative environment that enables land change researchers to share, compare and integrate local and regional studies with global data to assess the global relevance of their work.
B2SHARE allows publishing of research data and associated metadata. It supports different research communities with specific metadata schemas. This server is provided for researchers of the Research Centre Juelich and related communities.
This library is a public and easily accessible resource database of images, videos, and animations of cells, capturing a wide diversity of organisms, cell types, and cellular processes. The Cell Image Library was merged with the Cell Centered Database in 2017. The purpose of the database is to advance research on cellular activity, with the ultimate goal of improving human health.
The Brown Digital Repository (BDR) is a place to gather, index, store, preserve, and make available digital assets produced via the scholarly, instructional, research, and administrative activities at Brown.
VertNet is an NSF-funded collaborative project that makes biodiversity data free and available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Yet VertNet is still the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
Swedish National Data Service (SND) is a research data infrastructure designed to assist researchers in preserving, maintaining, and disseminating research data in a secure and sustainable manner. The SND Search function makes it easy to find, use, and cite research data from a variety of scientific disciplines. Together with an extensive network of almost 40 Swedish higher education institutions and other research organisations, SND works for increased access to research data, nationally as well as internationally.
The SURF Data Repository is a user-friendly, web-based data publication platform that allows researchers to store, annotate and publish research datasets of any size to ensure long-term preservation and availability of their data. The service allows any dataset to be stored, independent of volume, number of files and structure. A published dataset is enriched with complex metadata, assigned unique identifiers, and preserved for an agreed-upon period of time. The service is domain-agnostic and supports multiple communities with different policy and metadata requirements.
The central data management system of the USGS for water data, providing access to water-resources data collected at approximately 1.5 million sites in all 50 states, the District of Columbia, Puerto Rico, the Virgin Islands, Guam, American Samoa and the Commonwealth of the Northern Mariana Islands. It includes data on water use and quality, groundwater, and surface water.
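The entry above does not spell out an access mechanism. As one hedged illustration, the Python sketch below assumes the publicly documented USGS NWIS Instantaneous Values web service at https://waterservices.usgs.gov/nwis/iv/ and its sites, parameterCd, period and format parameters; the site number and parameter code are example values chosen for illustration, so consult the current USGS water services documentation before relying on these details.

# Minimal sketch: fetch recent streamflow for one gauge from the assumed NWIS
# Instantaneous Values service (endpoint and parameter names are assumptions
# based on the public USGS water services documentation).
import json
import urllib.parse
import urllib.request

BASE_URL = "https://waterservices.usgs.gov/nwis/iv/"  # assumed service endpoint

params = {
    "format": "json",
    "sites": "01646500",      # example site: Potomac River near Washington, DC
    "parameterCd": "00060",   # 00060 = discharge, cubic feet per second
    "period": "P1D",          # last 24 hours
}

with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as resp:
    data = json.load(resp)

# Print the most recent value for each returned time series.
for series in data["value"]["timeSeries"]:
    name = series["sourceInfo"]["siteName"]
    values = series["values"][0]["value"]
    if values:
        latest = values[-1]
        print(name, latest["dateTime"], latest["value"])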
!!! <<< the repository is offline, please use: https://www.re3data.org/repository/r3d100011650 >>> !!! The USGODAE Project consists of United States academic, government and military researchers working to improve assimilative ocean modeling as part of the International GODAE Project. GODAE hopes to develop a global system of observations, communications, modeling and assimilation, that will deliver regular, comprehensive information on the state of the oceans, in a way that will promote and engender wide utility and availability of this resource for maximum benefit to the community. The USGODAE Argo GDAC is currently operational, serving daily data from the following national DACs: Australia (CSIRO), Canada (MEDS), China (2: CSIO and NMDIS), France (Coriolis), India (INCOIS), Japan (JMA), Korea (2: KMA and Kordi), UK (BODC), and US (AOML).
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 genomes (1KG) project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
TreeGenes is a genomic, phenotypic, and environmental data resource for forest tree species. The TreeGenes database and Dendrome project provide custom informatics tools to manage the flood of information. The database contains several curated modules that support the storage of data and provide the foundation for web-based searches and visualization tools. GMOD GUI tools such as CMAP for genetic maps and GBrowse for genome and transcriptome assemblies are implemented here. A sample tracking system, known as the Forest Tree Genetic Stock Center, sits at the forefront of most large-scale projects. Barcode identifiers assigned to the trees during sample collection are maintained in the database to identify an individual through DNA extraction, resequencing, genotyping and phenotyping. DiversiTree, a user-friendly desktop-style interface, queries the TreeGenes database and is designed for bulk retrieval of resequencing data. CartograTree combines geo-referenced individuals with relevant ecological and trait databases in a user-friendly map-based interface. ---- The Conifer Genome Network (CGN) is a virtual nexus for researchers working in conifer genomics. The CGN web site is maintained by the Dendrome Project at the University of California, Davis.