
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
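The operators above can be combined freely. As a sketch, here are some example query strings illustrating each operator; the search terms themselves are hypothetical, and the exact matching behavior depends on the search backend:

```python
# Hypothetical example queries for each operator listed above.
examples = {
    "wildcard": "climat*",                        # matches climate, climatology, ...
    "phrase": '"ocean temperature"',              # exact phrase match
    "and": "earthquake + seismology",             # both terms required (default)
    "or": "genome | transcriptome",               # either term matches
    "not": "water - groundwater",                 # exclude a term
    "grouping": "(satellite | airborne) + data",  # parentheses set priority
    "fuzzy": "seismolgy~1",                       # edit distance 1 tolerates the typo
    "slop": '"data archive"~2',                   # up to 2 words between phrase terms
}

for name, query in examples.items():
    print(f"{name}: {query}")
```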
Found 27 result(s)
The Northern California Earthquake Data Center (NCEDC) is a permanent archive and distribution center primarily for multiple types of digital data relating to earthquakes in central and northern California. The NCEDC is located at the Berkeley Seismological Laboratory, and has been accessible to users via the Internet since mid-1992. The NCEDC was formed as a joint project of the Berkeley Seismological Laboratory (BSL) and the U.S. Geological Survey (USGS) at Menlo Park in 1991, and current USGS funding is provided under a cooperative agreement for seismic network operations.
The World Ocean Database (WOD) is a collection of scientifically quality-controlled ocean profile and plankton data that includes measurements of temperature, salinity, oxygen, phosphate, nitrate, silicate, chlorophyll, alkalinity, pH, pCO2, TCO2, Tritium, Δ13Carbon, Δ14Carbon, Δ18Oxygen, Freon, Helium, Δ3Helium, Neon, and plankton. WOD contains all data of "World Data Service Oceanography" (WDS-Oceanography).
The Global Hydrology Resource Center (GHRC) provides both historical and current Earth science data, information, and products from satellite, airborne, and surface-based instruments. GHRC acquires basic data streams and produces derived products from many instruments spread across a variety of instrument platforms.
The Mikulski Archive for Space Telescopes (MAST) is a NASA funded project to support and provide to the astronomical community a variety of astronomical data archives, with the primary focus on scientifically related data sets in the optical, ultraviolet, and near-infrared parts of the spectrum. MAST is located at the Space Telescope Science Institute (STScI).
The Cooperative Association for Internet Data Analysis (CAIDA) is a collaborative undertaking among organizations in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is an independent analysis and research group with a particular focus on: collecting, curating, analyzing, visualizing, and disseminating the best available Internet data; providing macroscopic insight into the behavior of Internet infrastructure worldwide; improving the integrity of the field of Internet science; improving the integrity of operational Internet measurement and management; and informing science, technology, and communications public policies.
>>>!!!<<< 2018-01-18: no data or programs can be found >>>!!!<<< These archives contain public-domain programs for calculations in physics, as well as other programs expected to be useful for computer-based work. Physical constants and experimental or theoretical data needed for physical calculations, such as cross sections, rate constants, and swarm parameters, are stored here as well. The programs are mainly written for IBM PC-compatible computers. Programs that do not use graphics units can also run on other computers; otherwise, the graphics portions of the programs must be reprogrammed.
CODEX is a database of NGS mouse and human experiments. Although the main focus of CODEX is haematopoietic and embryonic systems, the database includes a large variety of cell types. In addition to the publicly available data, CODEX also includes a private site hosting non-published data. CODEX provides access to processed and curated NGS experiments. To use CODEX: (i) select a specialized repository (HAEMCODE or ESCODE) or choose the whole compendium (CODEX), then (ii) filter by organism and (iii) choose how to explore the database.
LinkedEarth is an EarthCube-funded project aiming to better organize and share Earth Science data, especially paleoclimate data. LinkedEarth facilitates the work of scientists by empowering them to curate their own data and to build new tools centered around those.
This interface provides access to several types of data related to the Chesapeake Bay. Bay Program databases can be queried based upon user-defined inputs such as geographic region and date range. Each query results in a downloadable, tab- or comma-delimited text file that can be imported to any program (e.g., SAS, Excel, Access) for further analysis. Comments regarding the interface are encouraged. Questions in reference to the data should be addressed to the contact provided on subsequent pages.
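Since each query returns a tab- or comma-delimited text file, the export can also be read programmatically rather than through SAS, Excel, or Access. A minimal sketch using Python's standard csv module follows; the column names (Station, Date, Salinity) are hypothetical examples, not actual field names from the Bay Program interface:

```python
import csv
import io

def load_export(fileobj, delimiter="\t"):
    """Read a tab- or comma-delimited export into a list of row dicts."""
    return list(csv.DictReader(fileobj, delimiter=delimiter))

# Example with an in-memory tab-delimited snippet standing in for a download:
sample = "Station\tDate\tSalinity\nCB1.1\t2020-06-01\t12.5\n"
rows = load_export(io.StringIO(sample))
```

For a comma-delimited export, pass `delimiter=","` instead.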
Central data management of the USGS for water data that provides access to water-resources data collected at approximately 1.5 million sites in all 50 States, the District of Columbia, Puerto Rico, the Virgin Islands, Guam, American Samoa and the Commonwealth of the Northern Mariana Islands. Includes data on water use and quality, groundwater, and surface water.
TreeGenes is a genomic, phenotypic, and environmental data resource for forest tree species. The TreeGenes database and Dendrome project provide custom informatics tools to manage the flood of information. The database contains several curated modules that support the storage of data and provide the foundation for web-based searches and visualization tools. GMOD GUI tools such as CMAP for genetic maps and GBrowse for genome and transcriptome assemblies are implemented here. A sample tracking system, known as the Forest Tree Genetic Stock Center, sits at the forefront of most large-scale projects. Barcode identifiers assigned to the trees during sample collection are maintained in the database to identify an individual through DNA extraction, resequencing, genotyping and phenotyping. DiversiTree, a user-friendly desktop-style interface, queries the TreeGenes database and is designed for bulk retrieval of resequencing data. CartograTree combines geo-referenced individuals with relevant ecological and trait databases in a user-friendly map-based interface. ---- The Conifer Genome Network (CGN) is a virtual nexus for researchers working in conifer genomics. The CGN web site is maintained by the Dendrome Project at the University of California, Davis.
depositar — taking the term from the Portuguese/Spanish verb for to deposit — is an online repository for research data. The site is built by researchers, for researchers. You are free to deposit, discover, and reuse datasets on depositar for all your research purposes.
OpenTopography facilitates community access to high-resolution, Earth science-oriented, topography data, and related tools and resources. The OpenTopography Facility is based at the San Diego Supercomputer Center at the University of California, San Diego and is operated in collaboration with colleagues in the School of Earth and Space Exploration at Arizona State University and Earthscope Consortium. Core operational support for OpenTopography comes from the National Science Foundation Earth Sciences.
The European Union Open Data Portal is the single point of access to a growing range of data from the institutions and other bodies of the European Union (EU). Data are free for you to use and reuse for commercial or non-commercial purposes. By providing easy and free access to data, the portal aims to promote their innovative use and unleash their economic potential. It also aims to help foster the transparency and the accountability of the institutions and other bodies of the EU. The EU Open Data Portal is managed by the Publications Office of the European Union. Implementation of the EU's open data policy is the responsibility of the Directorate-General for Communications Networks, Content and Technology of the European Commission.
The IRI/LDEO Climate Data Library is a collection of climate data sets focused on climate change monitoring and mitigation. Users can browse data by category and source, and navigate and analyze datasets using maps and the Ingrid Data Analysis Language. The IRI/LDEO also provides web tutorials.
The Sequence Read Archive stores raw sequencing data from sequencing platforms such as the Roche 454 GS System, the Illumina Genome Analyzer, the Applied Biosystems SOLiD System, the Helicos HeliScope, and the Complete Genomics platform. It archives the sequencing data associated with RNA-Seq, ChIP-Seq, genomic and transcriptomic assemblies, and 16S ribosomal RNA data.
The NCEP/NCAR Reanalysis Project is a joint project between the National Centers for Environmental Prediction (NCEP, formerly "NMC") and the National Center for Atmospheric Research (NCAR). The goal of this joint effort is to produce new atmospheric analyses using historical data (1948 onwards), as well as to produce analyses of the current atmospheric state (Climate Data Assimilation System, CDAS).
The CPTAC Data Portal is the centralized repository for the dissemination of proteomic data collected by the Proteome Characterization Centers (PCCs) for the CPTAC program. The portal also hosts analyses of the mass spectrometry data (mapping of spectra to peptide sequences and protein identification) from the PCCs and from a CPTAC-sponsored common data analysis pipeline (CDAP).
Argo is an international programme using autonomous floats to collect temperature, salinity and current data in the ice-free oceans. It is teamed with the Jason ocean satellite series. Argo will soon reach its target of 3000 floats delivering data within 24 hours to researchers and operational centres worldwide. 23 countries contribute floats to Argo and many others help with float deployments. Argo has revolutionized the collection of information from inside the oceans. The Argo Project is organized into regional and national centers with a Project Office, an Information Center (AIC) and 2 Global Data Centers (GDACs), one in the United States and one in France. Each DAC regularly submits all its new files to both the USGODAE and Coriolis GDACs. The whole Argo data set is available in real time and delayed mode from the GDACs. The internet addresses are: https://nrlgodae1.nrlmry.navy.mil/ and http://www.argodatamgt.org
TUdatalib is the institutional repository of the TU Darmstadt for research data. It enables the structured storage of research data and descriptive metadata, long-term archiving (at least 10 years) and, if desired, the publication of data including DOI assignment. In addition, it offers fine-grained rights and role management.
The Leicester Database and Archive Service (LEDAS) is an easy to use on-line astronomical database and archive access service, dealing mainly with data from high energy astrophysics missions, but also providing full database functionality for over 200 astronomical catalogues from ground-based observations and space missions. The LEDAS also allows access to images, spectra and light curves in graphics, HDS and FITS formats, as well as access to raw and processed event data. LEDAS provides the primary means of access for the UK astronomical community to the ROSAT Public Data Archive, the ASCA Public Data Archive and the Ginga Products Archive by its Archive Network Interface ARNIE.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and customize energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts, and tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX's Collaborative Workspaces.
InterPro collects information about protein sequence analysis and classification, providing access to a database of predictive protein signatures used for the classification and automatic annotation of proteins and genomes. Sequences in InterPro are classified at the superfamily, family, and subfamily levels. InterPro predicts the occurrence of functional domains, repeats, and important sites, and adds in-depth annotation such as GO terms to the protein signatures.