
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
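  For example, a hypothetical query such as climat* +"sea ice" -model would match records containing terms beginning with "climat" and the exact phrase "sea ice" while excluding records that mention "model"; (ocean | marine) +temperature~1 would match records containing "ocean" or "marine" together with "temperature", allowing one character of fuzziness.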
Found 84 result(s)
SCEC's mission includes gathering data on earthquakes, both in Southern California and in other locales; integrating the information into a comprehensive understanding of earthquake phenomena; and communicating useful knowledge for reducing earthquake risk to society at large. The SCEC community consists of more than 600 scientists from 16 core institutions and 47 additional participating institutions. SCEC is funded by the National Science Foundation and the U.S. Geological Survey.
>>>!!!<<< SMD has been retired. After approximately fifteen years of microarray-centric research service, the Stanford Microarray Database has been retired. We apologize for any inconvenience; please read below for possible resolutions to your queries. If you are looking for any raw data that was directly linked to SMD from a manuscript, please search one of the public repositories: NCBI Gene Expression Omnibus or EBI ArrayExpress. All published data were previously communicated to one (or both) of the public repositories. Alternatively, data for publications between 1997 and 2004 were likely migrated to the Princeton University MicroArray Database, and are accessible there. If you are looking for a manuscript supplement (i.e., from a domain other than smd.stanford.edu), try searching the Internet Archive Wayback Machine: https://archive.org/web/ . >>>!!!<<< The Stanford Microarray Database (SMD) is a DNA microarray research database that provides a large amount of data for public use.
The mission of NCHS is to provide statistical information that will guide actions and policies to improve the health of the American people. As the Nation's principal health statistics agency, NCHS is responsible for collecting accurate, relevant, and timely data. NCHS's mission, like those of its counterparts in the Federal statistics system, focuses on the collection, analysis, and dissemination of information that is of use to a broad range of users.
>>>!!!<<< 2019-01: Global Land Cover Facility goes offline, see https://spatialreserves.wordpress.com/2019/01/07/global-land-cover-facility-goes-offline/ ; no more access to http://www.landcover.org >>>!!!<<< The Global Land Cover Facility (GLCF) provides earth science data and products to help everyone better understand global environmental systems. In particular, the GLCF develops and distributes remotely sensed satellite data and products that explain land cover from the local to global scales.
HepSim is a public repository of Monte Carlo simulations for particle-collision experiments. It contains predictions from leading-order (LO) parton shower models, next-to-leading-order (NLO) calculations, and NLO calculations matched to parton showers. It also includes Monte Carlo events after fast ("parametric") and full (Geant4) detector simulations and event reconstruction.
TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner.
WorldData.AI comes with a built-in workspace – the next-generation hyper-computing platform powered by a library of 3.3 billion curated external trends. WorldData.AI allows you to save your models in its “My Models Trained” section. You can make your models public and share them on social media with interesting images, model features, summary statistics, and feature comparisons. Empower others to leverage your models. For example, if you have discovered a previously unknown impact of interest rates on new-housing demand, you may want to share it through “My Models Trained.” Upload your data and combine it with external trends to build, train, and deploy predictive models with one click! WorldData.AI inspects your raw data, applies feature processors, chooses the best set of algorithms, trains and tunes multiple models, and then ranks model performance.
The Materials Project produces one of the world's foremost databases of computed information about inorganic, crystalline materials, along with powerful web-based apps that help analyze this information to support the design of novel materials. Access is provided free of charge under a permissive license, and an API is available.
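As an illustration only, here is a minimal sketch of programmatic access, assuming the open-source mp-api Python client and a personal API key issued by the Materials Project website; the client, method names, and fields shown are assumptions and may differ between versions.

    # Assumes: pip install mp-api, and an API key from materialsproject.org (hypothetical placeholder below).
    from mp_api.client import MPRester

    with MPRester("YOUR_API_KEY") as mpr:
        # Query summary records for a formula and request a couple of fields.
        docs = mpr.summary.search(formula="SiO2", fields=["material_id", "band_gap"])
        for doc in docs:
            print(doc.material_id, doc.band_gap)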
METLIN is the largest collection of MS/MS data, with the database generated at multiple collision energies and in both positive and negative ionization modes. The data are generated on multiple instrument types, including SCIEX, Agilent, Bruker, and Waters QTOF mass spectrometers.
The IMEx consortium is an international collaboration between a group of major public interaction data providers who have agreed to: share curation effort; develop and work to a single set of curation rules when capturing data both from direct depositions and from publications in peer-reviewed journals; capture the full details of an interaction in a "deep" curation model; perform complete curation of all protein-protein interactions experimentally demonstrated within a publication; make these interactions available through a single search interface on a common website; provide the data in standards-compliant download formats; and make all IMEx records freely accessible under the Creative Commons Attribution License.
A Climate Data Record (CDR) is a time series of measurements of sufficient length, consistency, and continuity to determine climate variability and change. The fundamental CDRs include sensor data, such as calibrated radiances and brightness temperatures, that scientists have improved and quality-controlled, along with the data used to calibrate them. The thematic CDRs include geophysical variables derived from the fundamental CDRs, such as sea surface temperature and sea ice concentration, and they are specific to various disciplines.
The International Satellite Cloud Climatology Project (ISCCP) is a database intended for researchers to share information about cloud radiative properties. The data sets focus on the effects of clouds on the climate, the radiation budget, and the long-term hydrologic cycle. Within the data sets, entries are broken down by specific characteristics such as temporal resolution, spatial resolution, and temporal coverage.
The Mikulski Archive for Space Telescopes (MAST) is a NASA-funded project to support and provide to the astronomical community a variety of astronomical data archives, with the primary focus on scientifically related data sets in the optical, ultraviolet, and near-infrared parts of the spectrum. MAST is located at the Space Telescope Science Institute (STScI).
The Wolfram Data Repository is a public resource that hosts an expanding collection of computable datasets, curated and structured to be suitable for immediate use in computation, visualization, analysis and more. Building on the Wolfram Data Framework and the Wolfram Language, the Wolfram Data Repository provides a uniform system for storing data and making it immediately computable and useful. With datasets of many types and from many sources, the Wolfram Data Repository is built to be a global resource for public data and data-backed publication.
The Alternative Fuels Data Center (AFDC) is a comprehensive clearinghouse of information about advanced transportation technologies. The AFDC offers transportation decision makers unbiased information, data, and tools related to the deployment of alternative fuels and advanced vehicles. The AFDC launched in 1991 in response to the Alternative Motor Fuels Act of 1988 and the Clean Air Act Amendments of 1990. It originally served as a repository for alternative fuel performance data. The AFDC has since evolved to offer a broad array of information resources that support efforts to reduce petroleum use in transportation. The AFDC serves Clean Cities stakeholders, fleets regulated by the Energy Policy Act, businesses, policymakers, government agencies, and the general public.
AceView provides a curated, comprehensive, and non-redundant sequence representation of all public mRNA sequences (mRNAs from GenBank or RefSeq, and single-pass cDNA sequences from dbEST and Trace). These experimental cDNA sequences are first co-aligned on the genome, then clustered into a minimal number of alternative transcript variants and grouped into genes. Used exhaustively and to high quality standards, the available cDNA sequences evidence the beauty and complexity of the mammalian transcriptome and the relative simplicity of the nematode and plant transcriptomes. Genes are classified according to their inferred coding potential; many presumably non-coding genes are discovered. Genes are named by Entrez Gene names when available, otherwise by AceView gene names, which are stable from release to release. Alternative features (promoters, introns and exons, polyadenylation signals) and coding potential, including motifs, domains, and homologies, are annotated in depth; tissues where expression has been observed are listed in order of representation; diseases, phenotypes, pathways, functions, localization, or interactions are annotated by mining selected sources, in particular PubMed, GAD, and Entrez Gene, and also by performing manual annotation, especially in the worm. In this way, both the anatomy and physiology of the experimentally cDNA-supported human, mouse, and nematode genes are thoroughly annotated.
>>>!!!<<< 2019-02-19: The repository is no longer available >>>!!!<<< >>>!!!<<< Data are archived at ChemSpider: https://www.chemspider.com/Search.aspx?dsn=UsefulChem and https://www.chemspider.com/Search.aspx?dsn=Usefulchem Group Bradley Lab >>>!!!<<< See more information at the Standards tab under 'Remarks'.
Data Basin is a science-based mapping and analysis platform that supports learning, research, and sustainable environmental stewardship.
Data Observation Network for Earth (DataONE) is the foundation of new innovative environmental science through a distributed framework and sustainable cyberinfrastructure that meets the needs of science and society for open, persistent, robust, and secure access to well-described and easily discovered Earth observational data. Supported by the U.S. National Science Foundation (Grant #OCI-0830944) as one of the initial DataNets, DataONE will ensure the preservation, access, use, and reuse of multi-scale, multi-discipline, and multi-national science data via three primary cyberinfrastructure elements and a broad education and outreach program.
NCEP delivers national and global weather, water, climate, and space weather guidance, forecasts, warnings, and analyses to its partners and external user communities. The National Centers for Environmental Prediction (NCEP), an arm of NOAA's National Weather Service (NWS), comprises nine distinct Centers and the Office of the Director, which provide a wide variety of national and international weather guidance products to National Weather Service field offices, government agencies, emergency managers, private sector meteorologists, and meteorological organizations and societies throughout the world. NCEP is a critical national resource in national and global weather prediction and the starting point for nearly all weather forecasts in the United States. The Centers are: Aviation Weather Center (AWC), Climate Prediction Center (CPC), Environmental Modeling Center (EMC), NCEP Central Operations (NCO), National Hurricane Center (NHC), Ocean Prediction Center (OPC), Storm Prediction Center (SPC), Space Weather Prediction Center (SWPC), and Weather Prediction Center (WPC).
The primary focus of the Upper Ocean Processes Group is the study of physical processes in the upper ocean and at the air-sea interface using moored surface buoys equipped with meteorological and oceanographic sensors. The Upper Ocean Processes Group provides technical support to upper ocean and air-sea interface science programs. Deep-ocean and shallow-water moored surface buoy arrays are designed, fabricated, instrumented, tested, and deployed at sea for periods of up to one year.
This interface provides access to several types of data related to the Chesapeake Bay. Bay Program databases can be queried based upon user-defined inputs such as geographic region and date range. Each query results in a downloadable, tab- or comma-delimited text file that can be imported into any program (e.g., SAS, Excel, Access) for further analysis. Comments regarding the interface are encouraged. Questions about the data should be addressed to the contact provided on subsequent pages.
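As an illustration only, here is a minimal sketch of loading one of these exports for analysis in Python with pandas; the file name is hypothetical, and the delimiter depends on whether a tab- or comma-delimited export was requested.

    # Load a Bay Program query export; use sep="," if the export is comma-delimited.
    import pandas as pd

    df = pd.read_csv("bay_query_export.txt", sep="\t")
    print(df.head())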
TERN provides open data, research and management tools, data infrastructure, and site-based research equipment. The open-access ecosystem data are provided through the TERN Data Discovery Portal; see https://www.re3data.org/repository/r3d100012013
The goals of the Drosophila Genome Center are to finish the sequence of the euchromatic genome of Drosophila melanogaster to high quality and to generate and maintain biological annotations of this sequence. In addition to genomic sequencing, the BDGP is 1) producing gene disruptions using P element-mediated mutagenesis on a scale unprecedented in metazoans; 2) characterizing the sequence and expression of cDNAs; and 3) developing informatics tools that support the experimental process, identify features of DNA sequence, and allow us to present up-to-date information about the annotated sequence to the research community.