
Search tips:
  • * at the end of a keyword allows wildcard searches
  • " quotation marks can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop, i.e. how far apart the phrase's words may be (see the examples below)
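As an illustration, here is a minimal sketch of queries composed with these operators. It assumes the search follows Elasticsearch-style query_string semantics, which these tips resemble; the query terms themselves are made-up examples, not fields of this registry:

```python
# Illustrative search strings, one per rule above (assumed to follow
# Elasticsearch-style query_string semantics; the terms are invented).
examples = {
    "wildcard": "genom*",                  # genome, genomics, ...
    "phrase":   '"water quality"',         # exact phrase match
    "and":      "ocean + salmon",          # both terms (the default)
    "or":       "hurricane | cyclone",     # either term
    "not":      "brain - tumor",           # exclude a term
    "grouping": "(mouse | rat) + genome",  # parentheses set precedence
    "fuzzy":    "transcrptome~2",          # within 2 edits of the word
    "slop":     '"tree life"~1',           # words up to 1 position apart
}

for rule, query in examples.items():
    print(f"{rule:>8}: {query}")
```

Each string can be pasted into the search box as-is; the dictionary keys only label which rule from the list is being exercised.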
Found 40 results
The Chesapeake Bay Environmental Observatory (CBEO) is a prototype to demonstrate the utility of newly developed Cyberinfrastructure (CI) components for transforming environmental research, education, and management. The CBEO project uses a specific problem of water quality (hypoxia) as a means of directly involving users and demonstrating the prototype's utility. Data from the Test Bed are being brought into a CBEO Portal on a National Geoinformatics Grid developed by the NSF-funded GEON. This is a cyberinfrastructure network that allows users access to datasets as well as the tools with which to analyze the data. Currently, Test Bed data available on the CBEO Portal include Water Quality Model output and water quality monitoring data from the Chesapeake Bay Program's CIMS database. These data are also available as aggregated "data cubes". Available tools include the Data Access System for Hydrology (DASH), Hydroseek, and an online R-based interpolator.
---<<< This repository is no longer available. This record is outdated. >>>--- The ONS challenge contains open solubility data: experiments with raw data from different scientists and institutions. It is part of the Open Notebook Science wiki community, ideally suited for community-wide collaborative research projects involving mathematical modeling and computer simulation work, as it allows researchers to document model development in a step-by-step fashion, then link model predictions to experiments that test the model, and in turn use feedback from experiments to evolve the model. By making our laboratory notebooks public, the evolutionary process of a model can be followed in its totality by the interested reader. Researchers from laboratories around the world can now follow the progress of our research day-to-day, borrow models at various stages of development, comment or advise on model developments, discuss experiments, ask questions, provide feedback, or otherwise contribute to the progress of science in any manner possible.
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first comprehensive, online first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously published results, with strong collaborations between computational and empirical biologists to develop, test, and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
NASA Life Sciences Portal is the next generation of the Life Sciences Data Archive for Human, Animal, and Plant Research. NASA's Human Research Program (HRP) conducts research and develops technologies that allow humans to travel safely and productively in space. The Program uses evidence from data collected on astronauts, as well as other supporting studies. These data are stored in the research data repository, the Life Sciences Data Archive (LSDA).
The mission of NCHS is to provide statistical information that will guide actions and policies to improve the health of the American people. As the Nation's principal health statistics agency, NCHS is responsible for collecting accurate, relevant, and timely data. NCHS' mission, and those of its counterparts in the Federal statistics system, focuses on the collection, analysis, and dissemination of information that is of use to a broad range of users.
>>>!!!<<< 2019-01: The Global Land Cover Facility went offline; see https://spatialreserves.wordpress.com/2019/01/07/global-land-cover-facility-goes-offline/ ; http://www.landcover.org is no longer accessible. >>>!!!<<< The Global Land Cover Facility (GLCF) provides earth science data and products to help everyone better understand global environmental systems. In particular, the GLCF develops and distributes remotely sensed satellite data and products that explain land cover from the local to global scales.
“B-Clear” stands for Bloomington Clear, or Be Clear about what we’re up to. B-Clear is a one-stop place to build an ever-growing assembly of useful data. We’re organizing it as open, accessible data so everyone can see, use, and manipulate it.
The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remote Operated Vehicle (ROV) Jason 2, and the Autonomous Underwater Vehicle (AUV) Sentry. Data acquired with these platforms are provided both to the science party on each expedition and to the Woods Hole Oceanographic Institution (WHOI) Data Library.
State of the Salmon provides data on abundance, diversity, and ecosystem health of wild salmon populations in the Pacific Ocean, northwestern North America, and Asia. Data downloads are available using two geographic frameworks: Salmon Ecoregions or Hydro 1K.
The Roper Center has made available its entire collection of primary exit polls. Primary exit poll datasets include the standard demographic makeup of interviewees and questions pertinent to the issues of each state.
The Brain Biodiversity Bank refers to the repository of images of and information about brain specimens contained in the collections associated with the National Museum of Health and Medicine at the Armed Forces Institute of Pathology in Washington, DC. These collections include, besides the Michigan State University Collection, the Welker Collection from the University of Wisconsin, the Yakovlev-Haleem Collection from Harvard University, the Meyer Collection from Johns Hopkins University, the Huber-Crosby and Crosby-Lauer Collections from the University of Michigan, and the C.U. Ariëns Kappers brain collection from Amsterdam, Netherlands. The site introduces online atlases of the brains of humans, sheep, dolphins, and other animals: a world resource for illustrations of whole brains and stained sections from a great variety of mammals.
As with most biomedical databases, the first step is to identify relevant data from the research community. The Monarch Initiative is focused primarily on phenotype-related resources. We bring in data associated with those phenotypes so that our users can begin to make connections among other biological entities of interest. We import data from a variety of sources. With many resources integrated into a single database, we can join across the various data sources to produce integrated views. We have started with the big players, including ClinVar and OMIM, but are equally interested in boutique databases. You can learn more about the sources of data that populate our system on our data sources page: https://monarchinitiative.org/about/sources.
>>>!!!<<< On June 1, 2020, the Academic Seismic Portal repositories at UTIG were merged into a single collection hosted at Lamont-Doherty Earth Observatory. Content here was removed July 1, 2020. Visit the Academic Seismic Portal @LDEO! https://www.marine-geo.org/collections/#!/collection/Seismic#summary (https://www.re3data.org/repository/r3d100010644) >>>!!!<<<
AceView provides a curated, comprehensive, and non-redundant sequence representation of all public mRNA sequences (mRNAs from GenBank or RefSeq, and single-pass cDNA sequences from dbEST and Trace). These experimental cDNA sequences are first co-aligned on the genome, then clustered into a minimal number of alternative transcript variants and grouped into genes. By using the available cDNA sequences exhaustively and to high quality standards, AceView evidences the beauty and complexity of the mammalian transcriptome, and the relative simplicity of the nematode and plant transcriptomes. Genes are classified according to their inferred coding potential; many presumably non-coding genes are discovered. Genes are named by Entrez Gene names when available, otherwise by AceView gene names, stable from release to release. Alternative features (promoters, introns and exons, polyadenylation signals) and coding potential, including motifs, domains, and homologies, are annotated in depth; tissues where expression has been observed are listed in order of representation; diseases, phenotypes, pathways, functions, localization, or interactions are annotated by mining selected sources, in particular PubMed, GAD, and Entrez Gene, and also by manual annotation, especially in the worm. In this way, both the anatomy and physiology of the experimentally cDNA-supported human, mouse, and nematode genes are thoroughly annotated.
MycoCosm is the DOE JGI's web-based fungal genomics resource, which integrates fungal genomics data and analytical tools for fungal biologists. It provides navigation through sequenced genomes, genome analysis in the context of comparative genomics, and a genome-centric view. MycoCosm promotes user community participation in data submission, annotation, and analysis.
Clinical Genomic Database (CGD) is a manually curated database of conditions with known genetic causes, focusing on medically significant genetic data with available interventions.
This database serves forest tree scientists by providing online access to hardwood tree genomic and genetic data, including assembled reference genomes, transcriptomes, and genetic mapping information. The website also provides access to tools for mining and visualizing these data sets, including BLAST for comparing sequences, JBrowse for browsing genomes, Apollo for community annotation, and Expression Analysis for building gene expression heatmaps.
mentha archives evidence collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. The aggregated data form an interactome that includes many organisms. mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles. However, no database alone has sufficient literature coverage to offer a complete resource to investigate "the interactome". mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns each interaction a reliability score that takes into account all the supporting evidence, as sketched below. mentha offers eight interactomes (Homo sapiens, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Escherichia coli K12, Mus musculus, Rattus norvegicus, Saccharomyces cerevisiae) plus a global network that comprises every organism, including those not mentioned. The website and the graphical application are designed to make the data stored in mentha accessible and analysable by all users. Source databases are: MINT, IntAct, DIP, MatrixDB, and BioGRID.
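The record states that each interaction carries a reliability score aggregating all supporting evidence, but does not reproduce the formula. The sketch below illustrates one plausible way to combine independent per-evidence scores (a noisy-OR aggregation); this is an assumption for illustration, not mentha's published scoring method:

```python
from math import prod

def combined_reliability(evidence_scores):
    """Noisy-OR aggregation of per-evidence scores in [0, 1].

    Each independent piece of supporting evidence lowers the chance
    that the interaction is spurious. Illustrative assumption only;
    this is not mentha's published scoring formula.
    """
    return 1.0 - prod(1.0 - s for s in evidence_scores)

# Three curated reports of the same interaction, scored by source quality:
print(combined_reliability([0.5, 0.4, 0.3]))  # 1 - 0.5*0.6*0.7 = 0.79
```

The key property this captures is the one the record emphasizes: more independent supporting evidence monotonically raises the score, and no single source can be dropped without lowering it.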
Greenland Environmental Observatory (GEOSummit) provides long-term, year-round data on core atmospheric measurements, spatial phenomena, ice sheets, and the Arctic environment. These data are available to researchers through the National Science Foundation's Science Coordination Office (SCO), which coordinates all research at GEOSummit. Currently there is no central platform for multi-collaborator data distribution; for specific information related to research, it is recommended to contact investigators directly.
Academic Torrents is a distributed data repository. The Academic Torrents network is built for researchers, by researchers. Its distributed peer-to-peer library system automatically replicates your datasets on many servers, so you don't have to worry about managing your own servers or file availability. Everyone who has data becomes a mirror for those data, so the system is fault-tolerant (a minimal download sketch follows below).
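Because each dataset is addressed by a standard BitTorrent info-hash, any torrent client can fetch it and then re-seed it for others. As a hedged illustration, here is a minimal sketch using the community academictorrents Python package; the package name and its get() call are assumptions drawn from its public README (they may differ in current releases), and the info-hash is a placeholder:

```python
# Minimal download sketch. Assumes the community "academictorrents"
# package (pip install academictorrents); its name and get() API are
# taken from its public README and may differ in current releases.
import academictorrents as at

# Placeholder info-hash: substitute the 40-character hash shown on a
# dataset's Academic Torrents page.
INFO_HASH = "0123456789abcdef0123456789abcdef01234567"

path = at.get(INFO_HASH)  # downloads (or resumes) into a local cache
print("dataset stored at:", path)
```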
The JPL Tropical Cyclone Information System (TCIS) was developed to support hurricane research. There are three components to TCIS: a global archive of multi-satellite hurricane observations, 1999-2010 (the Tropical Cyclone Data Archive); the North Atlantic Hurricane Watch; and the NASA Convective Processes Experiment (CPEX) aircraft campaign. Together, data and visualizations from the real-time system and the data archive can be used to study hurricane processes, validate and improve models, and assist in developing new algorithms and data assimilation techniques.
The International Ocean Discovery Program’s (IODP) Gulf Coast Repository (GCR) is located in the Research Park on the Texas A&M University campus in College Station, Texas. This repository stores DSDP, ODP, and IODP cores from the Pacific Ocean, the Caribbean Sea and Gulf of Mexico, and the Southern Ocean. A satellite repository at Rutgers University houses New Jersey/Delaware land cores 150X and 174AX.