
Search syntax:
  • * at the end of a keyword enables wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (word-distance) amount
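The operators above follow Lucene-style query-string syntax. A minimal sketch of how the operators combine, assuming that syntax (the `compose` helper and the example terms are illustrative, not part of the registry):

```python
# Illustrative examples of the search syntax described above.
# The compose() helper is hypothetical; only the operator semantics
# come from the list above (Lucene-style query-string syntax).

def compose(*terms: str) -> str:
    """Join query terms with the explicit AND (+) operator."""
    return " + ".join(terms)

# Wildcard: matches "genome", "genomics", "genomes", ...
wildcard = "genom*"

# Phrase search with quotes, allowing up to 2 words of slop:
phrase = '"sequence read archive"~2'

# Fuzzy match within edit distance 1:
fuzzy = "malria~1"

# Grouping with OR and NOT:
grouped = "(mouse | rat) + -human"

query = compose(wildcard, grouped)
print(query)  # genom* + (mouse | rat) + -human
```

Each operator composes freely with the others, so a single query string can mix wildcards, phrases, and boolean groups.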
Found 31 result(s)
Note: intrepidbio.com has expired. Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflow, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously-published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
We are a leading international centre for genomics and bioinformatics research. Our mandate is to advance knowledge about cancer and other diseases, to improve human health through disease prevention, diagnosis and therapeutic approaches, and to realize the social and economic benefits of genomics research.
MGI is the international database resource for the laboratory mouse, providing integrated genetic, genomic, and biological data to facilitate the study of human health and disease. The projects contributing to this resource are: the Mouse Genome Database (MGD) Project, Gene Expression Database (GXD) Project, Mouse Tumor Biology (MTB) Database Project, Gene Ontology (GO) Project at MGI, MouseMine Project, and MouseCyc Project at MGI.
The NCI's Genomic Data Commons (GDC) provides the cancer research community with a unified data repository that enables data sharing across cancer genomic studies in support of precision medicine. The GDC obtains validated datasets from NCI programs whose tissue-collection strategies couple quantity with high quality. Tools are provided to guide data submissions by researchers and institutions.
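The GDC also exposes a public REST API whose search endpoints accept a JSON `filters` parameter. The sketch below builds such a payload; the field names (e.g. `cases.project.primary_site`) follow the public GDC API documentation but should be verified against the current docs before use:

```python
import json

# Sketch of a GDC-style API query payload. The filter fields are
# assumptions drawn from the public GDC API docs, not from this page.
filters = {
    "op": "and",
    "content": [
        {"op": "in",
         "content": {"field": "cases.project.primary_site",
                     "value": ["Lung"]}},
        {"op": "in",
         "content": {"field": "files.data_format",
                     "value": ["BAM"]}},
    ],
}

# Query parameters as they would accompany a GET request to a GDC
# search endpoint such as https://api.gdc.cancer.gov/files;
# the HTTP request itself is omitted here.
params = {
    "filters": json.dumps(filters),
    "format": "json",
    "size": "10",
}
```

Nested `op`/`content` objects let callers compose arbitrary boolean filter trees before serializing them into a single query parameter.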
The Bacterial and Viral Bioinformatics Resource Center (BV-BRC) is an information system designed to support research on bacterial and viral infectious diseases. BV-BRC combines two long-running BRCs: PATRIC, the bacterial system, and IRD/ViPR, the viral systems.
GeneWeaver combines cross-species data and gene entity integration, scalable hierarchical analysis of user data with a community-built and curated data archive of gene sets and gene networks, and tools for data-driven comparison of user-defined biological, behavioral, and disease concepts. GeneWeaver allows users to integrate gene sets across species, tissue, and experimental platform. It differs from conventional gene set over-representation analysis tools in that it allows users to evaluate intersections among all combinations of a collection of gene sets, including, but not limited to, annotations to controlled vocabularies. There are numerous applications of this approach. Sets can be stored, shared, and compared privately, among user-defined groups of investigators, and across all users.
BrainMaps.org, launched in May 2005, is an interactive multiresolution next-generation brain atlas that is based on over 20 million megapixels of sub-micron resolution, annotated, scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet. Currently featured are complete brain atlas datasets for various species, including Macaca mulatta, Chlorocebus aethiops, Felis catus, Mus musculus, Rattus norvegicus, and Tyto alba.
The National Practitioner Data Bank (NPDB), or "the Data Bank," is a confidential information clearinghouse created by Congress with the primary goals of improving health care quality, protecting the public, and reducing health care fraud and abuse in the U.S.
NeuroMorpho.Org is a centrally curated inventory of digitally reconstructed neurons associated with peer-reviewed publications. It contains contributions from over 80 laboratories worldwide and is continuously updated as new morphological reconstructions are collected, published, and shared. To date, NeuroMorpho.Org is the largest collection of publicly accessible 3D neuronal reconstructions and associated metadata, which can be used for detailed single-cell simulations.
Funded by the National Science Foundation (NSF) and proudly operated by Battelle, the National Ecological Observatory Network (NEON) program provides open, continental-scale data across the United States that characterize and quantify complex, rapidly changing ecological processes. The Observatory’s comprehensive design supports greater understanding of ecological change and enables forecasting of future ecological conditions. NEON collects and processes data from field sites located across the continental U.S., Puerto Rico, and Hawaii over a 30-year timeframe. NEON provides free and open data that characterize plants, animals, soil, nutrients, freshwater, and the atmosphere. These data may be combined with external datasets or data collected by individual researchers to support the study of continental-scale ecological change.
The Malaria Atlas Project (MAP) brings together researchers based around the world with expertise in a wide range of disciplines from public health to mathematics, geography and epidemiology. We work together to generate new and innovative methods of mapping malaria risk. Ultimately our goal is to produce a comprehensive range of maps and estimates that will support effective planning of malaria control at national and international scales.
This site provides access to complete, annotated genomes from bacteria and archaea (present in the European Nucleotide Archive) through the Ensembl graphical user interface (genome browser). Ensembl Bacteria contains genomes from annotated INSDC records that are loaded into Ensembl multi-species databases, using the INSDC annotation import pipeline.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest senses. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly-distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
DDBJ Sequence Read Archive (DRA) is the public archive of high-throughput sequencing data. DRA stores raw sequencing data and alignment information to enhance reproducibility and facilitate new discoveries through data analysis. DRA is a member of the International Nucleotide Sequence Database Collaboration (INSDC) and archives the data in close collaboration with the NCBI Sequence Read Archive (SRA) and the EBI Sequence Read Archive (ERA).
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where you put them.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contributions. Assign citable contributor credit to any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, either as it is developed or after publication of reports.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow. A structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.
The GHO data repository is WHO's gateway to health-related statistics for its 194 Member States. It provides access to over 1000 indicators on priority health topics, including mortality and burden of disease, the Millennium Development Goals (child nutrition, child health, maternal and reproductive health, immunization, HIV/AIDS, tuberculosis, malaria, neglected diseases, water and sanitation), noncommunicable diseases and risk factors, epidemic-prone diseases, health systems, environmental health, violence and injuries, and equity, among others. In addition, the GHO provides online access to WHO's annual summary of health-related data for its Member States: the World Health Statistics.
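GHO indicators can also be retrieved programmatically through WHO's GHO OData API. A minimal sketch of building such a query URL, assuming the published OData base URL; the indicator code `WHOSIS_000001` (life expectancy at birth) and the filter values are illustrative and should be checked against the GHO indicator list:

```python
from urllib.parse import urlencode

# Hedged sketch of a GHO OData query URL. The base URL follows WHO's
# published GHO OData API; indicator code and filter are assumptions.
BASE = "https://ghoapi.azureedge.net/api"
indicator = "WHOSIS_000001"  # assumed: life expectancy at birth

# OData $filter narrows results by dimension values, here a country
# code (SpatialDim) and a year (TimeDim).
query = urlencode({"$filter": "SpatialDim eq 'DEU' and TimeDim eq 2019"})
url = f"{BASE}/{indicator}?{query}"
print(url)
```

A GET request to the resulting URL would return the matching indicator values as JSON; the request itself is omitted here.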
arrayMap is a repository of cancer genome profiling data. Original data from primary repositories (e.g. NCBI GEO, EBI ArrayExpress, TCGA) are re-processed and annotated with metadata. Unique visualization of the processed data allows critical evaluation of data quality and genome information. Structured metadata provides easy access to summary statistics, with a focus on copy number aberrations in cancer entities.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one uniform platform for users to access, present, and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
Ag Data Commons provides access to a wide variety of open data relevant to agricultural research. We are a centralized repository for data already on the web, as well as for new data being published for the first time. While compliance with the U.S. Federal public access and open data directives is important, we aim to surpass them. Our goal is to foster innovative data re-use, integration, and visualization to support bigger, better science and policy.