  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
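For example, an illustrative query such as genom* +"data sharing" -cancer matches records containing terms that begin with "genom" and the exact phrase "data sharing" while excluding records that mention "cancer"; appending ~1 to a word, e.g. genome~1, also matches terms within one edit, such as "genomes".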
Found 53 result(s)
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously-published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
Silkworm Pathogen Database (SilkPathDB) is a comprehensive resource for studying pathogens of the silkworm, including microsporidia, fungi, bacteria and viruses. SilkPathDB provides access not only to genomic data, including functional annotation of genes and gene products, but also to extensive biological information such as gene expression data and related research. SilkPathDB supports research on pathogens of the silkworm as well as of other lepidopteran insects.
The repository is offline. The current successor is https://www.lovd.nl/USH1C. The database contains all the variants published as pathogenic mutations in the international literature up to November 2007. In addition, unpublished Usher mutations and non-pathogenic variants from the laboratory of Montpellier have been included.
The RAMEDIS system is a platform independent, web-based information system for rare metabolic diseases based on filed case reports. It was developed in close cooperation with clinical partners to allow them to collect information on rare metabolic diseases with extensive details, e.g. about occurring symptoms, laboratory findings, therapy and molecular data.
The JenAge Ageing Factor Database AgeFactDB is aimed at the collection and integration of ageing phenotype and lifespan data. Ageing factors are genes, chemical compounds, or other factors such as dietary restriction. In a first step, ageing-related data are primarily taken from existing databases. In addition, new ageing-related information is included by both manual and automatic information extraction from the scientific literature. Based on a homology analysis, AgeFactDB also includes genes that are homologous to known ageing-related genes. These homologs are considered candidate or putative ageing-related genes.
OrthoMCL is a genome-scale algorithm for grouping orthologous protein sequences. It provides not only groups shared by two or more species/genomes, but also groups representing species-specific gene expansion families, and so serves as an important utility for automated eukaryotic genome annotation. OrthoMCL starts with reciprocal best hits within each genome as potential in-paralog/recent paralog pairs and reciprocal best hits across any two genomes as potential ortholog pairs. Related proteins are interlinked in a similarity graph. Then MCL (the Markov Clustering algorithm; Van Dongen 2000; www.micans.org/mcl) is invoked to split mega-clusters. This process is analogous to the manual review in COG construction. MCL clustering is based on weights between each pair of proteins, so to correct for differences in evolutionary distance the weights are normalized before running MCL.
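The entry above names the Markov Clustering (MCL) step without showing it; what follows is a minimal sketch of MCL's expansion/inflation loop in Python with NumPy, assuming a small symmetric similarity matrix as input. The function, toy data and parameter values are illustrative and not taken from OrthoMCL, which applies the mcl program to normalized similarity weights.

import numpy as np

def mcl(similarity, inflation=2.0, max_iter=100, tol=1e-6):
    # Markov Clustering on a symmetric, non-negative similarity matrix.
    # "inflation" is the element-wise power that sharpens cluster structure.
    M = similarity.astype(float) + np.eye(len(similarity))  # add self-loops
    M /= M.sum(axis=0)                                      # make columns stochastic
    for _ in range(max_iter):
        expanded = M @ M                    # expansion: two steps of the random walk
        inflated = expanded ** inflation    # inflation: strengthen strong edges
        inflated /= inflated.sum(axis=0)    # renormalise columns
        if np.abs(inflated - M).max() < tol:
            break
        M = inflated
    # After convergence, rows that retain non-zero mass belong to "attractor" nodes;
    # the non-zero columns of each attractor row form one cluster.
    clusters = []
    for i in np.where(M.sum(axis=1) > tol)[0]:
        members = frozenset(np.flatnonzero(M[i] > tol))
        if members not in clusters:
            clusters.append(members)
    return clusters

# Toy similarity graph: proteins 0-2 are tightly linked, 3-4 form a second group.
S = np.array([[0, 5, 4, 0, 0],
              [5, 0, 6, 0, 0],
              [4, 6, 0, 1, 0],
              [0, 0, 1, 0, 7],
              [0, 0, 0, 7, 0]], dtype=float)
print(mcl(S))  # expected to yield two clusters, e.g. {0, 1, 2} and {3, 4}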
INDEPTH is a global network of research centres that conduct longitudinal health and demographic evaluation of populations in low- and middle-income countries (LMICs). INDEPTH aims to strengthen global capacity for Health and Demographic Surveillance Systems (HDSSs), and to mount multi-site research to guide health priorities and policies in LMICs, based on up-to-date scientific evidence. The data collected by the INDEPTH Network members constitute a valuable resource of population and health data for LMICs. This repository aims to make well-documented, anonymised longitudinal microdata from these Centres available to data users.
As with most biomedical databases, the first step is to identify relevant data from the research community. The Monarch Initiative is focused primarily on phenotype-related resources. We bring in data associated with those phenotypes so that our users can begin to make connections among other biological entities of interest. We import data from a variety of data sources. With many resources integrated into a single database, we can join across the various data sources to produce integrated views. We have started with the big players including ClinVar and OMIM, but are equally interested in boutique databases. You can learn more about the sources of data that populate our system from our data sources page https://monarchinitiative.org/about/sources.
The Database of Protein Disorder (DisProt) is a curated database that provides information about proteins that lack fixed 3D structure in their putatively native states, either in their entirety or in part. DisProt is a community resource that annotates protein sequences for intrinsically disordered regions from the literature. It classifies intrinsic disorder based on experimental methods and three ontologies for molecular function, transition and binding partner.
The Cancer Genome Atlas (TCGA) Data Portal provides a platform for researchers to search, download, and analyze data sets generated by TCGA. It contains clinical information, genomic characterization data, and high level sequence analysis of the tumor genomes. The Data Coordinating Center (DCC) is the central provider of TCGA data. The DCC standardizes data formats and validates submitted data.
This database serves forest tree scientists by providing online access to hardwood tree genomic and genetic data, including assembled reference genomes, transcriptomes, and genetic mapping information. The web site also provides access to tools for mining and visualization of these data sets, including BLAST for comparing sequences, Jbrowse for browsing genomes, Apollo for community annotation and Expression Analysis to build gene expression heatmaps.
BioGrid Australia Limited operates a federated data-sharing platform for collaborative translational health and medical research. It provides a secure infrastructure that advances health research by linking privacy-protected and ethically approved data across a wide network of health collaborators. BioGrid links real-time de-identified health data across institutions, jurisdictions and diseases to help researchers and clinicians improve their research and clinical outcomes. The web-based infrastructure provides ethical access while protecting both privacy and intellectual property.
The European Xenopus Resource Centre (EXRC) is situated in Portsmouth, United Kingdom and provides tools and services to support researchers using Xenopus models. The EXRC depends on researchers to obtain and deposit Xenopus transgenic and mutant lines, Xenopus in-situ hybridization clones, Xenopus-specific antibodies and other resources with the centre. EXRC staff perform quality assurance testing on these reagents and then make them available to the community at cost. The EXRC also supplies wild-type Xenopus, embryos, oocytes, egg extracts, X. tropicalis Fosmids, X. laevis BACs and ORFeomes.
The Copernicus Marine Environment Monitoring Service (CMEMS) provides regular and systematic reference information on the physical and biogeochemical state, variability and dynamics of the ocean and marine ecosystems for the global ocean and the European regional seas. The observations and forecasts produced by the service support all marine applications, including marine safety; marine resources; the coastal and marine environment; and weather, seasonal forecasting and climate. For instance, the provision of data on currents, winds and sea ice helps to improve ship routing services, offshore operations and search and rescue operations, thus contributing to marine safety. The service also contributes to the protection and sustainable management of living marine resources, in particular for aquaculture, sustainable fisheries management and the decision-making processes of regional fishery organisations. Physical and marine biogeochemical components are useful for water quality monitoring and pollution control. Sea level rise is a key indicator of climate change and helps to assess coastal erosion. Sea surface temperature elevation has direct consequences on marine ecosystems and the appearance of tropical cyclones. As a result, the service supports a wide range of coastal and marine environment applications. Many of the data delivered by the service (e.g. temperature, salinity, sea level, currents, wind and sea ice) also play a crucial role in weather, climate and seasonal forecasting.
Our research focuses mainly on the past and present bio- and geodiversity and the evolution of animals and plants. The Information Technology Center of the Staatliche Naturwissenschaftliche Sammlungen Bayerns is the institutional repository for scientific data of the SNSB. Its major tasks focus on the management of bio- and geodiversity data using different kinds of information technological structures. The facility guarantees a sustainable curation, storage, archiving and provision of such data.
VertNet is an NSF-funded collaborative project that makes biodiversity data free and available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Yet, VertNet is still the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
The Maize Genetics and Genomics Database (MaizeGDB) focuses on collecting data related to the crop plant and model organism Zea mays. The project's goals are to synthesize, display, and provide access to maize genomics and genetics data, prioritizing mutant and phenotype data and tools, structural and genetic map sets, and gene models. MaizeGDB also aims to make the Maize Newsletter available and to provide support services to the community of maize researchers. MaizeGDB is working with the Schnable lab, the Panzea project, the Genome Reference Consortium, and the iPlant Collaborative to create a plan for archiving, disseminating, visualizing, and analyzing diversity data. MaizeGDB is short for Maize Genetics/Genomics Database. It is a USDA/ARS-funded project to integrate the data found in MaizeDB and ZmDB into a single schema, develop an effective interface to access these data, and develop additional tools to make data analysis easier. Our long-term goal is a true next-generation online maize database.
The Fungal Genetics Stock Center has preserved and distributed strains of genetically characterized fungi since 1960. The collection includes over 20,000 accessioned strains of classical and genetically engineered mutants of key model, human, and plant pathogenic fungi. These materials are distributed as living stocks to researchers around the world.
The Paleobiology Database (PaleoBioDB) is a non-governmental, non-profit public resource for paleontological data. It has been organized and operated by a multi-disciplinary, multi-institutional, international group of paleobiological researchers. Its purpose is to provide global, collection-based occurrence and taxonomic data for organisms of all geological ages, as well as data services that allow easy access to data for independent development of analytical tools, visualization software, and applications of all types. The Database's broader goal is to encourage and enable data-driven collaborative efforts that address large-scale paleobiological questions.
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing worm behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
The NCMA maintains the largest and most diverse collection of publicly available marine algal strains in the world. The algal strains in the collection have been obtained from all over the world, from polar to tropical waters and from marine, freshwater, brackish, and hyper-saline environments. New strains (50 - 100 per year) are added largely through the accession of strains deposited by scientists in the community. A stringent accession policy is required to help populate the collection with a diverse range of strains.