
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
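Taken together, the operators above can be combined; the following are hypothetical search strings, shown purely to illustrate the syntax:

```
genom*                     wildcard: genome, genomic, genomics, …
"gene expression"          exact phrase
mouse + phenotype          both terms must match (AND, the default)
human | mouse              either term matches (OR)
genome - virus             excludes records matching "virus"
(human | mouse) + genome   parentheses set priority
genom~2                    fuzzy match within edit distance 2
"gene expression"~3        phrase match with slop 3
```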
Found 51 result(s)
The HUGO Gene Nomenclature Committee (HGNC) has assigned unique gene symbols and names to over 35,000 human loci, of which around 19,000 are protein-coding. This curated online repository of HGNC-approved gene nomenclature and associated resources includes links to genomic, proteomic and phenotypic information, as well as dedicated gene family pages.
The Gene database provides detailed information for known and predicted genes defined by nucleotide sequence or map position. Gene supplies gene-specific connections in the nexus of map, sequence, expression, structure, function, citation, and homology data. Unique identifiers are assigned to genes with defining sequences, genes with known map positions, and genes inferred from phenotypic information. These gene identifiers are used throughout NCBI's databases and tracked through updates of annotation. Gene includes genomes represented by NCBI Reference Sequences (or RefSeqs) and is integrated for indexing and query and retrieval from NCBI's Entrez and E-Utilities systems.
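Programmatic access to Gene through the E-utilities mentioned above can be sketched as follows. The ESearch endpoint and its `db`/`term`/`retmode` parameters are standard E-utilities usage; the query itself is only an example:

```python
# Sketch: building an NCBI E-utilities ESearch URL restricted to the Gene
# database. Fetching the URL returns JSON whose "esearchresult" payload
# lists matching Gene IDs.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def gene_search_url(symbol: str, organism: str = "human") -> str:
    """Build an ESearch URL for a gene symbol in a given organism."""
    params = {
        "db": "gene",
        "term": f"{symbol}[sym] AND {organism}[orgn]",
        "retmode": "json",
    }
    return f"{EUTILS}?{urlencode(params)}"

url = gene_search_url("BRCA1")
# To execute the search: json.load(urllib.request.urlopen(url))
print(url)
```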
<<<!!!<<< Phasing out support for the Database of Genomic Variants archive (DGVa). The submission, archiving, and presentation of structural variation services offered by the DGVa are transitioning to the European Variation Archive (EVA) https://www.re3data.org/repository/r3d100011553. All of the data shown on the DGVa website are already searchable and browsable from the EVA Study Browser. Submission of structural variation data to EVA is done using the VCF format. The VCF specification allows representing multiple types of structural variants, such as insertions, deletions, duplications and copy-number variants. Other features, such as symbolic alleles, breakends and confidence intervals, support more complex events, such as translocations at an imprecise position. >>>!!!>>>
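As an illustration of the VCF conventions for structural variants, the fragment below (a hypothetical file, not EVA data) shows a deletion and a duplication encoded as symbolic alleles with SVTYPE, END and a CIPOS confidence interval, plus a breakend (BND) record:

```
##fileformat=VCFv4.3
##INFO=<ID=SVTYPE,Number=1,Type=String,Description="Type of structural variant">
##INFO=<ID=END,Number=1,Type=Integer,Description="End position of the variant">
##INFO=<ID=CIPOS,Number=2,Type=Integer,Description="Confidence interval around POS">
#CHROM  POS      ID     REF  ALT           QUAL  FILTER  INFO
1       2827694  del01  T    <DEL>         .     PASS    SVTYPE=DEL;END=2827762
2       321682   dup01  G    <DUP>         .     PASS    SVTYPE=DUP;END=324682;CIPOS=-50,50
13      123456   bnd01  C    C[17:198983[  .     PASS    SVTYPE=BND
```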
NCBI Datasets is a continually evolving platform designed to provide easy and intuitive access to NCBI’s sequence data and metadata. NCBI Datasets is part of the NIH Comparative Genomics Resource (CGR). CGR facilitates reliable comparative genomics analyses for all eukaryotic organisms through an NCBI Toolkit and community collaboration.
<<<!!!<<< NCBI announced plans to retire the Clone DB web interface. Pursuant to this retirement, starting on May 27, 2019, all web pages associated with Clone DB and CloneFinder will redirect to this blog post https://ncbiinsights.ncbi.nlm.nih.gov/?s=clone+db. Links to Clone DB from the NCBI home page will also be going away. >>>!!!>>>
MGnify (formerly: EBI Metagenomics) offers an automated pipeline for the analysis and archiving of microbiome data to help determine the taxonomic diversity and functional & metabolic potential of environmental samples. Users can submit their own data for analysis or freely browse all of the analysed public datasets held within the repository. In addition, users can request analysis of any appropriate dataset within the European Nucleotide Archive (ENA). User-submitted or ENA-derived datasets can also be assembled on request, prior to analysis.
IntAct provides a freely available, open source database system and analysis tools for molecular interaction data. All interactions are derived from literature curation or direct user submissions and are freely available.
The IMPC is a confederation of international mouse phenotyping projects working towards the agreed goals of the consortium:
  • Undertake the phenotyping of 20,000 mouse mutants over a ten-year period, providing the first functional annotation of a mammalian genome.
  • Maintain and expand a worldwide consortium of institutions with the capacity and expertise to produce germ-line transmission of targeted knockout mutations in embryonic stem cells for 20,000 known and predicted mouse genes.
  • Test each mutant mouse line through a broad-based primary phenotyping pipeline covering all the major adult organ systems and most areas of major human disease.
  • Through this activity, and employing data annotation tools, systematically aim to discover and ascribe biological function to each gene, driving new ideas and underpinning future research into biological systems.
  • Maintain and expand collaborative “networks” with specialist phenotyping consortia or laboratories, providing standardized secondary-level phenotyping that enriches the primary dataset, and end-user, project-specific tertiary-level phenotyping that adds value to the mammalian gene functional annotation and fosters hypothesis-driven research.
  • Provide a centralized data centre and portal for free, unrestricted access to primary and secondary data by the scientific community, promoting sharing of data, genotype-phenotype annotation, standard operating protocols, and the development of open-source data analysis tools.
Members of the IMPC may include research centers, funding organizations and corporations.
This resource allows users to search for and compare influenza virus genomes and gene sequences taken from GenBank. It also provides a virus sequence annotation tool and links to other influenza resources: NIAID project, JCVI Flu, Influenza research database, CDC Flu, Vaccine Selection and WHO Flu. NOTE: In Fall 2024, NCBI plans to redirect the Influenza Virus Resource to NCBI Virus, possibly as soon as September. For most up-to-date and accurate virus data, see NCBI Virus https://www.re3data.org/repository/r3d100014322
The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent set of protein-protein interactions. The data stored within the DIP database were curated both manually by expert curators and automatically using computational approaches that utilize the knowledge about protein-protein interaction networks extracted from the most reliable, core subset of the DIP data. Please check the reference page to find articles describing the DIP database in greater detail. The Database of Ligand-Receptor Partners (DLRP) is a subset of DIP (Database of Interacting Proteins). The DLRP is a database of protein ligand and protein receptor pairs that are known to interact with each other. By interact we mean that the ligand and receptor are members of a ligand-receptor complex and, unless otherwise noted, transduce a signal. In some instances the ligand and/or receptor may form a heterocomplex with other ligands/receptors in order to be functional. We have entered the majority of interactions in DLRP as full DIP entries, with links to references and additional information.
<<<!!!<<< the repository is offline >>>!!!>>> GOBASE is a taxonomically broad organelle genome database that organizes and integrates diverse data related to mitochondria and chloroplasts. GOBASE is currently expanding to include information on representative bacteria that are thought to be specifically related to the bacterial ancestors of mitochondria and chloroplasts.
The Cystic Fibrosis Mutation Database (CFTR1) was initiated by the Cystic Fibrosis Genetic Analysis Consortium in 1989 to increase and facilitate communications among CF researchers, and is maintained by the Cystic Fibrosis Centre at the Hospital for Sick Children in Toronto. The specific aim of the database is to provide up to date information about individual mutations in the CFTR gene. In a major upgrade in 2010, all known CFTR mutations and sequence variants have been converted to the standard nomenclature recommended by the Human Genome Variation Society.
SilkDB is a database of integrated genome resources for the silkworm, Bombyx mori. This database provides access not only to genomic data, including functional annotation of genes, gene products and chromosomal mapping, but also to extensive biological information such as microarray expression data, ESTs and corresponding references. SilkDB will be useful for the silkworm research community as well as for comparative genomics.
BiGG is a knowledgebase of Biochemically, Genetically and Genomically structured genome-scale metabolic network reconstructions. BiGG integrates several published genome-scale metabolic networks into one resource with standard nomenclature which allows components to be compared across different organisms. BiGG can be used to browse model content, visualize metabolic pathway maps, and export SBML files of the models for further analysis by external software packages. Users may follow links from BiGG to several external databases to obtain additional information on genes, proteins, reactions, metabolites and citations of interest.
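Since BiGG models can be exported as SBML, they can be inspected with any SBML-aware tool. A minimal, standard-library-only sketch (the inline model below is a toy constructed for illustration, not a real BiGG export) that lists reaction identifiers:

```python
# Sketch: extracting reaction IDs from an SBML Level 3 document, such as
# a model exported from BiGG. Uses only the standard library.
import xml.etree.ElementTree as ET

SBML_NS = "{http://www.sbml.org/sbml/level3/version1/core}"

sample = """<?xml version="1.0"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
  <model id="toy_model">
    <listOfReactions>
      <reaction id="R_PGI" name="glucose-6-phosphate isomerase" reversible="true"/>
      <reaction id="R_PFK" name="phosphofructokinase" reversible="false"/>
    </listOfReactions>
  </model>
</sbml>"""

def reaction_ids(sbml_text: str) -> list:
    """Return the id attribute of every <reaction> element."""
    root = ET.fromstring(sbml_text)
    return [r.get("id") for r in root.iter(f"{SBML_NS}reaction")]

print(reaction_ids(sample))  # ['R_PGI', 'R_PFK']
```

For serious analysis of BiGG models, a dedicated constraint-based modeling package is the usual choice; this sketch only shows that the exported files are plain XML.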
The MG-RAST server is an open source system for annotation and comparative analysis of metagenomes. Users can upload raw sequence data in fasta format; the sequences will be normalized and processed and summaries automatically generated. The server provides several methods to access the different data types, including phylogenetic and metabolic reconstructions, and the ability to compare the metabolism and annotations of one or more metagenomes and genomes. In addition, the server offers a comprehensive search capability. Access to the data is password protected, and all data generated by the automated pipeline is available for download in a variety of common formats. MG-RAST has become an unofficial repository for metagenomic data, providing a means to make your data public so that it is available for download and viewing of the analysis without registration, as well as a static link that you can use in publications. It also requires that you include experimental metadata about your sample when it is made public to increase the usefulness to the community.
<<<!!!<<< 2017-06-02: We recently suffered a server failure and are working to bring the full ORegAnno website back online. In the meantime, you may download the complete database here: http://www.oreganno.org/dump/ ; Data are also available through UCSC Genome Browser (e.g., hg38 -> Regulation -> ORegAnno) https://genome.ucsc.edu/cgi-bin/hgTrackUi?hgsid=686342163_2it3aVMQVoXWn0wuCjkNOVX39wxy&c=chr1&g=oreganno >>>!!!>>> The Open REGulatory ANNOtation database (ORegAnno) is an open database for the curation of known regulatory elements from the scientific literature. Annotation is collected from users worldwide for various biological assays and is automatically cross-referenced against PubMed, Entrez Gene, EnsEMBL, dbSNP, the eVOC cell type ontology, and the Taxonomy database, where appropriate, with information regarding the original experimentation performed (evidence). ORegAnno further provides an open validation process for all regulatory annotation in the public domain. Assigned validators receive notification of new records in the database and are able to cross-reference the citation to ensure record integrity. Validators can modify any record (deprecating the old record and creating a new one) if an error is found. Further, any contributor to the database can comment on any annotation, marking errors or adding special reports on function as they see fit. These features of ORegAnno ensure that the collection is of the highest quality and uniquely provide a dynamic view of our changing understanding of gene regulation in the various genomes.
SILVA is a comprehensive, quality-controlled web resource for up-to-date aligned ribosomal RNA (rRNA) gene sequences from the Bacteria, Archaea and Eukaryota domains alongside supplementary online services. In addition to data products, SILVA provides various online tools such as alignment and classification, phylogenetic tree calculation and viewer, probe/primer matching, and an amplicon analysis pipeline. With every full release a curated guide tree is provided that contains the latest taxonomy and nomenclature based on multiple references. SILVA is an ELIXIR Core Data Resource.
NONCODE is an integrated knowledge database dedicated to non-coding RNAs (excluding tRNAs and rRNAs). There are currently 16 species in NONCODE (human, mouse, cow, rat, chicken, fruit fly, zebrafish, C. elegans, yeast, Arabidopsis, chimpanzee, gorilla, orangutan, rhesus macaque, opossum and platypus). The sources of NONCODE include the literature and other public databases. We searched PubMed using the key words ‘ncrna’, ‘noncoding’, ‘non-coding’, ‘no code’, ‘non-code’, ‘lncrna’ and ‘lincrna’, and retrieved the newly identified lncRNAs and their annotation from the Supplementary Material or websites of these articles. These, together with the newest data from Ensembl, RefSeq, lncRNAdb and GENCODE, were processed through a standard pipeline for each species.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 genomes (1KG) project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. 
The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
The IPD-IMGT/HLA Database provides a specialist database for sequences of the human major histocompatibility complex (MHC) and includes the official sequences named by the WHO Nomenclature Committee For Factors of the HLA System. The IPD-IMGT/HLA Database is part of the international ImMunoGeneTics project (IMGT). The database uses the 2010 naming convention for HLA alleles in all tools herein. To aid in the adoption of the new nomenclature, all search tools can be used with both the current and pre-2010 allele designations. The pre-2010 nomenclature designations are only used where older reports or outputs have been made available for download.
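The post-2010 allele designations follow a colon-delimited pattern: the gene name, a `*`, up to four numeric fields, and an optional expression suffix (e.g. N for null alleles). A small illustrative parser, written against that published convention rather than any official IPD-IMGT/HLA tooling, might look like:

```python
# Sketch: splitting an HLA allele name in the post-2010 colon-delimited
# nomenclature (e.g. "HLA-A*02:101:01:02N") into gene, numeric fields,
# and optional expression suffix. Illustrative only.
import re

ALLELE_RE = re.compile(
    r"^(?:HLA-)?(?P<gene>[A-Z0-9]+)\*"   # optional prefix, then gene name
    r"(?P<fields>\d+(?::\d+){0,3})"       # one to four colon-separated fields
    r"(?P<suffix>[NLSCAQ]?)$"             # optional expression suffix
)

def parse_allele(name: str) -> dict:
    m = ALLELE_RE.match(name)
    if not m:
        raise ValueError(f"not a recognised allele name: {name}")
    return {
        "gene": m.group("gene"),
        "fields": m.group("fields").split(":"),
        "suffix": m.group("suffix") or None,
    }

print(parse_allele("HLA-A*02:101:01:02N"))
```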
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
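For programmatic retrieval, individual ENA records can be fetched over the ENA Browser's REST interface. The endpoint shape below is an assumption based on the public browser API, and the accession is only a placeholder example:

```python
# Sketch: constructing an ENA Browser API URL to retrieve a record in a
# chosen representation. The base URL is an assumed endpoint of the public
# ENA Browser REST interface; the accession is a placeholder.
from urllib.parse import quote

ENA_API = "https://www.ebi.ac.uk/ena/browser/api"

def ena_record_url(accession: str, fmt: str = "fasta") -> str:
    """Return a record URL; fmt selects the representation."""
    if fmt not in {"fasta", "xml", "embl"}:
        raise ValueError(f"unsupported format: {fmt}")
    return f"{ENA_API}/{fmt}/{quote(accession)}"

print(ena_record_url("BN000065"))
```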