Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (how far apart the words may be)
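The operators above follow Lucene/Elasticsearch-style query-string conventions; the actual parsing happens server-side. As a minimal sketch (the helper below is hypothetical, not part of the site), a client could compose queries from these operators and sanity-check that quotes and parentheses are balanced before submitting:

```python
def is_balanced(query: str) -> bool:
    """Check that double quotes and parentheses in a search query
    are balanced. Parentheses inside a quoted phrase are treated
    as literal characters, not grouping operators."""
    depth = 0
    in_phrase = False
    for ch in query:
        if ch == '"':
            in_phrase = not in_phrase
        elif not in_phrase:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth < 0:          # closing paren with no opener
                    return False
    return depth == 0 and not in_phrase

# A few queries built from the operators listed above.
examples = [
    'transcript*',                     # wildcard
    '"spoken German"',                 # phrase
    'corpus + speech',                 # AND (the default)
    'genome | transcriptome',          # OR
    'sequence - protein',              # NOT
    '(audio | video) + transcript*',   # grouping
    'transcripton~1',                  # fuzziness: edit distance 1
    '"speech corpus"~2',               # slop: words up to 2 positions apart
]

for q in examples:
    assert is_balanced(q)
```

This only validates delimiter pairing; it does not attempt to replicate the server's full query grammar.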
Found 19 result(s)
MICASE provides a collection of transcripts of academic speech events recorded at the University of Michigan. The original DAT audiotapes are held in the English Language Institute and may be consulted by bona fide researchers under special arrangements.
The Reference Sequence (RefSeq) collection provides a comprehensive, integrated, non-redundant, well-annotated set of sequences, including genomic DNA, transcripts, and proteins. RefSeq sequences form a foundation for medical, functional, and diversity studies. They provide a stable reference for genome annotation, gene identification and characterization, mutation and polymorphism analysis (especially RefSeqGene records), expression studies, and comparative analyses.
The Archive for Spoken German (AGD) collects and archives data of spoken German in interactions (conversation corpora) and data of domestic and non-domestic varieties of German (variation corpora). Currently, the AGD hosts around 50 corpora comprising more than 15000 audio and 500 video recordings amounting to around 5000 hours of recorded material with more than 7000 transcripts. With the Research and Teaching Corpus of Spoken German (FOLK) the AGD is also compiling an extensive German conversation corpus of its own. The archive curates data and makes them available to researchers. Curation comprises digitization, structuring and consistent documentation of audio and video recordings, transcripts, metadata and additional material. The scientific public can access the data via the Database for Spoken German (DGD2) or via a personal archive service. The AGD also advises researchers in using the existing inventory as well as in creating their own oral corpora.
The Buckeye Corpus of conversational speech contains high-quality recordings from 40 speakers in Columbus OH conversing freely with an interviewer. The speech has been orthographically transcribed and phonetically labeled. The audio and text files, together with time-aligned phonetic labels, are stored in a format for use with speech analysis software (Xwaves and Wavesurfer). Software for searching the transcription files is currently being written.
Genome track alignments using GBrowse on this site are featured with: (1) Annotated and predicted genes and transcripts; (2) QTL / SNP Association tracks; (3) OMIA genes; (4) Various SNP Chip tracks; (5) Other mapping features or elements that are available.
The Data Service Centre at the University of Bielefeld is a central archive for quantitative and qualitative data that relate to organizations. In addition to data on businesses and organizations, this includes linked employer-employee data as well as data from member or employee surveys. The organizational units can be establishments and businesses, but also public associations, kindergartens, schools, or public health facilities. The DSC-BO coordinates the data circulation and provides access to data sets and transcripts for secondary scientific use, on the basis of contracts with the data producers.
The GeneDB project is a core part of the Sanger Institute's Pathogen Genomics activities. Its primary goals are to provide reliable access to the latest sequence data and annotation/curation for the whole range of organisms sequenced by the Pathogen group, and to develop the website and other tools to aid the community in accessing and obtaining the maximum value from these data.
The iBeetle-Base stores gene-related information for all genes of the official gene set. Among others, RNA and protein sequences can be downloaded, and links lead to the respective annotation in the genome browser. Further, the Drosophila orthologs are displayed, including links to FlyBase. Wherever available, the phenotypic data gathered in the iBeetle screen are displayed below the gene information, in separate sections for the pupal and larval screening parts.
MTD is focused on mammalian transcriptomes; the current version contains data from humans, mice, rats and pigs. Regarding the core features, MTD allows browsing genes by their neighboring genomic coordinates or by shared KEGG pathway, and provides expression information on exons, transcripts, and genes by integrating them into a genome browser. A novel nomenclature was developed for each transcript that reflects its genomic position and transcriptional features.
The program "Humanist Virtual Libraries" distributes heritage documents and pursues research combining expertise in the humanities and computer science. It aggregates several types of digital documents: a selection of facsimiles of Renaissance works digitized in the Centre region and in partner institutions; the Epistemon Textual Database, which offers digital editions in XML-TEI; and transcripts or analyses of notarial minutes and manuscripts.
A repository for high-quality gene models produced by the manual annotation of vertebrate genomes. The final update of Vega, version 68, was released in February 2017 and is now archived; the resource is planned to be maintained until February 2020.
LOVD portal provides LOVD software and access to a list of worldwide LOVD applications through Locus Specific Database list and List of Public LOVD installations. The LOVD installations that have indicated to be included in the global LOVD listing are included in the overall LOVD querying service, which is based on an API.
The Research Data Centre Education is a focal point for empirical educational research regarding the archiving and retrieval of audiovisual (AV) research data and survey instruments (questionnaires and tests). Data sets and tools relevant for empirical educational research are provided for secondary use, in conformity with data protection, via a central data repository. Contextual information on each originating study, its data and instruments, as well as related publications, complete the offer. The content of the Research Data Centre Education so far focuses on instruments and data sets from school-quality and teaching-quality research. Observation and interview data in the form of (anonymized) transcripts and codings can, where available, be viewed freely. The release of the original AV data for scientific re-use requires registration and a reasoned statement of research interest, in order to protect the privacy rights of the observed or interviewed people.
AceView provides a curated, comprehensive and non-redundant sequence representation of all public mRNA sequences (mRNAs from GenBank or RefSeq, and single-pass cDNA sequences from dbEST and Trace). These experimental cDNA sequences are first co-aligned on the genome, then clustered into a minimal number of alternative transcript variants and grouped into genes. Exhaustive use of the available cDNA sequences, with high quality standards, evidences the beauty and complexity of the mammalian transcriptome, and the relative simplicity of the nematode and plant transcriptomes. Genes are classified according to their inferred coding potential; many presumably non-coding genes are discovered. Genes are named by Entrez Gene names when available, else by AceView gene names, stable from release to release. Alternative features (promoters, introns and exons, polyadenylation signals) and coding potential, including motifs, domains, and homologies, are annotated in depth; tissues where expression has been observed are listed in order of representation; diseases, phenotypes, pathways, functions, localization or interactions are annotated by mining selected sources, in particular PubMed, GAD and Entrez Gene, and also by performing manual annotation, especially in the worm. In this way, both the anatomy and physiology of the experimentally cDNA-supported human, mouse and nematode genes are thoroughly annotated.
mentha archives evidence collected from different sources and presents these data in a complete and comprehensive way. Its data comes from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. The aggregated data forms an interactome which includes many organisms. mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles. However, no database alone has sufficient literature coverage to offer a complete resource to investigate "the interactome". mentha's approach generates every week a consistent interactome (graph). Most importantly, the procedure assigns to each interaction a reliability score that takes into account all the supporting evidence. mentha offers eight interactomes (Homo sapiens, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Escherichia coli K12, Mus musculus, Rattus norvegicus, Saccharomyces cerevisiae) plus a global network that comprises every organism, including those not mentioned. The website and the graphical application are designed to make the data stored in mentha accessible and analysable to all users. Source databases are: MINT, IntAct, DIP, MatrixDB and BioGRID.
The goal of the Center of Estonian Language Resources (CELR) is to create and manage an infrastructure to make Estonian digital language resources (dictionaries, corpora, both text and speech, and various language databases) and language technology tools (software) available to everyone working with digital language materials. CELR coordinates and organises the documentation and archiving of the resources, develops language technology standards, and draws up the necessary legal contracts and licences for different types of users (public, academic, commercial, etc.). In addition to collecting language resources, a system will be launched for introducing the resources to, and informing and educating, potential users. The main users of CELR are researchers from Estonian R&D institutions and Social Sciences and Humanities researchers all over the world via the CLARIN ERIC network of similar centers in Europe. Access to data is provided through different sites: Public Repository, Language resources, and MetaShare CELR.
ALEXA is a microarray design platform for 'alternative expression analysis'. This platform facilitates the design of expression arrays for analysis of mRNA isoforms generated from a single locus by the use of alternative transcription initiation, splicing and polyadenylation sites. We use the term 'ALEXA' to describe a collection of novel genomic methods for 'alternative expression' analysis. 'Alternative expression' refers to the identification and quantification of alternative mRNA transcripts produced by alternative transcript initiation, alternative splicing and alternative polyadenylation. This website provides supplementary materials, source code and other downloads for recent publications describing our studies of alternative expression (AE). Most recently we have developed a method, 'ALEXA-Seq' and associated resources for alternative expression analysis by massively parallel RNA sequencing.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.