  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set operator precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
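A few example query strings illustrate how these operators combine. The sketch below is plain Python that only prints hypothetical queries; the search terms themselves are made up for illustration and are not taken from the registry.

    # Hypothetical query strings demonstrating the search operators listed above.
    example_queries = [
        'transcript*',                  # wildcard: transcript, transcripts, transcriptome, ...
        '"spoken German"',              # quoted phrase search
        'genome + annotation',          # AND (also the default between terms)
        'proteome | transcriptome',     # OR
        'sequence - protein',           # NOT: sequence, but not protein
        '(corpus | corpora) + speech',  # parentheses group terms to set precedence
        'transcirpt~2',                 # fuzzy term match within edit distance 2
        '"gene expression"~3',          # phrase match allowing a slop of 3
    ]

    for query in example_queries:
        print(query)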
Found 21 result(s)
The Reference Sequence (RefSeq) collection provides a comprehensive, integrated, non-redundant, well-annotated set of sequences, including genomic DNA, transcripts, and proteins. RefSeq sequences form a foundation for medical, functional, and diversity studies. They provide a stable reference for genome annotation, gene identification and characterization, mutation and polymorphism analysis (especially RefSeqGene records), expression studies, and comparative analyses.
MICASE provides a collection of transcripts of academic speech events recorded at the University of Michigan. The original DAT audiotapes are held in the English Language Institute and may be consulted by bona fide researchers under special arrangements. Additional access: https://lsa.umich.edu/eli/language-resources/micase-micusp.html
The Buckeye Corpus of conversational speech contains high-quality recordings from 40 speakers in Columbus OH conversing freely with an interviewer. The speech has been orthographically transcribed and phonetically labeled. The audio and text files, together with time-aligned phonetic labels, are stored in a format for use with speech analysis software (Xwaves and Wavesurfer). Software for searching the transcription files is currently being written.
ACU Research Bank is the Australian Catholic University's institutional research repository. It serves to collect, preserve, and showcase the research publications and outputs of ACU staff and higher degree students. Where possible and permissible, a full text version of a research output is available as open access.
Genome track alignments on this site, displayed with GBrowse, feature: (1) annotated and predicted genes and transcripts; (2) QTL / SNP association tracks; (3) OMIA genes; (4) various SNP chip tracks; (5) other mapping features or elements that are available.
The iBeetle-Base stores gene-related information for all genes of the official gene set. Among other data, RNA and protein sequences can be downloaded, and links lead to the respective annotation in the genome browser. Furthermore, the Drosophila orthologs are displayed, including links to FlyBase. Wherever available, the phenotypic data gathered in the iBeetle screen are displayed below the gene information, in separate sections for the pupal and larval screening parts.
The program "Humanist Virtual Libraries" distributes heritage documents and pursues research combining skills in the humanities and computer science. It aggregates several types of digital documents: a selection of facsimiles of Renaissance works digitized in the Centre region and in partner institutions; the Epistemon textual database, which offers digital editions in XML-TEI; and transcripts or analyses of notarial minutes and manuscripts.
A repository for high-quality gene models produced by the manual annotation of vertebrate genomes. The final update of Vega, version 68, was released in February 2017 and is now archived at vega.archive.ensembl.org. We plan to maintain this resource until Feb 2020.
The LOVD portal provides the LOVD software and access to a list of worldwide LOVD applications through the Locus Specific Database list and the List of Public LOVD installations. The LOVD installations that have opted to be included in the global LOVD listing are covered by the overall LOVD querying service, which is based on an API.
TopFIND is a protein-centric database for the annotation of protein termini, currently in its third version. Non-canonical protein termini can result from multiple different biological processes, including pre-translational processes such as alternative splicing and alternative translation initiation, or post-translational protein processing by proteases that cleave proteins as part of protein maturation or as a regulatory modification. Accordingly, protein termini evidence in TopFIND is inferred from other databases such as ENSEMBL transcripts, TISdb for alternative translation initiation, MEROPS for protein cleavage by proteases, and UniProt for canonical and protein isoform start sites.
MTD is focused on mammalian transcriptomes; the current version contains data from humans, mice, rats and pigs. Its core features allow genes to be browsed by neighboring genomic coordinates or by a shared KEGG pathway, and provide expression information on exons, transcripts and genes by integrating them into a genome browser. A novel nomenclature was developed for each transcript that considers its genomic position and transcriptional features.
The Research Data Centre Education is a focal point for empirical educational research regarding the archiving and retrieval of audiovisual (AV) research data and survey instruments (questionnaires and tests). Via a central data repository, the Research Data Centre Education provides data sets and instruments relevant to empirical educational research for secondary use, in conformity with data protection requirements. Contextual information on each original study, on the data and instruments, and on related publications completes the offer. So far, the content of the Research Data Centre Education focuses on instruments and data sets from research on school quality and teaching quality. Where available, observation and interview data in the form of (anonymized) transcripts and codes can be viewed freely. The release of the original AV data for scientific re-use requires registration and a statement of a reasoned research interest, in order to protect the privacy rights of the observed or interviewed people.
>>>!!!<<< GeneDB will be taken offline on 1 August 2021, as none of the genomes are curated at Sanger anymore. All genomes on GeneDB can now be found on PlasmoDB, FungiDB, TriTrypDB and WormBase ParaSite. >>>!!!<<<
The "Database for Spoken German (DGD)" is a corpus management system in the program area Oral Corpora of the Institute for German Language (IDS). It has been online since the beginning of 2012 and since mid-2014 replaces the spoken German database, which was developed in the "Deutsches Spracharchiv (DSAv)" of the IDS. After single registration, the DGD offers external users a web-based access to selected parts of the collection of the "Archive Spoken German (AGD)" for use in research and teaching. The selection of the data for external use depends on the consent of the respective data provider, who in turn must have the appropriate usage and exploitation rights. Also relevant to the selection are certain protection needs of the archive. The Archive for Spoken German (AGD) collects and archives data of spoken German in interactions (conversation corpora) and data of domestic and non-domestic varieties of German (variation corpora). Currently, the AGD hosts around 50 corpora comprising more than 15000 audio and 500 video recordings amounting to around 5000 hours of recorded material with more than 7000 transcripts. With the Research and Teaching Corpus of Spoken German (FOLK) the AGD is also compiling an extensive German conversation corpus of its own. !!! Access to data of Datenbank Gesprochenes Deutsch (DGD) is also provided by: IDS Repository https://www.re3data.org/repository/r3d100010382 !!!
AceView provides a curated, comprehensive and non-redundant sequence representation of all public mRNA sequences (mRNAs from GenBank or RefSeq, and single-pass cDNA sequences from dbEST and Trace). These experimental cDNA sequences are first co-aligned on the genome, then clustered into a minimal number of alternative transcript variants and grouped into genes. Using the available cDNA sequences exhaustively and to high quality standards reveals the beauty and complexity of the mammalian transcriptome, and the relative simplicity of the nematode and plant transcriptomes. Genes are classified according to their inferred coding potential; many presumably non-coding genes are discovered. Genes are named by Entrez Gene names when available, else by AceView gene names, stable from release to release. Alternative features (promoters, introns and exons, polyadenylation signals) and coding potential, including motifs, domains, and homologies, are annotated in depth; tissues where expression has been observed are listed in order of representation; diseases, phenotypes, pathways, functions, localization or interactions are annotated by mining selected sources, in particular PubMed, GAD and Entrez Gene, and also by manual annotation, especially in the worm. In this way, both the anatomy and physiology of the experimentally cDNA-supported human, mouse and nematode genes are thoroughly annotated.
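The co-alignment and clustering step described above can be pictured with a much-simplified sketch: same-strand alignments that overlap on the genome are merged into putative gene loci. This is only an illustration of the general idea, not AceView's actual algorithm; the Alignment type and the overlap rule are assumptions, and real transcript-variant construction also compares intron/exon structure.

    from dataclasses import dataclass

    @dataclass
    class Alignment:
        # Hypothetical, simplified cDNA-to-genome alignment: chromosome, strand, span.
        chrom: str
        strand: str
        start: int
        end: int

    def cluster_alignments(alignments):
        """Group overlapping, same-strand alignments into putative gene loci.

        A crude stand-in for the clustering AceView performs on co-aligned
        cDNA sequences; splice-structure comparison is omitted.
        """
        clusters = []
        for aln in sorted(alignments, key=lambda a: (a.chrom, a.strand, a.start)):
            last = clusters[-1] if clusters else None
            if last and last[0].chrom == aln.chrom and last[0].strand == aln.strand \
                    and aln.start <= max(a.end for a in last):
                last.append(aln)  # overlaps the current cluster: same putative gene
            else:
                clusters.append([aln])  # start a new putative gene locus
        return clusters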
mentha archives evidence collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. The aggregated data form an interactome covering many organisms. mentha offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles; however, no single database has sufficient literature coverage to offer a complete resource for investigating "the interactome". mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns each interaction a reliability score that takes into account all the supporting evidence. mentha offers eight interactomes (Homo sapiens, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Escherichia coli K12, Mus musculus, Rattus norvegicus, Saccharomyces cerevisiae) plus a global network that comprises every organism, including those not mentioned. The website and the graphical application are designed to make the data stored in mentha accessible to, and analysable by, all users. Source databases are: MINT, IntAct, DIP, MatrixDB and BioGRID.
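As a rough illustration of how several pieces of supporting evidence might be folded into one per-interaction reliability score, the sketch below combines per-source confidence values under an independence assumption. This is not mentha's published scoring scheme, only a stand-in to make the idea concrete.

    def combined_reliability(evidence_scores):
        """Combine per-evidence confidence values in [0, 1] into a single score.

        Each piece of evidence is treated as an independent indication that the
        interaction is real, so the combined score is the probability that at
        least one piece of evidence is correct. Illustrative only.
        """
        remaining_doubt = 1.0
        for s in evidence_scores:
            remaining_doubt *= (1.0 - s)
        return 1.0 - remaining_doubt

    # Example: three experiments with modest individual confidence
    print(combined_reliability([0.4, 0.5, 0.3]))  # 0.79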
The goal of the Center of Estonian Language Resources (CELR) is to create and manage an infrastructure that makes Estonian digital language resources (dictionaries, corpora – both text and speech – and various language databases) and language technology tools (software) available to everyone working with digital language materials. CELR coordinates and organises the documentation and archiving of the resources, develops language technology standards, and draws up the necessary legal contracts and licences for different types of users (public, academic, commercial, etc.). In addition to collecting language resources, a system will be launched to introduce the resources to potential users and to inform and educate them. The main users of CELR are researchers from Estonian R&D institutions and Social Sciences and Humanities researchers all over the world, via the CLARIN ERIC network of similar centers in Europe. Access to data is provided through different sites: Public Repository https://entu.keeleressursid.ee/public-document , Language resources https://keeleressursid.ee/en/resources/corpora, and MetaShare CELR https://metashare.ut.ee/
The FDZ-BO at DIW Berlin is a central archive for quantitative and qualitative operational and organizational data. It archives these data, documents their existence, and provides datasets for secondary analysis. The archiving of studies and datasets ensures long-term security and availability of the data. In consultation with the responsible scientists, access to individual datasets is made possible as scientific use files, via remote data processing, or as part of guest stays. The FDZ-BO offers detailed information on current research projects and develops concepts for the research data management of organizational data. The study portal (public since March 2019) provides an overview of existing studies in the field of business and organizational research: content, methodology, information on the data and their availability, and information on how to gain access to the data.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
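The three layers of the data model described above (input information, output machine data, interpreted information) can be sketched as simple container types. The class and field names below are assumptions chosen for illustration and do not reflect ENA's actual metadata schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class InputInformation:
        sample: str                  # material isolated and prepared for sequencing
        experimental_setup: str      # library preparation / protocol
        machine_configuration: str   # sequencing platform settings

    @dataclass
    class MachineOutput:
        reads: List[str] = field(default_factory=list)           # raw sequence reads
        quality_scores: List[str] = field(default_factory=list)  # per-read quality strings

    @dataclass
    class InterpretedInformation:
        assembly: str = ""
        mapping: str = ""
        functional_annotation: str = ""

    @dataclass
    class SequencingWorkflowRecord:
        # One record tying the three layers of a sequencing workflow together.
        inputs: InputInformation
        outputs: MachineOutput
        interpretation: InterpretedInformation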
ALEXA is a microarray design platform for 'alternative expression analysis'. This platform facilitates the design of expression arrays for analysis of mRNA isoforms generated from a single locus by the use of alternative transcription initiation, splicing and polyadenylation sites. We use the term 'ALEXA' to describe a collection of novel genomic methods for 'alternative expression' analysis. 'Alternative expression' refers to the identification and quantification of alternative mRNA transcripts produced by alternative transcript initiation, alternative splicing and alternative polyadenylation. This website provides supplementary materials, source code and other downloads for recent publications describing our studies of alternative expression (AE). Most recently we have developed a method, 'ALEXA-Seq' and associated resources for alternative expression analysis by massively parallel RNA sequencing.