
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (how far apart the words may be)
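The operators above compose into a single query string. As a minimal sketch (the `build_query` helper is illustrative only, not part of the site's API), here is how the wildcard, phrase-slop, grouping, and NOT operators combine under the default AND behaviour:

```python
def build_query(*parts: str) -> str:
    """Join query fragments with the explicit AND (+) operator."""
    return " + ".join(parts)

# Wildcard: match any keyword starting with "genom"
wildcard = "genom*"

# Phrase with slop: the two words may be up to 2 positions apart
phrase = '"data repository"~2'

# OR-group with NOT: proteomics or genomics, but not clinical
group = "(proteomics | genomics) -clinical"

query = build_query(wildcard, phrase, group)
print(query)
# genom* + "data repository"~2 + (proteomics | genomics) -clinical
```

A fuzzy term such as `archive~1` (up to one edit away from "archive") could be appended the same way.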
Found 23 results
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR’s use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening the access to these data.
The Endangered Languages Archive (ELAR) is a digital repository for preserving multimedia collections of endangered languages from all over the world, making them available for future generations. In ELAR’s collections you can find recordings of every-day conversations, instructions on how to build fish traps or boats, explanations of kinship systems and the use of medicinal plants, and learn about art forms like string figures and sand drawings. ELAR’s collections are unique records of local knowledge systems encoded in their languages, described by the holders of the knowledge themselves.
The DRH is a quantitative and qualitative encyclopedia of religious history. It consists of a variety of entry types including religious group and religious place. Scholars contribute entries on their area of expertise by answering questions in standardised polls. Answers are initially coded in the binary format Yes/No or categorically, with comment boxes for qualitative comments, references and links. Experts are able to answer both Yes and No to the same question, enabling nuanced answers for specific circumstances. Media, such as photos, can also be attached to either individual questions or whole entries. The DRH captures scholarly disagreement, through fine-grained records and multiple temporally and spatially overlapping entries. Users can visualise changes in answers to questions over time and the extent of scholarly consensus or disagreement.
MGnify (formerly: EBI Metagenomics) offers an automated pipeline for the analysis and archiving of microbiome data to help determine the taxonomic diversity and functional & metabolic potential of environmental samples. Users can submit their own data for analysis or freely browse all of the analysed public datasets held within the repository. In addition, users can request analysis of any appropriate dataset within the European Nucleotide Archive (ENA). User-submitted or ENA-derived datasets can also be assembled on request, prior to analysis.
The Environmental Information Data Centre (EIDC) is part of the Natural Environment Research Council's (NERC) Environmental Data Service and is hosted by the UK Centre for Ecology & Hydrology (UKCEH). We manage nationally-important datasets concerned with the terrestrial and freshwater sciences.
The Durham High Energy Physics Database (HEPData), formerly the Durham HEPData Project, has been built up over the past four decades as a unique open-access repository for scattering data from experimental particle physics. It currently comprises the data points from plots and tables related to several thousand publications, including those from the Large Hadron Collider (LHC). For more than 25 years the project has compiled the Reactions Database, containing what can be loosely described as cross sections from HEP scattering experiments. The data comprise total and differential cross sections, structure functions, fragmentation functions, distributions of jet measures, polarisations, and more, from a wide range of interactions. The new HEPData site (hepdata.net) offers new functionality for data providers and data consumers, as well as a submission interface. HEPData is operated by CERN and the IPPP at Durham University and is based on the digital library framework Invenio.
VectorBase provides data on arthropod vectors of human pathogens. Sequence data, gene expression data, images, population data, and insecticide resistance data for arthropod vectors are available for download. VectorBase also offers a genome browser, a gene expression and microarray repository, and BLAST searches for all VectorBase genomes. VectorBase genomes include Aedes aegypti, Anopheles gambiae, Culex quinquefasciatus, Ixodes scapularis, Pediculus humanus, and Rhodnius prolixus. VectorBase is one of the Bioinformatics Resource Centers (BRC) projects, funded by the National Institute of Allergy and Infectious Diseases (NIAID).
THIN is a medical data collection scheme that collects anonymised patient data from its members through the healthcare software Vision. The UK Primary Care database contains longitudinal patient records for approximately 6% of the UK Population. The anonymised data collection, which goes back to 1994, is nationally representative of the UK population.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participate in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
FAIRsharing is a web-based, searchable portal of three interlinked registries, containing both in-house and crowdsourced manually curated descriptions of standards, databases and data policies, combined with an integrated view across all three types of resource. By registering your resource on FAIRsharing, you not only gain credit for your work, but you increase its visibility outside of your direct domain, so reducing the potential for unnecessary reinvention and proliferation of standards and databases.
Virtual Fly Brain (VFB) is an interactive tool for neurobiologists to explore the detailed neuroanatomy, neuron connectivity, and gene expression of the Drosophila melanogaster CNS.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
The data publishing portal of Marine Scotland, the directorate of the Scottish Government responsible for the management of Scotland's seas.
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.
GWAS Central (previously the Human Genome Variation database of Genotype-to-Phenotype information) is a database of summary level findings from genetic association studies, both large and small. We actively gather datasets from public domain projects, and encourage direct data submission from the community.
OpenStreetMap (https://www.openstreetmap.org/export#map=6/51.324/10.426) is built by a community of mappers that contribute and maintain data about roads, trails, cafés, railway stations, and much more, all over the world. Planet.osm is the OpenStreetMap data in one file.
San Raffaele Open Research Data Repository (ORDR) is an institutional platform that allows researchers to safely store, preserve, and share research data. ORDR has the essential characteristics of trusted repositories, as it ensures: a) open or restricted access to contents, with persistent unique identifiers to enable referencing and citation; b) a comprehensive set of metadata fields to enable discovery and reuse; c) provisions to safeguard the integrity, authenticity, and long-term preservation of deposited data.
The Manchester Romani Project is part of an international network of scholarly projects devoted to research on Romani language and linguistics, coordinated in partnership with Dieter Halwachs (Institute of Linguistics, Graz University and Romani-Projekt Graz) and Peter Bakker (Institute of Linguistics, Aarhus University). The project explores the linguistic features of the dialects of the Romani language and their distribution in geographical space. An interactive web application is being designed, which will allow users to search for and locate different dialectal variants on a map, and to explore how variants cluster in particular regions. Example sentences and words with sound files will also be made available, to give impressions of dialectal variation within Romani. From the distribution of linguistic forms among the dialects it will be possible to make inferences about social-historical contacts among the Romani communities, and about migration patterns.
The ADS is an accredited digital repository for heritage data that supports research, learning and teaching with freely available, high quality and dependable digital resources by preserving and disseminating digital data in the long term. The ADS also promotes good practice in the use of digital data, provides technical advice to the heritage community, and supports the deployment of digital technologies.
BOARD (Bicocca Open Archive Research Data) is the institutional data repository of the University of Milano-Bicocca. BOARD is an open, free-to-use research data repository, which enables members of the University of Milano-Bicocca to make their research data publicly available. By depositing their research data in BOARD, researchers can:
  • Make their research data citable
  • Share their data privately or publicly
  • Ensure long-term storage for their data
  • Keep access to all versions
  • Link their article to their data
This repository is no longer available. See https://beta.ukdataservice.ac.uk/datacatalogue/studies/study?id=7021#!/details and https://ota.bodleian.ox.ac.uk/repository/xmlui/discover?query=germanc&submit=Search&filtertype_1=title&filter_relational_operator_1=contains&filter_1=&query=germanc
The objective of the Database of Genomic Variants is to provide a comprehensive summary of structural variation in the human genome. We define structural variation as genomic alterations that involve segments of DNA larger than 1 kb. We now also annotate indels in the 100 bp to 1 kb range. The content of the database represents only structural variation identified in healthy control samples. The Database of Genomic Variants provides a useful catalog of control data for studies aiming to correlate genomic variation with phenotypic data. The database is continuously updated with new data from peer-reviewed research studies. We always welcome suggestions and comments regarding the database from the research community.
The PRIDE PRoteomics IDEntifications database is a centralized, standards compliant, public data repository for proteomics data, including protein and peptide identifications, post-translational modifications and supporting spectral evidence. PRIDE encourages and welcomes direct user submissions of mass spectrometry data to be published in peer-reviewed publications.