  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
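For illustration, the short Python sketch below assembles example query strings using these operators; the "?query=" parameter name is a placeholder, not something documented on this page.

```python
from urllib.parse import quote_plus

# Illustrative query strings exercising the operators listed above.
# The "?query=" parameter name is a placeholder, not documented on this page.
queries = {
    "wildcard":  "climat*",                          # matches climate, climatology, ...
    "phrase":    '"particle physics"',               # exact phrase
    "and":       "genomics + metadata",              # both terms (AND is the default)
    "or":        "archaeology | palaeontology",      # either term
    "not":       "imaging - microscopy",             # exclude a term
    "grouping":  "(ocean | marine) + biodiversity",  # parentheses set precedence
    "fuzziness": "metagenomics~2",                   # edit distance of 2 for a word
    "slop":      '"gene expression"~3',              # up to 3 words between phrase terms
}

for name, q in queries.items():
    print(f"{name:10s} {q:35s} -> ?query={quote_plus(q)}")
```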
Found 30 result(s)
The Durham High Energy Physics Database (HEPData), formerly: the Durham HEPData Project, has been built up over the past four decades as a unique open-access repository for scattering data from experimental particle physics. It currently comprises the data points from plots and tables related to several thousand publications, including those from the Large Hadron Collider (LHC). The Durham HEPData Project has for more than 25 years compiled the Reactions Database, containing what can be loosely described as cross sections from HEP scattering experiments. The data comprise total and differential cross sections, structure functions, fragmentation functions, distributions of jet measures, polarisations, etc., from a wide range of interactions. In the new HEPData site (hepdata.net), you can explore new functionalities for data providers and data consumers, as well as the submission interface. HEPData is operated by CERN and the IPPP at Durham University and is based on the digital library framework Invenio.
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR’s use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening access to these data.
The European Monitoring and Evaluation Programme (EMEP) is a scientifically based and policy driven programme under the Convention on Long-range Transboundary Air Pollution (CLRTAP) for international co-operation to solve transboundary air pollution problems.
The Endangered Languages Archive (ELAR) is a digital repository for preserving multimedia collections of endangered languages from all over the world, making them available for future generations. In ELAR’s collections you can find recordings of everyday conversations, instructions on how to build fish traps or boats, explanations of kinship systems and the use of medicinal plants, and learn about art forms like string figures and sand drawings. ELAR’s collections are unique records of local knowledge systems encoded in their languages, described by the holders of the knowledge themselves.
ForestPlots.net is a web-accessible secure repository for forest plot inventories in South America, Africa and Asia. The database includes plot geographical information; the location, taxonomic information and diameter measurements of trees inside each plot; and the participants in plot establishment and re-measurement, including principal investigators, field assistants, and students.
The DRH is a quantitative and qualitative encyclopedia of religious history. It consists of a variety of entry types including religious group and religious place. Scholars contribute entries on their area of expertise by answering questions in standardised polls. Answers are initially coded in the binary format Yes/No or categorically, with comment boxes for qualitative comments, references and links. Experts are able to answer both Yes and No to the same question, enabling nuanced answers for specific circumstances. Media, such as photos, can also be attached to either individual questions or whole entries. The DRH captures scholarly disagreement, through fine-grained records and multiple temporally and spatially overlapping entries. Users can visualise changes in answers to questions over time and the extent of scholarly consensus or disagreement.
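As a rough illustration of the entry structure described above, the sketch below models an entry whose poll answers can hold Yes and No simultaneously, with comments, references and attached media; all class and field names are hypothetical and are not the DRH's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the DRH entry structure described above;
# names and fields are illustrative, not the DRH's actual data model.

@dataclass
class Answer:
    question: str
    yes: bool = False                 # experts may answer both Yes and No
    no: bool = False                  # to capture circumstance-specific nuance
    category: Optional[str] = None    # used when a question is coded categorically
    comment: str = ""                 # qualitative comment box
    references: List[str] = field(default_factory=list)
    media: List[str] = field(default_factory=list)   # e.g. photo file names

@dataclass
class Entry:
    title: str
    entry_type: str                   # e.g. "religious group", "religious place"
    region: str                       # spatial coverage
    start_year: int                   # temporal coverage; entries may overlap
    end_year: int
    expert: str
    answers: List[Answer] = field(default_factory=list)

# Example: a single answer recording a nuanced Yes-and-No response.
a = Answer(question="Is supernatural monitoring present?", yes=True, no=True,
           comment="Yes in urban communities, no in some rural ones.")
```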
EMPIAR, the Electron Microscopy Public Image Archive, is a public resource for raw, 2D electron microscopy images. Here, you can browse, upload, download and reprocess the thousands of raw, 2D images used to build a 3D structure. The purpose of EMPIAR is to provide easy access to state-of-the-art raw data to facilitate methods development and validation, which will lead to better 3D structures. It complements the Electron Microscopy Data Bank (EMDB), where 3D images are stored, and uses the fault-tolerant Aspera platform for data transfers.
MGnify (formerly: EBI Metagenomics) offers an automated pipeline for the analysis and archiving of microbiome data to help determine the taxonomic diversity and functional & metabolic potential of environmental samples. Users can submit their own data for analysis or freely browse all of the analysed public datasets held within the repository. In addition, users can request analysis of any appropriate dataset within the European Nucleotide Archive (ENA). User-submitted or ENA-derived datasets can also be assembled on request, prior to analysis.
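As a sketch of programmatic browsing, the snippet below lists a few public studies through MGnify's JSON API; the base URL, endpoint name and response fields are assumptions and should be verified against the current MGnify API documentation.

```python
import requests

# Sketch only: the base URL, endpoint and response fields below are assumptions
# and should be checked against the current MGnify API documentation.
BASE = "https://www.ebi.ac.uk/metagenomics/api/latest"

resp = requests.get(f"{BASE}/studies", timeout=30)  # list publicly available studies
resp.raise_for_status()

for study in resp.json().get("data", []):
    attributes = study.get("attributes", {})
    print(study.get("id"), "-", attributes.get("study-name"))
```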
The IDR makes publicly available datasets that have never previously been accessible, allowing the community to search, view, mine and even process and analyze large, complex, multidimensional life sciences image data. Sharing data promotes the validation of experimental methods and scientific conclusions and the comparison with new data obtained by the global scientific community, and enables data reuse by developers of new analysis and processing tools.
VectorBase provides data on arthropod vectors of human pathogens. Sequence data, gene expression data, images, population data, and insecticide resistance data for arthropod vectors are available for download. VectorBase also offers a genome browser, a gene expression and microarray repository, and BLAST searches for all VectorBase genomes. VectorBase genomes include Aedes aegypti, Anopheles gambiae, Culex quinquefasciatus, Ixodes scapularis, Pediculus humanus, and Rhodnius prolixus. VectorBase is one of the Bioinformatics Resource Centers (BRC) projects, which are funded by the National Institute of Allergy and Infectious Diseases (NIAID).
THIN is a medical data collection scheme that collects anonymised patient data from its members through the healthcare software Vision. The UK Primary Care database contains longitudinal patient records for approximately 6% of the UK population. The anonymised data collection, which goes back to 1994, is nationally representative of the UK population.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participate in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
FAIRsharing is a web-based, searchable portal of three interlinked registries, containing both in-house and crowdsourced manually curated descriptions of standards, databases and data policies, combined with an integrated view across all three types of resource. By registering your resource on FAIRsharing, you not only gain credit for your work, but you increase its visibility outside of your direct domain, so reducing the potential for unnecessary reinvention and proliferation of standards and databases.
IAGOS aims to provide long-term, regular and spatially resolved in situ observations of the atmospheric composition. The observation systems are deployed on a fleet of 10 to 15 commercial aircraft measuring atmospheric chemistry concentrations and meteorological fields. The IAGOS Data Centre manages and gives access to all the data produced within the project.
The Environmental Information Data Centre (EIDC) is part of the Natural Environment Research Council's (NERC) Environmental Data Service and is hosted by the UK Centre for Ecology & Hydrology (UKCEH). We manage nationally-important datasets concerned with the terrestrial and freshwater sciences.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
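To make the ENA data model described above easier to picture, here is a minimal, hypothetical sketch grouping the inputs, machine outputs and interpreted information mentioned in the text; the class and field names are illustrative, not ENA's actual metadata objects.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal, hypothetical sketch of the workflow information ENA records;
# names are illustrative groupings, not ENA's actual metadata schema.

@dataclass
class InputInformation:
    sample: str                 # isolated and prepared material
    experimental_setup: str     # library preparation / experiment design
    machine_configuration: str  # sequencing platform and settings

@dataclass
class MachineOutput:
    traces: List[str] = field(default_factory=list)           # sequence traces
    reads: List[str] = field(default_factory=list)            # raw reads
    quality_scores: List[List[int]] = field(default_factory=list)

@dataclass
class InterpretedInformation:
    assembly: str = ""             # assembled sequence
    mapping: str = ""              # alignment against a reference
    functional_annotation: str = ""

@dataclass
class SequencingWorkflow:
    inputs: InputInformation
    outputs: MachineOutput
    interpretation: InterpretedInformation
```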
The data publishing portal of Marine Scotland, the directorate of the Scottish Government responsible for the management of Scotland's seas.
EMAGE (e-Mouse Atlas of Gene Expression) is an online biological database of gene expression data in the developing mouse (Mus musculus) embryo. The data held in EMAGE is spatially annotated to a framework of 3D mouse embryo models produced by EMAP (e-Mouse Atlas Project). These spatial annotations allow users to query EMAGE by spatial pattern as well as by gene name, anatomy term or Gene Ontology (GO) term. EMAGE is a freely available web-based resource funded by the Medical Research Council (UK) and based at the MRC Human Genetics Unit in the Institute of Genetics and Molecular Medicine, Edinburgh, UK.
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.
GWAS Central (previously the Human Genome Variation database of Genotype-to-Phenotype information) is a database of summary level findings from genetic association studies, both large and small. We actively gather datasets from public domain projects, and encourage direct data submission from the community.
OpenStreetMap (https://www.openstreetmap.org/export#map=6/51.324/10.426) is built by a community of mappers who contribute and maintain data about roads, trails, cafés, railway stations, and much more, all over the world. Planet.osm is the OpenStreetMap data in one file.
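As a small usage sketch, the snippet below streams through an OpenStreetMap extract and counts café nodes using the pyosmium library; the library choice and the file name are assumptions, and a regional extract is far more practical than the full Planet.osm file.

```python
import osmium  # pyosmium: streaming reader for .osm / .osm.pbf files

class CafeCounter(osmium.SimpleHandler):
    """Count nodes tagged amenity=cafe while streaming through an OSM file."""
    def __init__(self):
        super().__init__()
        self.cafes = 0

    def node(self, n):
        if n.tags.get("amenity") == "cafe":
            self.cafes += 1

if __name__ == "__main__":
    handler = CafeCounter()
    # File name is a placeholder for any regional .osm or .osm.pbf extract.
    handler.apply_file("extract.osm.pbf")
    print(f"cafes found: {handler.cafes}")
```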
San Raffaele Open Research Data Repository (ORDR) is an institutional platform that allows researchers to safely store, preserve and share research data. ORDR is endowed with the essential characteristics of trusted repositories, as it ensures: a) open or restricted access to contents, with persistent unique identifiers to enable referencing and citation; b) a comprehensive set of metadata fields to enable discovery and reuse; c) provisions to safeguard the integrity, authenticity and long-term preservation of deposited data.
The Manchester Romani Project is part of an international network of scholarly projects devoted to research on Romani language and linguistics, coordinated in partnership with Dieter Halwachs (Institute of Linguistics, Graz University and Romani-Projekt Graz) and Peter Bakker (Institute of Linguistics, Aarhus University). The project explores the linguistic features of the dialects of the Romani language and their distribution in geographical space. An interactive web application is being designed, which will allow users to search and locate on a map different dialectal variants, and to explore how variants cluster in particular regions. Example sentences and words with sound files will also be made available, to give impressions of dialectal variation within Romani. From the distribution of linguistic forms among the dialects it will be possible to make inferences about social-historical contacts among the Romani communities, and about migration patterns.
Under the World Climate Research Programme (WCRP), the Working Group on Coupled Modelling (WGCM) established the Coupled Model Intercomparison Project (CMIP) as a standard experimental protocol for studying the output of coupled atmosphere-ocean general circulation models (AOGCMs). CMIP provides a community-based infrastructure in support of climate model diagnosis, validation, intercomparison, documentation and data access. This framework enables a diverse community of scientists to analyze GCMs in a systematic fashion, a process which serves to facilitate model improvement. Virtually the entire international climate modeling community has participated in this project since its inception in 1995. The Program for Climate Model Diagnosis and Intercomparison (PCMDI) archives much of the CMIP data and provides other support for CMIP. We are now beginning the process towards the IPCC Fifth Assessment Report and with it the CMIP5 intercomparison activity. The CMIP5 (CMIP Phase 5) experiment design has been finalized with the following suites of experiments: (I) decadal hindcast and prediction simulations, (II) "long-term" simulations, and (III) "atmosphere-only" (prescribed SST) simulations for especially computationally demanding models. The new ESGF peer-to-peer (P2P) enterprise system (http://pcmdi9.llnl.gov) is now the official site for CMIP5 model output. The old gateway (http://pcmdi3.llnl.gov) is deprecated and has been shut down permanently.