  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) implies priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
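A minimal sketch of how these operators can be combined into query strings; the queries below are invented examples, and exact matching behaviour depends on the underlying search engine:

```python
# Invented example query strings illustrating the search operators listed above.
# Exact matching behaviour depends on the underlying search engine.
example_queries = [
    "genom*",                     # wildcard: matches "genome", "genomics", ...
    '"research data"',            # quotes: search for an exact phrase
    "archaeology + repository",   # AND (the default between terms)
    "plant | crop",               # OR
    "sequencing - protein",       # NOT: exclude documents containing "protein"
    "(plant | crop) + genome",    # parentheses set evaluation priority
    "genomics~2",                 # fuzziness: edit distance of up to 2
    '"data archive"~3',           # phrase slop: terms may be up to 3 positions apart
]

for query in example_queries:
    print(query)
```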
The Data Catalogue is a service that allows University of Liverpool researchers to create records of information about their finalised research data and to save those data in a secure online environment. The Data Catalogue provides a good means of making those data available in a structured way, in a form that can be discovered by both general search engines and academic search tools. Two types of record can be created in the Data Catalogue:
  • A discovery-only record – the research data may be held somewhere else, but a record is created that alerts users to the existence of the data and provides a link to where those data are held.
  • A discovery and data record – a record is created to help people discover that the data exist, and the data themselves are deposited into the Data Catalogue. This process creates a unique Digital Object Identifier (DOI) which can be used in citations to the data.
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR’s use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening access to these data.
The DRH is a quantitative and qualitative encyclopedia of religious history. It consists of a variety of entry types, including religious group and religious place. Scholars contribute entries on their area of expertise by answering questions in standardised polls. Answers are initially coded in a binary Yes/No format or categorically, with comment boxes for qualitative comments, references and links. Experts are able to answer both Yes and No to the same question, enabling nuanced answers for specific circumstances. Media, such as photos, can also be attached to either individual questions or whole entries. The DRH captures scholarly disagreement through fine-grained records and multiple temporally and spatially overlapping entries. Users can visualise changes in answers to questions over time and the extent of scholarly consensus or disagreement.
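As a rough illustration only, a single poll answer of the kind described above might be represented as follows; the class and field names are hypothetical assumptions, not the actual DRH schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical sketch of a DRH-style poll answer; field names are illustrative
# assumptions, not the DRH data model.
@dataclass
class PollAnswer:
    question: str
    answer: str                                   # "Yes", "No", or a categorical value
    comment: Optional[str] = None                 # free-text qualification
    references: List[str] = field(default_factory=list)
    region: Optional[str] = None                  # spatial scope of the answer
    year_range: Optional[Tuple[int, int]] = None  # temporal scope of the answer

# Experts may record both "Yes" and "No" for the same question, each with its
# own qualifying comment, which is how nuanced, circumstance-specific answers arise.
answers = [
    PollAnswer("Is there a permanent religious specialist?", "Yes",
               comment="For the main temple community."),
    PollAnswer("Is there a permanent religious specialist?", "No",
               comment="Not in outlying rural settlements."),
]
```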
As with most biomedical databases, the first step is to identify relevant data from the research community. The Monarch Initiative is focused primarily on phenotype-related resources. We bring in data associated with those phenotypes so that our users can begin to make connections among other biological entities of interest. We import data from a variety of data sources. With many resources integrated into a single database, we can join across the various data sources to produce integrated views. We have started with the big players including ClinVar and OMIM, but are equally interested in boutique databases. You can learn more about the sources of data that populate our system from our data sources page https://monarchinitiative.org/about/sources.
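As a toy illustration of the kind of cross-source join such integration enables; the tables, identifiers, and column names below are invented for the example and are not Monarch's actual schema:

```python
import pandas as pd

# Invented toy records standing in for two integrated sources; identifiers and
# column names are illustrative only, not Monarch's actual schema.
gene_disease = pd.DataFrame({
    "gene_id": ["HGNC:1100", "HGNC:583"],
    "disease": ["Breast-ovarian cancer syndrome", "Retinitis pigmentosa"],
})
gene_phenotype = pd.DataFrame({
    "gene_id": ["HGNC:1100", "HGNC:583"],
    "phenotype": ["Breast carcinoma", "Rod-cone dystrophy"],
})

# Joining on a shared gene identifier yields an integrated disease-phenotype view.
integrated_view = gene_disease.merge(gene_phenotype, on="gene_id")
print(integrated_view)
```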
Ensembl Plants is a genome-centric portal for plant species. It is developed in coordination with other plant genomics and bioinformatics groups via the EBI's role in the transPLANT consortium.
The UBIRA eData repository is a multidisciplinary online service for the registration, preservation and publication of research datasets produced or collected at the University of Birmingham. It is part of the University of Birmingham Research Archive (UBIRA).
ORA (Oxford University Research Archive) is the institutional repository for the University of Oxford. ORA was established in 2007 as a permanent and secure online archive of research materials produced by members of the University of Oxford. ORA aims to provide access to the full text of as much of Oxford's academic research as possible. This includes articles, conference papers, theses, research data, working papers, posters and more. Making materials open access removes barriers that restrict access to research, allowing for free dissemination of full text content, available to anyone with Internet access. ORA promotes and encourages the sharing of the scholarly output produced by the members of the University of Oxford that have been published under open access conditions, whilst additionally supporting University compliance with research funder policy and assessment.
History Data Service resources are now available at https://www.data-archive.ac.uk/find; see the re3data entry at https://www.re3data.org/repository/r3d100010215.
ShareGeo Open was a repository of geospatial data, previously hosted by EDINA. ShareGeo Open has been discontinued, so its datasets have been migrated to this Edinburgh DataShare Collection, for preservation, in accordance with the agreement signed by all ShareGeo depositors.
The UK Polar Data Centre (UK PDC) is the focal point for Arctic and Antarctic environmental data management in the UK. Part of the Natural Environment Research Council’s (NERC) network of environmental data centres and based at the British Antarctic Survey, it coordinates the management of polar data from UK-funded research and supports researchers in complying with national and international data legislation and policy.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participating in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
This repository is no longer available. BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility either to use ready-made workflows or to create new ones. BioVeL workflows are stored in the myExperiment BioVeL Group http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access to and sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
ArrayExpress is one of the major international repositories for high-throughput functional genomics data from both microarray and high-throughput sequencing studies, many of which are supported by peer-reviewed publications. Data sets are submitted directly to ArrayExpress and curated by a team of specialist biological curators. In the past (until 2018) datasets from the NCBI Gene Expression Omnibus database were imported on a weekly basis. Data is collected to MIAME and MINSEQE standards.
The UK Data Archive, based at the University of Essex, is curator of the largest collection of digital data in the social sciences and humanities in the United Kingdom. With several thousand datasets relating to society, both historical and contemporary, our Archive is a vital resource for researchers, teachers and learners. We are an internationally acknowledged centre of expertise in the areas of acquiring, curating and providing access to data. We are the lead partner in the UK Data Service (https://service.re3data.org/repository/r3d100010230) through which data users can browse collections online and register to analyse and download them. Open Data collections are available for anyone to use. The UK Data Archive is a Trusted Digital Repository (TDR) certified against the CoreTrustSeal (https://www.coretrustseal.org/) and certified against ISO27001 for Information Security (https://www.iso.org/isoiec-27001-information-security.html).
Codex Sinaiticus is one of the most important books in the world. Handwritten well over 1600 years ago, the manuscript contains the Christian Bible in Greek, including the oldest complete copy of the New Testament. The Codex Sinaiticus Project is an international collaboration to reunite the entire manuscript in digital form and make it accessible to a global audience for the first time. Drawing on the expertise of leading scholars, conservators and curators, the Project gives everyone the opportunity to connect directly with this famous manuscript.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
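A simplified sketch of the three layers of information the ENA data model covers, as described above; the class and field names are assumptions for illustration, not ENA's actual schema or submission format:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative grouping of the three layers described above; names are
# assumptions, not ENA's schema.
@dataclass
class InputInformation:
    sample: str                  # material that was isolated and prepared
    experimental_setup: str
    machine_configuration: str

@dataclass
class MachineOutput:
    reads: List[str] = field(default_factory=list)            # raw sequence reads
    quality_scores: List[str] = field(default_factory=list)

@dataclass
class InterpretedInformation:
    assembly: str = ""
    mapping: str = ""
    functional_annotation: str = ""

@dataclass
class SequencingRecord:
    inputs: InputInformation
    outputs: MachineOutput
    interpretation: InterpretedInformation
```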
The Ligand-Gated Ion Channel database provides access to information about transmembrane proteins that exist under different conformations, with three primary subfamilies: the cys-loop superfamily, the ATP-gated channels superfamily, and the glutamate-activated cationic channels superfamily. The development of the Ligand-Gated Ion Channel database was started in 1994, as part of Le Novère’s work on the phylogeny of those receptors' subunits. It grew into a serious data resource that served the community at large. However, it is no longer actively maintained. In addition, bioinformatics technology has evolved considerably over the last two decades, so that scientists can now quickly generate customised databases from trustworthy primary data resources. Therefore, we decided to officially freeze the data resource. The resource will not disappear, and all the information and links will stay there. But people should not consider it an up-to-date, trustworthy resource. For any new work, they should consider using alternative sources, such as UniProt, Ensembl, the Protein Data Bank, etc.
The University of Reading Research Data Archive (the Archive) is a multidisciplinary online service for the registration, preservation and publication of research datasets produced or collected at the University of Reading.
SeaDataNet is a standardized system for managing the large and diverse data sets collected by the oceanographic fleets and the automatic observation systems. The SeaDataNet infrastructure networks and enhances the currently existing infrastructures, which are the national oceanographic data centres of 35 countries active in data collection. The networking of these professional data centres in a unique virtual data management system provides integrated data sets of standardized quality online. As a research infrastructure, SeaDataNet contributes to building research excellence in Europe.
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.