  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
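As a sketch, the operators above can be combined in a single query; the search terms below are hypothetical examples, not actual repository names:

```
ocean*                     wildcard: matches ocean, oceanography, …
"research data"            exact phrase
marine + biology           AND (also the default between terms)
genome | transcriptome     OR
drilling - petroleum       NOT
(marine | ocean) + data    parentheses control precedence
galxy~1                    fuzzy term match within edit distance 1
"data portal"~2            phrase match allowing a slop of 2
```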
Found 50 result(s)
Scholars' Bank is the open access repository for the intellectual work of faculty, students and staff at the University of Oregon and partner institution collections.
The Data Catalogue is a service that allows University of Liverpool researchers to create records of information about their finalised research data and to save those data in a secure online environment. The Data Catalogue makes the data available in a structured way, in a form that can be discovered by both general search engines and academic search tools. Two types of record can be created in the Data Catalogue: a discovery-only record, where the research data may be held elsewhere and the record alerts users to the existence of the data and links to where they are held; and a discovery-and-data record, where the record helps people discover that the data exist and the data themselves are deposited into the Data Catalogue. Depositing data creates a unique Digital Object Identifier (DOI) which can be used in citations to the data.
The GeoPortal.rlp allows the central search and visualization of geodata. Within the geodata infrastructure of Rhineland-Palatinate, the GeoPortal.rlp serves as the central, service-oriented exchange between users and providers of geodata, and provides access to geodata over the electronic network. The GeoPortal.rlp first went online on 8 January 2007; a site relaunch followed on 2 February 2011.
Galaxies, made up of billions of stars like our Sun, are the beacons that light up the structure of even the most distant regions in space. Not all galaxies are alike, however. They come in very different shapes and have very different properties; they may be large or small, old or young, red or blue, regular or confused, luminous or faint, dusty or gas-poor, rotating or static, round or disky, and they live either in splendid isolation or in clusters. In other words, the universe contains a very colourful and diverse zoo of galaxies. For almost a century, astronomers have been discussing how galaxies should be classified and how they relate to each other in an attempt to attack the big question of how galaxies form. Galaxy Zoo (Lintott et al. 2008, 2011) pioneered a novel method for performing large-scale visual classifications of survey datasets. This webpage allows anyone to download the resulting GZ classifications of galaxies in the project.
The Marine Data Portal is a product of the “Underway” data initiative of the German Marine Research Alliance (Deutsche Allianz Meeresforschung - DAM) and is supported by the marine science centers AWI, GEOMAR and Hereon of the Helmholtz Association. This initiative aims to improve and standardize the systematic data collection and data evaluation for expeditions with German research vessels and marine observation. It supports scientists in their data management duties and fosters (data) science through FAIR and open access to marine research data. AWI, GEOMAR and Hereon develop this marine data hub (Marehub) to build a decentralized data infrastructure for processing, long-term archiving and dissemination of marine observation and model data and data products. The Marine Data Portal provides user-friendly, centralized access to marine research data, reports and publications from a wide range of data repositories and libraries in the context of German marine research and its international collaboration. The Marine Data Portal is developed by scientists for scientists in order to facilitate Findability and Access of marine research data for Reuse. It supports machine-readable and data-driven science. Please note that the quality of the data may vary depending on the purpose for which it was originally collected.
ePrints Soton is the University's Research Repository. It contains journal articles, books, PhD theses, conference papers, data, reports, working papers, art exhibitions and more. Where possible, journal articles, conference proceedings and research data are made open access.
As with most biomedical databases, the first step is to identify relevant data from the research community. The Monarch Initiative is focused primarily on phenotype-related resources. We bring in data associated with those phenotypes so that our users can begin to make connections among other biological entities of interest. We import data from a variety of data sources. With many resources integrated into a single database, we can join across the various data sources to produce integrated views. We have started with the big players including ClinVar and OMIM, but are equally interested in boutique databases. You can learn more about the sources of data that populate our system from our data sources page https://monarchinitiative.org/about/sources.
With Open Science OVGU the Otto-von-Guericke University provides its scientists with a research data repository. Measurement data, laboratory values, survey data, methodical test procedures, etc. can be archived in Open Science OVGU and made available to the scientific community via Open Access.
The UBIRA eData repository is a multidisciplinary online service for the registration, preservation and publication of research datasets produced or collected at the University of Birmingham. It is part of the University of Birmingham Research Archive (UBIRA).
The "Flora of Bavaria" initiative, with its data portal (14 million occurrence records) and Wiki representation, is primarily a citizen science project. Efforts to describe and monitor the flora of Bavaria have been ongoing for 100 years. The goal of these efforts is to record all vascular plants, including newcomers, and to document threatened or former local occurrences. As the geographically largest state of Germany, with a broad range of habitats, Bavaria has a special responsibility for documenting and maintaining its plant diversity. About 85% of all German vascular plant species occur in Bavaria, which in addition has about 50 endemic taxa known only from Bavaria (most of them occurring in the Alps). The Wiki is a collaboration of volunteers and local and regional Bavarian botanical societies. Everybody is welcome to contribute, especially with photos or reports of local changes in the flora. The Flora of Bavaria project provides access to a research data repository for occurrence data powered by the Diversity Workbench database framework.
This database contains references to publications that include numerical data, general information, comments, and reviews on atomic line broadening and shifts, and is part of the collection of the NIST Atomic Spectroscopy Data Center https://www.nist.gov/pml/quantum-measurement/atomic-spectroscopy/atomic-spectroscopy-data-center-contacts.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
The Brown Digital Repository (BDR) is a place to gather, index, store, preserve, and make available digital assets produced via the scholarly, instructional, research, and administrative activities at Brown.
GENCODE is a scientific project in genome research and part of the ENCODE (ENCyclopedia Of DNA Elements) scale-up project. The GENCODE consortium was initially formed as part of the pilot phase of the ENCODE project to identify and map all protein-coding genes within the ENCODE regions (approx. 1% of the human genome). Given the initial success of the project, GENCODE now aims to build an “Encyclopedia of genes and gene variants” by identifying all gene features in the human and mouse genomes using a combination of computational analysis, manual annotation, and experimental validation, and annotating all evidence-based gene features in the entire human genome at high accuracy.
MTD is focused on mammalian transcriptomes with a current version that contains data from humans, mice, rats and pigs. Regarding the core features, the MTD browses genes based on their neighboring genomic coordinates or joint KEGG pathway and provides expression information on exons, transcripts, and genes by integrating them into a genome browser. We developed a novel nomenclature for each transcript that considers its genomic position and transcriptional features.
<<<!!!<<< This record is merged into Continental Scientific Drilling Facility https://www.re3data.org/repository/r3d100012874 >>>!!!>>> LacCore curates cores and samples from continental coring and drilling expeditions around the world, and also archives metadata and contact information for cores stored at other institutions.
Geochron is a global database that hosts geochronologic and thermochronologic information from detrital minerals. Information included with each sample consists of a table with the essential isotopic information and ages, a table with basic geologic metadata (e.g., location, collector, publication, etc.), a Pb/U Concordia diagram, and a relative age probability diagram. This information can be accessed and viewed with any web browser, and depending on the level of access desired, can be designated as either private or public. Loading information into Geochron requires the use of U-Pb_Redux, a Java-based program that also provides enhanced capabilities for data reduction, plotting, and analysis. Instructions are provided for three different levels of interaction with Geochron: 1. Accessing samples that are already in the Geochron database. 2. Preparation of information for new samples, and then transfer to Arizona LaserChron Center personnel for uploading to Geochron. 3. Preparation of information and uploading to Geochron using U-Pb_Redux.
An increasing number of Language Resources (LRs) in the various fields of Human Language Technology (HLT) are distributed on behalf of ELRA via its operational body ELDA, thanks to the contribution of various players in the HLT community. Our aim is to provide Language Resources through this repository, so as to save researchers and developers the effort of rebuilding resources that already exist, and to help them identify and access those resources.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
The Bremen Core Repository (BCR), for International Ocean Discovery Program (IODP), Integrated Ocean Drilling Program (IODP), Ocean Drilling Program (ODP), and Deep Sea Drilling Project (DSDP) cores from the Atlantic Ocean, Mediterranean and Black Seas and Arctic Ocean, is operated at the University of Bremen within the framework of the German participation in IODP. It is one of three IODP repositories (besides the Gulf Coast Repository (GCR) in College Station, TX, and the Kochi Core Center (KCC), Japan). One of the scientific goals of IODP is to research the deep biosphere and the subseafloor ocean. IODP has deep-frozen microbiological samples from the subseafloor available for interested researchers and will continue to collect and preserve geomicrobiology samples for future research.
The Linguistic Data Consortium (LDC) is an open consortium of universities, libraries, corporations and government research laboratories. It was formed in 1992 to address the critical data shortage then facing language technology research and development. Initially, LDC's primary role was as a repository and distribution point for language resources. Since that time, and with the help of its members, LDC has grown into an organization that creates and distributes a wide array of language resources. LDC also supports sponsored research programs and language-based technology evaluations by providing resources and contributing organizational expertise. LDC is hosted by the University of Pennsylvania and is a center within the University’s School of Arts and Sciences.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 genomes (1KG) project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. 
The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.