  • * at the end of a keyword enables wildcard searches
  • " double quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence (grouping)
  • ~N after a word specifies the maximum edit distance (fuzziness)
  • ~N after a phrase specifies the maximum slop (word-position distance)
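The operators above can be combined into a single query string. A few illustrative examples follow; the search terms are hypothetical, and the operator set resembles Elasticsearch's simple_query_string syntax, so exact behavior depends on the search backend:

```python
# Illustrative query strings for the search syntax described above.
# The terms are examples only; actual matching depends on the backend.
queries = {
    "wildcard": "proteom*",                    # proteome, proteomics, ...
    "phrase":   '"research data"',             # exact phrase
    "and":      "glacier + monitoring",        # both terms (default)
    "or":       "manuscript | codex",          # either term
    "not":      "repository -software",        # exclude 'software'
    "grouping": "(glacier | ice) + inventory", # parentheses set precedence
    "fuzzy":    "protome~2",                   # within edit distance 2
    "slop":     '"data archive"~3',            # phrase words within 3 positions
}

for name, query in queries.items():
    print(f"{name:>8}: {query}")
```

For example, `(glacier | ice) + inventory` matches records containing "inventory" together with either "glacier" or "ice".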
Found 278 result(s)
The Swedish Human Protein Atlas project has been set up to allow for a systematic exploration of the human proteome using Antibody-Based Proteomics. This is accomplished by combining high-throughput generation of affinity-purified antibodies with protein profiling in a multitude of tissues and cells assembled in tissue microarrays. Confocal microscopy analysis using human cell lines is performed for more detailed protein localization. The program hosts the Human Protein Atlas portal with expression profiles of human proteins in tissues and cells. The main objective of the resource centre is to produce specific antibodies to human target proteins using a high-throughput production method involving the cloning and protein expression of Protein Epitope Signature Tags (PrESTs). After purification, the antibodies are used to study expression profiles in cells and tissues and for functional analysis of the corresponding proteins in a wide range of platforms.
ScholarsArchive@OSU is Oregon State University's digital service for gathering, indexing, making available and storing the scholarly work of the Oregon State University community. It also includes materials from outside the institution in support of the university's land, sun, sea and space grant missions and other research interests.
This repository contains about 500 medieval codices and manuscripts (dating from the 8th to the 18th century) that were part of the BMBF-funded project eCodicology. The collection and database contain descriptive data about the manuscripts in TEI-conformant XML format as well as digitized images of every codex page.
The ProteomeXchange consortium has been set up to provide a single point of submission of MS proteomics data to the main existing proteomics repositories, and to encourage data exchange between them for optimal data dissemination. Current members accepting submissions are the PRIDE PRoteomics IDEntifications database at the European Bioinformatics Institute, focusing mainly on shotgun mass spectrometry proteomics data, and PeptideAtlas/PASSEL, focusing on SRM/MRM datasets.
The UK Data Archive, based at the University of Essex, is curator of the largest collection of digital data in the social sciences and humanities in the United Kingdom. With several thousand datasets relating to society, both historical and contemporary, our Archive is a vital resource for researchers, teachers and learners. We are an internationally acknowledged centre of expertise in the areas of acquiring, curating and providing access to data. We are the lead partner in the UK Data Service (https://service.re3data.org/repository/r3d100010230) through which data users can browse collections online and register to analyse and download them. Open Data collections are available for anyone to use. The UK Data Archive is a Trusted Digital Repository (TDR) certified against the CoreTrustSeal (https://www.coretrustseal.org/) and certified against ISO27001 for Information Security (https://www.iso.org/isoiec-27001-information-security.html).
The Warwick Research Archive Portal (WRAP) is the home of the University's full-text, open access research content and contains journal articles, Warwick doctoral dissertations, book chapters, conference papers, working papers and more.
Codex Sinaiticus is one of the most important books in the world. Handwritten well over 1600 years ago, the manuscript contains the Christian Bible in Greek, including the oldest complete copy of the New Testament. The Codex Sinaiticus Project is an international collaboration to reunite the entire manuscript in digital form and make it accessible to a global audience for the first time. Drawing on the expertise of leading scholars, conservators and curators, the Project gives everyone the opportunity to connect directly with this famous manuscript.
GFZ Data Services is a repository for research data and scientific software across the Earth System Sciences, hosted at GFZ. The curated data are archived, persistently accessible and published with digital object identifiers (DOI). They range from large dynamic datasets from global monitoring networks with real-time acquisition, to international services in geodesy and geophysics, to the full suite of small and highly heterogeneous datasets collected by individual researchers or small teams ("long-tail data"). In addition to DOI registration and data archiving itself, the GFZ Data Services team offers comprehensive consultation by domain scientists and IT specialists. Among others, GFZ Data Services is the data publisher for the IAG Services ICGEM, IGETS and ISG (IAG = Int. Association for Geodesy; ICGEM = Int. Center for Global Earth Models; IGETS = Int. Geodynamics and Earth Tide Service; ISG = Int. Service for the Geoid), the World Stress Map, INTERMAGNET, GEOFON, the Geophysical Instrument Pool Potsdam GIPP, TERENO, EnMAP Flight Campaigns, the Potsdam Institute for Climate Impact Research PIK, the Specialised Information Service for Solid Earth Geosciences (FID GEO), and hosts the GFZ Catalogue for the International Generic Sample Number IGSN.
Biological collections are replete with taxonomic, geographic, temporal, numerical, and historical information. This information is crucial for understanding and properly managing biodiversity and ecosystems, but is often difficult to access. Canadensys, operated from the Université de Montréal Biodiversity Centre, is a Canada-wide effort to unlock the biodiversity information held in biological collections.
The CLARIN Centre at the University of Copenhagen, Denmark, hosts and manages a data repository (CLARIN-DK-UCPH Repository), which is part of a research infrastructure for humanities and social sciences financed by the University of Copenhagen. The CLARIN-DK-UCPH Repository provides easy and sustainable access for scholars in the humanities and social sciences to digital language data (in written, spoken, video or multimodal form) and provides advanced tools for discovering, exploring, exploiting, annotating, and analyzing data. CLARIN-DK also shares knowledge on Danish language technology and resources and is the Danish node in the European Research Infrastructure Consortium, CLARIN ERIC.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
The focus of CLARIN INT Portal is on resources that are relevant to the lexicological study of the Dutch language and on resources relevant for research in and development of language and speech technology. For Example: lexicons, lexical databases, text corpora, speech corpora, language and speech technology tools, etc. The resources are: Cornetto-LMF (Lexicon Markup Framework), Corpus of Contemporary Dutch (Corpus Hedendaags Nederlands), Corpus Gysseling, Corpus VU-DNC (VU University Diachronic News text Corpus), Dictionary of the Frisian Language (Woordenboek der Friese Taal), DuELME-LMF (Lexicon Markup Framework), Language Portal (Taalportaal), Namescape, NERD (Named Entity Recognition and Disambiguation) and TICCLops (Text-Induced Corpus Clean-up online processing system).
The Johanna Mestorf Academy provides data from several archaeology-related projects. The JMA supports open access/open data and open formats. The JMA promotes research and education pertaining to the field of 'Societal, Environmental, Cultural Change' (Kiel SECC), which is one of the four research foci of CAU.
The CORA. Repositori de dades de Recerca is a repository of open, curated and FAIR data that covers all academic disciplines. CORA. Repositori de dades de Recerca is a shared service provided by participating Catalan institutions (Universities and CERCA Research Centers). The repository is managed by the CSUC and technical infrastructure is based on the Dataverse application, developed by international developers and users led by Harvard University (https://dataverse.org).
The Language Archive at the Max Planck Institute in Nijmegen provides a unique record of how people around the world use language in everyday life. It focuses on collecting spoken and signed language materials in audio and video form along with transcriptions, analyses, annotations and other types of relevant material (e.g. photos, accompanying notes).
DDBJ (DNA Data Bank of Japan) is the sole nucleotide sequence data bank in Asia that is officially certified to collect nucleotide sequences from researchers and to issue internationally recognized accession numbers to data submitters. Since the collected data are exchanged with EMBL-Bank/EBI (European Bioinformatics Institute) and GenBank/NCBI (National Center for Biotechnology Information) on a daily basis, the three data banks share virtually the same data at any given time. The virtually unified database is called INSD (International Nucleotide Sequence Database). DDBJ collects sequence data mainly from Japanese researchers, but also accepts data from, and issues accession numbers to, researchers in any other country.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, is a "dark archive" without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The Landcare Research DataStore ('the DataStore') is the general data catalogue and repository for Environmental Research Data from Landcare Research. Much of Landcare Research’s research data is available through specific web pages, but many datasets sit outside these areas. This data repository provides a mechanism for our staff to deposit and document this wider range of datasets so that they may be discovered and potentially re-used.
The University of Reading Research Data Archive (the Archive) is a multidisciplinary online service for the registration, preservation and publication of research datasets produced or collected at the University of Reading.
PORTULAN CLARIN is the Research Infrastructure for the Science and Technology of Language; it belongs to the Portuguese National Roadmap of Research Infrastructures of Strategic Relevance and is part of the international research infrastructure CLARIN ERIC.
sciencedata.dk is a research data store provided by DTU, the Technical University of Denmark, specifically aimed at researchers and scientists at Danish academic institutions. The service is intended for working with and sharing active research data as well as for the safekeeping of large datasets. The data can be accessed and manipulated via a web interface, synchronization clients, file transfer clients or the command line. The service is built with open-source software from the ground up: FreeBSD, ZFS, Apache, PHP, ownCloud/Nextcloud. DTU is actively engaged in community efforts to develop research-specific functionality for data stores. The servers are attached directly to the 10-gigabit backbone of "Forskningsnettet" (the National Research and Education Network of Denmark), so upload and download speeds from Danish academic institutions are in principle comparable to those of an external USB hard drive. The data store supports private sharing as well as sharing via links / persistent URLs.