
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
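For illustration, the short Python sketch below lists a few query strings that combine the operators above; the keywords are arbitrary examples, and only the operator syntax comes from this list.

# Illustrative query strings using the search operators listed above.
# The keywords are arbitrary examples, not suggested searches.
example_queries = [
    'climat*',                    # wildcard: matches climate, climatic, climatology, ...
    '"sea ice" +arctic',          # exact phrase combined with a required (AND) term
    'ocean | marine',             # OR search
    'genome -human',              # NOT: exclude results mentioning "human"
    '(soil | sediment) +carbon',  # parentheses set precedence
    'colour~1',                   # fuzzy match within edit distance 1
    '"data archive"~2',           # phrase search with a slop of 2
]
for query in example_queries:
    print(query)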
Found 323 result(s)
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full-texts are indexed linguistically and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. Digitisation was based on the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice (‘double keying’). To represent the structure of the text, the electronic full-text was encoded in conformity with the XML standard TEI P5. Subsequent stages complete the linguistic analysis: the text is tokenised, lemmatised, and annotated for parts of speech. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA Corpus, it also offers valuable source-texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
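To make the TEI P5 encoding and the token-level annotation described above concrete, here is a minimal Python sketch; the sample markup and the lemma/pos attribute names are illustrative assumptions and do not reproduce the DTA's actual schema.

# Minimal sketch: parsing a TEI P5-style fragment with token-level annotation.
# The markup below is invented for illustration; only the TEI namespace is real.
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"  # official TEI P5 namespace

sample = """\
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <p>
        <w lemma="die" pos="ART">Die</w>
        <w lemma="Sonne" pos="NN">Sonne</w>
        <w lemma="scheinen" pos="VVFIN">scheint</w>
      </p>
    </body>
  </text>
</TEI>
"""

root = ET.fromstring(sample)
for w in root.iter(TEI_NS + "w"):
    # print token surface form, lemma, and part-of-speech tag
    print(w.text, w.get("lemma"), w.get("pos"))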
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a publicly accessible earth science data repository created to curate, publicly serve (publish), and archive digital data and information from biological, chemical and biogeochemical research conducted in coastal, marine, Great Lakes and laboratory environments. The BCO-DMO repository works closely with investigators funded through the NSF OCE Division’s Biological and Chemical Sections and the Division of Polar Programs Antarctic Organisms & Ecosystems. The office provides services that span the full data life cycle, from data management planning support and DOI creation to archiving with appropriate national facilities.
Kadi4Mat instance for use at the Karlsruhe Institute of Technology (KIT) and for cooperations, including the Cluster of Competence for Solid-state Batteries (FestBatt), the Battery Competence Cluster Analytics/Quality Assurance (AQua), and more. Kadi4Mat is the Karlsruhe Data Infrastructure for Materials Science, an open source software for managing research data. It is being developed as part of several research projects at the Institute for Applied Materials - Microstructure Modelling and Simulation (IAM-MMS) of the Karlsruhe Institute of Technology (KIT). The goal of this project is to combine the ability to manage and exchange data (the repository) with the possibility to analyze, visualize and transform said data (the electronic lab notebook, ELN). Kadi4Mat supports a close cooperation between experimenters, theorists and simulators, especially in materials science, to enable the acquisition of new knowledge and the development of novel materials. This is made possible by employing a modular and generic architecture, which allows it to cover the specific needs of different scientists, each utilizing unique workflows. At the same time, this opens up the possibility of covering other research disciplines as well.
IAGOS aims to provide long-term, regular and spatially resolved in situ observations of the atmospheric composition. The observation systems are deployed on a fleet of 10 to 15 commercial aircraft measuring atmospheric chemistry concentrations and meteorological fields. The IAGOS Data Centre manages and gives access to all the data produced within the project.
The University of Pittsburgh English Language Institute Corpus (PELIC) is a 4.2-million-word learner corpus of written texts. These texts were collected in an English for Academic Purposes (EAP) context over seven years in the University of Pittsburgh’s Intensive English Program, and were produced by over 1100 students with a wide range of linguistic backgrounds and proficiency levels. PELIC is longitudinal, offering greater opportunities for tracking development in a natural classroom setting.
The CyberCell database (CCDB) is a comprehensive collection of detailed enzymatic, biological, chemical, genetic, and molecular biological data about E. coli (strain K12, MG1655). It is intended to provide sufficient information and querying capacity for biologists and computer scientists to use computers or detailed mathematical models to simulate all or part of a bacterial cell at a nanoscopic (10⁻⁹ m), mesoscopic (10⁻⁸ m), and microscopic (10⁻⁶ m) level. The CyberCell database actually consists of 4 browsable databases: 1) the main CyberCell database (CCDB – containing gene and protein information), 2) the 3D structure database (CC3D – containing information for structural proteomics), 3) the RNA database (CCRD – containing tRNA and rRNA information), and 4) the metabolite database (CCMD – containing metabolite information). Each of these databases is accessible through hyperlinked buttons located at the top of the CCDB homepage. All CCDB sub-databases are fully web enabled, permitting a wide variety of interactive browsing, search and display operations.
<<<!!!<<< The repository is offline. For more information see: https://dknet.org/about/NURSA_Archive >>>!!!>>> All NURSA-biocurated transcriptomic datasets have been preserved for data mining in SPP through an enhanced and expanded version of Transcriptomine named Ominer. To access these datasets, dkNET provides information on 527 transcriptomic datasets that contain data related to nuclear receptors and nuclear receptor coregulators in the NURSA Datasets table view and redirects users to the current SPP dataset page. Once users find a specific dataset of research interest, they can download it by clicking its DOI and then clicking the Download Dataset button on the Signaling Pathways Project webpage. See https://www.re3data.org/repository/r3d100013650
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility to either use ready-made workflows or to create one's own. BioVeL workflows are stored in MyExperiment - Biovel Group http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
Phaidra, the whole-university digital asset management system of the Universität Wien with long-term archiving functions, offers the possibility to archive valuable data university-wide with permanent security and systematic input. It provides multilingual access using metadata (data about data) and thus worldwide availability around the clock. As a constant data pool for administration, research and teaching, resources can be used flexibly, and continual citability allows the exact location and retrieval of the prepared digital objects.
The Agricultural and Environmental Data Archive (AEDA) is the direct result of a project managed by the Freshwater Biological Association in partnership with the Centre for e-Research at King's College London, and funded by the Department for Environment, Food & Rural Affairs (Defra). This project ran from January 2011 until December 2014 and was called the DTC Archive Project, because it was initially related to the Demonstration Test Catchments Platform developed by Defra. The archive was also designed to hold data from the GHG R&D Platform (www.ghgplatform.org.uk). After the DTC Archive Project was completed, the finished archive was renamed AEDA to reflect its broader remit to archive data from any and all agricultural and environmental research activities.
The CATH database is a hierarchical domain classification of protein structures in the Protein Data Bank. Protein structures are classified using a combination of automated and manual procedures. There are four major levels in the CATH hierarchy: Class, Architecture, Topology and Homologous superfamily.
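The four levels are conventionally written as a dotted code (Class.Architecture.Topology.Homologous superfamily). As a small illustration, the Python sketch below splits such a code into its levels; the example identifier '3.40.50.300' is used here only for illustration, and the function is not part of any CATH API.

# Sketch: splitting a CATH-style dotted code into its four hierarchy levels.
LEVELS = ("Class", "Architecture", "Topology", "Homologous superfamily")

def parse_cath_code(code: str) -> dict:
    """Map a dotted CATH code such as '3.40.50.300' onto the four levels."""
    parts = code.split(".")
    if len(parts) != 4:
        raise ValueError("expected a four-part code like '3.40.50.300'")
    return dict(zip(LEVELS, parts))

print(parse_cath_code("3.40.50.300"))
# -> {'Class': '3', 'Architecture': '40', 'Topology': '50', 'Homologous superfamily': '300'}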
Aperta is the open access repository of The Scientific and Technological Research Council of Turkey (TÜBİTAK). Publications produced from projects supported by TÜBİTAK must be uploaded to the TÜBİTAK Open Archive, Aperta. It is recommended that the research data underlying these publications also be made open access.
GnpIS is a multispecies integrative information system dedicated to plants and fungi pests. It bridges genetic and genomic data, allowing researchers access to both genetic information (e.g. genetic maps, quantitative trait loci, association genetics, markers, polymorphisms, germplasms, phenotypes and genotypes) and genomic data (e.g. genomic sequences, physical maps, genome annotation and expression data) for species of agronomical interest. GnpIS is used by both large international projects and plant science departments at the French National Research Institute for Agriculture, Food and Environment. It is regularly improved and released several times per year. GnpIS is accessible through a web portal and allows users to browse different types of data either independently through dedicated interfaces or simultaneously using quick search ('Google-like' search) or advanced search (BioMart, Galaxy, InterMine) tools.
The Netherlands Polar Data Center (NPDC) is part of the Netherlands Polar Programme (NPP). NPDC archives and provides access to the data of polar research funded by the Dutch Research Council (NWO) or otherwise carried out by researchers from Dutch universities and research institutions. The repository provides: 1) An overview of current and completed projects from the Netherlands Polar Programme (NPP) and other Dutch projects in the Polar Regions; 2) Access to the data of research carried out by Dutch researchers in the Polar Regions; and, 3) Links to external sources of Polar research data. For more information about the NPDC and the services it may offer to the Dutch Polar research community see https://npdc.nl/npdc.
The Geo Big Data Open Platform of the Korea Institute of Geological Resources is a data repository that allows anyone to easily access the latest geological resource information scattered across Korea. It was established to quickly organize and provide the rapidly growing volume of domestic and international geological resource research information, in order to help address national and social problems and to create an open science research ecosystem in the geological resources field.
Multidisciplinary research data repository, hosted by DTU, the Technical University of Denmark.
depositar — taking its name from the Portuguese/Spanish verb for "to deposit" — is an online repository for research data. The site is built by researchers, for researchers. You are free to deposit, discover, and reuse datasets on depositar for all your research purposes.
With the support of TELOTA, the Library of the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) operates an institutional repository for the first and secondary publication of scholarly works authored or edited by members or staff of the BBAW and its predecessor institutions. The edoc server makes these publications available to the worldwide scholarly community in open access and ensures their long-term archiving. The edoc server is thus part of the implementation of the Academy's Open Science mission statement. Since November 2022, the edoc server has also accepted research data and thus serves as the Academy's institutional research data repository.
GESIS preserves (mainly quantitative) social research data to make it available to the scientific research community. The data is described in a standardized way, secured for the long term, provided with a permanent identifier (DOI), and can be easily found and reused through browser-optimized catalogs (https://search.gesis.org/).
MDM-Portal (Medical Data Models) is a metadata registry for creating, analysing, sharing and reusing medical forms. It serves as an infrastructure for academic (non-commercial) medical research. It contains forms in the system-independent CDISC Operational Data Model (ODM) format, with more than 500,000 data elements. The portal provides numerous core data sets, common data elements or data standards, code lists and value sets. This enables researchers to view, discuss, download and export forms in the most common technical formats, such as PDF, CSV, Excel, SQL, SPSS, R, etc.
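To give a flavour of the ODM format mentioned above, here is a minimal Python sketch that reads data elements from an ODM-style fragment; the study, OIDs and item names are invented for illustration and are not taken from the MDM-Portal, and real ODM files contain considerably more structure (forms, item groups, code lists).

# Sketch: reading item definitions from a minimal CDISC ODM 1.3-style fragment.
# The OIDs, names, and values below are invented examples.
import xml.etree.ElementTree as ET

ODM_NS = "{http://www.cdisc.org/ns/odm/v1.3}"  # CDISC ODM 1.3 namespace

sample = """\
<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <Study OID="S.EXAMPLE">
    <MetaDataVersion OID="MDV.1" Name="Example form">
      <ItemDef OID="I.AGE" Name="Age" DataType="integer"/>
      <ItemDef OID="I.WEIGHT" Name="Body weight (kg)" DataType="float"/>
    </MetaDataVersion>
  </Study>
</ODM>
"""

root = ET.fromstring(sample)
for item in root.iter(ODM_NS + "ItemDef"):
    # each ItemDef describes one data element of a form
    print(item.get("OID"), item.get("Name"), item.get("DataType"))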
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access and sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results including a link to the original dataset. For more information on CONP, please visit https://conp.ca
OSGeo's mission is to support the collaborative development of open source geospatial software, in part by providing resources for projects and promoting freely available geodata. The Public Geodata Repository is a distributed repository and registry of data sources free to access, reuse, and re-distribute.