  • * at the end of a keyword allows wildcard searches
  • " quotation marks can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (word-order flexibility)
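For example, these operators can be combined; the search terms below are illustrative only:
  • climate* matches climate, climatology, climatic, etc.
  • "soil science" + (Cologne | Bonn) matches entries containing the exact phrase and at least one of the two place names
  • archive~1 also matches terms one edit away, such as archives
  • "open data"~2 matches the two words with up to two other words between them (e.g. "open research data")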
Found 54 result(s)
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries and linked data. It was developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples the repository can hold more than 50,000 entries, which can be searched via SPARQL endpoints and through pre-indexed key fields. The Chempound architecture is general and adaptable to other fields of data-rich science. The Chempound software is hosted at http://bitbucket.org/chempound and is available under the Apache License, Version 2.0.
The International Union of Basic and Clinical Pharmacology (IUPHAR) / British Pharmacological Society (BPS) Guide to PHARMACOLOGY is an expert-curated resource of ligand-activity-target relationships, the majority of which come from high-quality pharmacological and medicinal chemistry literature. It is intended as a “one-stop shop” portal to pharmacological information and its main aim is to provide a searchable database with quantitative information on drug targets and the prescription medicines and experimental drugs that act on them. In future versions we plan to add resources for education and training in pharmacological principles and techniques along with research guidelines and overviews of key topics. We hope that the IUPHAR/BPS Guide to PHARMACOLOGY (abbreviated as GtoPdb) will be useful for researchers and students in pharmacology and drug discovery and provide the general public with accurate information on the basic science underlying drug action.
SMU Research Data Repository (SMU RDR) is a tool and service for researchers from Singapore Management University (SMU) to store, share and publish their research data. SMU RDR accepts a wide range of research data and outputs generated from research projects.
The TRR228DB is the project database of the Collaborative Research Centre 228 "Future Rural Africa: Future-making and social-ecological transformation" (CRC/Transregio 228, https://www.crc228.de), funded by the German Research Foundation (DFG, project number 328966760). The project database is a new implementation of the TR32DB and has been online since 2018. It handles all data, including metadata, created by the project participants, who come from several institutions (e.g. the Universities of Cologne and Bonn) and research fields (e.g. anthropology, agroeconomics, ecology, ethnology, geography, politics and soil sciences). The data result from field campaigns, interviews, surveys, remote sensing, laboratory studies and modelling approaches. In addition, outcomes of the scientists, such as publications, conference contributions, PhD reports and corresponding images, are collected.
On this server you will find 127 items of primary data from the University of Munich (LMU). Scientists and students of all LMU faculties, as well as members of institutions that cooperate with LMU, are invited to deposit their research data on this platform.
The edoc-Server, launched in 1998, is the institutional repository of the Humboldt-Universität zu Berlin and offers the possibility of publishing texts and data. Every item is published Open Access, with an optional embargo period of up to five years. Data publications have been accepted since 1 January 2018.
The mission of the UniProtKB Sequence/Annotation Version Archive (UniSave) is to provide the scientific community with free access to a repository containing every version of every Swiss-Prot/TrEMBL entry in the UniProt Knowledgebase (UniProtKB). This is achieved by archiving, at every release, the entry versions contained in that release. The primary purpose of the service is to provide open access to all versions of all entries. In addition to viewing their content, users can also filter, download and compare versions.
PANGAEA - Data Publisher for Earth & Environmental Sciences has an almost 30-year history as an open-access library for archiving, publishing, and disseminating georeferenced data from the Earth, environmental, and biodiversity sciences. Originally evolving from a database for sediment cores, it is operated as a joint facility of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) and the Center for Marine Environmental Sciences (MARUM) at the University of Bremen. PANGAEA holds a mandate from the World Meteorological Organization (WMO) and is accredited as a World Radiation Monitoring Center (WRMC). It was further accredited as a World Data Center by the International Council for Science (ICSU) in 2001 and has been certified with the CoreTrustSeal since 2019. The successful cooperation between PANGAEA and the publishing industry, along with the corresponding technical implementation, enables cross-referencing between scientific publications and the datasets archived as supplements to those publications. PANGAEA is the recommended data repository of numerous international scientific journals.
Cocoon ("COllections de COrpus Oraux Numériques") is a technical platform that supports producers of oral resources in creating, organizing and archiving their corpora; a corpus may consist of recordings (usually audio), possibly accompanied by annotations of those recordings. Deposited resources are first catalogued and stored, and then archived in the archive of the TGIR Huma-Num. Authors and their institutions are responsible for their deposits and may be granted restricted, secure access to their data for a defined period if the content is considered sensitive. The Cocoon platform is jointly operated by two joint research units: the Laboratoire de Langues et civilisations à tradition orale (LACITO, UMR 7107, Université Paris 3 / INALCO / CNRS) and the Laboratoire Ligérien de Linguistique (LLL, UMR 7270, Universités d'Orléans et de Tours, BnF, CNRS).
D-PLACE contains cultural, linguistic, environmental and geographic information for over 1400 human ‘societies’. A ‘society’ in D-PLACE represents a group of people in a particular locality, who often share a language and cultural identity. All cultural descriptions are tagged with the date to which they refer and with the ethnographic sources that provided the descriptions. The majority of the cultural descriptions in D-PLACE are based on ethnographic work carried out in the 19th and early-20th centuries (pre-1950).
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link other data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
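As a minimal sketch of what such a query looks like, DBpedia exposes a public SPARQL endpoint (https://dbpedia.org/sparql); the example below retrieves the English abstract of one Wikipedia-derived resource (the specific resource, Berlin, is chosen purely for illustration):

  PREFIX dbo: <http://dbpedia.org/ontology/>

  SELECT ?abstract WHERE {
    <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
    FILTER (lang(?abstract) = "en")
  }
  LIMIT 1

Pasting this into the endpoint's query form returns the English-language abstract extracted from the corresponding Wikipedia article.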
nmrXiv is an open, FAIR and consensus-driven NMR spectroscopy data repository and analysis platform. We archive raw and processed NMR data, providing support for browsing, search, analysis, and dissemination of NMR data worldwide.
ANPERSANA is the digital library of IKER (UMR 5478), a research centre specializing in Basque language and texts. The online library platform receives and disseminates primary data sources issued from research on Basque language and culture. To date, two corpora of documents have been published. The first is a collection of private letters written in an 18th-century variety of Basque, documented in and transcribed into modern standard Basque. The discovery of the collection, named Le Dauphin, has raised new questions about the history and sociology of writing in minority languages, not only in France but across the whole Atlantic Arc. The second corpus is a selection of sound recordings about monodic chant in the Basque Country. The documents were collected as part of a PhD thesis carried out between 2003 and 2012: a total of 50 hours of interviews with francophone and bascophone cultural representatives, conducted either at the informants' workplaces or in public places. ANPERSANA is bundled with an advanced search engine, and the documents have been indexed and geo-localized on an interactive map. The platform is committed to open access, and all resources are freely available under various Creative Commons (CC) licenses.
eLaborate is an online work environment in which scholars can upload scans, transcribe and annotate text, and publish the results as an online text edition that is freely available to all users. Brief information about, and links to, already published editions is presented on the Editions page under Published. Information about editions currently in preparation is posted on the Ongoing projects page. The eLaborate work environment for the creation and publication of online digital editions is developed by the Huygens Institute for the History of the Netherlands of the Royal Netherlands Academy of Arts and Sciences. Although the institute considers itself primarily a research facility and does not maintain a public collection profile, Huygens ING actively maintains almost 200 digitally available resource collections.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only about a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied in biology, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing worm behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so, we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
In the framework of the Collaborative Research Centre/Transregio 32, 'Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation' (CRC/TR32, www.tr32.de), funded by the German Research Foundation from 2007 to 2018, a research data management (RDM) system was designed and implemented in-house. The so-called CRC/TR32 project database (TR32DB, www.tr32db.de) has been online since early 2008. The TR32DB handles all data, including metadata, created by the project participants, who come from several institutions (e.g. the Universities of Cologne, Bonn and Aachen, and the Research Centre Jülich) and research fields (e.g. soil and plant sciences, hydrology, geography, geophysics, meteorology, remote sensing). The data result from field measurement campaigns, meteorological monitoring, remote sensing, laboratory studies and modelling approaches. In addition, outcomes of the scientists, such as publications, conference contributions, PhD reports and corresponding images, are collected in the TR32DB.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP), designed to facilitate open science in the neuroscience community. CONP simplifies global researcher access to, and sharing of, datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using existing published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca.
OSGeo's mission is to support the collaborative development of open source geospatial software, in part by providing resources for projects and promoting freely available geodata. The Public Geodata Repository is a distributed repository and registry of data sources free to access, reuse, and re-distribute.
MassBank of North America (MoNA) is a metadata-centric, auto-curating repository designed for efficient storage and querying of mass spectral records. It is intended to serve as the framework for a centralized, collaborative database of metabolite mass spectra, metadata and associated compounds. MoNA currently contains over 200,000 mass spectral records from experimental and in-silico libraries, as well as from user contributions.
The Geoscience Data Exchange (GDEX) mission is to provide public access to data and other digital research assets related to the Earth and its atmosphere, oceans, and space environment. GDEX fulfills federal and scientific publication requirements for open data access by: Providing long-term curation and stewardship of research assets; Enabling scientific transparency and traceability of research findings in digital formats; Complementing existing NCAR community data management and archiving capabilities; Facilitating openness and accessibility for the public to leverage the research assets and thereby benefit from NCAR's historical and ongoing scientific research. This mission intentionally supports and aligns with those of NCAR and its sponsor, the National Science Foundation (NSF).
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press in its online, open-access journals GigaScience and GigaByte. GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit of work (an article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.