Filter
  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) implies priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
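For example, the following illustrative query strings combine the operators above; the search terms are arbitrary and only demonstrate the syntax, shown here as a small Python list for readability:

# Hypothetical example queries; the terms are arbitrary and only illustrate the operators above.
example_queries = [
    'climat*',                      # wildcard: matches climate, climatology, ...
    '"research data"',              # quoted phrase search
    'genome + cancer',              # AND (also the default between terms)
    'ozone | aerosol',              # OR
    'chemistry - organic',          # NOT
    '(ozone | aerosol) + climate',  # parentheses set priority
    'genom~1',                      # fuzzy term match within edit distance 1
    '"data repository"~2',          # phrase search with a slop of 2
]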
Found 264 result(s)
Chempound is a new-generation repository architecture based on RDF, semantic dictionaries and linked data. It has been developed to hold any type of chemical object expressible in CML and is exemplified by crystallographic experiments and computational chemistry calculations. In both examples, the repository can hold more than 50,000 entries, which can be searched via SPARQL endpoints and pre-indexing of key fields. The Chempound architecture is general and adaptable to other fields of data-rich science. The Chempound software is hosted at http://bitbucket.org/chempound and is available under the Apache License, Version 2.0.
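As a rough sketch of what querying such a SPARQL endpoint could look like, the snippet below uses the SPARQLWrapper Python library; the endpoint URL and the predicate are hypothetical placeholders, not Chempound's actual service or schema.

# Minimal sketch: querying a hypothetical Chempound-style SPARQL endpoint.
# The endpoint URL and the predicate below are placeholders, not Chempound's real schema.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/chempound/sparql")  # hypothetical endpoint URL
sparql.setQuery("""
    SELECT ?entry ?title WHERE {
        ?entry <http://purl.org/dc/terms/title> ?title .   # Dublin Core title, used only as an example predicate
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["entry"]["value"], binding["title"]["value"])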
The National Archives and Records Administration (NARA) is the nation's record keeper. Of all documents and materials created in the course of business conducted by the United States Federal government, only 1%-3% are so important for legal or historical reasons that they are kept by us forever. Those valuable records are preserved and are available to you, whether you want to see if they contain clues about your family’s history, need to prove a veteran’s military service, or are researching an historical topic that interests you.
PubChem contains three databases. 1. PubChem BioAssay: The PubChem BioAssay Database contains bioactivity screens of chemical substances described in PubChem Substance. It provides searchable descriptions of each bioassay, including descriptions of the conditions and readouts specific to that screening procedure. 2. PubChem Compound: The PubChem Compound Database contains validated chemical depiction information provided to describe substances in PubChem Substance. Structures stored within PubChem Compound are pre-clustered and cross-referenced by identity and similarity groups. 3. PubChem Substance: The PubChem Substance Database contains descriptions of samples, from a variety of sources, and links to biological screening results that are available in PubChem BioAssay. If the chemical contents of a sample are known, the description includes links to PubChem Compound.
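To illustrate how the three databases cross-reference each other, the sketch below resolves a substance record to its standardized compound record using PubChem's public PUG REST interface (an access route not described in the entry above); the SID is an arbitrary example value, and the second request looks up aspirin (CID 2244).

# Sketch: following a PubChem Substance (SID) to its standardized Compound (CID)
# via the public PUG REST interface. The SID is an arbitrary example value.
import requests

BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

# A substance record links to the validated, de-duplicated compound record(s).
resp = requests.get(f"{BASE}/substance/sid/3037/cids/JSON", timeout=30)
resp.raise_for_status()
print(resp.json())  # lists the CID(s) this substance maps to

# Compound records carry the validated depiction and computed properties (CID 2244 = aspirin).
resp = requests.get(f"{BASE}/compound/cid/2244/property/MolecularFormula,MolecularWeight/JSON", timeout=30)
resp.raise_for_status()
print(resp.json())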
GAWSIS is being developed and maintained by the Federal Office of Meteorology and Climatology MeteoSwiss in collaboration with the WMO GAW Secretariat, the GAW World Data Centres and other GAW representatives to improve the management of information about the GAW network of ground-based stations. The application is presently hosted by the Swiss Laboratories for Materials Testing and Research (Empa). GAWSIS provides the GAW community and other interested people with an up-to-date, searchable database of site descriptions, measurement programmes, available data, contact people, and bibliographic references. Linked data collections are hosted at the World Data Centres of the WMO Global Atmosphere Watch.
The World Bank recognizes that transparency and accountability are essential to the development process and central to achieving the Bank’s mission to alleviate poverty. The Bank’s commitment to openness is also driven by a desire to foster public ownership, partnership and participation in development from a wide range of stakeholders. As a knowledge institution, the World Bank’s first step is to share its knowledge freely and openly.
The Ontology Lookup Service (OLS) is a repository for biomedical ontologies that aims to provide a single point of access to the latest ontology versions. The user can browse the ontologies through the website as well as programmatically via the OLS API. The OLS provides a web service interface to query multiple ontologies from a single location with a unified output format. The OLS can integrate any ontology available in the Open Biomedical Ontology (OBO) format. The OLS is an open source project hosted on Google Code.
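A minimal sketch of such a programmatic lookup is given below, assuming the EBI-hosted REST deployment of the OLS; the base URL, resource path, and response layout are assumptions and may differ for the instance described above.

# Sketch: listing the available ontologies via an OLS REST endpoint.
# The base URL assumes the EBI-hosted deployment; the response layout (HAL-style
# "_embedded" list) is an assumption and is therefore accessed defensively.
import requests

resp = requests.get("https://www.ebi.ac.uk/ols4/api/ontologies", timeout=30)
resp.raise_for_status()
data = resp.json()
for ontology in data.get("_embedded", {}).get("ontologies", []):
    print(ontology.get("ontologyId"))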
The nationally recognized National Cancer Database (NCDB)—jointly sponsored by the American College of Surgeons and the American Cancer Society—is a clinical oncology database sourced from hospital registry data that are collected in more than 1,500 Commission on Cancer (CoC)-accredited facilities. NCDB data are used to analyze and track patients with malignant neoplastic diseases, their treatments, and outcomes. Data represent more than 70 percent of newly diagnosed cancer cases nationwide and more than 34 million historical records.
Bitbucket is a web-based version control repository hosting service owned by Atlassian, for source code and development projects that use either Mercurial or Git revision control systems.
Our knowledge of the many life-forms on Earth - of animals, plants, fungi, protists and bacteria - is scattered around the world in books, journals, databases, websites, specimen collections, and in the minds of people everywhere. Imagine what it would mean if this information could be gathered together and made available to everyone – anywhere – at a moment’s notice. This dream is becoming a reality through the Encyclopedia of Life.
Born of the desire to systematize analyses from The Cancer Genome Atlas (TCGA) pilot and scale their execution to the dozens of remaining diseases to be studied, GDAC Firehose now sits atop terabytes of analysis-ready TCGA data and reliably executes thousands of pipelines per month. More information: https://broadinstitute.atlassian.net/wiki/spaces/GDAC/
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository are a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia and Web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and part of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
The Arizona State University (ASU) Research Data Repository provides a platform for ASU-affiliated researchers to share, preserve, cite, and make research data accessible and discoverable. The ASU Research Data Repository provides a permanent digital identifier for research data, which complies with data sharing policies. The repository is powered by the Dataverse open-source application, developed and used by Harvard University. Both the ASU Research Data Repository and the KEEP Institutional Repository are managed by the ASU Library to ensure research produced at Arizona State University is discoverable and accessible to the global community.
DUnAs is the institutional research data repository of the University of Aveiro. This repository is intended for sharing, archiving, preserving, citing, accessing, and exploring research data produced in the university's scientific research activities.
CERN, DESY, Fermilab and SLAC have built the next-generation High Energy Physics (HEP) information system, INSPIRE. It combines the successful SPIRES database content, curated at DESY, Fermilab and SLAC, with the Invenio digital library technology developed at CERN. INSPIRE is run by a collaboration of CERN, DESY, Fermilab, IHEP, IN2P3 and SLAC, and interacts closely with HEP publishers, arXiv.org, NASA-ADS, PDG, HEPDATA and other information resources. INSPIRE represents a natural evolution of scholarly communication, built on successful community-based information systems, and provides a vision for information management in other fields of science.
Språkbanken was established in 1975 as a national center located in the Faculty of Arts, University of Gothenburg. Sture Allén's groundbreaking corpus-linguistic research resulted in the creation of one of the first large electronic text corpora in a language other than English, comprising one million words of newspaper text. The task of Språkbanken is to collect, develop, and store (Swedish) text corpora, and to make linguistic data extracted from the corpora available to researchers and to the public.
The Tropospheric Ozone Assessment Report (TOAR) database of global surface observations is the world's most extensive collection of surface ozone measurements and also includes data on other air pollutants and, for some regions, on weather. Measurements from 1970 to 2019 (Version 1) have been collected in a relational database and are made available via a graphical web interface, a REST service (https://toar-data.fz-juelich.de/api/v1) and as aggregated products on PANGAEA (https://doi.pangaea.de/10.1594/PANGAEA.876108). Measurements from 1970 to the present (Version 2) are being collected in a relational database and are made available via a REST service (https://toar-data.fz-juelich.de/api/v2).
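As a sketch of programmatic access, the snippet below sends a request to the Version 2 REST service named above; the /stationmeta resource path and the limit parameter are assumptions about the service's layout, so consult the service documentation for the actual endpoints.

# Sketch: querying the TOAR Version 2 REST service listed above.
# The "/stationmeta/" path and the "limit" parameter are assumptions, not verified endpoints.
import requests

BASE = "https://toar-data.fz-juelich.de/api/v2"

resp = requests.get(f"{BASE}/stationmeta/", params={"limit": 5}, timeout=30)
resp.raise_for_status()
print(resp.json())  # raw response; its exact structure depends on the service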
Academic Commons provides open, persistent access to the scholarship produced by researchers at Columbia University, Barnard College, Jewish Theological Seminary, Teachers College, and Union Theological Seminary. Academic Commons is a program of the Columbia University Libraries. Academic Commons accepts articles, dissertations, research data, presentations, working papers, videos, and more.
The mission of the GO Consortium is to develop a comprehensive, computational model of biological systems, ranging from the molecular to the organism level, across the multiplicity of species in the tree of life. The Gene Ontology (GO) knowledgebase is the world’s largest source of information on the functions of genes. This knowledge is both human-readable and machine-readable, and is a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research.
DaSCH is the trusted platform and partner for open research data in the Humanities. DaSCH develops and operates a FAIR long-term repository and a generic virtual research environment for open research data in the humanities in Switzerland. We provide long-term direct access to the data, enable their continuous editing, and allow for precise citation of single objects within a dataset. We ensure interoperability with tools used by the Humanities and Cultural Sciences communities and foster the use of standards. The development of our platform happens in close cooperation with these communities. We provide training and advice in the area of research data management, and we promote open data and the use of standards. DaSCH is the coordinating institution and representative of Switzerland in the European Research Infrastructure Consortium 'Digital Research Infrastructure for the Arts and Humanities' (DARIAH ERIC). Within this mandate, we actively engage in community building within Switzerland and abroad. DaSCH cooperates with national and international organizations and initiatives in order to provide services that are fit for purpose within the broader Swiss open research data landscape and that are coordinated with other institutions such as FORS. We base our actions on the values of reliability, flexibility, appreciation, curiosity, and persistence. Furthermore, DaSCH coordinates DARIAH's activities in Switzerland and acts as the DARIAH-CH Coordination Office.
With the EnviDat program, we are developing a unified, managed access portal for WSL's rich reservoir of environmental monitoring and research data. EnviDat is designed as a portal to publish, connect and search across existing data but is not intended to become a large data centre hosting original data. While sharing of data is centrally facilitated, data management remains decentralised, and the know-how and responsibility to curate research data remain with the original data providers.
GRO.data is a research data repository for the Göttingen Campus. Affiliated researchers can use it free of charge. It serves several purposes, such as preserving datasets, keeping track of changes across several versions, sharing data with colleagues, making data publicly available, and obtaining a persistent identifier upon publication.
bonndata is the institutional, FAIR-aligned, curated, cross-disciplinary research data repository for the publication of research data by all researchers at the University of Bonn. The repository is fully embedded into the University IT and Data Center and curated by the Research Data Service Center (https://www.forschungsdaten.uni-bonn.de/en). bonndata is based on the open source software Dataverse (https://dataverse.org).