
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (see the example queries below)
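For orientation, here is a minimal sketch that simply prints a few query strings using this syntax. The search terms themselves are arbitrary illustrations, not keywords or fields defined by the registry.

```python
# Illustrative query strings using the search syntax described above.
# The terms are arbitrary examples chosen only to show each operator.

queries = [
    'climat*',                     # wildcard: matches climate, climatology, ...
    '"research data"',             # phrase search
    'ocean + temperature',         # AND (also the default between terms)
    'glacier | "ice sheet"',       # OR
    'protein - ligand',            # NOT: protein but not ligand
    '(ozone | aerosol) + europe',  # parentheses set priority
    'madrigal~2',                  # fuzzy match with edit distance 2
    '"surface ozone"~3',           # phrase search with slop 3
]

for q in queries:
    print(q)
```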
Found 38 result(s)
>>>!!!<<< As stated 2017-06-27, the website http://researchcompendia.org is no longer available; the repository software is archived on GitHub at https://github.com/researchcompendia >>>!!!<<< The ResearchCompendia platform is an attempt to use the web to enhance the reproducibility and verifiability—and thus the reliability—of scientific research. We provide the tools to publish the "actual scholarship" by hosting data, code, and methods in a form that is accessible, trackable, and persistent. Some of our short-term goals include: To expand and enhance the platform, including adding executability for a greater variety of coding languages and frameworks and enhancing output presentation. To expand usership and to test the ResearchCompendia model in a number of additional fields, including computational mathematics, statistics, and biostatistics. To pilot integration with existing scholarly platforms, enabling researchers to discover relevant Research Compendia websites when looking at online articles, code repositories, or data archives.
The BCDC serves the research data obtained, and the data syntheses assembled, by researchers within the Bjerknes Centre for Climate Research. Furthermore, it is open to all interested scientists, independent of institution. All data from the different disciplines (e.g. geology, oceanography, biology, the modelling community) will be archived in a long-term repository, interconnected and made publicly available by the BCDC. BCDC collaborates with many international data repositories and actively archives metadata and data with them, ensuring quality and FAIRness. BCDC's main focus is on data management services for externally and internally funded projects in the field of climate research; it provides data management plans and ensures that data are archived according to best practices in the field. These services range from project work for small externally funded projects to state-of-the-art data management for research infrastructures on the ESFRI roadmap (e.g. the RI ICOS – Integrated Carbon Observation System), and BCDC provides products and services for the Copernicus Marine Environmental Monitoring Services. In addition, BCDC advises various communities on data management services, e.g. IOC UNESCO, OECD, IAEA and various funding agencies. BCDC will become an Associated Data Unit (ADU) under IODE, the International Oceanographic Data and Information Exchange, a worldwide network that operates under the auspices of the Intergovernmental Oceanographic Commission of UNESCO, and aims to become part of the ICSU World Data System.
The Tropospheric Ozone Assessment Report (TOAR) database of global surface observations is the world's most extensive collection of surface ozone measurements and also includes data on other air pollutants and, for some regions, on weather. Measurements from 1970 to 2019 (Version 1) have been collected in a relational database and are made available via a graphical web interface, a REST service (https://toar-data.fz-juelich.de/api/v1) and as aggregated products on PANGAEA (https://doi.pangaea.de/10.1594/PANGAEA.876108). Measurements from 1970 to the present (Version 2) are being collected in a relational database and are made available via a REST service (https://toar-data.fz-juelich.de/api/v2).
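As a rough sketch of programmatic access, the snippet below queries the Version 2 REST service named above. The base URL is taken from the description, but the endpoint path (stationmeta) and its parameters are assumptions and should be checked against the service's API documentation.

```python
# Minimal sketch of querying the TOAR REST service (Version 2).
# The base URL comes from the description above; the "stationmeta"
# route and the "limit" parameter are assumptions -- consult
# https://toar-data.fz-juelich.de/api/v2 for the documented routes.
import requests

BASE_URL = "https://toar-data.fz-juelich.de/api/v2"

response = requests.get(f"{BASE_URL}/stationmeta/", params={"limit": 5}, timeout=30)
response.raise_for_status()

# Print whatever station metadata records the service returns.
for station in response.json():
    print(station)
```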
The OpenMadrigal project seeks to develop and support an online database for geospace data. The project has been led by MIT Haystack Observatory since 1980, but now has active support from the Jicamarca Observatory and other community members. Madrigal is a robust, World Wide Web-based system capable of managing and serving archival and real-time data, in a variety of formats, from a wide range of ground-based instruments. Madrigal is installed at a number of sites around the world. Data at each Madrigal site is locally controlled and can be updated at any time, but shared metadata between Madrigal sites allows searching of all Madrigal sites at once from any Madrigal site. Data is local; metadata is shared.
The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent set of protein-protein interactions. The data stored within the DIP database were curated both manually, by expert curators, and automatically, using computational approaches that utilize knowledge about the protein-protein interaction networks extracted from the most reliable, core subset of the DIP data. Please check the reference page to find articles describing the DIP database in greater detail. The Database of Ligand-Receptor Partners (DLRP) is a subset of DIP (Database of Interacting Proteins). The DLRP is a database of protein ligand and protein receptor pairs that are known to interact with each other. By interact we mean that the ligand and receptor are members of a ligand-receptor complex and, unless otherwise noted, transduce a signal. In some instances the ligand and/or receptor may form a heterocomplex with other ligands/receptors in order to be functional. We have entered the majority of interactions in DLRP as full DIP entries, with links to references and additional information.
>>>!!!<<< This site is going away on April 1, 2021. General access to the site has been disabled and community users will see an error upon login. >>>!!!<<< Socrata’s cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
Science Data Bank is an open generalist data repository developed and maintained by the Computer Network Information Center, Chinese Academy of Sciences (CNIC). It promotes the publication and reuse of scientific data. Researchers and journal publishers can use it to store, manage and share science data.
Research Data Unipd is a data archive that supports research produced by members of the University of Padova. The service aims to facilitate data discovery, data sharing, and reuse, as required by funding institutions (e.g. the European Commission). Datasets published in the archive have a set of metadata that ensures proper description and discoverability.
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full-texts are indexed linguistically and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitalisation was made from the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice (‘double keying’). To represent the structure of the text, the electronic full-text was encoded in conformity with the XML standard TEI P5. The next stages complete the linguistic analysis, i.e. the text is tokenised, lemmatised, and the parts of speech are annotated. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA Corpus, it also offers valuable source-texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
Rodare is the institutional research data repository at HZDR (Helmholtz-Zentrum Dresden-Rossendorf). Rodare allows HZDR researchers to upload their research software and data and enrich them with metadata to make them findable, accessible, interoperable and reusable (FAIR). By publishing all associated research software and data via Rodare, research reproducibility can be improved. Uploads receive a Digital Object Identifier (DOI) and can be harvested via an OAI-PMH interface.
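As a minimal sketch of harvesting, the snippet below issues a standard OAI-PMH ListRecords request. The verb and metadataPrefix are part of the OAI-PMH protocol itself, while the endpoint URL shown is an assumption and should be replaced with the address given in Rodare's documentation.

```python
# Sketch of harvesting repository records over OAI-PMH.
# "ListRecords" and "metadataPrefix=oai_dc" are standard OAI-PMH
# protocol features; the endpoint URL below is an assumption --
# check Rodare's documentation for its actual OAI-PMH base URL.
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://rodare.hzdr.de/oai2d"  # assumed endpoint

response = requests.get(
    OAI_ENDPOINT,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
response.raise_for_status()

# Print the OAI identifier of each harvested record.
ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
root = ET.fromstring(response.content)
for header in root.iter("{http://www.openarchives.org/OAI/2.0/}header"):
    identifier = header.find("oai:identifier", ns)
    if identifier is not None:
        print(identifier.text)
```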
CalSurv provides comprehensive information on West Nile virus, plague, malaria, Lyme disease, trench fever and other vector-borne diseases in California — where they are, where they’ve been, where they may be headed and what new diseases may be emerging. The CalSurv Web site serves as a portal, or single interface, to all surveillance-related Web sites in California.
The Argo observational network consists of a fleet of 3000+ profiling autonomous floats deployed by about a dozen teams worldwide. WHOI has built about 10% of the global fleet. The mission lifetime of each float is about 4 years. During a typical mission, each float reports a profile of the upper ocean every 10 days. The sensors onboard record fundamental physical properties of the ocean: temperature and conductivity (a measure of salinity) as a function of pressure. The depth range of the observed profile depends on the local stratification and the float's mechanical ability to adjust its buoyancy. The majority of Argo floats report profiles between 1-2 km depth. At each surfacing, measurements of temperature and salinity are relayed back to shore via satellite. Telemetry is usually received every 10 days, but floats at high latitudes that are iced over accumulate their data and transmit the entire record the next time satellite contact is established. With current battery technology, the best-performing floats last 6+ years and record over 200 profiles.
The Berman Jewish Databank @ The Jewish Federations of North America is the central online address for quantitative studies of North American Jews and Jewish communities. It archives and makes available electronically the questionnaires, reports and data files from the National Jewish Population Surveys (NJPS) of 1971, 1990 and 2000-01. It also provides access to other national Jewish population reports, Jewish population statistics and approximately 200 local Jewish community studies from the major Jewish communities in North America.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
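As an illustrative sketch of data access, the snippet below retrieves a single record as XML. The URL pattern follows common ENA browser-API usage but should be verified against ENA's own documentation, and the accession shown is only a placeholder.

```python
# Sketch: retrieve an ENA record as XML by accession.
# The URL pattern below is an assumption based on common ENA
# browser-API usage -- verify it against ENA's documentation.
import requests

ACCESSION = "SAMPLE_ACCESSION"  # placeholder, not a real accession
url = f"https://www.ebi.ac.uk/ena/browser/api/xml/{ACCESSION}"

response = requests.get(url, timeout=30)
response.raise_for_status()

# Show the start of the returned XML record.
print(response.text[:500])
```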
Biological collections are replete with taxonomic, geographic, temporal, numerical, and historical information. This information is crucial for understanding and properly managing biodiversity and ecosystems, but is often difficult to access. Canadensys, operated from the Université de Montréal Biodiversity Centre, is a Canada-wide effort to unlock the biodiversity information held in biological collections.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
CORA. Repositori de dades de Recerca is a repository of open, curated and FAIR data that covers all academic disciplines. It is a shared service provided by participating Catalan institutions (universities and CERCA research centres). The repository is managed by the CSUC, and its technical infrastructure is based on the Dataverse application, developed by an international community of developers and users led by Harvard University (https://dataverse.org).
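Because the repository runs on Dataverse, the standard Dataverse Search API can in principle be used against it. In the sketch below, the /api/search route is standard Dataverse; the instance base URL is an assumption and should be taken from the repository's own pages.

```python
# Sketch: query a Dataverse-based repository through the standard
# Dataverse Search API (GET /api/search). The route and parameters
# are standard Dataverse; the base URL below is an assumption about
# where the CORA instance is hosted -- replace it with the actual
# address published by the repository.
import requests

BASE_URL = "https://dataverse.csuc.cat"  # assumed instance URL

response = requests.get(
    f"{BASE_URL}/api/search",
    params={"q": "climate", "type": "dataset", "per_page": 5},
    timeout=30,
)
response.raise_for_status()

# Print the name and persistent identifier of each matching dataset.
for item in response.json()["data"]["items"]:
    print(item.get("name"), "-", item.get("global_id"))
```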
The GSA Data Repository is an open file in which authors of articles in our journals can place information that supplements and expands on their article. These supplements will not appear in print but may be obtained from GSA.
Phaidra, the innovative whole-university digital asset management system of the Universität Wien with long-term archiving functions, offers the possibility to archive valuable data university-wide with permanent security and systematic input, and provides multilingual access using metadata (data about data), thus ensuring worldwide availability around the clock. As a constant data pool for administration, research and teaching, resources can be used flexibly, and continual citability allows the exact location and retrieval of prepared digital objects.
Arquivo.pt is a research infrastructure that preserves millions of files collected from the web since 1996 and provides a public search service over this information. It contains information in several languages. Periodically it collects and stores information published on the web. It then processes the collected data to make it searchable, providing a “Google-like” service that enables searching the past web (an English user interface is available at https://arquivo.pt/?l=en). This preservation workflow is performed through a large-scale distributed information system, which can also be accessed through an API (https://arquivo.pt/api).
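As a rough sketch of using the API mentioned above, the snippet below performs a full-text search of the archive. The textsearch route, its parameters and the response fields are assumptions based on common usage and should be checked against the documentation at https://arquivo.pt/api.

```python
# Sketch: full-text search of the Arquivo.pt web archive.
# The "textsearch" route, its parameters and the response field names
# are assumptions -- see https://arquivo.pt/api for the documented API.
import requests

response = requests.get(
    "https://arquivo.pt/textsearch",
    params={"q": "research data", "maxItems": 5},
    timeout=30,
)
response.raise_for_status()

# Print the title and archived link of each result, if present.
for item in response.json().get("response_items", []):
    print(item.get("title"), item.get("linkToArchive"))
```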