Filter

Subjects

Content Types

Countries

AID systems

API

Certificates

Data access

Data access restrictions

Database access

Database access restrictions

Database licenses

Data licenses

Data upload

Data upload restrictions

Enhanced publication

Institution responsibility type

Institution type

Keywords

Metadata standards

PID systems

Provider types

Quality management

Repository languages

Software

Syndications

Repository types

Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
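For illustration, a few example queries combining these operators (the search terms themselves are hypothetical):
  climat*                    matches climate, climatology, ...
  "marine biology"           matches the exact phrase
  genome + cancer            both terms must occur (the default)
  ocean | marine             either term may occur
  biology - molecular        excludes results containing molecular
  (ocean | marine) + data    parentheses set precedence
  genomcs~1                  matches terms within edit distance 1, e.g. genomics
  "data repository"~2        matches the phrase allowing a slop of 2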
Found 1110 result(s)
DIAMM (the Digital Image Archive of Medieval Music) is a leading resource for the study of medieval manuscripts. We present images and metadata for thousands of manuscripts on this website. We also provide a home for scholarly resources and editions, undertake digital restoration of damaged manuscripts and documents, publish high-quality facsimiles, and offer our expertise as consultants.
To address the multidisciplinary, broad-scale nature of empirical educational research in the Federal Republic of Germany, a networked research data infrastructure is required that brings together disparate services from different research data providers and delivers them to researchers in a usable, needs-oriented way. The Verbund Forschungsdaten Bildung (Educational Research Data Alliance, VFDB) therefore aims to cooperate with relevant actors from science, politics and research funding institutions to set up a powerful infrastructure for empirical educational research. This service is meant to capture the specific needs of the scientific communities and to support the empirical educational research community in carrying out excellent research.
The ChemBio Hub vision is to provide tools that make it easier for Oxford University scientists to connect with colleagues to improve their research, to satisfy funders that the data they have paid for is being managed according to their policies, and to form new alliances with pharma and biotech partners. Funding and development of the ChemBio Hub ended on 30 June 2016. Please be reassured that the ChemBio Hub system and all your data will continue to be secured on the SGC servers for the foreseeable future. You can continue to use the services as normal. For more information see: http://staging.chembiohub.ox.ac.uk/blog/
Network Repository is the first interactive data repository for graph and network data, hosting hundreds of real-world network and benchmark datasets. Unlike other data repositories, Network Repository provides interactive analysis and visualization capabilities that allow researchers to explore, compare, and investigate graph data in real time on the web.
RDoCdb is an informatics platform for sharing human-subjects data generated by investigators as part of the NIMH's Research Domain Criteria initiative and for supporting the initiative's aims. It also accepts and shares appropriate mental-health-related data from other sources.
CottonGen is a new cotton community genomics, genetics and breeding database being developed to enable basic, translational and applied research in cotton. It is being built using the open-source Tripal database infrastructure. CottonGen consolidates and expands the data from CottonDB and the Cotton Marker Database, providing enhanced tools for easy querying, visualizing and downloading research data.
The USDA Agricultural Marketing Service (AMS) Cotton Program maintains a National Database (NDB) in Memphis, Tennessee for owner access to cotton classification data. The NDB is a computerized telecommunications system that allows owners or authorized agents of owners to retrieve classing data from the current crop and/or the previous four crops. The NDB stores classing information from all 10 regional classing offices.
OGS has been recognised as the Italian National Oceanographic Data Centre (OGS-NODC) within the International Oceanographic Data Exchange System of the UNESCO Intergovernmental Oceanographic Commission (IOC) since 27 June 2002. OGS is also listed in EurOcean (Marine Research Infrastructures Database) and in EDMO (European Directory of Marine Organisations). As part of the IOC's network of National Oceanographic Data Centres, OGS has designated responsibility for the coordination of data and information management at national level. The oceanographic database covers marine physics, chemistry, biology, underway geophysics and general information on Italian oceanographic cruises and data sets. The main objectives are (revision IODE-XXII, March 2013):
  • Facilitate and promote the discovery, exchange of, and access to, marine data and information, including metadata, products and information in real time, near real time and delayed mode, through the use of international standards and in compliance with the IOC Oceanographic Data Exchange Policy, for the ocean research and observation community and other stakeholders;
  • Encourage the long-term archival, preservation, documentation, management and servicing of all marine data, data products, and information;
  • Develop or use existing best practices for the discovery, management, exchange of, and access to marine data and information, including international standards, quality control and appropriate information technology;
  • Assist Member States to acquire the necessary capacity to manage marine research and observation data and information and become partners in the IODE network;
  • Support international scientific and operational marine programmes, including the Framework for Ocean Observing, for the benefit of a wide range of users.
Bacteriome.org is a database integrating physical (protein-protein) and functional interactions within the context of an E. coli knowledgebase.
As of 2017-05-17 the data catalog is no longer available. DataFed is web-services-based software that non-intrusively mediates between autonomous, distributed data providers and users. The main goals of DataFed are to aid air quality management and science through effective use of relevant data, to facilitate the access and flow of atmospheric data from providers to users, and to support the development of user-driven data processing value chains. The DataFed Catalog links searchable DataFed applications worldwide.
Database of ancient sources concerning Roman Water Law. Specific legal sources, e.g. from the Corpus Iuris Civilis or the Codex Theodosianus, and literary sources, for example from Cicero, Frontinus, Hyginus, Siculus Flaccus or Vitruvius, were collected to give an overview of water-related legal problems in ancient Rome. Furthermore, the database aims to classify these sources into different legal topics in order to facilitate research on sources concerning specific questions of Roman Water Law.
NetSlim is a resource of high-confidence signaling pathway maps derived from NetPath pathway reactions. 40-60% of the molecules and their reactions in NetPath pathways are available in NetSlim.
BacDive is a bacterial metadatabase that provides strain-linked information about bacterial and archaeal biodiversity. The database is a resource for different kinds of phenotypic data, such as taxonomy, morphology, physiology, environment and molecular biology. The majority of the data is manually annotated and curated. With the release in April 2019 BacDive offers information for 80,584 strains. The database is hosted by the Leibniz Institute DSMZ - German Collection of Microorganisms and Cell Cultures GmbH and is part of de.NBI, the German Network for Bioinformatics Infrastructure.
Data products developed and distributed by the National Institute of Standards and Technology span multiple disciplines of research and are widely used in research and development programs by industry and academia. NIST's publicly available data sets showcase its commitment to providing accurate, well-curated measurements of physical properties, exemplified by the Standard Reference Data program, as well as its commitment to advancing basic research. In accordance with the U.S. Government Open Data Policy and the NIST Plan for providing public access to the results of federally funded research data, NIST maintains a publicly accessible listing of available data, the NIST Public Dataset List (json). Additionally, these data are assigned a Digital Object Identifier (DOI) to increase the discoverability of and access to research output; these DOIs are registered with DataCite and provide globally unique persistent identifiers. The NIST Science Data Portal provides a user-friendly discovery and exploration tool for publicly available datasets at NIST. This portal is designed and developed with data.gov Project Open Data standards and principles. The portal software is hosted in the usnistgov GitHub repository.
The sources of the data sets include data sets donated by researchers, surveys carried out by SRDA, and surveys carried out by government departments and other academic organizations. Prior to the release of data sets, the confidentiality and sensitivity of every survey data set are evaluated. Standard data management and cleaning procedures are applied to ensure data accuracy and completeness. In addition, metadata and relevant supplementary files are also edited and attached.
The Digital Collections present selected pieces from all historical collections of the Württembergische Landesbibliothek. The aim is to offer digital reproductions of objects created within the framework of cataloguing and research projects.
SAFER-Data is a web-based interface to the Environmental Data Archive maintained by the Environmental Research Centre (ERC) in the Environmental Protection Agency (EPA) of Ireland, which has responsibilities for a wide range of licensing, enforcement, monitoring and assessment activities associated with environmental protection.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.
The Deep Carbon Observatory (DCO) is a global community of multi-disciplinary scientists unlocking the inner secrets of Earth through investigations into life, energy, and the fundamentally unique chemistry of carbon. Deep Carbon Observatory Digital Object Registry (“DCO-VIVO”) is a centrally-managed digital object identification, object registration and metadata management service for the DCO. Digital object registration includes DCO-ID generation based on the global Handle System infrastructure and metadata collection using VIVO. Users will be able to deposit their data into the DCO Data Repository and have that data discoverable and accessible by others.
INTEGRALL is a web-based platform dedicated to compiling information on integrons and designed to organize all the data available for these genetic structures. INTEGRALL provides a public genetic repository for sequence data and nomenclature and offers scientists easy, interactive access to integron DNA sequences, their molecular arrangements and their genetic contexts.
This database provides theoretical values of energy levels of hydrogen and deuterium for principal quantum numbers n = 1 to 200 and all allowed orbital angular momenta l and total angular momenta j. The values are based on current knowledge of the relevant theoretical contributions including relativistic, quantum electrodynamic, recoil, and nuclear size effects.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated into today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
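A minimal sketch of fetching an OpenML dataset programmatically, assuming the openml Python client package (the entry above names R, KNIME, RapidMiner and WEKA; the package, the calls shown and the example dataset ID are illustrative assumptions, not details taken from the entry):

  import openml  # assumed: community Python client for the OpenML platform

  # Fetch a dataset by its OpenML ID (61, "iris", chosen purely for illustration)
  dataset = openml.datasets.get_dataset(61)

  # Split into features and the dataset's declared default target attribute
  X, y, categorical, names = dataset.get_data(target=dataset.default_target_attribute)

  print(dataset.name, X.shape)

Comparable integrations exist for the tools listed in the entry; Python is used here only to keep the sketch self-contained.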