  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) set the precedence of operations
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
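A few illustrative examples of this syntax (constructed for this description, not actual registry queries): geo* matches geology, geophysics, and so on; "marine geology" +seismic -model combines an exact phrase with a required and an excluded term; (ocean | marine) +temperature groups the alternatives before the AND; climat~1 tolerates one character edit; and "data archive"~2 allows up to two intervening words between the phrase terms.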
Found 119 result(s)
I2D (Interologous Interaction Database) is an online database of known and predicted mammalian and eukaryotic protein-protein interactions. It has been built by mapping high-throughput (HTP) data between species. Thus, until experimentally verified, these interactions should be considered "predictions". It remains one of the most comprehensive sources of known and predicted eukaryotic protein-protein interactions (PPI). I2D includes data for S. cerevisiae, C. elegans, D. melanogaster, R. norvegicus, M. musculus, and H. sapiens.
The Canadian Institute for Health Information (CIHI) provides comparable and actionable data and information that are used to accelerate improvements in health care, health system performance and population health across Canada.
DIAMM (the Digital Image Archive of Medieval Music) is a leading resource for the study of medieval manuscripts. We present images and metadata for thousands of manuscripts on this website. We also provide a home for scholarly resources and editions, undertake digital restoration of damaged manuscripts and documents, publish high-quality facsimiles, and offer our expertise as consultants.
ZENODO builds and operates a simple and innovative service that enables researchers, scientists, EU projects and institutions to share and showcase multidisciplinary research results (data and publications) that are not part of the existing institutional or subject-based repositories of the research communities. ZENODO enables researchers, scientists, EU projects and institutions to:
  • easily share the long tail of small research results in a wide variety of formats including text, spreadsheets, audio, video, and images across all fields of science
  • display their research results and get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
  • easily access and reuse shared research results
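Zenodo also exposes a public REST API; below is a minimal sketch of searching published records with Python's requests library. The /api/records endpoint follows Zenodo's documented search API, but treat the exact response fields used here as assumptions to verify rather than a guaranteed contract.

```python
# A minimal sketch of querying Zenodo's public search API with "requests".
# The /api/records endpoint is documented by Zenodo; the response fields used
# below (hits, doi, metadata.title) should be verified against the live API.
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "protein interaction", "size": 5},
    timeout=30,
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    # Each record carries a DOI and descriptive metadata such as the title.
    print(hit.get("doi"), "-", hit["metadata"]["title"])
```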
Within WASCAL, a large number of heterogeneous data are collected. These data come mainly from research activities initiated within WASCAL (Core Research Program, Graduate School Program), from the hydrological-meteorological, remote sensing, biodiversity and socio-economic observation networks within WASCAL, and from the activities of the WASCAL Competence Center in Ouagadougou, Burkina Faso.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated into today's most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
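Besides the tools named above, OpenML has a Python client (the openml package); the sketch below assumes that package and uses a well-known public dataset id purely as an illustration of pulling shared data for reuse.

```python
# A hedged sketch using the community "openml" Python client; dataset id 61
# (the classic iris set) is only an illustrative public identifier.
import openml

dataset = openml.datasets.get_dataset(61)            # fetch dataset metadata
print(dataset.name, dataset.default_target_attribute)

# Download the data itself together with basic feature metadata.
X, y, categorical_mask, attribute_names = dataset.get_data(
    target=dataset.default_target_attribute
)
print(X.shape, attribute_names[:3])
```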
The USACH research data repository provides a platform for researchers affiliated with the University to share, manage and preserve research data, so that the data are accessible and discoverable to both the internal and the external community, bringing the knowledge generated closer to the public.
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10^10 particles to follow the dark matter distribution in a cubic region 500 h^-1 Mpc on a side, and has a spatial resolution of 5 h^-1 kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10^7 galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and halos, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
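To make the SQL access described above concrete, the sketch below shows the general shape of a galaxy-catalogue query submitted from Python. Everything specific in it, the endpoint URL, the credentials, and the table and column names, is a placeholder rather than the actual Millennium database schema or interface.

```python
# Illustrative only: the URL, credentials and schema names below are
# placeholders, not the real Millennium database interface. The point is the
# shape of a typical SQL query against a simulated galaxy catalogue.
import requests

sql = """
SELECT TOP 10 galaxyId, snapnum, stellarMass, sfr
FROM   Guo2010a..MR              -- placeholder table name
WHERE  snapnum = 63              -- hypothetical z = 0 snapshot
  AND  stellarMass > 1.0
ORDER BY stellarMass DESC
"""

resp = requests.get(
    "https://example.org/millennium/query",   # placeholder endpoint
    params={"action": "doQuery", "SQL": sql},
    auth=("demo_user", "demo_password"),      # placeholder credentials
    timeout=60,
)
print(resp.text[:500])                        # raw (e.g. CSV) query result
```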
RADAR4Chem is a low-threshold and easy-to-use service for the sustainable publication and preservation of research data from all disciplines of chemistry. It offers free publication for any data type and format according to the FAIR principles, independent of the researcher's institutional affiliation. Through persistent identifiers (DOI) and a guaranteed retention period of at least 25 years, the research data remain available, citable and findable in the long term. Currently, the offer is aimed exclusively at researchers in the field of chemistry at publicly funded research institutions and universities in Germany. No contract is required and no data publication fees are charged. The researchers are responsible for the upload, organisation, annotation and curation of the research data, as well as the peer-review process (as an optional step) and finally their publication.
The Marine Geoscience Data System (MGDS) is a trusted data repository that provides free public access to a curated collection of marine geophysical data products and complementary data related to understanding the formation and evolution of the seafloor and sub-seafloor. Developed and operated by domain scientists and technical specialists with deep knowledge about the creation, analysis and scientific interpretation of marine geoscience data, the system makes available a digital library of data files described by a rich curated metadata catalog. MGDS provides tools and services for the discovery and download of data collected throughout the global oceans. Primary data types are geophysical field data including active source seismic data, potential field, bathymetry, sidescan sonar, near-bottom imagery, and other seafloor sensor data, as well as a diverse array of processed data and interpreted data products (e.g. seismic interpretations, microseismicity catalogs, geologic maps and interpretations, photomosaics and visualizations). Our data resources support scientists working broadly on solid earth science problems ranging from mid-ocean ridge, subduction zone and hotspot processes, to geohazards, continental margin evolution, and sediment transport at glaciated and unglaciated margins.
The Town of Canmore Open Data Catalogue is a publicly available data-sharing portal in ArcGIS Online where anyone can access and use data relating to the Town of Canmore, Alberta, Canada. You can browse, search, preview, and download a variety of municipal geospatial datasets.
The NIRD Research Data Archive is a repository that provides long-term storage for research data and is compliant with the Open Archival Information System (OAIS) reference model. The aim of the archive is to provide (public) access to published research data and to promote cross-disciplinary studies. The NIRD Research Data Archive (NIRD Archive) is in full production. The NIRD Archive will operate on a “subject to approval” basis and will accept any type of research data from Norwegian academically funded projects that is no longer considered proprietary.
California Digital Library (CDL) seeks to be a catalyst for deeply collaborative solutions providing a rich, intuitive and seamless environment for publishing, sharing and preserving our scholars’ increasingly diverse outputs, as well as for acquiring and accessing information critical to the University of California’s scholarly enterprise. University of California Curation Center (UC3) is the digital curation program within CDL. The mission of UC3 is to provide transformative preservation, curation, and research data management systems, services, and initiatives that sustain and promote open scholarship.
Note (2020-08-28): this repository is no longer available. The South African Data Archive promotes and facilitates the sharing of research data and related documentation of computerised raw quantitative data of large-scale regional, national and international research projects, mainly in the humanities and social sciences. It makes these datasets available to the research community for further analysis, comparative studies, longitudinal studies, teaching and decision-making purposes.
The GTN-P database is an object-related database open to a diverse range of data. Because of the complexity of the PAGE21 project, the data provided in the GTN-P management system are extremely diverse, ranging from active-layer thickness measurements taken once per year to flux measurements taken every second, and everything in between. The data fall into two broad categories. Quantitative data are all data that can be measured numerically; they comprise all in situ measurements, i.e. permafrost temperatures and active layer thickness (mechanical probing, frost/thaw tubes, soil temperature profiles). Qualitative data (knowledge products) are observations not based on measurements, such as observations on soils, vegetation, relief, etc.
EIDA, an initiative within ORFEUS, is a distributed data centre established to (a) securely archive seismic waveform data and related metadata, gathered by European research infrastructures, and (b) provide transparent access to the archives by the geosciences research communities. EIDA nodes are data centres which collect and archive data from seismic networks deploying broad-band sensors, short period sensors, accelerometers, infrasound sensors and other geophysical instruments. Networks contributing data to EIDA are listed in the ORFEUS EIDA networklist (http://www.orfeus-eu.org/data/eida/networks/). Data from the ORFEUS Data Center (ODC), hosted by KNMI, are available through EIDA. Technically, EIDA is based on an underlying architecture developed by GFZ to provide transparent access to all nodes' data. Data within the distributed archives are accessible via the ArcLink protocol (http://www.seiscomp3.org/wiki/doc/applications/arclink).
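As a concrete illustration of programmatic access, the sketch below fetches EIDA-archived waveforms with ObsPy's FDSN client via the GFZ node. It assumes FDSN web services as the access route (the text above describes the ArcLink protocol), and the network, station and time window are placeholders to replace with ones known to hold data.

```python
# A hedged sketch: requesting waveforms from an EIDA node (GFZ) with ObsPy's
# FDSN client. Network/station/time below are placeholders; substitute a
# station and window you know holds data.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("GFZ")                       # one of several EIDA nodes
t0 = UTCDateTime("2020-01-01T00:00:00")      # placeholder start time

st = client.get_waveforms(network="GE", station="APE", location="*",
                          channel="BHZ", starttime=t0, endtime=t0 + 300)
print(st)                                    # summary of the fetched traces
```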