Filter
  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

Search syntax (illustrated in the sketch below):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
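As an illustration of the search syntax above, here is a minimal Python sketch that composes one example query string per operator. The search terms themselves are hypothetical and not taken from the registry.

```python
# Illustrative only: example query strings using the operators listed above.
# The search terms are made up; combine operators as needed for real queries.
queries = [
    "climat*",                           # * wildcard: matches climate, climatology, ...
    '"ocean drilling"',                  # quotes: exact phrase search
    "seismic + waveform",                # +: AND (also the default between terms)
    "plankton | zooplankton",            # |: OR
    "fish - fossil",                     # -: NOT
    "(marine | freshwater) + plankton",  # parentheses: grouping / precedence
    "hydrologie~2",                      # ~N after a word: edit distance (fuzziness)
    '"data repository"~3',               # ~N after a phrase: slop amount
]

for q in queries:
    print(q)
```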
The STITCH database explores the interactions of chemicals and proteins. It integrates information about interactions from metabolic pathways, crystal structures, binding experiments and drug-target relationships. Information inferred from phenotypic effects, text mining and chemical structure similarity is used to predict relations between chemicals. STITCH further allows exploring the network of chemical relations, also in the context of the associated binding proteins.
The Heritage Centre represents the four keepers of the historical collections of the municipality of Zutphen: Archeology, Monuments, Museum Zutphen, and the Regional Archive Zutphen (which also covers the municipalities of Brummen and Lochem). The portal is intended to be the online gateway to the municipal heritage of Zutphen and lets you search all of these collections at once.
EIDA, an initiative within ORFEUS, is a distributed data centre established to (a) securely archive seismic waveform data and related metadata, gathered by European research infrastructures, and (b) provide transparent access to the archives by the geosciences research communities. EIDA nodes are data centres which collect and archive data from seismic networks deploying broad-band sensors, short period sensors, accelerometers, infrasound sensors and other geophysical instruments. Networks contributing data to EIDA are listed in the ORFEUS EIDA networklist (http://www.orfeus-eu.org/data/eida/networks/). Data from the ORFEUS Data Center (ODC), hosted by KNMI, are available through EIDA. Technically, EIDA is based on an underlying architecture developed by GFZ to provide transparent access to all nodes' data. Data within the distributed archives are accessible via the ArcLink protocol (http://www.seiscomp3.org/wiki/doc/applications/arclink).
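The paragraph above describes programmatic access via the ArcLink protocol. As a hedged illustration of retrieving waveforms from an EIDA node, the sketch below instead uses ObsPy's FDSN client, under the assumption that the node also exposes standard FDSN web services; the node key "GFZ" and the station GE.APE..BHZ are example values, not details given in the text.

```python
# Hypothetical sketch: fetch a short waveform snippet from an EIDA node.
# Uses ObsPy's FDSN client (an assumption; the text describes ArcLink access).
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("GFZ")                      # one EIDA node; others have their own endpoints
t0 = UTCDateTime("2020-01-01T00:00:00")
stream = client.get_waveforms(network="GE", station="APE", location="*",
                              channel="BHZ", starttime=t0, endtime=t0 + 600)
print(stream)                               # one Trace per matching channel/time window
```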
HydroShare is a system operated by The Consortium of Universities for the Advancement of Hydrologic Science Inc. (CUAHSI) that enables users to share and publish data and models in a variety of flexible formats, and to make this information available in a citable, shareable and discoverable manner. HydroShare includes a repository for data and models, and tools (web apps) that can act on content in HydroShare providing users with a gateway to high performance computing and computing in the cloud. With HydroShare you can: share data and models with colleagues; manage access to shared content; share, access, visualize, and manipulate a broad set of hydrologic data types and models; publish data and models and obtain a citable digital object identifier (DOI); aggregate resources into collections; discover and access data and models published by others; use the web services application programming interface (API) to programmatically access resources; and use integrated web applications to visualize, analyze and run models with data in HydroShare.
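As a sketch of the programmatic access mentioned above, the snippet below queries HydroShare's public REST API for resources matching a free-text term. The /hsapi/resource/ endpoint path and its parameters are assumptions based on HydroShare's documented API, not details given in the text.

```python
# Hedged sketch: full-text search of HydroShare resources via its REST API.
# Endpoint path and parameter names are assumptions; adjust to the current API docs.
import requests

BASE = "https://www.hydroshare.org/hsapi"
resp = requests.get(f"{BASE}/resource/",
                    params={"full_text_search": "streamflow"}, timeout=30)
resp.raise_for_status()
for res in resp.json().get("results", []):
    print(res.get("resource_id"), "-", res.get("resource_title"))
```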
The World Data Center for Remote Sensing of the Atmosphere, WDC-RSAT, offers scientists and the general public free access (in the sense of a “one-stop shop”) to a continuously growing collection of atmosphere-related satellite-based data sets (ranging from raw to value-added data), information products and services. The focus is on atmospheric trace gases, aerosols, dynamics, radiation, and cloud physical parameters. Complementary information and data on surface parameters (e.g. vegetation index, surface temperatures) are also provided. This is achieved either by giving access to data stored at the data center or by acting as a portal containing links to other providers.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm; frequently requested lines are also kept alive, as well as a selection of wildtype strains. The collection includes several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab of the Sanger Centre (Hinxton, UK), lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, and transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. By association with the ComPlat platform, we can also support chemical screens and offer libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz Repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
In the framework of the Collaborative Research Centre/Transregio 32 ‘Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation’ (CRC/TR32, www.tr32.de), funded by the German Research Foundation from 2007 to 2018, a research data management (RDM) system was designed and implemented in-house. The so-called CRC/TR32 project database (TR32DB, www.tr32db.de) has been operating online since early 2008. The TR32DB handles all data, including metadata, created by the participating project members from several institutions (e.g. the Universities of Cologne, Bonn and Aachen, and the Research Centre Jülich) and research fields (e.g. soil and plant sciences, hydrology, geography, geophysics, meteorology, remote sensing). The data result from several field measurement campaigns, meteorological monitoring, remote sensing, laboratory studies and modelling approaches. Furthermore, outcomes of the scientists, such as publications, conference contributions, PhD reports and corresponding images, are collected in the TR32DB.
FishBase is a global species database and encyclopedia of over 30,000 species and subspecies of fishes that is searchable by common name, genus, species, geography, family, ecosystem, references, literature, tools, etc. It links to other related databases such as the Catalog of Fishes, GenBank, and LarvalBase, and is associated with a partner journal, Acta Ichthyologica et Piscatoria. Mirror sites are available in English, German, French, Spanish, Portuguese, Swedish, Chinese and Arabic.
Phaidra (Permanent Hosting, Archiving and Indexing of Digital Resources and Assets) is the University of Padua Library System’s platform for long-term archiving of digital collections. Phaidra hosts various types of digital objects (antiquarian books, manuscripts, photographs, wallcharts, maps, learning objects, films, archive material and museum objects). Phaidra offers a search facility to identify specific objects, and each object can be viewed, downloaded, used and reused to the extent permitted by law and by its associated licences. The objects in the digital collections on the Phaidra platform are sourced from libraries (in large part due to the digitisation projects promoted by the Library System itself), museums and archives at the University of Padua and other institutions, including the Ca’ Foscari University and the Università Iuav in Venice.
The ISRCTN registry is a primary clinical trial registry recognised by WHO and ICMJE that accepts all clinical research studies (whether proposed, ongoing or completed), providing content validation and curation and the unique identification number necessary for publication. All study records in the database are freely accessible and searchable. ISRCTN supports transparency in clinical research, helps reduce selective reporting of results and ensures an unbiased and complete evidence base. ISRCTN accepts all studies involving human subjects or populations with outcome measures assessing effects on human health and well-being, including studies in healthcare, social care, education, workplace safety and economic development.
The ArcGIS Living Atlas of the World is a unique collection of worldwide geographic information. It contains maps, apps and data layers that support you in your work. Coronavirus resources: https://coronavirus-resources.esri.com/
To help flatten the COVID-19 curve, public health systems need better information on whether preventive measures are working and how the virus may spread. Facebook Data for Good offers maps on population movement that researchers and nonprofits are already using to understand the coronavirus crisis, using aggregated data to protect people’s privacy.
The Scientific Data Repository Hosting Service (SARDC) intends to provide a platform for free access to data created and used in the research work of national institutions. It provides a repository platform (DSpace) and support for the entire data maintenance component, such as backups, monitoring, updating and security, thus freeing researchers from these tasks. Finally, the SARDC service intends to make the data deposited in the repository available through the RCAAP Portal.
Dataverse UNIMI is the institutional data repository of the University of Milan. The service aims to facilitate data discovery, data sharing, and reuse, as required by funding institutions (e.g. the European Commission). Datasets published in the archive have a set of metadata that ensures proper description and discoverability.
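Since Dataverse installations share a common API, datasets could be discovered with a search like the one sketched below. This is a hedged illustration: the /api/search endpoint belongs to the standard Dataverse API, while the hostname dataverse.unimi.it is an assumption about where the UNIMI installation is hosted.

```python
# Hedged sketch: query a Dataverse installation's Search API for datasets.
# The hostname is an assumption; /api/search is part of the standard Dataverse API.
import requests

BASE = "https://dataverse.unimi.it"  # assumed hostname for Dataverse UNIMI
resp = requests.get(f"{BASE}/api/search",
                    params={"q": "*", "type": "dataset", "per_page": 10}, timeout=30)
resp.raise_for_status()
for item in resp.json()["data"]["items"]:
    print(item.get("global_id"), "-", item.get("name"))
```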
The PLANKTON*NET data provider at the Alfred Wegener Institute for Polar and Marine Research is an open access repository for plankton-related information. It covers all types of phytoplankton and zooplankton from marine and freshwater areas. PLANKTON*NET's greatest strength is its comprehensiveness: for the different taxa, image information as well as taxonomic descriptions can be archived. PLANKTON*NET also contains a glossary with accompanying images to illustrate the term definitions. PLANKTON*NET therefore presents a vital tool for the preservation of historic data sets as well as the archival of current research results. Because interoperability with international biodiversity data providers (e.g. GBIF) is one of our aims, the architecture behind the new planktonnet@awi repository is observation-centric and allows for multiple assignment of assets (images, references, animations, etc.) to any given observation. In addition, images can be grouped in sets and/or assigned tags to satisfy user-specific needs. Sets (and the respective images) of relevance to the scientific community and/or the general public have been assigned a persistent digital object identifier (DOI) for the purpose of long-term preservation (e.g. the set "Plankton*Net celebrates 50 years of Roman Treaties", handle: 10013/de.awi.planktonnet.set.495).
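The set handle quoted above can be resolved programmatically through the global Handle System proxy. The sketch below is a minimal illustration using the hdl.handle.net REST interface; the JSON field names follow that proxy's API and are not described in the text.

```python
# Minimal sketch: resolve the PLANKTON*NET set handle via the Handle System proxy.
# Field names follow the hdl.handle.net REST API; treat them as assumptions.
import requests

handle = "10013/de.awi.planktonnet.set.495"   # handle quoted in the description above
resp = requests.get(f"https://hdl.handle.net/api/handles/{handle}", timeout=30)
resp.raise_for_status()
for value in resp.json().get("values", []):
    print(value.get("type"), "->", value.get("data", {}).get("value"))
```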
<<<!!!<<< This repository is no longer available. >>>!!!>>> This page is no longer active; please use www.marine-data.de instead. Our data portal data.awi.de offers an integrative one-stop-shop framework for discovering AWI research platforms, including devices and sensors, tracklines, field reports, peer-reviewed publications, GIS products and, most importantly, data and data products archived in PANGAEA.
The Bremen Core Repository (BCR) for International Ocean Discovery Program (IODP), Integrated Ocean Drilling Program (IODP), Ocean Drilling Program (ODP), and Deep Sea Drilling Project (DSDP) cores from the Atlantic Ocean, the Mediterranean and Black Seas and the Arctic Ocean is operated at the University of Bremen within the framework of the German participation in IODP. It is one of three IODP repositories (besides the Gulf Coast Repository (GCR) in College Station, TX, and the Kochi Core Center (KCC), Japan). One of the scientific goals of IODP is to research the deep biosphere and the subseafloor ocean. IODP has deep-frozen microbiological samples from the subseafloor available for interested researchers and will continue to collect and preserve geomicrobiology samples for future research.
The mission of the World Data Center for Climate (WDCC) is to provide central support for the German and European climate research community. The WDCC is a member of the ISC's World Data System. Emphasis is on the development and implementation of best-practice methods for Earth System data management. Data for and from climate research are collected, stored and disseminated. The WDCC is restricted to data products. It cooperates with thematically related data centres in, e.g., earth observation, meteorology, oceanography, paleoclimate and environmental sciences. The services of the WDCC are also available to external users at cost price. A special service for the direct integration of research data in scientific publications has been developed. The editorial process at the WDCC ensures the quality of metadata and research data in collaboration with the data producers. A citation code and a digital identifier (DOI) are provided and registered together with citation information at the DOI registration agency DataCite.
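Because WDCC registers its DOIs with DataCite, citation metadata for a published dataset can be retrieved from the DataCite REST API. The sketch below uses a placeholder DOI, since no concrete identifier is given in the text.

```python
# Sketch: look up citation metadata for a WDCC-registered DOI via the DataCite REST API.
# The DOI below is a hypothetical placeholder; substitute a real WDCC dataset DOI.
import requests

doi = "10.1594/WDCC/EXAMPLE"  # placeholder, not a real dataset identifier
resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
if resp.ok:
    attrs = resp.json()["data"]["attributes"]
    print(attrs.get("titles"), attrs.get("publicationYear"), attrs.get("url"))
else:
    print("DOI lookup failed:", resp.status_code)
```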
The Department of Energy Systems Biology Knowledgebase (KBase) is a software and data platform designed to meet the grand challenge of systems biology: predicting and designing biological function. KBase integrates data and tools in a unified graphical interface so users do not need to access them from numerous sources or learn multiple systems in order to create and run sophisticated systems biology workflows. Users can perform large-scale analyses and combine multiple lines of evidence to model plant and microbial physiology and community dynamics. KBase is the first large-scale bioinformatics system that enables users to upload their own data, analyze it (along with collaborator and public data), build increasingly realistic models, and share and publish their workflows and conclusions. KBase aims to provide a knowledgebase: an integrated environment where knowledge and insights are created and multiplied.