Filter by: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotation marks can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
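For illustration, the operators above can be combined into a single query string. The following minimal Python sketch builds such a query and URL-encodes it; the search endpoint and the name of the query parameter are assumptions based on the public search page rather than a documented API.

    # Minimal sketch (assumed endpoint and "query" parameter name, not a documented API):
    # build a search query using the operator syntax listed above and URL-encode it.
    from urllib.parse import urlencode

    # "earth tide"   : exact phrase
    # gravimet*      : wildcard (gravimeter, gravimetric, ...)
    # |              : OR;  +  : AND (default);  -  : NOT
    # geodynamic~1   : fuzzy match with edit distance 1
    # ( ... )        : grouping / precedence
    query = '("earth tide" | gravimet*) + geodynamic~1 -ocean'

    url = "https://www.re3data.org/search?" + urlencode({"query": query})
    print(url)

A phrase followed by ~N, e.g. "tidal gravity"~2, would instead allow a slop of two positions between the words of the phrase.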
Found 19 result(s)
Human biomaterial banks (biobanks for short) are collections of human body substances (e.g. blood, DNA, urine or tissue) linked with disease-specific information. This allows research into the relations between diseases and their underlying (molecular) modifications and paves the way for developing targeted therapies ("personalized medicine"). The biobank material comes from samples taken for therapeutic or diagnostic reasons or collected in the context of clinical trials. The patient's approval for use is always required prior to any research activities.
The PAIN Repository is a recently funded NIH initiative, which has two components: an archive for already collected imaging data (Archived Repository), and a repository for structural and functional brain images and metadata acquired prospectively using standardized acquisition parameters (Standardized Repository) in healthy control subjects and patients with different types of chronic pain. The PAIN Repository provides the infrastructure for storage of standardized resting state functional, diffusion tensor imaging and structural brain imaging data and associated biological, physiological and behavioral metadata from multiple scanning sites, and provides tools to facilitate analysis of the resulting comprehensive data sets.
The Vienna Atomic Line Database (VALD) is a collection of atomic and molecular transition parameters of astronomical interest. VALD offers tools for selecting subsets of lines for typical astrophysical applications: line identification, preparing for spectroscopic observations, chemical composition and radial velocity measurements, model atmosphere calculations etc.
In 2003, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) at NIH established Data, Biosample, and Genetic Repositories to increase the impact of current and previously funded NIDDK studies by making their data and biospecimens available to the broader scientific community. These Repositories enable scientists not involved in the original study to test new hypotheses without any new data or biospecimen collection, and they provide the opportunity to pool data across several studies to increase the power of statistical analyses. In addition, most NIDDK-funded studies are collecting genetic biospecimens and carrying out high-throughput genotyping, making it possible for other scientists to use Repository resources to match genotypes to phenotypes and to perform informative genetic analyses.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for the publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The aim of the KCDC project (KASCADE Cosmic Ray Data Centre) is the installation and establishment of a public data centre for high-energy astroparticle physics based on the data of the KASCADE experiment. KASCADE was a very successful large detector array which recorded data for more than 20 years on the site of KIT Campus North, Karlsruhe, Germany (formerly Forschungszentrum Karlsruhe) at 49.1°N, 8.4°E, 110 m a.s.l. Over its lifetime, KASCADE collected more than 1.7 billion events, of which some 433,000,000 survived all quality cuts. Initially, about 160 million events are available here for public use.
The IGETS database at GFZ Potsdam (http://www.re3data.org/repository/r3d100010300) continues the activities of the International Center for Earth Tides (ICET), in particular in collecting, archiving and distributing Earth tide records from long series of gravimeters, tiltmeters, strainmeters and other geodynamic sensors. The ICET Data Bank contains results from 360 tidal gravity stations: hourly values, main tidal waves obtained by least squares analyses, residual vectors, oceanic attraction and loading vectors. The Data Bank also contains data from tiltmeters and extensometers. ICET is responsible for the Information System and Data Center of the Global Geodynamic Project (GGP). The tasks ascribed to ICET are: to collect all available measurements of Earth tides (its task as World Data Centre C); to evaluate these data with suitable methods of analysis in order to reduce the very large amount of measurements to a limited number of parameters containing all the desired geophysical information; to compare the data from different instruments and different stations distributed all over the world, evaluating their precision and accuracy in terms of internal as well as external errors; to help solve the basic problem of calibration and to organize reference stations or build reference calibration devices; to fill gaps in information or data as far as feasible; to build a data bank allowing immediate and easy comparison of Earth tide parameters with different Earth models and other geodetic and geophysical parameters such as geographical position, Bouguer anomaly, crustal thickness and age, heat flow, etc.; and to ensure a broad diffusion of the results and information to all interested laboratories and individual scientists.
WHIP is a database of individual work histories based on Inps administrative archives. The reference population is made up of all the people – Italian and foreign – who have worked in Italy, even if only for a part of their working career. A large representative sample has been extracted from this population: in the standard file the sampling coefficient is about 1:180, for a dynamic population of about 370,000 people (figures will be doubled in the full edition). For each of these people the main episodes of their working careers are observed. The complete list of observations includes: private employee working contracts, atypical contracts, self-employment activities as artisans or traders and some freelance activities, retirement spells, as well as non-working spells in which the individual received social benefits, such as unemployment subsidies or mobility benefits. The workers whose activity is not observed in WHIP are those who worked in the public sector or as freelancers (lawyers or notaries), who have autonomous social security funds. The WHIP section concerning employee contracts is a Linked Employer Employee Database: in addition to the data about the contract, data concerning the firm in which the worker is employed is also available, thanks to a linkage with the Inps Firm Observatory.
A consolidated feed from 35 million instruments provides sophisticated normalized data, streamlining analysis and decisions from front office to operations. With flexible delivery options, including cloud and API, timely and accurate data enables the enterprise to capture opportunities, evaluate risk and ensure compliance in fast-moving markets.
The Cancer Cell Line Encyclopedia project is a collaboration between the Broad Institute and the Novartis Institutes for Biomedical Research and its Genomics Institute of the Novartis Research Foundation to conduct a detailed genetic and pharmacologic characterization of a large panel of human cancer models, to develop integrated computational analyses that link distinct pharmacologic vulnerabilities to genomic patterns, and to translate cell line integrative genomics into cancer patient stratification. The CCLE provides public access to genomic data, analysis and visualization for about 1,000 cell lines.
The FREEBIRD website aims to facilitate data sharing in the area of injury and emergency research in a timely and responsible manner. It has been launched by providing open access to anonymised data on over 30,000 injured patients (the CRASH-1 and CRASH-2 trials).
The FDZ-BO at DIW Berlin is a central archive for quantitative and qualitative operational and organizational data. It archives these data, provides information about their existence, and makes datasets available for secondary analysis. The archiving of studies and datasets ensures the long-term security and availability of the data. In consultation with the responsible scientists, access to individual datasets is made possible as scientific use files, via remote data processing, or as part of guest stays. The FDZ-BO offers detailed information on current research projects and develops concepts for the research data management of organizational data. The study portal (made public in March 2019) provides an overview of existing studies in the field of business and organizational research: content, methodology, information on data and data availability, and information on how to gain access to the data.
The ODIN Portal hosts scientific databases in the domains of structural materials and hydrogen research and is operated on behalf of the European energy research community by the Joint Research Centre, the European Commission's in-house science service providing independent scientific advice and support to policies of the European Union. ODIN contains engineering databases (Mat-Database, Hiad-Database, Nesshy-Database, HTR-Fuel-Database, HTR-Graphit-Database) and document management sites and other information related to European research in the area of nuclear and conventional energy.
The project was set up to improve the infrastructure for text-based linguistic research and development by building a huge, automatically annotated German text corpus and the corresponding tools for corpus annotation and exploitation. DeReKo constitutes the largest linguistically motivated collection of contemporary German texts: it contains fictional, scientific and newspaper texts as well as several other text types, contains only licensed texts, is encoded with rich meta-textual information, is fully annotated morphosyntactically (three concurrent annotations), is continually expanded with a focus on size and stratification of the data, may be analyzed free of charge via the query system COSMAS II, and serves as a 'primordial sample' from which users may draw specialized sub-samples (so-called 'virtual corpora') to represent the language domain they wish to investigate. Access to the data of Das Deutsche Referenzkorpus is also provided by the IDS Repository: https://www.re3data.org/repository/r3d100010382
Fairdata IDA is a service that provides secure storage for research data. The Fairdata services are a group of nationally developed Finnish ICT services for managing research data, especially in the later phases of the research life cycle (sharing, publishing, and preserving). Development of research data management infrastructure has been identified as an important step in enabling implementation of the FAIR principles. The Fairdata services are funded by the Finnish Ministry of Education and Culture, and developed and maintained by CSC IT Center for Science. The services consist of the following components: IDA – Research Data Storage; Etsin – Research Data Finder; Qvain – Research Dataset Metadata Tool; Metax – Metadata Warehouse; AVAA – Dynamic Data Publishing Platform; and the Digital Preservation Service for Research Data (including management and packaging). The services also provide means for applying for and granting permits to use restricted-access datasets. The service is offered free of charge to its users. The services are available to the research community in accordance with the applicable usage policy. Minedu offers access to the research data storage service IDA to Finnish higher education institutions, state research institutes and projects funded by the Academy of Finland. Minedu may also grant separate access or storage capacity to the service. Finnish higher education institutions and research institutes may distribute IDA storage capacity to actors within the Finnish research system, within the limits of their usage shares. The service is intended for storing research data and related materials. The data stored in the service is available to all project users. The users mark their data to be persistently stored (“Frozen”) in the service. All project members may make the “Frozen” data and related metadata publicly accessible by using the other aforementioned Fairdata services. The data in the service is stored in Finland. The IDA service stores data deposited by organisations' projects continuously, or until it is transferred to digital preservation, provided that the Terms of Use are met. The owners of the data decide on the openness and usage policies for their own data. User organisations are offered support and guidance on using the service.
The COSYNA observatory measures key physical, sedimentary, geochemical and biological parameters at high temporal resolution in the water column and at the sediment and atmospheric boundaries. COSYNA provides spatial coverage through a set of fixed and moving platforms, such as tidal flat poles, FerryBoxes, gliders, ship surveys, towed devices and remote sensing. New technologies such as underwater nodes, benthic landers and automated sensors for water biogeochemical parameters are being further developed and tested. A great variety of parameters is measured, processed, stored, analyzed, assimilated into models and visualized.