Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
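The operators above follow a Lucene-style simple query syntax. The short Python snippet below pairs a few made-up example queries with what each operator does; the terms themselves are illustrative, not actual index fields.

```python
# Illustrative query strings for the search syntax above.
# The terms are made-up examples, not actual index fields or subjects.
examples = {
    "climat*": "wildcard: matches climate, climatology, ...",
    '"research data"': "phrase search with quotes",
    "corpus + linguistics": "AND search (also the default)",
    "proteomics | genomics": "OR search",
    "data - commercial": "NOT: results with 'data' but without 'commercial'",
    "(climate | weather) + extremes": "parentheses set grouping priority",
    "metadta~1": "fuzzy match within edit distance 1",
    '"data repository"~2': "phrase search allowing a slop of 2",
}

for query, meaning in examples.items():
    print(f"{query:35s} -> {meaning}")
```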
Found 19 result(s)
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository are a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia and web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and of the NFDI consortium Text+, and is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
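REST web services like the ones mentioned above can typically be queried with a plain HTTP client. The sketch below is purely illustrative: the base URL, endpoint path and corpus identifier are placeholders, not documented parts of the Leipzig services, and should be replaced with values from the repository's own API documentation.

```python
import json
import urllib.request

# Hypothetical endpoint layout; consult the repository's API documentation
# for the real base URL, paths and corpus identifiers.
BASE_URL = "https://example.org/lcc-api"   # placeholder, not the real service
CORPUS = "deu_news_2023"                   # placeholder corpus name
WORD = "Wissenschaft"

url = f"{BASE_URL}/words/{CORPUS}/word/{WORD}"

with urllib.request.urlopen(url) as response:   # plain GET request
    data = json.loads(response.read().decode("utf-8"))

print(data)
```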
The European Bioinformatics Institute (EBI) has a long-standing mission to collect, organise and make available databases for biomolecular science. It makes available a collection of databases along with tools to search, download and analyse their content. These databases include DNA and protein sequences and structures, genome annotation, gene expression information, molecular interactions and pathways. Connected to these are linking and descriptive data resources such as protein motifs, ontologies and many others. In many of these efforts, the EBI is a European node in global data-sharing agreements involving, for example, the USA and Japan.
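Many of these EBI resources can be searched programmatically. As a minimal sketch, the snippet below queries the EBI Search REST service; the domain name ("uniprot"), parameters and response fields are assumptions to be checked against the current EBI Search documentation.

```python
import json
import urllib.parse
import urllib.request

# Sketch of a query against the EBI Search REST service.
# Domain name, parameters and response fields are assumptions; verify them
# against the current EBI Search documentation before relying on them.
BASE = "https://www.ebi.ac.uk/ebisearch/ws/rest"
domain = "uniprot"
params = urllib.parse.urlencode({"query": "p53", "format": "json", "size": 5})

with urllib.request.urlopen(f"{BASE}/{domain}?{params}") as resp:
    hits = json.loads(resp.read().decode("utf-8"))

for entry in hits.get("entries", []):   # assumed response layout
    print(entry.get("id"))
```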
Copernicus is a European system for monitoring the Earth. Copernicus consists of a complex set of systems which collect data from multiple sources: earth observation satellites and in situ sensors such as ground stations, airborne and sea-borne sensors. It processes these data and provides users with reliable and up-to-date information through a set of services related to environmental and security issues. The services address six thematic areas: land monitoring, marine monitoring, atmosphere monitoring, climate change, emergency management and security. The main users of Copernicus services are policymakers and public authorities who need the information to develop environmental legislation and policies or to take critical decisions in the event of an emergency, such as a natural disaster or a humanitarian crisis. Based on the Copernicus services and on the data collected through the Sentinels and the contributing missions, many value-added services can be tailored to specific public or commercial needs, resulting in new business opportunities. In fact, several economic studies have already demonstrated a huge potential for job creation, innovation and growth.
The RADAR service offers the ability to search descriptions of research data from the Natural Resources Institute Finland (Luke). The service includes descriptions of research data for the agriculture, forestry and food sectors, game management, fisheries and the environment. The public web service aims to make it easier to discover the subjects of natural resources studies. In addition to Luke's research data descriptions, one can search the metadata of the Finnish Environment Institute (SYKE): an interface between the Luke and SYKE metadata services combines Luke's research data descriptions and SYKE's descriptions of spatial datasets and data systems into a unified search service.
As part of the Copernicus Space Component programme, ESA manages coordinated access to the data procured from the various Contributing Missions and the Sentinels, in response to Copernicus users' requirements. The Data Access Portfolio documents the data offer and the access rights per user category. The CSCDA portal is the access point to all data, including the Sentinel missions, for Copernicus Core Users as defined in the EU Copernicus Programme Regulation (e.g. the Copernicus Services). The Copernicus Space Component (CSC) Data Access system is the interface for accessing Earth Observation products from the Copernicus Space Component. The system's overall space capacity relies on several EO missions contributing to Copernicus, and it is continuously evolving, with new missions becoming available over time and others ending or being replaced.
MODES focuses on the representation of the inertio-gravity circulation in numerical weather prediction models, reanalyses, ensemble prediction systems and climate simulations. The project methodology relies on the decomposition of the global circulation in terms of 3D orthogonal normal-mode functions, which allows quantification of the role of inertio-gravity waves in atmospheric variability across the whole spectrum of resolved spatial and temporal scales. MODES is compiled using gfortran, although other compilers have been successfully tested. The application requires the netCDF and (optionally) GRIB-API libraries.
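As a rough sketch of that methodology, the normal-mode decomposition can be written along the following lines; the notation assumes the general Hough-harmonic formalism and may differ in detail from the MODES documentation.

```latex
% Assumed general form of the 3D normal-mode expansion (not copied from the
% MODES documentation): the state vector of horizontal wind and modified
% geopotential height is projected onto vertical structure functions G_m
% and Hough harmonics H^m_{nk}.
\begin{equation}
  \begin{pmatrix} u \\ v \\ h \end{pmatrix}(\lambda,\varphi,\sigma,t)
  \;\approx\;
  \sum_{m=1}^{M} G_m(\sigma)\,\mathbf{S}_m
  \sum_{n}\sum_{k} \chi^{m}_{nk}(t)\,
  \mathbf{H}^{m}_{nk}(\lambda,\varphi)
\end{equation}
```

Here S_m is a diagonal scaling matrix, the Hough harmonics separate into balanced (Rossby) and unbalanced (inertio-gravity) modes, and the inertio-gravity contribution to variability is obtained by restricting the sums to the inertio-gravity subset of the expansion coefficients.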
The Extreme Light Infrastructure (ELI) is the world's most advanced laser-based research infrastructure. The ELI Facilities provide access to a broad range of world-class high-power, high repetition-rate laser systems and secondary sources. This enables cutting-edge research and new regimes of high intensity physics in physical, chemical, medical, and materials sciences.
The ProteomeXchange consortium has been set up to provide a single point of submission of MS proteomics data to the main existing proteomics repositories, and to encourage data exchange between them for optimal data dissemination. Current members accepting submissions are the PRIDE PRoteomics IDEntifications database at the European Bioinformatics Institute, focusing mainly on shotgun mass spectrometry proteomics data, and PeptideAtlas/PASSEL, focusing on SRM/MRM datasets.
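Submitted datasets receive ProteomeXchange accessions and their public records can be retrieved programmatically. The sketch below fetches one project record from the PRIDE Archive REST API; the exact path, API version and response fields are assumptions and should be verified against the current PRIDE documentation.

```python
import json
import urllib.request

# Hedged sketch: fetch the public record of a ProteomeXchange dataset from
# the PRIDE Archive REST API. The path, API version and response fields are
# assumptions; verify against the current PRIDE documentation.
ACCESSION = "PXD000001"   # example ProteomeXchange accession
url = f"https://www.ebi.ac.uk/pride/ws/archive/v2/projects/{ACCESSION}"

with urllib.request.urlopen(url) as resp:
    project = json.loads(resp.read().decode("utf-8"))

print(project.get("accession"), "-", project.get("title"))
```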
The IWH Research Data Centre provides external researchers with data for non-commercial research. The research data centre of the IWH has been accredited by the German Data Forum (RatSWD).
The repository of the Hamburg Centre for Speech Corpora (HZSK) is used for archiving, maintaining, distributing and developing spoken language corpora. These usually consist of audio and/or video recordings, transcriptions, other data and structured metadata. The corpora focus on multilingualism and are generally freely available for research and teaching. Most of the corpora maintained by the HZSK were created in the years 2000-2011 within the framework of the SFB 538 "Multilingualism" at the University of Hamburg. The HZSK, however, also strives to take in linguistic data from other projects or contexts and to make them available to the scientific community for research and teaching, provided they are compatible with the current focus of the HZSK, i.e. especially spoken language and multilingualism.
Presented is information on changes in weather and climate extremes, as well as the daily dataset needed to monitor and analyse these extremes. Today, ECA&D receives data from 59 participants for 62 countries, and the ECA dataset contains 33265 series of observations for 12 elements at 7512 meteorological stations throughout Europe and the Mediterranean (see Daily data > Data dictionary). 51% of these series are public, which means they can be downloaded from this website for non-commercial research. Participation in ECA&D is open to anyone maintaining daily station data.
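The public series are distributed as plain-text files with a descriptive header followed by daily records. The parser below is a minimal sketch: the column names (SOUID, DATE, TG, Q_TG), the 0.1 °C scaling and the -9999 missing-value flag are assumptions based on the commonly documented layout of ECA&D blended series files and should be checked against the header of an actual download.

```python
import csv

# Hedged sketch of parsing a downloaded ECA&D daily mean-temperature series.
# Column names, 0.1 degC scaling and -9999 missing flag are assumptions;
# check the header of your own download.
def read_daily_series(path):
    with open(path, encoding="utf-8") as fh:
        lines = [line.strip() for line in fh]
    # Data start at the line containing the column names.
    start = next(i for i, line in enumerate(lines) if line.startswith("SOUID"))
    reader = csv.DictReader(lines[start:], skipinitialspace=True)
    records = []
    for row in reader:
        value = int(row["TG"])
        if value == -9999:                     # missing value flag
            continue
        records.append((row["DATE"], value / 10.0))   # tenths of degC -> degC
    return records

# Example: series = read_daily_series("TG_STAID000001.txt")
```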
The University of Oxford Text Archive develops, collects, catalogues and preserves electronic literary and linguistic resources for use in Higher Education, in research, teaching and learning. We also give advice on the creation and use of these resources, and are involved in the development of standards and infrastructure for electronic language resources.
The repository is no longer available (2018-11-20). COMPASS used to be provided at FORS but is no longer supported.
The DARECLIMED data repository consists of three kinds of data: (a) climate, (b) water resources, and (c) energy-related data. The first part, the climate datasets, will include atmospheric and indirect atmospheric data, proxies and reconstructions, and terrestrial and oceanic data. Land use, population, economy and development data will be added as well. Datasets can be handled and analysed by connecting to the Live Access Server (LAS), which enables users to visualize data with on-the-fly graphics, request custom subsets of variables in a choice of file formats, access background reference material about the data (metadata), and compare (difference) variables from distributed locations. Access to the server is granted upon request by emailing the data repository manager.
The ODIN Portal hosts scientific databases in the domains of structural materials and hydrogen research and is operated on behalf of the European energy research community by the Joint Research Centre, the European Commission's in-house science service providing independent scientific advice and support to policies of the European Union. ODIN contains engineering databases (Mat-Database, Hiad-Database, Nesshy-Database, HTR-Fuel-Database, HTR-Graphit-Database) and document management sites and other information related to European research in the area of nuclear and conventional energy.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm; frequently requested lines are also kept alive, as well as a selection of wildtype strains. The collection includes several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab of the Sanger Centre, Hinxton, UK, lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, and transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. Through its association with the ComPlat platform, we can also support chemical screens and offer libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz Repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
The main function of the GGSP (Galileo Geodetic Service Provider) is to provide a terrestrial reference frame, in the broadest sense of the word, to both the Galileo Core System (GCS) and the Galileo User Segment (all Galileo users). This implies that the GGSP should enable all users of the Galileo System, including the most demanding ones, to access and realise the GTRF with the precision required for their specific application. Furthermore, the GGSP must ensure proper interfaces to all users of the GTRF, especially the geodetic and scientific user groups, and must ensure that all its products adhere to the defined standards. Last but not least, the GGSP will play a key role in creating awareness of the GTRF and educating users in its use and realisation.
The DNA Bank Network was established in spring 2007 and was funded until 2011 by the German Research Foundation (DFG). The network was initiated by GBIF Germany (Global Biodiversity Information Facility), and its concept is unique worldwide: the DNA bank databases of all partners are linked and accessible via a central web portal, providing DNA samples from complementary collections (microorganisms, protists, plants, algae, fungi and animals). The DNA Bank Network was one of the founders of the Global Genome Biodiversity Network (GGBN) and is fully merged with GGBN today; GGBN agreed to use the data model proposed by the DNA Bank Network. The Botanic Garden and Botanical Museum Berlin-Dahlem (BGBM) hosts the technical secretariat of GGBN and its virtual infrastructure. The main focus of the DNA Bank Network is to enhance taxonomic, systematic, genetic, conservation and evolutionary studies by providing:
  • high-quality, long-term storage of DNA material on which molecular studies have been performed, so that results can be verified, extended and complemented;
  • complete online documentation of each sample, including the provenance of the original material, the place of voucher deposit, information about DNA quality and extraction methodology, digital images of vouchers, and links to published molecular data where available.