Search tips (see the examples below):
  • * at the end of a keyword allows wildcard searches
  • " quotation marks can be used to search for phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (how far apart the words may be)
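A few illustrative queries (hypothetical examples added for clarity, not drawn from the registry itself):
  • climat* matches climate, climatic, and climatology
  • "research data" matches that exact phrase
  • (ocean | marine) -biology matches entries containing ocean or marine but not biology
  • archiv~1 also matches archive (edit distance 1)
  • "data repository"~2 matches the phrase with up to two words in between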
Found 29 results.
nmrshiftdb is an NMR web database for organic structures and their nuclear magnetic resonance (NMR) spectra. It allows for spectrum prediction (13C, 1H, and other nuclei) as well as for searching spectra, structures, and other properties. Last but not least, it features peer-reviewed submission of datasets by its users. The nmrshiftdb2 software is open source, and the data is published under an open-content license. Please consult the documentation for more detailed information. nmrshiftdb2 is the continuation of the NMRShiftDB project, with additional data, bug fixes, and changes to the software.
The Infrared Space Observatory (ISO) is designed to provide detailed infrared properties of selected Galactic and extragalactic sources. The sensitivity of the telescopic system is about one thousand times greater than that of the Infrared Astronomical Satellite (IRAS), since the ISO telescope can integrate infrared flux from a source for several hours. Density waves in the interstellar medium and their role in star formation, the giant planets, asteroids, and comets of the solar system are among the objects of investigation. ISO was operated as an observatory, with the majority of its observing time distributed to the general astronomical community. One consequence of this is that the data set is not homogeneous, as would be expected from a survey. The observational data underwent sophisticated data processing, including validation and accuracy analysis. In total, the ISO Data Archive contains about 30,000 standard observations; 120,000 parallel, serendipity, and calibration observations; and 17,000 engineering measurements. In addition to the observational data products, the archive also contains satellite data, documentation, historical data, and externally derived products, for a total of more than 400 GB stored on magnetic disks. The ISO Data Archive was continually improved in both content and functionality throughout the Active Archive Phase, which ended in December 2006.
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository are a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia, and web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full texts are indexed linguistically, and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific and scholarly texts, texts from everyday life, and literary works. The digitisation was based on the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice ('double keying'). To represent the structure of the text, the electronic full text was encoded in conformity with the XML standard TEI P5. Subsequent stages complete the linguistic analysis: the text is tokenised, lemmatised, and annotated for parts of speech. The DTA thus presents a linguistically analysed historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA corpus, it also offers valuable source texts for neighbouring disciplines in the humanities, and for scientists, legal scholars, and economists.
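For illustration, a minimal sketch of what a TEI P5-encoded, linguistically annotated text can look like (a generic fragment with hypothetical content, not taken from the DTA itself; actual element and attribute choices vary by project):

  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <teiHeader>
      <fileDesc>
        <titleStmt><title>Example work, first edition</title></titleStmt>
        <publicationStmt><p>Example publication statement</p></publicationStmt>
        <sourceDesc><p>Digitised from the first printed edition</p></sourceDesc>
      </fileDesc>
    </teiHeader>
    <text>
      <body>
        <p>
          <!-- each token carries its lemma and part-of-speech tag -->
          <w lemma="das" pos="ART">Das</w>
          <w lemma="Haus" pos="NN">Haus</w>
        </p>
      </body>
    </text>
  </TEI>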
As part of the Copernicus Space Component programme, ESA manages coordinated access to the data procured from the various Contributing Missions and the Sentinels, in response to the requirements of Copernicus users. The Data Access Portfolio documents the data offer and the access rights per user category. The CSCDA portal is the access point to all data, including Sentinel missions, for Copernicus Core Users as defined in the EU Copernicus Programme Regulation (e.g. Copernicus Services). The Copernicus Space Component (CSC) Data Access system is the interface for accessing the Earth Observation products from the Copernicus Space Component. The system's overall space capacity relies on several EO missions contributing to Copernicus, and it is continuously evolving, with new missions becoming available over time and others ending and/or being replaced.
MEMENTO aims to become a valuable tool for identifying regions of the world ocean that should be targeted in future work to improve the quality of air-sea flux estimates.
The CLARIN-D Centre CEDIFOR provides a repository for long-term storage of resources and metadata. Resources hosted in the repository stem from the research of members as well as from associated research projects of CEDIFOR. These include software and web services, as well as text corpora, lexicons, images, and other data.
The IWH Research Data Centre provides external scientists with data for non-commercial research. It has been accredited by the German Data Forum (RatSWD).
The International Ocean Discovery Program (IODP) is an international marine research collaboration that explores Earth's history and dynamics using ocean-going research platforms to recover data recorded in seafloor sediments and rocks and to monitor subseafloor environments. IODP depends on facilities funded by three platform providers, with financial contributions from five additional partner agencies. Together, these entities represent 26 nations whose scientists are selected to staff IODP research expeditions conducted throughout the world's oceans. IODP expeditions are developed from hypothesis-driven science proposals aligned with the program's science plan, Illuminating Earth's Past, Present, and Future. The science plan identifies 14 challenge questions in the four areas of climate change, deep life, planetary dynamics, and geohazards. Until 2013, the program operated under the name Integrated Ocean Drilling Program.
In collaboration with other centres in the Text+ consortium and in the CLARIN infrastructure, CLARIND-UdS enables eHumanities by providing a service for hosting and processing language resources (notably corpora) for members of the research community. The CLARIND-UdS centre thus contributes to overcoming the fragmentation of language resources by assisting members of the research community in preparing language materials in such a way that easy discovery is ensured, interchange is facilitated, and preservation is enabled, by enriching such materials with meta-information, transforming them into sustainable formats, and hosting them. We have an explicit mission to archive language resources, especially multilingual corpora (parallel, comparable) and corpora including specific registers, collected both by associated researchers and by researchers who are not affiliated with us.
The Bavarian Archive for Speech Signals (BAS) is a public institution hosted by the University of Munich. This institution was founded with the aim of making corpora of current spoken German available to both the basic research and the speech technology communities via a maximally comprehensive digital speech-signal database. The speech material will be structured in a manner allowing flexible and precise access, with acoustic-phonetic and linguistic-phonetic evaluation forming an integral part of it.
The European Social Survey (ESS) is a biennial multi-country survey covering over 30 nations. The first round was fielded in 2002/2003, the fifth in 2010/2011. The questionnaire includes two main sections, each consisting of approximately 120 items: a 'core' module, which remains relatively constant from round to round, plus two or more 'rotating' modules, repeated at intervals. The core module aims to monitor change and continuity in a wide range of social variables, including media use; social and public trust; political interest and participation; socio-political orientations; governance and efficacy; moral, political, and social values; social exclusion; national, ethnic, and religious allegiances; well-being, health, and security; human values; demographics; and socio-economics.
The FAIRDOMHub is built upon the SEEK software suite, an open-source web platform for sharing scientific research assets, processes, and outcomes. FAIRDOM will establish a support and service network for European systems biology. It will serve projects in standardizing, managing, and disseminating data and models in a FAIR manner: Findable, Accessible, Interoperable, and Reusable. FAIRDOM is an initiative to develop a community and to establish an internationally sustained data and model management service for the European systems biology community. FAIRDOM is a joint action of the ERA-Net EraSysAPP and the European Research Infrastructure ISBE.
DEPOD, the human DEPhOsphorylation Database (version 1.1), is a manually curated database collecting active human phosphatases, their experimentally verified protein and non-protein substrates, dephosphorylation site information, and the pathways in which they are involved. It also provides links to popular kinase databases and protein-protein interaction databases for these phosphatases and substrates. DEPOD aims to be a valuable resource for studying human phosphatases and their substrate specificities and molecular mechanisms; for phosphatase-targeted drug discovery and development; for connecting phosphatases with kinases through their common substrates; and for completing the human phosphorylation/dephosphorylation network.
The repository of the Hamburg Centre for Speech Corpora (HZSK) is used for archiving, maintaining, distributing, and developing spoken-language corpora. These usually consist of audio and/or video recordings, transcriptions, other data, and structured metadata. The corpora focus on multilingualism and are generally freely available for research and teaching. Most of the corpora maintained by the HZSK were created between 2000 and 2011 within the framework of the SFB 538 "Multilingualism" at the University of Hamburg. The HZSK also strives to take on linguistic data from other projects or contexts and to make them available to the scientific community for research and teaching, provided they are compatible with the current focus of the HZSK, i.e. especially spoken language and multilingualism.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm. Frequently requested lines are also kept alive, as are a selection of wildtype strains. The collection includes several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab at the Sanger Centre (Hinxton, UK), lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, and transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. Through its association with the ComPlat platform, the EZRC can also support chemical screens and offers libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz Repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
The focus of PolMine is on texts published by public institutions in Germany. Corpora of parliamentary protocols are at the heart of the project: Parliamentary proceedings are available for long stretches of time, cover a broad set of public policies and are in the public domain, making them a valuable text resource for political science. The project develops repositories of textual data in a sustainable fashion to suit the research needs of political science. Concerning data, the focus is on converting text issued by public institutions into a sustainable digital format (TEI/XML).
The DARIAH-DE Repository is a digital long-term archive for research data from the humanities and cultural sciences. Each object described and stored in the DARIAH-DE Repository has a unique and lasting persistent identifier (DOI), with which it is permanently referenced, cited, and kept available for the long term. In addition, the DARIAH-DE Repository enables the sustainable and secure archiving of data collections. The DARIAH-DE Repository is open not only to research projects associated with DARIAH-DE, but also to individual researchers and research projects that want to archive their research data persistently, citably, and for the long term, and make it available to third parties. The main focus is simple, user-oriented access to long-term storage of research data. To ensure its long-term sustainability, the DARIAH-DE Repository is operated by the Humanities Data Centre.
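As a brief illustration of how such persistent identifiers behave in practice, a DOI is dereferenced through the global resolver at https://doi.org/, which redirects to the landing page currently registered for the object. A minimal sketch in Python; the DOI shown is a hypothetical placeholder, not a real DARIAH-DE identifier:

  import requests

  # Resolve a DOI via the doi.org resolver; the resolver redirects to the
  # landing page currently registered for the identifier.
  # "10.1234/example" is a hypothetical placeholder DOI.
  response = requests.get("https://doi.org/10.1234/example",
                          allow_redirects=True, timeout=10)
  print(response.url)  # wherever the DOI currently points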
The repository is part of the National Research Data Infrastructure initiative Text+, in which the University of Tübingen is a partner. It is housed at the Department of General and Computational Linguistics. The infrastructure is maintained in close cooperation with the Digital Humanities Centre, a core facility of the university that collaborates with the university's library and computing centre. Integration of the repository into the national CLARIN-D and international CLARIN infrastructures gives it wide exposure, increasing the likelihood that the resources will be used and further developed beyond the lifetime of the projects in which they were created. Among the resources currently available in the Tübingen Center Repository, researchers can find widely used treebanks of German (e.g. TüBa-D/Z), the German wordnet (GermaNet), the first manually annotated digital treebank (Index Thomisticus), as well as descriptions of the tools used by the WebLicht ecosystem for natural language processing.
The Satellite Application Facility on Climate Monitoring (CM SAF) develops, produces, archives, and disseminates satellite-based data products in support of climate monitoring. The product suite mainly covers parameters related to the energy and water cycle and addresses many of the Essential Climate Variables as defined by GCOS (GCOS 138). The CM SAF produces both Environmental Data Records and Climate Data Records.
This is the KONECT project, a project in the area of network science whose goal is to collect network datasets, analyse them, and make all analyses available online. KONECT stands for Koblenz Network Collection; the project has its roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as the code used to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed, and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks, and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots, and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
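To give a concrete sense of the kind of per-dataset statistics KONECT reports, here is a minimal sketch in Python using the third-party networkx library (the file name and the '%'-prefixed comment convention for the edge list are assumptions; KONECT's own toolbox targets GNU Octave):

  import networkx as nx

  # Load an undirected network from a whitespace-separated edge list.
  # Assumed format: one "source target" pair per line; lines starting
  # with '%' are treated as comments.
  G = nx.read_edgelist("network.tsv", comments="%")

  # A few statistics of the kind KONECT computes for each dataset:
  print("nodes:", G.number_of_nodes())
  print("edges:", G.number_of_edges())
  print("average clustering coefficient:", nx.average_clustering(G))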