
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate priority (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
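The operators above can also be composed programmatically. The following Python sketch is illustrative only — the helper functions are invented for this example and are not part of any registry API; they simply assemble query strings in the syntax listed above.

```python
# Hypothetical helpers for building query strings in the search syntax above.

def phrase(text: str) -> str:
    """Quote a phrase for exact matching."""
    return f'"{text}"'

def fuzzy(word: str, distance: int = 1) -> str:
    """Append ~N to a word for an edit-distance (fuzziness) search."""
    return f"{word}~{distance}"

def any_of(*terms: str) -> str:
    """OR-combine terms, parenthesised for priority."""
    return "(" + " | ".join(terms) + ")"

def all_of(*terms: str) -> str:
    """AND-combine terms (the default operator)."""
    return " + ".join(terms)

def exclude(term: str) -> str:
    """Prefix a term with - for a NOT operation."""
    return f"-{term}"

query = all_of(any_of("genome*", phrase("genetic variation")), exclude("proteome"))
print(query)  # (genome* | "genetic variation") + -proteome
```

For example, the query above matches records containing either a word starting with "genome" or the exact phrase "genetic variation", while excluding those containing "proteome".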
Found 75 result(s)
The European Genome-phenome Archive (EGA) is designed to be a repository for all types of sequence and genotype experiments, including case-control, population, and family studies. It includes SNP and CNV genotypes from array-based methods as well as genotyping done with re-sequencing methods. The EGA serves as a permanent archive holding several levels of data, from the raw data (which could, for example, be re-analysed in the future with other algorithms) to the genotype calls provided by the submitters. Data mining and access tools for the database are under development. For controlled-access data, the EGA provides the security required to control access and maintain patient confidentiality, while providing access to those researchers and clinicians authorised to view the data. In all cases, data access decisions are made by the appropriate data access-granting organisation (DAO), not by the EGA. The DAO is normally the same organisation that approved and monitored the initial study protocol, or a designate of that approving organisation. The EGA allows you to explore datasets from genomic studies provided by a range of data providers; access to a dataset must be approved by the specified Data Access Committee (DAC).
The 1000 Genomes Project is an international collaboration to produce an extensive public catalog of human genetic variation, including SNPs and structural variants, and their haplotype contexts. This resource will support genome-wide association studies and other medical research studies. The genomes of about 2500 unidentified people from about 25 populations around the world will be sequenced using next-generation sequencing technologies. The results of the study will be freely and publicly accessible to researchers worldwide. The International Genome Sample Resource (IGSR) has been established at EMBL-EBI to continue supporting data generated by the 1000 Genomes Project, supplemented with new data and new analysis.
The ODIN Portal hosts scientific databases in the domains of structural materials and hydrogen research and is operated on behalf of the European energy research community by the Joint Research Centre, the European Commission's in-house science service providing independent scientific advice and support to policies of the European Union. ODIN contains engineering databases (Mat-Database, Hiad-Database, Nesshy-Database, HTR-Fuel-Database, HTR-Graphit-Database) and document management sites and other information related to European research in the area of nuclear and conventional energy.
The UniProt Reference Clusters (UniRef) provide clustered sets of sequences from the UniProt Knowledgebase (including isoforms) and selected UniParc records in order to obtain complete coverage of the sequence space at several resolutions while hiding redundant sequences (but not their descriptions) from view.
STITCH is a database exploring the interactions of chemicals and proteins. It integrates information about interactions from metabolic pathways, crystal structures, binding experiments and drug-target relationships. Information inferred from phenotypic effects, text mining and chemical structure similarity is used to predict relations between chemicals. STITCH further allows exploring the network of chemical relations, also in the context of associated binding proteins.
DEPOD, the human DEPhOsphorylation Database (version 1.1), is a manually curated database collecting human active phosphatases, their experimentally verified protein and non-protein substrates, dephosphorylation site information, and the pathways in which they are involved. It also provides links to popular kinase databases and protein-protein interaction databases for these phosphatases and substrates. DEPOD aims to be a valuable resource for studying human phosphatases and their substrate specificities and molecular mechanisms; for phosphatase-targeted drug discovery and development; for connecting phosphatases with kinases through their common substrates; and for completing the human phosphorylation/dephosphorylation network.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm; frequently requested lines are also kept alive, as well as a selection of wildtype strains. The collection comprises several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab of the Sanger Centre, Hinxton, UK, lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, and transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. Through its association with the ComPlat platform, we can also support chemical screens and offer external users libraries with up to 20,000 compounds in total. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz Repository of Bioparts (HERBI). In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
The main function of the GGSP (Galileo Geodetic Service Provider) is to provide a terrestrial reference frame, in the broadest sense of the word, to both the Galileo Core System (GCS) and the Galileo User Segment (all Galileo users). This implies that the GGSP should enable all users of the Galileo System, including the most demanding ones, to access and realise the GTRF with the precision required for their specific application. Furthermore, the GGSP must ensure proper interfaces to all users of the GTRF, especially the geodetic and scientific user groups, and must ensure that all its products adhere to the defined standards. Last but not least, the GGSP plays a key role in creating awareness of the GTRF and in educating users in its usage and realisation.
IMGT/GENE-DB is the IMGT genome database for IG and TR genes from human, mouse and other vertebrates. IMGT/GENE-DB provides a full characterization of the genes and of their alleles: IMGT gene name and definition, chromosomal localization, number of alleles, and for each allele, the IMGT allele functionality, and the IMGT reference sequences and other sequences from the literature. IMGT/GENE-DB allele reference sequences are available in FASTA format (nucleotide and amino acid sequences with IMGT gaps according to the IMGT unique numbering, or without gaps).
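Since the allele reference sequences are distributed in FASTA format, they can be read with a few lines of code. The sketch below is a minimal, generic FASTA parser; the header and sequence in the example are invented for illustration (real IMGT/GENE-DB headers carry many more fields), and gap positions under the IMGT unique numbering are conventionally written as dots.

```python
# Minimal sketch: parse FASTA text into {header: sequence}.
# The example record is fabricated; it is not a real IMGT allele.

def parse_fasta(text: str) -> dict[str, str]:
    """Map each FASTA header (without the leading '>') to its concatenated sequence."""
    records: dict[str, str] = {}
    header = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            header = line[1:]
            records[header] = ""
        elif header is not None and line:
            records[header] += line
    return records

example = """\
>ExampleAllele*01|example species
QVQLVQSG..AEVKKPG
ASVKVSCKA
"""
print(parse_fasta(example))
```

Note that the parser keeps the dot characters, so sequences with IMGT gaps and ungapped sequences are handled identically.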
IMGT/mAb-DB provides a unique expertised resource on monoclonal antibodies (mAbs) with diagnostic or therapeutic indications, fusion proteins for immune applications (FPIA), composite proteins for clinical applications (CPCA) and relative proteins of the immune system (RPI) with clinical indications.
Presented is information on changes in weather and climate extremes, as well as the daily dataset needed to monitor and analyse these extremes. Today, ECA&D receives data from 59 participants for 62 countries, and the ECA dataset contains 33265 series of observations for 12 elements at 7512 meteorological stations throughout Europe and the Mediterranean (see Daily data > Data dictionary). 51% of these series are public, meaning they can be downloaded from this website for non-commercial research. Participation in ECA&D is open to anyone maintaining daily station data.
Argo is an international programme using autonomous floats to collect temperature, salinity and current data in the ice-free oceans. It is teamed with the Jason ocean satellite series. Argo will soon reach its target of 3000 floats delivering data within 24 hours to researchers and operational centres worldwide. 23 countries contribute floats to Argo, and many others help with float deployments. Argo has revolutionized the collection of information from inside the oceans. The Argo project is organized in regional and national centres, with a Project Office, an Information Center (AIC) and two Global Data Centers (GDACs), one in the United States and one in France. Each DAC regularly submits all its new files to both the USGODAE and Coriolis GDACs. The whole Argo data set is available in real time and delayed mode from the GDACs at https://nrlgodae1.nrlmry.navy.mil/ and http://www.argodatamgt.org
BioDare2 is currently in beta. All new users should try the new service, as we no longer provide training for the classic BioDare. BioDare stands for Biological Data Repository; its main focus is data from circadian experiments. BioDare is an online facility to share, store, analyse and disseminate time-series data, focussing on circadian clock data, with browser and web service interfaces. Toolbox features include an improved, speedier FFT-NLLS routine and ROBuST's Spectrum Resampling tool for analysing rhythmic time-series data.
SureChemOpen is a free resource for researchers who want to search, view and link to patent chemistry. For end-users with professional search and analysis needs, we offer the fully-featured SureChemPro. For enterprise users, SureChemDirect provides all our patent chemistry via an API or a data feed. The SureChem family of products is built upon the Claims® Global Patent Database, a comprehensive international patent collection provided by IFI Claims®. This state of the art database is normalized and curated to provide unprecedented consistency and quality.
Currently the institute has more than 700 collections consisting of (digital) research data, digitized material, archival collections, printed material, handwritten questionnaires, maps and pictures. The focus is on resources relevant for the study of the function, meaning and coherence of cultural expressions, and on resources relevant for the structural, dialectological and sociolinguistic study of language variation within the Dutch language. An overview is available at https://meertens.knaw.nl/en/datasets/
This is the KONECT project, a project in the area of network science with the goal of collecting network datasets, analysing them, and making all analyses available online. KONECT stands for Koblenz Network Collection, as the project has roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as code to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
Data repository of a meteorological experiment conducted in Perdigão, Portugal between December 15, 2016 and June 15, 2017. The Perdigão field project is part of a larger joint US/European multi-year program in Portugal. The project is partially funded by the European Union (EU) ERANET+ to provide the wind energy sector with more detailed resource mapping capabilities in the form of a new digital EU wind atlas. A major goal of the Perdigão field project is to quantify errors of wind resource models against a benchmark dataset collected in complex terrain. The US participation will complement this activity by identifying physical and numerical weaknesses of models and developing new knowledge and methods to overcome such deficiencies.
CARIBIC is an innovative scientific project to study and monitor important chemical and physical processes in the Earth's atmosphere. Detailed and extensive measurements are made during long-distance flights. We deploy an airfreight container with automated scientific apparatus connected to an air and particle (aerosol) inlet underneath the aircraft. Since December 2004 we have used an Airbus A340-600 from Lufthansa.