The repository search supports the following query syntax (example queries follow the list):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
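To illustrate how these operators combine, the snippet below simply lists a few example query strings; the search terms are made up for demonstration and are not part of the registry's documented examples.

```python
# Example query strings illustrating the search operators listed above.
# The terms themselves are invented for demonstration purposes only.
queries = [
    'climat*',                       # wildcard: matches climate, climatology, ...
    '"machine learning"',            # phrase search
    '+hurricane +satellite',         # AND (the default operator)
    'genome | proteome',             # OR
    'network -social',               # NOT: networks, excluding social networks
    '(rainfall | drought) +africa',  # parentheses group sub-expressions
    'alignment~1',                   # fuzzy match within edit distance 1
    '"sparse matrix"~2',             # phrase search allowing a slop of 2
]

for q in queries:
    print(q)
```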
Found 21 result(s)
The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms. It is used by students, educators, and researchers all over the world as a primary source of machine learning data sets. As an indication of the impact of the archive, it has been cited over 1000 times.
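As a quick sketch of how UCI datasets are typically consumed, the snippet below loads the classic Iris dataset into pandas; the URL and column names are assumptions based on the dataset's conventional layout and may need adjusting for other datasets.

```python
# Minimal sketch: load a UCI dataset (Iris) into a pandas DataFrame.
# The URL and the column names are assumptions; adjust them per dataset.
import pandas as pd

URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
columns = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]

iris = pd.read_csv(URL, header=None, names=columns)
print(iris.head())
print(iris["species"].value_counts())
```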
CiteSeerx is an evolving scientific literature digital library and search engine that focuses primarily on the literature in computer and information science. CiteSeerx aims to improve the dissemination of scientific literature and to provide improvements in functionality, usability, availability, cost, comprehensiveness, efficiency, and timeliness in the access of scientific and scholarly knowledge. Rather than creating just another digital library, CiteSeerx attempts to provide resources such as algorithms, data, metadata, services, techniques, and software that can be used to promote other digital libraries. CiteSeerx has developed new methods and algorithms to index PostScript and PDF research articles on the Web.
The SuiteSparse Matrix Collection is a large and actively growing set of sparse matrices that arise in real applications. The Collection is widely used by the numerical linear algebra community for the development and performance evaluation of sparse matrix algorithms. It allows for robust and repeatable experiments. Its matrices cover a wide spectrum of domains, including those arising from problems with underlying 2D or 3D geometry (such as structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, power networks, and other networks and graphs).
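Matrices in the collection are commonly available in Matrix Market (.mtx) format; a minimal sketch for inspecting one with SciPy, assuming the file has already been downloaded locally (the filename is a placeholder), might look like this.

```python
# Minimal sketch: read a sparse matrix in Matrix Market format with SciPy.
# "bcsstk01.mtx" is a placeholder path for a (square) matrix downloaded
# from the collection.
from scipy.io import mmread

A = mmread("bcsstk01.mtx").tocsr()   # loaded as COO, converted to CSR
print(f"shape={A.shape}, nonzeros={A.nnz}")
print("symmetric:", (abs(A - A.T) > 1e-12).nnz == 0)
```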
The JPL Tropical Cyclone Information System (TCIS) was developed to support hurricane research. There are three components to TCIS: a global archive of multi-satellite hurricane observations from 1999-2010 (the Tropical Cyclone Data Archive), the North Atlantic Hurricane Watch, and the NASA Convective Processes Experiment (CPEX) aircraft campaign. Together, data and visualizations from the real-time system and data archive can be used to study hurricane processes, validate and improve models, and assist in developing new algorithms and data assimilation techniques.
WorldData.AI comes with a built-in workspace – the next-generation hyper-computing platform powered by a library of 3.3 billion curated external trends. WorldData.AI allows you to save your models in its “My Models Trained” section. You can make your models public and share them on social media with interesting images, model features, summary statistics, and feature comparisons. Empower others to leverage your models. For example, if you have discovered a previously unknown impact of interest rates on new-housing demand, you may want to share it through “My Models Trained.” Upload your data and combine it with external trends to build, train, and deploy predictive models with one click! WorldData.AI inspects your raw data, applies feature processors, chooses the best set of algorithms, trains and tunes multiple models, and then ranks model performance.
CSDMS is a virtual home for a vibrant and growing community of about 1,000 international modeling experts and students who study the dynamic interactions of lithosphere, hydrosphere, cryosphere, and atmosphere at Earth’s surface. Participating in cross-disciplinary groups, members develop integrated software modules that predict the movement of water, sediment, and nutrients across landscapes and into the ocean. We share an open library of models, software, and access to high-performance computing. We also share knowledge that helps create higher-resolution simulations, often involving higher complexity algorithms. Together, we support the discovery, use, and conservation of natural resources; mitigation of natural hazards; geotechnical support of commercial and infrastructure development; environmental stewardship; and terrestrial surveillance for global security.
To understand the global surface energy budget is to understand climate. Because it is impractical to cover the earth with monitoring stations, the answer to global coverage lies in reliable satellite-based estimates. Efforts are underway at NASA and universities to develop algorithms to do this, but such projects are in their infancy. In concert with these ambitious efforts, accurate and precise ground-based measurements in differing climatic regions are essential to refine and verify the satellite-based estimates, as well as to support specialized research. To fill this niche, the Surface Radiation Budget Network (SURFRAD) was established in 1993 through the support of NOAA's Office of Global Programs.
Using a combination of remote sensing data and ground observations as inputs, CHG scientists have developed rainfall and other models that reliably predict crop performance in parts of the world vulnerable to crop failure. Policy makers within governments and at non-governmental organizations rely on CHG decision-support products for making critical resource allocation decisions. The CHG's scientific focus is "geospatial hydroclimatology", with an emphasis on the early detection and forecasting of hydroclimatic hazards, such as droughts and floods, that threaten food security. Basic research seeks an improved understanding of the climatic processes that govern drought and flood hazards in FEWS.NET countries. We develop better techniques, algorithms, and modeling applications to use remote sensing and other geospatial data for hazard early warning.
A collection of high-quality multiple sequence alignments for objective, comparative studies of alignment algorithms. The alignments are constructed based on 3D structure superposition and manually refined to ensure alignment of important functional residues. A number of subsets are defined covering many of the most important problems encountered when aligning real sets of proteins. It is specifically designed to serve as an evaluation resource to address all the problems encountered when aligning complete sequences. The first release provided sets of reference alignments dealing with the problems of high variability, unequal repartition, and large N/C-terminal extensions and internal insertions. Version 2.0 of the database incorporates three new reference sets of alignments containing structural repeats, trans-membrane sequences and circular permutations to evaluate the accuracy of detection/prediction and alignment of these complex sequences. Within the resource, users can browse a list of all the alignments, download the whole database via FTP, obtain the "C" program for comparing a test alignment with the BAliBASE reference (the source code is freely available), or look at the results of a comparison study of several multiple alignment programs using the BAliBASE reference sets.
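As a sketch of how a reference alignment might be inspected programmatically, the snippet below uses Biopython; it assumes the alignment has been downloaded and is available in FASTA format (the filename is a placeholder, and BAliBASE's native formats may require conversion first).

```python
# Minimal sketch: inspect a multiple sequence alignment with Biopython.
# "BB11001.fasta" is a placeholder filename for a downloaded reference
# alignment converted to FASTA.
from Bio import AlignIO

alignment = AlignIO.read("BB11001.fasta", "fasta")
print(f"{len(alignment)} sequences, alignment length "
      f"{alignment.get_alignment_length()}")
for record in alignment:
    print(record.id, str(record.seq)[:60])  # first 60 columns of each row
```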
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated in today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
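As a sketch of the kind of programmatic reuse OpenML enables, the snippet below fetches a dataset through the openml Python package; the dataset ID is an arbitrary example, and the exact return values may differ between package versions.

```python
# Minimal sketch: fetch a dataset from OpenML with the openml Python package.
# Dataset ID 61 (Iris) is used purely as an example.
import openml

dataset = openml.datasets.get_dataset(61)
X, y, categorical, names = dataset.get_data(
    target=dataset.default_target_attribute
)
print(dataset.name, X.shape)
```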
KONECT (the Koblenz Network Collection) is a project to collect large network datasets of all types in order to perform research in network science and related fields, collected by the Institute of Web Science and Technologies at the University of Koblenz–Landau. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
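Since downloadable KONECT datasets are essentially plain-text edge lists, a minimal sketch for loading one with NetworkX could look as follows; the filename is a placeholder, and the '%' comment character reflects the header lines typically found in KONECT files (an assumption worth checking per dataset).

```python
# Minimal sketch: load a KONECT-style edge list with NetworkX.
# "out.network" is a placeholder filename; extra columns such as weights
# or timestamps are ignored here (data=False).
import networkx as nx

G = nx.read_edgelist("out.network", comments="%", nodetype=int, data=False)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
print("average degree:", 2 * G.number_of_edges() / G.number_of_nodes())
```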
EartH2Observe brings together the findings from the European FP projects DEWFORA, GLOWASIS, WATCH, GEOWOW and others. It will integrate available global earth observations (EO), in-situ datasets and models, and will construct a global water resources re-analysis dataset of significant length (several decades). The resulting data will allow for improved insights on the full extent of available water and existing pressures on global water resources in all parts of the water cycle. The project will support efficient and globally consistent water management and decision making by providing comprehensive multi-scale (regional, continental and global) water resources observations. It will test new EO data sources, extend existing processing algorithms and combine data from multiple satellite missions in order to improve the overall resolution and reliability of EO data included in the re-analysis dataset. The resulting datasets will be made available through an open Water Cycle Integrator data portal: the European contribution to the GEOSS/WCI approach. The datasets will be downscaled for application in case studies at regional and local levels, and optimized based on identified European and local needs supporting water management and decision making.
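Re-analysis fields of this kind are typically distributed as NetCDF; a minimal sketch for exploring such a file with xarray, assuming it has already been downloaded from the portal (the filename and variable name are placeholders), could be:

```python
# Minimal sketch: open a NetCDF re-analysis file with xarray.
# "e2o_runoff_monthly.nc" and the variable name "Runoff" are placeholders.
import xarray as xr

ds = xr.open_dataset("e2o_runoff_monthly.nc")
print(ds)                           # dimensions, coordinates, variables
runoff = ds["Runoff"]
print(runoff.mean(dim="time"))      # long-term mean field
```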
The ASTER Project consists of two parts, each having a Japanese and a U.S. component. Mission operations are split between Japan Space Systems (J-spacesystems) and the Jet Propulsion Laboratory (JPL) in the U.S. J-spacesystems oversees monitoring instrument performance and health, developing the daily schedule command sequence, processing Level 0 data to Level 1, and providing higher-level data processing, archiving, and distribution. The JPL ASTER project provides scheduling support for U.S. investigators, calibration and validation of the instrument and data products, coordination of the U.S. Science Team, and maintenance of the science algorithms. The joint Japan/U.S. ASTER Science Team has about 40 scientists and researchers. Data access is available via NASA Reverb, the ASTER Japan site, EarthExplorer, GloVis, GDEx and the LP DAAC. In addition, data are available through the newly implemented ASTER Volcano Archive (AVA).
CBS offers comprehensive public databases of DNA and protein sequences, macromolecular structures, gene and protein expression levels, pathway organization and cell signalling, which have been established to optimise scientific exploitation of the explosion of data within biology. Unlike many other groups in the field of biomolecular informatics, the Center for Biological Sequence Analysis directs its research primarily towards topics related to the elucidation of the functional aspects of complex biological mechanisms. Among contemporary bioinformatics concerns are the reliable computational interpretation of a wide range of experimental data, and the detailed understanding of the molecular apparatus behind cellular mechanisms of sequence information. By exploiting available experimental data and evidence in the design of algorithms, sequence correlations and other features of biological significance can be inferred. In addition to the computational research, the center also has experimental efforts in gene expression analysis using DNA chips and in data generation relating to the physical and structural properties of DNA. In the last decade, the Center for Biological Sequence Analysis has produced a large number of computational methods, which are offered to others via WWW servers.
ChemSpider is a free chemical structure database providing fast access to over 58 million structures, properties and associated information. By integrating and linking compounds from more than 400 data sources, ChemSpider enables researchers to discover the most comprehensive view of freely available chemical data from a single online search. It is owned by the Royal Society of Chemistry. ChemSpider builds on the collected sources by adding additional properties, related information and links back to original data sources. ChemSpider offers text and structure searching to find compounds of interest and provides unique services to improve this data by curation and annotation and to integrate it with users’ applications.
The European Genome-phenome Archive (EGA) is designed to be a repository for all types of sequence and genotype experiments, including case-control, population, and family studies. We will include SNP and CNV genotypes from array-based methods and genotyping done with re-sequencing methods. The EGA will serve as a permanent archive that will store several levels of data, including the raw data (which could, for example, be re-analysed in the future by other algorithms) as well as the genotype calls provided by the submitters. We are developing data mining and access tools for the database. For controlled access data, the EGA will provide the necessary security required to control access, and maintain patient confidentiality, while providing access to those researchers and clinicians authorised to view the data. In all cases, data access decisions will be made by the appropriate data access-granting organisation (DAO) and not by the EGA. The DAO will normally be the same organisation that approved and monitored the initial study protocol or a designate of this approving organisation. The European Genome-phenome Archive (EGA) allows you to explore datasets from genomic studies, provided by a range of data providers. Access to datasets must be approved by the specified Data Access Committee (DAC).
This project is an open invitation to anyone and everyone to participate in a decentralized effort to explore the opportunities of open science in neuroimaging. We aim to document how much (scientific) value can be generated from a data release: from the publication of scientific findings derived from this dataset, algorithms and methods evaluated on this dataset, and/or extensions of this dataset by acquisition and incorporation of new data. The project involves the processing of acoustic stimuli. In this study, the scientists presented an audio description of the classic film "Forrest Gump" to subjects, while functional magnetic resonance imaging (fMRI) captured the brain activity of the test candidates during the processing of language, music, emotions, memories and pictorial representations. In collaboration with various labs in Magdeburg we acquired and published what is probably the most comprehensive sample of brain activation patterns of natural language processing. Volunteers listened to a two-hour audio movie version of the Hollywood feature film "Forrest Gump" in a 7T MRI scanner. High-resolution brain activation patterns and physiological measurements were recorded continuously. These data have been placed into the public domain, and are freely available to the scientific community and the general public.
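The released images are standard NIfTI files, so a minimal sketch for loading one volume with nibabel, assuming a file has been downloaded (the BIDS-style filename below is only a placeholder), might be:

```python
# Minimal sketch: load a NIfTI fMRI volume with nibabel.
# The filename is a placeholder for a downloaded studyforrest image.
import nibabel as nib

img = nib.load("sub-01_task-forrestgump_run-01_bold.nii.gz")
data = img.get_fdata()                  # 4D array: x, y, z, time
print("shape:", data.shape)
print("voxel sizes (and TR):", img.header.get_zooms())
```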
!!! We will terminate the ASTER Products Distribution Service in March 2016, although we have been providing ASTER products since November 20, 2000. !!! ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) is a high-efficiency optical imager that covers a wide spectral region from the visible to the thermal infrared with 14 spectral bands. ASTER acquires data that can be used in various fields of earth science. ASTER was launched from Vandenberg Air Force Base in California, USA in 1999 aboard Terra, the first satellite of the EOS Project. The purpose of the ASTER project is to contribute to extending the understanding of local and regional phenomena on the Earth's surface and in its atmosphere. ASTER-related information is provided on the ASTER instrument, the ASTER Ground Data System, ASTER Science Activities, ASTER Data Distribution and so on. ASTER Search provides services to search and order ASTER data products on the website.
OASIS-3 is the latest release in the Open Access Series of Imaging Studies (OASIS), which aims at making neuroimaging datasets freely available to the scientific community. By compiling and freely distributing this multi-modal dataset, we hope to facilitate future discoveries in basic and clinical neuroscience. Previously released data for OASIS-Cross-sectional (Marcus et al, 2007) and OASIS-Longitudinal (Marcus et al, 2010) have been utilized for hypothesis-driven data analyses, development of neuroanatomical atlases, and development of segmentation algorithms. OASIS-3 is a longitudinal neuroimaging, clinical, cognitive, and biomarker dataset for normal aging and Alzheimer's disease. The hosted OASIS datasets provide the community with open access to a significant database of neuroimaging and processed imaging data across a broad demographic, cognitive, and genetic spectrum, and an easily accessible platform for use in neuroimaging, clinical, and cognitive research on normal aging and cognitive decline.
The main goal of the ECCAD project is to provide scientific and policy users with datasets of surface emissions of atmospheric compounds, and ancillary data, i.e. data required to estimate or quantify surface emissions. The supply of ancillary data - such as maps of population density, maps of fire spots, burnt areas, and land cover - could help improve and encourage the development of new emissions datasets. ECCAD offers:
  • Access to global and regional emission inventories and ancillary data, in a standardized format
  • Quick visualization of emission and ancillary data
  • Rationalization of the use of input data in algorithms or emission models
  • Analysis and comparison of emissions datasets and ancillary data
  • Tools for the evaluation of emissions and ancillary data
ECCAD is a dynamic and interactive database, providing the most up-to-date datasets including data used within ongoing projects. Users are welcome to add their own datasets, or have their regional masks included in order to use ECCAD tools.