Search query syntax (an illustrative example follows the list):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
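  As an illustration (not taken from the help text above), the following query combines a phrase with slop 2, an AND over a group that ORs a wildcard term with a plain term, and a NOT exclusion:
  +"nuclear data"~2 +(reactor* | fission) -medical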
Found 42 result(s)
Nuclear Data Services contains atomic, molecular and nuclear data sets for the development and maintenance of nuclear technologies. It includes energy-dependent reaction probabilities (cross sections), the energy and angular distributions of reaction products for many combinations of target and projectile, and the atomic and nuclear properties of excited states and their radioactive decay data. Its main concern is providing the data required to design a modern nuclear reactor for electricity production. Approximately 11.5 million nuclear data points have been measured and compiled into computerized form.
Arachne is the central object database of the German Archaeological Institute (DAI). In 2004 the DAI and the Research Archive for Ancient Sculpture at the University of Cologne (FA) joined forces to support Arachne as a tool for free internet-based research. Arachne's database design uses a model that builds on one of the most basic assumptions one can make about archaeology, classical archaeology or art history: all activities in these areas can most generally be described as contextualizing objects. Arachne tries to avoid the basic mistakes of earlier databases, which limited their object modeling to specific project-oriented aspects and thus created separate containers holding only a small number of objects. All objects inside Arachne share a general part of their object model, to which a more class-specific part is added that describes the specialized properties of a category of material such as architecture or topography. At the level of the general part, a powerful pool of material can be used for general information retrieval, whereas at the level of categories and properties, very specific structures can be displayed.
WHOI is the world's leading non-profit oceanographic research organization. WHOI maintains unparalleled depth and breadth of expertise across a range of oceanographic research areas. The Institution's scientists and engineers work collaboratively within and across six research departments to advance knowledge of the global ocean and its fundamental importance to other planetary systems. At the same time, they train future generations of ocean scientists and address problems that have a direct impact on efforts to understand and manage critical marine resources.
The European Soil Data Centre (ESDAC) is the thematic centre for soil-related data in Europe. Its ambition is to be the single reference point for, and to host, all relevant soil data and information at the European level. It contains a number of resources that are organized and presented in various ways: datasets, services/applications, maps, documents, events, projects and external links.
The African Development Bank Group (AfDB) is committed to supporting statistical development in Africa as a sound basis for designing and managing effective development policies for reducing poverty on the continent. Reliable and timely data is critical to setting goals and targets as well as to evaluating project impact. Reliable data is also the single most convincing way of keeping people informed about what their leaders and institutions are doing, and of getting them involved in the development process, thus giving them a sense of ownership of the entire development process. The AfDB has a large team of researchers who focus on the production of statistical data on economic and social situations. The data produced by the institution’s statistics department constitutes the background information in the Bank’s flagship development publications. Besides its own publications, the AfDB also finances studies in collaboration with its partners. The Statistics Department aims to stand as the primary source of relevant, reliable and timely data on African development processes, starting with the data generated from its current management of the Africa component of the International Comparison Program (ICP-Africa). The Department discharges its responsibilities through two divisions: the Economic and Social Statistics Division (ESTA1) and the Statistical Capacity Building Division (ESTA2).
Note: This repository is no longer available. TRMM is a research satellite designed to improve our understanding of the distribution and variability of precipitation within the tropics as part of the water cycle in the current climate system. By covering the tropical and sub-tropical regions of the Earth, TRMM provides much-needed information on rainfall and its associated heat release that helps to power the global atmospheric circulation that shapes both weather and climate. In coordination with other satellites in NASA's Earth Observing System, TRMM provides important precipitation information using several space-borne instruments to increase our understanding of the interactions between water vapor, clouds, and precipitation that are central to regulating Earth's climate. The TRMM mission ended in 2015 and final TRMM multi-satellite precipitation analyses (TMPA, product 3B42/3B43) data processing will end December 31st, 2019. As a result, this TRMM webpage is in the process of being retired and some TRMM imagery may not be displaying correctly. Some of the content will be moved to the Precipitation Measurement Missions website https://gpm.nasa.gov/ and our team is exploring ways to provide some of the real-time products using GPM data. Please contact us if you have any additional questions.
The International Network of Nuclear Reaction Data Centres (NRDC) constitutes a worldwide cooperation of nuclear data centres under the auspices of the International Atomic Energy Agency. The Network was established to coordinate the worldwide collection, compilation and dissemination of nuclear reaction data.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. The result is one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
Historic Environment Scotland was formed in October 2015 following the merger between Historic Scotland and The Royal Commission on the Ancient and Historical Monuments of Scotland. Historic Environment Scotland is the lead public body established to investigate, care for and promote Scotland’s historic environment. We lead and enable Scotland’s first historic environment strategy, Our Place in Time, which sets out how our historic environment will be managed. It ensures our historic environment is cared for, valued and enhanced, both now and for future generations.
The Institute of Ocean Sciences (IOS)/Ocean Sciences Division (OSD) data archive contains the holdings of oceanographic data generated by the IOS and other agencies and laboratories, including the Institute of Oceanography at the University of British Columbia and the Pacific Biological Station. The contents include data from B.C. coastal waters and inlets, B.C. continental shelf waters, open ocean North Pacific waters, Beaufort Sea and the Arctic Archipelago.
The National Collaborative on Childhood Obesity Research (NCCOR) brings together four of the nation's leading research funders — the Centers for Disease Control and Prevention (CDC), the National Institutes of Health (NIH), the Robert Wood Johnson Foundation (RWJF), and the U.S. Department of Agriculture (USDA) — to address the problem of childhood obesity in America. NCCOR's tools are the Catalogue of Surveillance Systems, the Measures Registry, and the Registry of Studies.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and to organize them for future reuse. Moreover, it will be directly integrated into today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
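The description above names R, KNIME, RapidMiner and WEKA as integration targets; as an additional, purely illustrative sketch of programmatic access, the snippet below uses the openml Python client (an assumption, not mentioned in the description) to fetch one dataset and its metadata.

import openml

# Download dataset 61 (the classic "iris" table on OpenML) together with its metadata.
dataset = openml.datasets.get_dataset(61)

# Extract the feature matrix, target column, categorical flags and attribute names
# declared in the dataset's OpenML metadata.
X, y, categorical, attribute_names = dataset.get_data(
    target=dataset.default_target_attribute
)

print(dataset.name, X.shape, len(attribute_names))

Experiments run on data obtained this way can then be shared back through the platform, which is the reuse loop the description emphasizes.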
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10^10 particles to follow the dark matter distribution in a cubic region 500 h^-1 Mpc on a side, and has a spatial resolution of 5 h^-1 kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10^7 galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and haloes, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
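As a purely illustrative sketch of the kind of SQL access described above (the endpoint URL, request parameters, table name and column names are assumptions for illustration, not values documented here), a query to the small open database could be submitted over HTTP like this:

import requests

# Assumed public endpoint for the small, openly accessible version of the
# Millennium databases; consult the GAVO/Millennium documentation for the real interface.
BASE_URL = "http://gavo.mpa-garching.mpg.de/Millennium"

# Hypothetical SQL: the ten most massive haloes (by particle count np) in the final snapshot.
sql = (
    "SELECT TOP 10 haloId, np, x, y, z "
    "FROM MPAHalo "
    "WHERE snapnum = 63 "
    "ORDER BY np DESC"
)

response = requests.get(BASE_URL, params={"action": "doQuery", "SQL": sql}, timeout=60)
response.raise_for_status()

# The server returns the result table as text, which can also be loaded into
# standard Virtual Observatory tools as noted above.
print(response.text)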
MalaCards is an integrated database of human maladies and their annotations, modeled on the architecture and richness of the popular GeneCards database of human genes. MalaCards mines and merges varied web data sources to generate a computerized web card for each human disease. Each MalaCard contains disease-specific, prioritized annotative information, as well as links between associated diseases, leveraging the GeneCards relational database, search engine, and GeneDecks set-distillation tool. As proofs of concept of the search/distill/infer pipeline, we find expected elucidations as well as potentially novel ones.
This repository is an interactive website offering access to genome sequence data from a variety of vertebrate and invertebrate species and major model organisms, integrated with a large collection of aligned annotations. The Browser is a graphical viewer optimized to support fast interactive performance and is an open-source, web-based tool suite built on top of a MySQL database for rapid visualization, examination, and querying of the data at many levels.