
The registry search supports the following query syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (how far apart the phrase's words may appear)
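For example, under this syntax the (hypothetical) query

  +(ocean* | "marine data") -software climat~1

matches records that contain a word starting with "ocean" or the exact phrase "marine data", excludes records mentioning "software", and also matches one-edit variants of "climat" such as "climate".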
The National Archives and Records Administration (NARA) is the nation's record keeper. Of all documents and materials created in the course of business conducted by the United States Federal government, only 1%-3% are so important for legal or historical reasons that they are kept by us forever. Those valuable records are preserved and are available to you, whether you want to see if they contain clues about your family’s history, need to prove a veteran’s military service, or are researching an historical topic that interests you.
NASA’s Precipitation Measurement Missions – TRMM and GPM – provide advanced information on rain and snow characteristics and detailed three-dimensional knowledge of precipitation structure within the atmosphere, which help scientists study and understand Earth's water cycle, weather and climate.
GeoPortal.rlp allows the central search and visualization of geodata. Within the geodata infrastructure of Rhineland-Palatinate, GeoPortal.rlp performs the central role of a service-oriented broker between users and providers of geodata, providing access to geodata over the electronic network. GeoPortal.rlp first went online on January 8, 2007; a site relaunch followed on February 2, 2011.
As the national oceanographic data centre for Canada, MEDS maintains centralized repositories of some oceanographic data types collected in Canada, coordinates data exchanges between DFO and recognized intergovernmental organizations, and acts as a central point for oceanographic data requests. Real-time, near real-time (for operational oceanography) or historical data are made available as appropriate.
The European Bioinformatics Institute (EBI) has a long-standing mission to collect, organise and make available databases for biomolecular science. It makes available a collection of databases along with tools to search, download and analyse their content. These databases include DNA and protein sequences and structures, genome annotation, gene expression information, molecular interactions and pathways. Connected to these are linking and descriptive data resources such as protein motifs, ontologies and many others. In many of these efforts, the EBI is a European node in global data-sharing agreements involving, for example, the USA and Japan.
The Index to Marine and Lacustrine Geological Samples is a tool to help scientists locate and obtain geologic material from sea floor and lakebed cores, grabs, and dredges archived by participating institutions around the world. Data and images related to the samples are prepared and contributed by the institutions for access via the IMLGS and long-term archive at NGDC. Before proposing research on any sample, please contact the curator for sample condition and availability. A consortium of curators has guided the IMLGS since 1977; NGDC maintains the index on behalf of the group.
Copernicus is a European system for monitoring the Earth. Copernicus consists of a complex set of systems which collect data from multiple sources: earth observation satellites and in situ sensors such as ground stations, airborne and sea-borne sensors. It processes these data and provides users with reliable and up-to-date information through a set of services related to environmental and security issues. The services address six thematic areas: land monitoring, marine monitoring, atmosphere monitoring, climate change, emergency management and security. The main users of Copernicus services are policymakers and public authorities who need the information to develop environmental legislation and policies or to take critical decisions in the event of an emergency, such as a natural disaster or a humanitarian crisis. Based on the Copernicus services and on the data collected through the Sentinels and the contributing missions, many value-added services can be tailored to specific public or commercial needs, resulting in new business opportunities. In fact, several economic studies have already demonstrated a huge potential for job creation, innovation and growth.
The Norwegian Marine Data Centre (NMD) at the Institute of Marine Research was established as a national data centre dedicated to the professional processing and long-term storage of marine environmental and fisheries data and the production of data products. The Institute of Marine Research continuously collects large amounts of data from all Norwegian seas. Data are collected using vessels, observation buoys, manual measurements, and gliders, among other platforms. NMD maintains the largest collection of marine environmental and fisheries data in Norway.
The National Sleep Research Resource (NSRR) is an NHLBI-supported repository for sharing large amounts of sleep data (polysomnography, actigraphy and questionnaire-based) from multiple cohorts, clinical trials, and other data sources. Launched in April 2014, the mission of the NSRR is to advance sleep and circadian science by supporting secondary data analysis, algorithmic development, and signal processing through the sharing of high-quality data sets.
By stimulating inspiring research and producing innovative tools, Huygens ING intends to open up old and inaccessible sources and to understand them better. Its research focuses on History, Literary Studies, the History of Science, Textual Scholarship, and Digital Humanities. Huygens ING aims to publish digital sources and data responsibly and with care, makes innovative tools as widely available as possible, and strives to share the knowledge available at the institute with both academic peers and the wider public.
Note: this repository is no longer available and this record is outdated. GEON was an open collaborative project that developed cyberinfrastructure for the integration of 3- and 4-dimensional earth science data. GEON developed services for data integration and model integration, and associated model execution and visualization. The Mid-Atlantic test bed focused on tectonothermal, paleogeographic, and biotic history from the late Proterozoic to mid-Paleozoic; the Rockies test bed focused on integrating data with dynamic models to better understand deformation history. GEON developed the most comprehensive regional datasets in the test bed areas.
TemperateReefBase is a resource for temperate reef researchers worldwide to use and contribute data. Unique in its role as a one-stop-shop for global temperate reef data, TemperateReefBase was initially established by IMAS in collaboration with the Kelp Ecology Ecosystem Network (KEEN). KEEN was instigated through a National Centre for Ecological Analysis and Synthesis (NCEAS) working group which assembled experts from around the world to examine the impacts of global change on kelp-bed ecosystems worldwide. The group has assembled significant global data for kelps, other seaweeds and associated species including fishes, and has embarked on unprecedented global experiments and surveys in which identical experiments and surveys are conducted at sites in kelp beds around the world to determine global trends and examine the capacity of kelps to respond to disturbance in the face of climate change and other anthropogenic stressors. The TemperateReefBase Data Portal is an online discovery interface showcasing temperate reef data collected from around the globe. The portal aims to make these data freely and openly available for the benefit of marine and environmental science as a whole. The TemperateReefBase Data Portal is hosted and maintained by the Institute for Marine and Antarctic Studies at the University of Tasmania, Australia.
Addgene archives and distributes plasmids for researchers around the globe. They are working with thousands of laboratories to assemble a high-quality library of published plasmids for use in research and discovery. By linking plasmids with articles, scientists can always find data related to the materials they request.
The Minnesota Population Center (MPC) is a University-wide interdisciplinary cooperative for demographic research. The MPC serves more than 80 faculty members and research scientists from eight colleges and institutes at the University of Minnesota. As a leading developer and disseminator of demographic data, we also serve a broader audience of some 50,000 demographic researchers worldwide. MPC is a DataONE member node: https://search.dataone.org/#profile/US_MPC
PDBe is the European resource for the collection, organisation and dissemination of data on biological macromolecular structures. In collaboration with the other worldwide Protein Data Bank (wwPDB) partners - the Research Collaboratory for Structural Bioinformatics (RCSB) and BioMagResBank (BMRB) in the USA and the Protein Data Bank of Japan (PDBj) - we work to collate, maintain and provide access to the global repository of macromolecular structure data. We develop tools, services and resources to make structure-related data more accessible to the biomedical community.
Europeana is the trusted source of cultural heritage brought to you by the Europeana Foundation and a large number of European cultural institutions, projects and partners. It is a real piece of teamwork. Ideas and inspiration can be found within the millions of items on Europeana. These objects include:
  • Images - paintings, drawings, maps, photos and pictures of museum objects
  • Texts - books, newspapers, letters, diaries and archival papers
  • Sounds - music and spoken word from cylinders, tapes, discs and radio broadcasts
  • Videos - films, newsreels and TV broadcasts
All texts are CC BY-SA; images and media are licensed individually.
This hub supports the geospatial modeling, data analysis and visualization needs of the broad research and education communities through the hosting of groups, datasets, tools, training materials, and educational content.
The African Development Bank Group (AfDB) is committed to supporting statistical development in Africa as a sound basis for designing and managing effective development policies for reducing poverty on the continent. Reliable and timely data are critical to setting goals and targets and to evaluating project impact. Reliable data are also the single most convincing way of keeping people informed about what their leaders and institutions are doing, and of involving them in the development process, giving them a sense of ownership of that process. The AfDB has a large team of researchers who focus on the production of statistical data on economic and social conditions. The data produced by the institution’s statistics department constitute the background information in the Bank’s flagship development publications. Besides its own publications, the AfDB also finances studies in collaboration with its partners. The Statistics Department aims to be the primary source of relevant, reliable and timely data on African development processes, starting with the data generated from its current management of the Africa component of the International Comparison Program (ICP-Africa). The Department discharges its responsibilities through two divisions: the Economic and Social Statistics Division (ESTA1) and the Statistical Capacity Building Division (ESTA2).
Note: this repository is no longer available. TRMM was a research satellite designed to improve our understanding of the distribution and variability of precipitation within the tropics as part of the water cycle in the current climate system. By covering the tropical and sub-tropical regions of the Earth, TRMM provided much-needed information on rainfall and its associated heat release, which helps to power the global atmospheric circulation that shapes both weather and climate. In coordination with other satellites in NASA's Earth Observing System, TRMM provided important precipitation information using several space-borne instruments to increase our understanding of the interactions between water vapor, clouds, and precipitation that are central to regulating Earth's climate. The TRMM mission ended in 2015, and processing of the final TRMM multi-satellite precipitation analyses (TMPA, products 3B42/3B43) ended on December 31, 2019. As a result, the TRMM webpage is being retired and some TRMM imagery may not display correctly. Some of the content will be moved to the Precipitation Measurement Missions website https://gpm.nasa.gov/, and the team is exploring ways to provide some of the real-time products using GPM data. Please contact us if you have any additional questions.
The International Network of Nuclear Reaction Data Centres (NRDC) constitutes a worldwide cooperation of nuclear data centres under the auspices of the International Atomic Energy Agency. The Network was established to coordinate the world-wide collection, compilation and dissemination of nuclear reaction data.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated in today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10^10 particles to follow the dark matter distribution in a cubic region 500 h^-1 Mpc on a side, and has a spatial resolution of 5 h^-1 kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10^7 galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures, for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and haloes, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
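As a sketch of the kind of query the description refers to, the following SQL selects the ten most massive model galaxies at a given snapshot. The table and column names are illustrative only, loosely modelled on the published galaxy catalogues; consult the Millennium database documentation for the actual schema.

  SELECT TOP 10
      galaxyId, snapnum, stellarMass      -- illustrative column names
  FROM DeLucia2006a                       -- hypothetical table for one galaxy-formation model
  WHERE snapnum = 63                      -- assumed here to be the final (z = 0) snapshot
  ORDER BY stellarMass DESC

A query of this form can first be tested against the small (1/512-volume) public database before requesting an account for the full simulations.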