Filter

Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (priority)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
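For example, combining these operators (illustrative queries only, assuming the operators behave as listed above):
  climate* +ocean -model          matches terms starting with "climate", requires "ocean", excludes "model"
  "sea level" | "sea surface"     matches either phrase
  (earthquake | seismic) +europe  groups an OR search inside an AND search
  volcan~1                        matches terms within edit distance 1 of "volcan"
  "marine data"~2                 matches the phrase with a slop of up to 2 words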
Found 73 result(s)
AHEAD, the European Archive of Historical Earthquake Data 1000-1899, is a distributed archive aiming to preserve, inventory, and make available to investigators and other users data sources on the earthquake history of Europe, such as papers, reports, Macroseismic Data Points (MDPs), and parametric catalogues.
Birdata is your gateway to BirdLife Australia data, including the Atlas of Australian Birds and the Nest Record Scheme. You can use Birdata to draw bird distribution maps and generate bird lists for any part of the country. You can also join in the Atlas and submit survey information to this important environmental database. Birdata is a partnership between Birds Australia and the Tony and Lisette Lewis Foundation's WildlifeLink program to collect Birds Australia data and make it available online.
Europeana is the trusted source of cultural heritage brought to you by the Europeana Foundation and a large number of European cultural institutions, projects and partners. It’s a real piece of team work. Ideas and inspiration can be found within the millions of items on Europeana. These objects include: images (paintings, drawings, maps, photos and pictures of museum objects); texts (books, newspapers, letters, diaries and archival papers); sounds (music and spoken word from cylinders, tapes, discs and radio broadcasts); and videos (films, newsreels and TV broadcasts). All texts are CC BY-SA; images and media are licensed individually.
The Bavarian Archive for Speech Signals (BAS) is a public institution hosted by the University of Munich. This institution was founded with the aim of making corpora of current spoken German available to both the basic research and the speech technology communities via a maximally comprehensive digital speech-signal database. The speech material will be structured in a manner allowing flexible and precise access, with acoustic-phonetic and linguistic-phonetic evaluation forming an integral part of it.
The GHO data repository is WHO's gateway to health-related statistics for its 194 Member States. It provides access to over 1000 indicators on priority health topics including mortality and burden of disease, the Millennium Development Goals (child nutrition, child health, maternal and reproductive health, immunization, HIV/AIDS, tuberculosis, malaria, neglected diseases, water and sanitation), noncommunicable diseases and risk factors, epidemic-prone diseases, health systems, environmental health, violence and injuries, and equity, among others. In addition, the GHO provides online access to WHO's annual summary of health-related data for its Member States: the World Health Statistics.
RepOD is a general-purpose repository for open research data, offering all members of the academic community in Poland the possibility to deposit their work. It is intended for scientific data from all disciplines of knowledge and in all formats. The purpose of RepOD is to create a place where research data can be safely stored and openly shared with others.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
The CosmoSim database provides results from cosmological simulations performed within different projects: the MultiDark and Bolshoi projects, and the CLUES project. The CosmoSim webpage provides access to several cosmological simulations, with a separate database for each simulation. Simulations overview: https://www.cosmosim.org/cms/simulations/simulations-overview/ . CosmoSim is a contribution to the German Astrophysical Virtual Observatory.
The Paris Astronomical Data Centre aims to provide VO access to its data collections, to participate in international standards development, and to implement VO-compliant simulation codes, data visualization, and analysis software. This centre hosts high-level permanent activities for tools and data distribution in the form of reference services. These sustainable services are recognized at the national level as CNRS-labelled services. The various activities are organised as portals whose functions are to provide visibility and information on the projects and to encourage collaboration.
With more than 60 years of experience, the Toronto and Region Conservation Authority (TRCA) is one of 36 Conservation Authorities in Ontario, created to safeguard and enhance the health and well-being of watershed communities through the protection and restoration of the natural environment and the ecological services the environment provides. At TRCA, we are working towards providing free and open access to our data and information, in both accessible and machine-readable formats, to ensure it is available and easy to consume. Improving access to TRCA's data and information will provide transparency into the decision-making process and will improve accountability while increasing the public's understanding of and engagement with the organization.
The demand for high-value environmental data and information has dramatically increased in recent years. To improve our ability to meet that demand, NOAA's former three data centers (the National Climatic Data Center, the National Geophysical Data Center, and the National Oceanographic Data Center, which includes the National Coastal Data Development Center) have merged into the National Centers for Environmental Information (NCEI). The National Oceanographic Data Center includes the National Coastal Data Development Center (NCDDC) and the NOAA Central Library, which are integrated to provide access to the world's most comprehensive sources of marine environmental data and information. NODC maintains and updates a national ocean archive with environmental data acquired from domestic and foreign activities and produces products and research from these data which help monitor global environmental changes. These data include physical, biological and chemical measurements derived from in situ oceanographic observations, satellite remote sensing of the oceans, and ocean model simulations.
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10¹⁰ particles to follow the dark matter distribution in a cubic region 500 h⁻¹ Mpc on a side, and has a spatial resolution of 5 h⁻¹ kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10⁷ galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and halos, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
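As a rough sketch of the kind of query such a database supports (the table and column names below are hypothetical placeholders, not the published Millennium database schema):
  -- Hypothetical example only: "galaxies", "snapnum" and "stellarMass"
  -- are placeholder names, not the actual Millennium Run schema.
  SELECT TOP 10 galaxyId, x, y, z, stellarMass
  FROM galaxies
  WHERE snapnum = 63            -- assumed identifier of the final (z = 0) snapshot
    AND stellarMass > 1.0       -- assumed threshold in simulation mass units
  ORDER BY stellarMass DESC;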
The Stanford Digital Repository (SDR) is Stanford Libraries' digital preservation system. The core repository provides “back-office” preservation services (data replication, auditing, media migration, and retrieval) in a secure, sustainable, scalable stewardship environment. Scholars and researchers across disciplines at Stanford use SDR repository services to provide ongoing, persistent, reliable access to their research outputs.
The United States Census Bureau (officially the Bureau of the Census, as defined in Title 13 U.S.C. § 11) is the government agency that is responsible for the United States Census. It also gathers other national demographic and economic data. As a part of the United States Department of Commerce, the Census Bureau serves as a leading source of data about America's people and economy. The most visible role of the Census Bureau is to perform the official decennial (every 10 years) count of people living in the U.S. The most important result is the reallocation of the number of seats each state is allowed in the House of Representatives, but the results also affect a range of government programs received by each state. The agency director is a political appointee selected by the President of the United States.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.
The Marine Geoscience Data System (MGDS) is a trusted data repository that provides free public access to a curated collection of marine geophysical data products and complementary data related to understanding the formation and evolution of the seafloor and sub-seafloor. Developed and operated by domain scientists and technical specialists with deep knowledge about the creation, analysis and scientific interpretation of marine geoscience data, the system makes available a digital library of data files described by a rich curated metadata catalog. MGDS provides tools and services for the discovery and download of data collected throughout the global oceans. Primary data types are geophysical field data including active-source seismic data, potential field, bathymetry, sidescan sonar, near-bottom imagery, and other seafloor sensor data, as well as a diverse array of processed data and interpreted data products (e.g. seismic interpretations, microseismicity catalogs, geologic maps and interpretations, photomosaics and visualizations). Our data resources support scientists working broadly on solid earth science problems ranging from mid-ocean ridge, subduction zone and hotspot processes, to geohazards, continental margin evolution, and sediment transport at glaciated and unglaciated margins.
The Ensembl genome annotation system, developed jointly by the EBI and the Wellcome Trust Sanger Institute, has been used for the annotation, analysis and display of vertebrate genomes since 2000. Since 2009, the Ensembl site has been complemented by the creation of five new sites, for bacteria, protists, fungi, plants and invertebrate metazoa, enabling users to use a single collection of (interactive and programmatic) interfaces for accessing and comparing genome-scale data from species of scientific interest from across the taxonomy. In each domain, we aim to bring the integrative power of Ensembl tools for comparative analysis, data mining and visualisation across genomes of scientific interest, working in collaboration with scientific communities to improve and deepen genome annotation and interpretation.
The SuiteSparse Matrix Collection is a large and actively growing set of sparse matrices that arise in real applications. The Collection is widely used by the numerical linear algebra community for the development and performance evaluation of sparse matrix algorithms. It allows for robust and repeatable experiments. Its matrices cover a wide spectrum of domains, including those arising from problems with underlying 2D or 3D geometry (such as structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, power networks, and other networks and graphs).
The Ensembl project produces genome databases for vertebrates and other eukaryotic species. Ensembl is a joint project between the European Bioinformatics Institute (EBI), an outstation of the European Molecular Biology Laboratory (EMBL), and the Wellcome Trust Sanger Institute (WTSI), both located on the Wellcome Trust Genome Campus in Hinxton, south of Cambridge, United Kingdom, to develop a software system that produces and maintains automatic annotation on selected genomes. The Ensembl project was started in 1999, some years before the draft human genome was completed. Even at that early stage it was clear that manual annotation of 3 billion base pairs of sequence would not be able to offer researchers timely access to the latest data. The goal of Ensembl was therefore to automatically annotate the genome, integrate this annotation with other available biological data, and make all of it publicly available via the web. Since the website's launch in July 2000, many more genomes have been added to Ensembl and the range of available data has also expanded to include comparative genomics, variation, and regulatory data.
EIDA, an initiative within ORFEUS, is a distributed data centre established to (a) securely archive seismic waveform data and related metadata, gathered by European research infrastructures, and (b) provide transparent access to the archives by the geosciences research communities. EIDA nodes are data centres which collect and archive data from seismic networks deploying broad-band sensors, short period sensors, accelerometers, infrasound sensors and other geophysical instruments. Networks contributing data to EIDA are listed in the ORFEUS EIDA networklist (http://www.orfeus-eu.org/data/eida/networks/). Data from the ORFEUS Data Center (ODC), hosted by KNMI, are available through EIDA. Technically, EIDA is based on an underlying architecture developed by GFZ to provide transparent access to all nodes' data. Data within the distributed archives are accessible via the ArcLink protocol (http://www.seiscomp3.org/wiki/doc/applications/arclink).