
The following search syntax is supported:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
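For illustration, the following hypothetical queries combine these operators:

    climat*                   wildcard: matches climate, climatic, climatology, ...
    "climate model"           matches the exact phrase
    +ocean -ice               must contain "ocean", must not contain "ice"
    (ocean | marine) +data    grouped OR of two terms plus a required term
    modle~1                   fuzzy: matches terms within edit distance 1, e.g. "model"
    "climate data"~2          phrase match with a slop of 2 (terms may be up to two positions apart)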
Found 28 result(s)
The Alternative Fuels Data Center (AFDC) is a comprehensive clearinghouse of information about advanced transportation technologies. The AFDC offers transportation decision makers unbiased information, data, and tools related to the deployment of alternative fuels and advanced vehicles. The AFDC launched in 1991 in response to the Alternative Motor Fuels Act of 1988 and the Clean Air Act Amendments of 1990. It originally served as a repository for alternative fuel performance data. The AFDC has since evolved to offer a broad array of information resources that support efforts to reduce petroleum use in transportation. The AFDC serves Clean Cities stakeholders, fleets regulated by the Energy Policy Act, businesses, policymakers, government agencies, and the general public.
Note: the portal is no longer available. The CARMEN pilot project seeks to create a virtual laboratory for experimental neurophysiology, enabling the sharing and collaborative exploitation of data, analysis code and expertise. This study by the DCC contributes to an understanding of the data curation requirements of the eScience community, through its extended observation of the CARMEN neurophysiology community’s specification and selection of solutions for the organisation, access and curation of digital research output.
CRAWDAD is the Community Resource for Archiving Wireless Data, a wireless network data resource for the research community. This archive has the capacity to store wireless trace data from many contributing locations, and staff to develop better tools for collecting, anonymizing, and analyzing the data. We work with community leaders to ensure that the archive meets the needs of the research community.
VegBank is the vegetation plot database of the Ecological Society of America's Panel on Vegetation Classification. VegBank consists of three linked databases that contain the actual plot records, vegetation types recognized in the U.S. National Vegetation Classification and other vegetation types submitted by users, and all plant taxa recognized by ITIS/USDA as well as all other plant taxa recorded in plot records. Vegetation records, community types and plant taxa may be submitted to VegBank and may be subsequently searched, viewed, annotated, revised, interpreted, downloaded, and cited. VegBank receives its data from the VegBank community of users.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
The CDPP is the French national data centre for natural plasmas of the solar system. The CDPP assures the long term preservation of data obtained primarily from instruments built using French resources, and renders them readily accessible and exploitable by the international community. The CDPP also provides services to enable on-line data analysis (AMDA), 3D data visualization in context (3DView), and a propagation tool which bridges solar perturbations to in-situ measurements. The CDPP is involved in the development of interoperability, participates in several Virtual Observatory projects, and supports data distribution for scientific missions (Solar Orbiter, JUICE).
Established in 1965, the CSD is the world’s repository for small-molecule organic and metal-organic crystal structures. Containing the results of over one million X-ray and neutron diffraction analyses, this unique database of accurate 3D structures has become an essential resource for scientists around the world. The CSD records bibliographic, chemical and crystallographic information for organic molecules and metal-organic compounds whose 3D structures have been determined by X-ray or neutron diffraction. It records the results of single-crystal studies and of powder diffraction studies that yield 3D atomic coordinate data for at least all non-hydrogen atoms. In some cases the CCDC is unable to obtain coordinates, and incomplete entries are archived to the CSD. The CSD includes crystal structure data arising from publications in the open literature and from Private Communications to the CSD (via direct data deposition); directly deposited data that are not available anywhere else are known as CSD Communications.
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only about a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied in biology, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing worm behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
The Database contains all publicly available HMS LINCS datasets and information for each dataset about experimental reagents (small molecule perturbagens, cells, antibodies, and proteins) and experimental and data analysis protocols.
The Earth System Grid Federation (ESGF) is an international collaboration with a current focus on serving the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project (CMIP) and supporting climate and environmental science in general. Data are searchable and available for download via the federated ESGF-CoG nodes: https://esgf.llnl.gov/nodes.html
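The federated index nodes also expose a RESTful search endpoint (the esg-search API). The sketch below shows what such a query might look like in Python; the node URL and facet values are illustrative assumptions, so consult the ESGF search documentation for the authoritative parameter set.

    # Minimal sketch: querying an ESGF index node's esg-search API.
    # The node URL and facet values below are illustrative assumptions.
    import requests

    resp = requests.get(
        "https://esgf-node.llnl.gov/esg-search/search",
        params={
            "project": "CMIP6",                  # example facet constraint
            "variable": "tas",                   # example variable (near-surface air temperature)
            "format": "application/solr+json",   # request JSON rather than XML
            "limit": 5,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for doc in resp.json()["response"]["docs"]:
        print(doc.get("id"))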
BrainMaps.org, launched in May 2005, is an interactive multiresolution next-generation brain atlas that is based on over 20 million megapixels of sub-micron resolution, annotated, scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet. Currently featured are complete brain atlas datasets for various species, including Macaca mulatta, Chlorocebus aethiops, Felis catus, Mus musculus, Rattus norvegicus, and Tyto alba.
Europeana is the trusted source of cultural heritage, brought to you by the Europeana Foundation and a large number of European cultural institutions, projects and partners. It is a real piece of teamwork. Ideas and inspiration can be found within the millions of items on Europeana. These objects include:
  • Images: paintings, drawings, maps, photos and pictures of museum objects
  • Texts: books, newspapers, letters, diaries and archival papers
  • Sounds: music and spoken word from cylinders, tapes, discs and radio broadcasts
  • Videos: films, newsreels and TV broadcasts
All texts are CC BY-SA; images and media are licensed individually.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
Weed Images is a project of the University of Georgia’s Center for Invasive Species and Ecosystem Health and one of the four major parts of BugwoodImages. Its focus is on damage caused by weeds. It provides an easily accessible archive of high-quality images for use in educational applications. In most cases, the images found in this system were taken by, and loaned to us by, photographers other than ourselves. Most are in the realm of public-sector images. The photographs are in this system to be used.
GitHub is the best place to share code with friends, co-workers, classmates, and complete strangers. Over three million people use GitHub to build amazing things together. With the collaborative features of GitHub.com, our desktop and mobile apps, and GitHub Enterprise, it has never been easier for individuals and teams to write better code, faster. Originally founded by Tom Preston-Werner, Chris Wanstrath, and PJ Hyett to simplify sharing code, GitHub has grown into the largest code host in the world.
Public Data Sets on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
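Many of these data sets are published as public Amazon S3 buckets that can be read without AWS credentials. The Python sketch below uses boto3 with unsigned requests; the bucket name is a hypothetical placeholder, so substitute one from the public data set catalog.

    # Minimal sketch: listing objects in a public S3 bucket without
    # credentials by sending unsigned requests. The bucket name is a
    # hypothetical placeholder.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    page = s3.list_objects_v2(Bucket="example-public-dataset", MaxKeys=10)
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])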
IRIS offers free and open access to a comprehensive store of raw geophysical time-series data collected from a large variety of sensors, including seismometers (permanent and temporary), tilt and strain meters, infrasound sensors, temperature and atmospheric pressure sensors, and gravimeters. The data are contributed by a vast array of US and international scientific networks and support basic research aimed at imaging the Earth's interior.
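IRIS waveform data can be retrieved programmatically through FDSN web services; one common route is ObsPy's FDSN client. A minimal sketch follows, with example network, station and channel codes.

    # Minimal sketch: fetching a short waveform segment from IRIS via
    # FDSN web services using ObsPy. The network/station/channel codes
    # are examples only.
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")
    t0 = UTCDateTime("2020-01-01T00:00:00")
    stream = client.get_waveforms(
        network="IU", station="ANMO", location="00", channel="BHZ",
        starttime=t0, endtime=t0 + 600,  # ten minutes of data
    )
    print(stream)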
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. It also supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata that establish the provenance relationships between source and derived data.
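The KNB also participates in the DataONE federation, so individual data objects can typically be fetched over the DataONE member-node REST API. The sketch below is illustrative only: the member-node base URL is an assumption to verify against the KNB documentation, and the identifier is a hypothetical placeholder.

    # Minimal sketch: downloading a data object from the KNB through the
    # DataONE member-node REST API. The base URL is an assumption and the
    # identifier is a hypothetical placeholder.
    import requests

    base = "https://knb.ecoinformatics.org/knb/d1/mn/v2"  # assumed member-node base URL
    pid = "urn:uuid:EXAMPLE"                              # hypothetical persistent identifier

    resp = requests.get(f"{base}/object/{pid}", timeout=30)
    resp.raise_for_status()
    with open("dataset.bin", "wb") as f:
        f.write(resp.content)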
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10^10 particles to follow the dark matter distribution in a cubic region 500 h^−1 Mpc on a side, and has a spatial resolution of 5 h^−1 kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10^7 galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and haloes, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
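Since the databases are queried in SQL, a first query against the small public version might look like the sketch below; the table and column names are illustrative assumptions, so consult the published database schema before running anything.

    -- Minimal sketch of a Millennium database query. The table and
    -- column names are illustrative assumptions; check the schema.
    SELECT TOP 10
           galaxyId, x, y, z, stellarMass
    FROM   DeLucia2006a              -- hypothetical galaxy catalogue table
    WHERE  stellarMass > 1.0         -- threshold in the catalogue's mass units
    ORDER BY stellarMass DESC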