  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) can be used to group terms and set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
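To illustrate how these operators combine into a single query string, here is a minimal sketch; the search terms are invented for the example and only the syntax follows the tips above, so this is not part of the registry's own documentation:

```python
# Minimal sketch: composing a search query using the operator rules listed above.
# The terms themselves are invented; only the syntax mirrors the search tips.

def build_example_query() -> str:
    parts = [
        'climat*',                      # * wildcard: climate, climatology, ...
        '+"sea surface temperature"',   # quoted phrase, AND-ed (+) with the rest
        '(satellite | "in situ")',      # parentheses group an OR (|) of two terms
        '-forecast',                    # - excludes a term (NOT)
        'oceanografy~2',                # ~N after a word: fuzzy match, edit distance 2
        '"boundary layer"~3',           # ~N after a phrase: slop of 3
    ]
    return " ".join(parts)

if __name__ == "__main__":
    print(build_example_query())
```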
Found 66 result(s)
The National Archives and Records Administration (NARA) is the nation's record keeper. Of all the documents and materials created in the course of business conducted by the United States Federal government, only 1%-3% are so important for legal or historical reasons that they are kept by NARA forever. Those valuable records are preserved and available to you, whether you want to see if they contain clues about your family's history, need to prove a veteran's military service, or are researching a historical topic that interests you.
The demand for high-value environmental data and information has dramatically increased in recent years. To improve our ability to meet that demand, NOAA's former three data centers (the National Climatic Data Center, the National Geophysical Data Center, and the National Oceanographic Data Center, which includes the National Coastal Data Development Center) have merged into the National Centers for Environmental Information (NCEI). NOAA's National Climatic Data Center (NCDC) is responsible for preserving, monitoring, assessing, and providing public access to the Nation's treasure of climate and historical weather data and information.
FungiDB belongs to the EuPathDB family of databases and is an integrated genomic and functional genomic database for the kingdom Fungi. FungiDB was first released in early 2011 as a collaborative project between EuPathDB and the group of Jason Stajich (University of California, Riverside). At the end of 2015, FungiDB was integrated into the EuPathDB bioinformatic resource center. FungiDB integrates whole-genome sequence and annotation and also includes experimental and environmental isolate sequence data. The database offers comparative genomics, gene expression analysis, and supplemental bioinformatics analyses, as well as a web interface for data mining.
The European Bioinformatics Institute (EBI) has a long-standing mission to collect, organise and make available databases for biomolecular science. It makes available a collection of databases along with tools to search, download and analyse their content. These databases include DNA and protein sequences and structures, genome annotation, gene expression information, molecular interactions and pathways. Connected to these are linking and descriptive data resources such as protein motifs, ontologies and many others. In many of these efforts, the EBI is a European node in global data-sharing agreements involving, for example, the USA and Japan.
Copernicus is a European system for monitoring the Earth. Copernicus consists of a complex set of systems which collect data from multiple sources: earth observation satellites and in situ sensors such as ground stations, airborne and sea-borne sensors. It processes these data and provides users with reliable and up-to-date information through a set of services related to environmental and security issues. The services address six thematic areas: land monitoring, marine monitoring, atmosphere monitoring, climate change, emergency management and security. The main users of Copernicus services are policymakers and public authorities who need the information to develop environmental legislation and policies or to take critical decisions in the event of an emergency, such as a natural disaster or a humanitarian crisis. Based on the Copernicus services and on the data collected through the Sentinels and the contributing missions, many value-added services can be tailored to specific public or commercial needs, resulting in new business opportunities. In fact, several economic studies have already demonstrated a huge potential for job creation, innovation and growth.
bugwood.org is the host website of the Center for Invasive Species and Ecosystem Health at the University of Georgia (formerly the Bugwood Network). The Center aims to develop, consolidate and disseminate information and programmes focused on invasive species, forest health, natural resources and agricultural management through technology development, programme implementation, training, applied research and public awareness at state, regional, national and international levels. The site gives details of its products (the Bugwood Image Database, Early Detection and Distribution Mapping, and Bugwoodwiki). Details of its projects, services and personnel are provided. Users can also access image databases on Forestry, Insects, IPM, Invasive Species, Forest Pests, Weeds and Bark Beetles.
The Global Hydrology Resource Center (GHRC) provides both historical and current Earth science data, information, and products from satellite, airborne, and surface-based instruments. GHRC acquires basic data streams and produces derived products from many instruments spread across a variety of instrument platforms.
The Old Bailey Proceedings Online makes available a fully searchable, digitised collection of all surviving editions of the Old Bailey Proceedings from 1674 to 1913, and of the Ordinary of Newgate's Accounts between 1676 and 1772. It allows access to over 197,000 trials and biographical details of approximately 2,500 men and women executed at Tyburn, free of charge for non-commercial use. In addition to the text, accessible through both keyword and structured searching, this website provides digital images of all 190,000 original pages of the Proceedings, 4,000 pages of Ordinary's Accounts, advice on methods of searching this resource, information on the historical and legal background to the Old Bailey court and its Proceedings, and descriptions of published and manuscript materials relating to the trials covered. Contemporary maps and images have also been provided.
The IMPC is a confederation of international mouse phenotyping projects working towards the agreed goals of the consortium:
  • Undertake the phenotyping of 20,000 mouse mutants over a ten-year period, providing the first functional annotation of a mammalian genome.
  • Maintain and expand a world-wide consortium of institutions with the capacity and expertise to produce germline transmission of targeted knockout mutations in embryonic stem cells for 20,000 known and predicted mouse genes.
  • Test each mutant mouse line through a broad-based primary phenotyping pipeline covering all the major adult organ systems and most areas of major human disease.
  • Through this activity, and employing data annotation tools, systematically aim to discover and ascribe biological function to each gene, driving new ideas and underpinning future research into biological systems.
  • Maintain and expand collaborative "networks" with specialist phenotyping consortia or laboratories, providing standardized secondary-level phenotyping that enriches the primary dataset, and end-user, project-specific tertiary-level phenotyping that adds value to the mammalian gene functional annotation and fosters hypothesis-driven research.
  • Provide a centralized data centre and portal for free, unrestricted access to primary and secondary data by the scientific community, promoting sharing of data, genotype-phenotype annotation, standard operating protocols, and the development of open-source data analysis tools.
Members of the IMPC may include research centers, funding organizations and corporations.
PubChem comprises three databases:
  • PubChem BioAssay contains bioactivity screens of chemical substances described in PubChem Substance. It provides searchable descriptions of each bioassay, including descriptions of the conditions and readouts specific to that screening procedure.
  • PubChem Compound contains validated chemical depiction information provided to describe substances in PubChem Substance. Structures stored within PubChem Compound are pre-clustered and cross-referenced by identity and similarity groups.
  • PubChem Substance contains descriptions of samples, from a variety of sources, and links to biological screening results that are available in PubChem BioAssay. If the chemical contents of a sample are known, the description includes links to PubChem Compound.
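The cross-references between these databases can also be followed programmatically. The sketch below assumes PubChem's public PUG REST interface, which is not described in the entry above; the endpoint path and the example CID are illustrative and should be checked against the current PubChem documentation.

```python
import json
import urllib.request

# Assumed PUG REST base URL; verify against PubChem's current documentation.
PUG_REST = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def compound_properties(cid: int, props=("MolecularFormula", "MolecularWeight")) -> dict:
    """Fetch selected properties of a PubChem Compound record identified by its CID."""
    url = f"{PUG_REST}/compound/cid/{cid}/property/{','.join(props)}/JSON"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # CID 2244 (aspirin) is used purely as an example identifier.
    print(compound_properties(2244))
```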
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts – which were formerly sent based only on event magnitude and location, or population exposure to shaking – now will also be generated based on the estimated range of fatalities and economic losses. PAGER uses these earthquake parameters to calculate estimates of ground shaking by using the methodology and software developed for ShakeMaps. ShakeMap sites provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. These maps are used by federal, state, and local organizations, both public and private, for post-earthquake response and recovery, public and scientific information, as well as for preparedness exercises and disaster planning.
Scripps Institution of Oceanography (SIO) Explorer includes five federated collections: SIO Cruises, SIO Historic Photographs, the Seamounts, Marine Geological Samples, and the Educator's Collection, all part of the US National Science Digital Library (NSDL). Each collection represents a unique resource of irreplaceable scientific research. The effort is a collaboration among researchers at Scripps, computer scientists from the San Diego Supercomputer Center (SDSC), and archivists and librarians from the UCSD Libraries. In 2005 SIOExplorer was extended to the Woods Hole Oceanographic Institution with the Multi-Institution Scalable Digital Archiving project, funded through the joint NSF/Library of Congress digital archiving and preservation program, creating a harvesting methodology and a prototype collection of cruises, Alvin submersible dives and Jason ROV lowerings.
Results from time-series analysis of Landsat images in characterizing global forest extent and change from 2000 through 2016.
The Antarctic and Southern Ocean Data Portal, part of the US Antarctic Data Consortium, provides access to geoscience data, primarily marine, from the Antarctic region. The synthesis began in 2003 as the Antarctic Multibeam Bathymetry and Geophysical Data Synthesis (AMBS) with a focus on multibeam bathymetry field data and other geophysical data from the Southern Ocean collected with the R/V N. B. Palmer. In 2005, the effort was expanded to include all routine underway geophysical and oceanographic data collected with both the R/V N. B. Palmer and R/V L. Gould, the two primary research vessels serving the US Antarctic Program.
BLLAST is a research programme aimed at exploring the late afternoon transition of the atmospheric boundary layer. The late afternoon period of the diurnal cycle of the boundary layer is poorly understood, yet it is an important transition period that impacts the transport and dilution of water vapour and trace species. The main questions addressed by the project are: How does turbulence activity fade when heating by the surface decreases? What is the impact on the transport of chemical species? How can relevant processes be represented in numerical models? To answer these questions, a field campaign was carried out during the summer of 2011 (from June 14 to July 8). Many observation systems were deployed and operated by research teams from France and abroad, spanning a large spectrum of space and time scales in order to achieve a comprehensive description of boundary layer processes. The observation strategy consisted of intensifying the operations in the late afternoon with tethered balloons, research aircraft and UAVs.
GENCODE is a scientific project in genome research and part of the ENCODE (ENCyclopedia Of DNA Elements) scale-up project. The GENCODE consortium was initially formed as part of the pilot phase of the ENCODE project to identify and map all protein-coding genes within the ENCODE regions (approximately 1% of the human genome). Given the initial success of the project, GENCODE now aims to build an "Encyclopedia of genes and gene variants" by identifying all gene features in the human and mouse genomes using a combination of computational analysis, manual annotation, and experimental validation, and by annotating all evidence-based gene features in the entire human genome at high accuracy.
DEIMS-SDR (Dynamic Ecological Information Management System - Site and dataset registry) is an information management system that allows you to discover long-term ecosystem research sites around the globe, along with the data gathered at those sites and the people and networks associated with them. DEIMS-SDR describes a wide range of sites, providing a wealth of information, including each site's location, ecosystems, facilities, parameters measured and research themes. It is also possible to access a growing number of datasets and data products associated with the sites. All site and dataset records can be referenced using unique identifiers that are generated by DEIMS-SDR. It is possible to search for sites via keyword, predefined filters or a map search. By including accurate, up-to-date information in DEIMS, site managers benefit from greater visibility for their LTER site, LTSER platform and datasets, which can help attract funding to support site investments. The aim of DEIMS-SDR is to be the most comprehensive global catalogue of environmental research and monitoring facilities, featuring first and foremost, but not exclusively, information about all LTER sites on the globe and providing that information to science, policy-makers and the general public.
The City of Victoria’s Open Data Portal allows you to explore and download open data, discover and analyze datasets using maps, and develop new web and mobile applications.
The Protein Data Bank (PDB) archive is the single worldwide repository of information about the 3D structures of large biological molecules, including proteins and nucleic acids. These are the molecules of life that are found in all organisms including bacteria, yeast, plants, flies, other animals, and humans. Understanding the shape of a molecule helps to understand how it works. This knowledge can be used to help deduce a structure's role in human health and disease, and in drug development. The structures in the archive range from tiny proteins and bits of DNA to complex molecular machines like the ribosome.
LAADS DAAC is the web interface to the Level 1 and Atmosphere Archive and Distribution System (LAADS). The mission of LAADS is to provide quick and easy access to MODIS Level 1, Atmosphere and Land data products; VIIRS Level 1 and Land data products; and MAS and MERIS data products. MODIS (the Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites.
The MGDS MediaBank contains high-quality images, illustrations, animations and video clips that are organized into galleries. Media can be sorted by category, and keyword- and map-based search options are provided. Each item in the MediaBank is accompanied by metadata that provides access into the MGDS cruise catalog and data repository.
Protectedplanet.net combines crowdsourcing and authoritative sources to enrich and provide data for protected areas around the world. Data are provided in partnership with the World Database on Protected Areas (WDPA). The data include the location, designation type, status year, and size of the protected areas, as well as species information.