Found 50 result(s)
STITCH is a database exploring the interactions of chemicals and proteins. It integrates information about interactions from metabolic pathways, crystal structures, binding experiments and drug-target relationships. Inferred information from phenotypic effects, text mining and chemical structure similarity is used to predict relations between chemicals. STITCH further allows exploring the network of chemical relations, also in the context of associated binding proteins.
Galaxies, made up of billions of stars like our Sun, are the beacons that light up the structure of even the most distant regions in space. Not all galaxies are alike, however. They come in very different shapes and have very different properties; they may be large or small, old or young, red or blue, regular or confused, luminous or faint, dusty or gas-poor, rotating or static, round or disky, and they live either in splendid isolation or in clusters. In other words, the universe contains a very colourful and diverse zoo of galaxies. For almost a century, astronomers have been discussing how galaxies should be classified and how they relate to each other in an attempt to attack the big question of how galaxies form. Galaxy Zoo (Lintott et al. 2008, 2011) pioneered a novel method for performing large-scale visual classifications of survey datasets. This webpage allows anyone to download the resulting GZ classifications of galaxies in the project.
The Language Bank features text and speech corpora with different kinds of annotations in over 60 languages. There is also a selection of tools for working with them, from linguistic analyzers to programming environments. Corpora are also available via web interfaces, and users can be allowed to download some of them. The IP holders can monitor the use of their resources and view user statistics.
The African Development Bank Group (AfDB) is committed to supporting statistical development in Africa as a sound basis for designing and managing effective development policies for reducing poverty on the continent. Reliable and timely data are critical to setting goals and targets as well as evaluating project impact; they are also the single most convincing way of keeping people informed about what their leaders and institutions are doing, helping them to get involved in the development process and giving them a sense of ownership of it. The AfDB has a large team of researchers who focus on the production of statistical data on economic and social situations. The data produced by the institution’s statistics department constitute the background information in the Bank’s flagship development publications. Besides its own publications, the AfDB also finances studies in collaboration with its partners. The Statistics Department aims to stand as the primary source of relevant, reliable and timely data on African development processes, starting with the data generated from its current management of the Africa component of the International Comparison Program (ICP-Africa). The Department discharges its responsibilities through two divisions: the Economic and Social Statistics Division (ESTA1) and the Statistical Capacity Building Division (ESTA2).
The Coriolis Data Centre handles operational oceanography measurements made in situ, complementing the measurement of the ocean surface made using instruments aboard satellites. This work is realised through the establishment of permanent networks with data collected by ships or autonomous systems that are either fixed or drifting. This data can be used to construct a snapshot of water mass structure and current intensity.
SeaBASS is the publicly shared archive of in situ oceanographic and atmospheric data maintained by the NASA Ocean Biology Processing Group (OBPG). High-quality in situ measurements are a prerequisite for satellite data product validation, algorithm development, and many climate-related inquiries. As such, the OBPG maintains a local repository of in situ oceanographic and atmospheric data to support its regular scientific analyses. The SeaWiFS Project originally developed this system, SeaBASS, to catalog radiometric and phytoplankton pigment data used in its calibration and validation activities. To facilitate the assembly of a global data set, SeaBASS was expanded with oceanographic and atmospheric data collected by participants in the SIMBIOS Program, under NASA Research Announcements NRA-96 and NRA-99, which has aided considerably in minimizing spatial bias and maximizing data acquisition rates. Archived data include measurements of apparent and inherent optical properties, phytoplankton pigment concentrations, and other related oceanographic and atmospheric data, such as water temperature, salinity, stimulated fluorescence, and aerosol optical thickness. Data are collected using a number of different instrument packages from various manufacturers, such as profilers, buoys, and hand-held instruments, on a variety of platforms, including ships and moorings.
The California Coastal Atlas is an experiment in the creation of a new information resource for the description, analysis and understanding of natural and human processes affecting the coast of California.
The VDC is a public, web-based search engine for accessing worldwide earthquake strong ground motion data. While the primary focus of the VDC is on data of engineering interest, it is also an interactive resource for scientific research and government and emergency response professionals.
This database provides theoretical values of energy levels of hydrogen and deuterium for principal quantum numbers n = 1 to 200 and all allowed orbital angular momenta l and total angular momenta j. The values are based on current knowledge of the relevant theoretical contributions including relativistic, quantum electrodynamic, recoil, and nuclear size effects.
The Metropolitan Travel Survey Archive (MTSA) includes travel surveys from numerous public agencies across the United States. The Transportation Secure Data Center has archived these surveys to ensure their continued public availability. The survey data have been converted to a standard file format and cleansed to remove personally identifiable information, including any detailed spatial data regarding individual trips.
The National Weather Service, Fairbanks provides weather data relating to and observed in the Fairbanks, AK area. Data includes current, past, and future weather. Databases are organized primarily by the type of data (e.g. weather data, climate data, hydrology, warning/hazard alerts) and then are searchable by research location.
The demand for high-value environmental data and information has dramatically increased in recent years. To improve our ability to meet that demand, NOAA’s former three data centers—the National Climatic Data Center, the National Geophysical Data Center, and the National Oceanographic Data Center, which includes the National Coastal Data Development Center—have merged into the National Centers for Environmental Information (NCEI). The National Coastal Data Development Center, a division of the National Oceanographic Data Center, is dedicated to building the long-term coastal data record to support environmental prediction, scientific analysis, and formulation of public policy.
PISCO researchers collect biological, chemical, and physical data about ocean ecosystems in the nearshore portions of the California Current Large Marine Ecosystem. Data are archived and used to create summaries and graphics, in order to ensure that the data can be used and understood by a diverse audience of managers, policy makers, scientists and the general public.
Western Regional Climate Center (WRCC) provides historical and current climate data for the western United States. WRCC is one of six regional climate centers partnering with NOAA research institutes to promote climate research and data stewardship.
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm; frequently requested lines are also kept alive, as is a selection of wildtype strains. The collection includes several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab of the Sanger Centre, Hinxton, UK, lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, and transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. The EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. By association with the ComPlat platform, we can also support chemical screens and offer libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz Repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks, maintained by the Loosli group.
Scripps Institution of Oceanography (SIO) Explorer includes five federated collections: SIO Cruises, SIO Historic Photographs, the Seamounts, Marine Geological Samples, and the Educator’s Collection, all part of the US National Science Digital Library (NSDL). Each collection represents a unique resource of irreplaceable scientific research. The effort is a collaboration among researchers at Scripps, computer scientists from the San Diego Supercomputer Center (SDSC), and archivists and librarians from the UCSD Libraries. In 2005 SIOExplorer was extended to the Woods Hole Oceanographic Institution with the Multi-Institution Scalable Digital Archiving project, funded through the joint NSF/Library of Congress digital archiving and preservation program, creating a harvesting methodology and a prototype collection of cruises, Alvin submersible dives and Jason ROV lowerings.
The Minnesota Population Center (MPC) is a University-wide interdisciplinary cooperative for demographic research. The MPC serves more than 80 faculty members and research scientists from eight colleges and institutes at the University of Minnesota. As a leading developer and disseminator of demographic data, we also serve a broader audience of some 50,000 demographic researchers worldwide. MPC is a DataONE member node: https://search.dataone.org/#profile/US_MPC
The ClinicalCodes repository aims to hold code lists for all published electronic medical record studies, irrespective of code type (e.g. Read, ICD9-10, SNOMED) and database (CPRD, QResearch, THIN etc.). Once deposited, code lists will be freely available, with no login needed to download codes.
The Fungal Genetics Stock Center has preserved and distributed strains of genetically characterized fungi since 1960. The collection includes over 20,000 accessioned strains of classical and genetically engineered mutants of key model, human, and plant pathogenic fungi. These materials are distributed as living stocks to researchers around the world.
SureChemOpen is a free resource for researchers who want to search, view and link to patent chemistry. For end-users with professional search and analysis needs, we offer the fully-featured SureChemPro. For enterprise users, SureChemDirect provides all our patent chemistry via an API or a data feed. The SureChem family of products is built upon the Claims® Global Patent Database, a comprehensive international patent collection provided by IFI Claims®. This state-of-the-art database is normalized and curated to provide unprecedented consistency and quality.
The Paleobiology Database (PaleoBioDB) is a non-governmental, non-profit public resource for paleontological data. It has been organized and operated by a multi-disciplinary, multi-institutional, international group of paleobiological researchers. Its purpose is to provide global, collection-based occurrence and taxonomic data for organisms of all geological ages, as well as data services to allow easy access to data for independent development of analytical tools, visualization software, and applications of all types. The Database’s broader goal is to encourage and enable data-driven collaborative efforts that address large-scale paleobiological questions.
dbEST is a division of GenBank that contains sequence data and other information on "single-pass" cDNA sequences, or "Expressed Sequence Tags", from a number of organisms. Expressed Sequence Tags (ESTs) are short (usually about 300-500 bp), single-pass sequence reads from mRNA (cDNA). Typically they are produced in large batches. They represent a snapshot of genes expressed in a given tissue and/or at a given developmental stage. They are tags (some coding, others not) of expression for a given cDNA library. Most EST projects develop large numbers of sequences. These are commonly submitted to GenBank and dbEST as batches of dozens to thousands of entries, with a great deal of redundancy in the citation, submitter and library information. To improve the efficiency of the submission process for this type of data, we have designed a special streamlined submission process and data format. dbEST also includes sequences that are longer than the traditional ESTs, or are produced as single sequences or in small batches. Among these sequences are products of differential display experiments and RACE experiments. What these sequences have in common with traditional ESTs, regardless of length, quality, or quantity, is that little information can be annotated in the record. If a sequence is later characterized and annotated with biological features such as a coding region, 5'UTR, or 3'UTR, it should be submitted through the regular GenBank submissions procedure (via BankIt or Sequin), even if part of the sequence is already in dbEST. dbEST is reserved for single-pass reads. Assembled sequences should not be submitted to dbEST. GenBank will accept assembled EST submissions for the forthcoming TSA (Transcriptome Shotgun Assembly) division. The individual reads which make up the assembly should be submitted to dbEST, the Trace Archive or the Short Read Archive (SRA) prior to the submission of the assemblies.
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10¹⁰ particles to follow the dark matter distribution in a cubic region 500 h⁻¹ Mpc on a side, and has a spatial resolution of 5 h⁻¹ kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10⁷ galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and halos, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with a volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
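As a rough illustration of the kind of SQL query a user might test against such a galaxy database, the sketch below builds a query string in Python. The table name (`DeLucia2006a`) and column names (`galaxyId`, `x`, `y`, `z`, `stellarMass`, `snapnum`) are illustrative assumptions, not the database's confirmed schema; consult the Millennium database documentation for the actual tables.

```python
# Sketch: composing an SQL query for a Millennium-style galaxy database.
# All table and column names here are hypothetical placeholders.

def build_galaxy_query(table, min_mass, snapnum, limit=10):
    """Return an SQL string selecting the most massive galaxies
    at a single simulation snapshot, ordered by stellar mass."""
    return (
        f"SELECT TOP {limit} galaxyId, x, y, z, stellarMass "
        f"FROM {table} "
        f"WHERE snapnum = {snapnum} AND stellarMass > {min_mass} "
        f"ORDER BY stellarMass DESC"
    )

query = build_galaxy_query("DeLucia2006a", min_mass=1.0, snapnum=63)
print(query)
```

A query like this would typically be pasted into the database's web query form or submitted through its HTTP interface once an account has been granted.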