
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) can be used to group terms and control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
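For example, a hypothetical query such as "solubility data"~2 +ocean* -model would match records in which the words of the phrase "solubility data" appear within two positions of each other, require a term beginning with "ocean", and exclude any record containing "model".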
Found 57 result(s)
The Lunar Orbiter Photographic Atlas of the Moon by Bowker and Hughes (NASA SP-206) is considered the definitive reference manual to the global photographic coverage of the Moon. The images contained within the atlas are excellent for studying lunar morphology because they were obtained at low to moderate Sun angles. The digital Lunar Orbiter Atlas of the Moon is a reproduction of the 675 plates contained in Bowker and Hughes. The digital archive, however, offers many improvements upon its original hardbound predecessor. Multiple search capabilities were added to the database to expedite locating images and features of interest. For accuracy and usability, surface feature information has been updated and improved. Lastly, to aid in feature identification, a companion image containing feature annotation has been included. The symbols on the annotated overlays, however, should only be used as locators and not for precise measurements. More detailed information about the digital archive process can be read in abstracts presented at the 30th and 31st Lunar and Planetary Science Conferences.
The National Archives and Records Administration (NARA) is the nation's record keeper. Of all documents and materials created in the course of business conducted by the United States Federal government, only 1%-3% are so important for legal or historical reasons that they are kept by us forever. Those valuable records are preserved and are available to you, whether you want to see if they contain clues about your family’s history, need to prove a veteran’s military service, or are researching an historical topic that interests you.
PubChem comprises three databases. 1. PubChem BioAssay: The PubChem BioAssay Database contains bioactivity screens of chemical substances described in PubChem Substance. It provides searchable descriptions of each bioassay, including descriptions of the conditions and readouts specific to that screening procedure. 2. PubChem Compound: The PubChem Compound Database contains validated chemical depiction information provided to describe substances in PubChem Substance. Structures stored within PubChem Compound are pre-clustered and cross-referenced by identity and similarity groups. 3. PubChem Substance: The PubChem Substance Database contains descriptions of samples, from a variety of sources, and links to biological screening results that are available in PubChem BioAssay. If the chemical contents of a sample are known, the description includes links to PubChem Compound.
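As a rough illustration of how these cross-linked records can be retrieved programmatically, the sketch below uses PubChem's PUG REST interface; the compound name and the requested properties are illustrative choices, not drawn from the description above.

```python
import requests

# Minimal sketch using PubChem's PUG REST interface: resolve a compound name
# to PubChem Compound identifiers (CIDs), then request two computed properties.
# The name "aspirin" and the selected properties are examples only.
BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

cid_resp = requests.get(f"{BASE}/compound/name/aspirin/cids/JSON", timeout=30)
cid_resp.raise_for_status()
cid = cid_resp.json()["IdentifierList"]["CID"][0]

prop_resp = requests.get(
    f"{BASE}/compound/cid/{cid}/property/MolecularFormula,MolecularWeight/JSON",
    timeout=30,
)
prop_resp.raise_for_status()
print(prop_resp.json()["PropertyTable"]["Properties"][0])
```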
Note: this repository is no longer available and this record is outdated. The ONS challenge contains open solubility data: experiments with raw data from different scientists and institutions. It is part of the Open Notebook Science wiki community, ideally suited for community-wide collaborative research projects involving mathematical modeling and computer simulation work, as it allows researchers to document model development in a step-by-step fashion, then link model predictions to experiments that test the model, and in turn use feedback from experiments to evolve the model. By making our laboratory notebooks public, the evolutionary process of a model can be followed in its totality by the interested reader. Researchers from laboratories around the world can now follow the progress of our research day-to-day, borrow models at various stages of development, comment or advise on model developments, discuss experiments, ask questions, provide feedback, or otherwise contribute to the progress of science in any manner possible.
GPO’s govinfo system is an ISO 16363 certified Trustworthy Digital Repository that ensures free online access to current and historical information from all three branches of the United States Federal Government today and into the future.
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR’s use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening access to these data.
Note: the demand for high-value environmental data and information has dramatically increased in recent years. To improve our ability to meet that demand, NOAA’s former three data centers (the National Climatic Data Center, the National Geophysical Data Center, and the National Oceanographic Data Center, which includes the National Coastal Data Development Center) have merged into the National Centers for Environmental Information (NCEI). NOAA's National Climatic Data Center (NCDC) is responsible for preserving, monitoring, assessing, and providing public access to the Nation's treasure of climate and historical weather data and information.
The Roper Center has made available its entire collection of primary exit polls. Primary exit poll datasets include the standard demographic makeup of interviewees and questions pertinent to the issues of each state.
ModelDB is a curated database of published models in the broad domain of computational neuroscience. It addresses the need for access to such models in order to evaluate their validity and extend their use. It can handle computational models expressed in any textual form, including procedural or declarative languages (e.g. C++, XML dialects) and source code written for any simulation environment. The model source code doesn't even have to reside inside ModelDB; it just has to be available from some publicly accessible online repository or WWW site.
As with most biomedical databases, the first step is to identify relevant data from the research community. The Monarch Initiative is focused primarily on phenotype-related resources. We bring in data associated with those phenotypes so that our users can begin to make connections among other biological entities of interest. We import data from a variety of data sources. With many resources integrated into a single database, we can join across the various data sources to produce integrated views. We have started with the big players including ClinVar and OMIM, but are equally interested in boutique databases. You can learn more about the sources of data that populate our system from our data sources page https://monarchinitiative.org/about/sources.
The WashU Research Data repository accepts any publishable research data set, including textual, tabular, geospatial, imagery, computer code, or 3D data files, from researchers affiliated with Washington University in St. Louis. Datasets include metadata and are curated and assigned a DOI to align with FAIR data principles.
Open access repository for digital research created at the University of Minnesota. U of M researchers may deposit data to the Libraries’ Data Repository for U of M (DRUM), subject to our collection policies. All data is publicly accessible. Data sets submitted to the Data Repository are reviewed by data curation staff to ensure that data is in a format and structure that best facilitates long-term access, discovery, and reuse.
The Census of Agriculture provides extensive data about U.S. agriculture at the national, state, and county levels. The census is conducted every 5 years, and it gathers uniform, detailed data about U.S. farms and ranches and their operators. Data from recent censuses are available in different formats, but historical censuses (back to 1840) are available in PDF format.
MINDS@UW is designed to gather, distribute, and preserve digital materials related to the University of Wisconsin's research and instructional mission. Content, which is deposited directly by UW faculty and staff, may include research papers and reports, pre-prints and post-prints, datasets and other primary research materials, learning objects, theses, student projects, conference papers and presentations, and other born-digital or digitized research and instructional materials.
Launched in 2000, WormBase is an international consortium of biologists and computer scientists dedicated to providing the research community with accurate, current, accessible information concerning the genetics, genomics and biology of C. elegans and some related nematodes. In addition to their curation work, all sites have ongoing programs in bioinformatics research to develop the next generations of WormBase structure, content, and accessibility.
Gemma is a database for the meta-analysis, re-use and sharing of genomics data, currently primarily targeted at the analysis of gene expression profiles. Gemma contains data from thousands of public studies, referencing thousands of published papers. Users can search, access and visualize co-expression and differential expression results.
IEEE DataPort™ is a universally accessible online data repository created, owned, and supported by IEEE, the world’s largest technical professional organization. It enables all researchers and data owners to upload their datasets without cost. IEEE DataPort makes data available in three ways: standard datasets, open access datasets, and data competition datasets. By default, all "standard" datasets that are uploaded are accessible to paid IEEE DataPort subscribers. Data owners have the option to pay a fee to make their dataset “open access”, so it is available to all IEEE DataPort users (no subscription required). The third option is to host a "data competition" and make a dataset accessible for free for a specific duration, with instructions for the data competition and how to participate. IEEE DataPort provides workflows for uploading, searching, and accessing data, and for initiating or participating in data competitions. All datasets are stored on Amazon AWS S3, and each dataset uploaded by an individual can be up to 2TB in size. Institutional subscriptions to the platform are available, making it easy for all members of a given institution to utilize the platform and upload datasets.
The NCBI Short Genetic Variations database, commonly known as dbSNP, catalogs short variations in nucleotide sequences from a wide range of organisms. These variations include single nucleotide variations, short nucleotide insertions and deletions, short tandem repeats and microsatellites. Short genetic variations may be common, thus representing true polymorphisms, or they may be rare. Some rare human entries have additional information associated with them, including disease associations, genotype information and allele origin, as some variations are somatic rather than germline events. Note: NCBI will phase out support for non-human organism data in dbSNP and dbVar beginning on September 1, 2017.
Note: this repository is no longer available. Although the web pages have been retired, you can still download the final UniGene builds as static content from the FTP site https://ftp.ncbi.nlm.nih.gov/repository/UniGene/. You can also match UniGene cluster numbers to Gene records by searching Gene with UniGene cluster numbers. For best results, restrict to the “UniGene Cluster Number” field rather than all fields in Gene. For example, a search with Mm.2108[UniGene Cluster Number] finds the mouse transthyretin Gene record (Ttr). You can use the advanced search page https://www.ncbi.nlm.nih.gov/gene/advanced to help construct these searches. Keep in mind that the Gene record contains selected Reference Sequences and GenBank mRNA sequences rather than the larger set of expressed sequences in the UniGene cluster.
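A minimal sketch of the search described above, assuming the NCBI E-utilities esearch endpoint and assuming that the "UniGene Cluster Number" field label is accepted verbatim as a field restriction:

```python
import requests

# Hedged sketch: query the Gene database for the example UniGene cluster Mm.2108,
# restricted to the "UniGene Cluster Number" field as suggested above. Whether
# esearch accepts this exact field label is an assumption of this example.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

resp = requests.get(
    ESEARCH,
    params={
        "db": "gene",
        "term": "Mm.2108[UniGene Cluster Number]",
        "retmode": "json",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["esearchresult"]["idlist"])  # should include the mouse Ttr Gene ID
```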
The HomoloGene database provides a system for the automated detection of homologs among annotated genes of genomes across multiple species. These homologs are fully documented and organized by homology group. HomoloGene processing compares protein sequences from the input organisms to identify homologs, which are then mapped back to their corresponding DNA sequences.
The central mission of the NACJD is to facilitate and encourage research in the criminal justice field by sharing data resources. Specific goals include providing computer-readable data for the quantitative study of crime and the criminal justice system through the development of a central data archive, supplying technical assistance in the selection of data collections and computer hardware and software for data analysis, and providing training in quantitative methods of social science research to facilitate secondary analysis of criminal justice data.
The Linguistic Data Consortium (LDC) is an open consortium of universities, libraries, corporations and government research laboratories. It was formed in 1992 to address the critical data shortage then facing language technology research and development. Initially, LDC's primary role was as a repository and distribution point for language resources. Since that time, and with the help of its members, LDC has grown into an organization that creates and distributes a wide array of language resources. LDC also supports sponsored research programs and language-based technology evaluations by providing resources and contributing organizational expertise. LDC is hosted by the University of Pennsylvania and is a center within the University’s School of Arts and Sciences.