
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
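For illustration, two hypothetical queries that combine these operators (the search terms are made up for the example):
  • "data archive" +(geolog* | geochem*) -biology finds records containing the quoted phrase together with either wildcard term, excluding records that contain "biology"
  • climate~1 matches terms within an edit distance of 1 of "climate" (for example, "climates")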
Found 33 result(s)
<<<!!!<<< As stated on 2017-06-27, the website http://researchcompendia.org is no longer available; the repository software is archived on GitHub at https://github.com/researchcompendia >>>!!!>>> The ResearchCompendia platform is an attempt to use the web to enhance the reproducibility and verifiability, and thus the reliability, of scientific research. We provide the tools to publish the "actual scholarship" by hosting data, code, and methods in a form that is accessible, trackable, and persistent. Some of our short-term goals include: to expand and enhance the platform, including adding executability for a greater variety of coding languages and frameworks and enhancing output presentation; to expand usership and to test the ResearchCompendia model in a number of additional fields, including computational mathematics, statistics, and biostatistics; and to pilot integration with existing scholarly platforms, enabling researchers to discover relevant Research Compendia websites when looking at online articles, code repositories, or data archives.
NIAID’s TB Portals Program is a multi-national collaboration for TB data sharing and analysis to advance TB research. Built by a global consortium of clinicians, scientists, and IT professionals from 40 sites in 16 countries throughout Eastern Europe, Asia, and sub-Saharan Africa, the TB Portals Program is a web-based, open-access repository of multi-domain TB data and tools for its analysis. Researchers can find linked socioeconomic/geographic, clinical, laboratory, radiological, and genomic data from over 7,500 international published TB patient cases, with an emphasis on drug-resistant tuberculosis.
Welcome to the largest bibliographic database dedicated to Economics and available freely on the Internet. This site is part of a large volunteer effort to enhance the free dissemination of research in Economics, RePEc, which includes bibliographic metadata from over 1,800 participating archives, including all the major publishers and research outlets. IDEAS is just one of several services that use RePEc data. Authors are invited to register with RePEc to create an online profile. Then, anyone finding some of your research here can find your latest contact details and a listing of your other research. You will also receive a monthly mailing about the popularity of your works, your ranking, and newly found citations. In addition, IDEAS provides software and publicly accessible data from the Federal Reserve Bank.
RAVE (RAdial Velocity Experiment) is a multi-fiber spectroscopic astronomical survey of stars in the Milky Way using the 1.2-m UK Schmidt Telescope of the Anglo-Australian Observatory (AAO). The RAVE collaboration consists of researchers from over 20 institutions around the world and is coordinated by the Leibniz-Institut für Astrophysik Potsdam. As a southern hemisphere survey covering 20,000 square degrees of the sky, RAVE's primary aim is to derive the radial velocity of stars from the observed spectra. Additional information is also derived such as effective temperature, surface gravity, metallicity, photometric parallax and elemental abundance data for the stars. The survey represents a giant leap forward in our understanding of our own Milky Way galaxy; with RAVE's vast stellar kinematic database the structure, formation and evolution of our Galaxy can be studied.
The Index to Marine and Lacustrine Geological Samples is a tool to help scientists locate and obtain geologic material from sea floor and lakebed cores, grabs, and dredges archived by participating institutions around the world. Data and images related to the samples are prepared and contributed by the institutions for access via the IMLGS and long-term archive at NGDC. Before proposing research on any sample, please contact the curator for sample condition and availability. The IMLGS is guided by a consortium of curators and has been maintained on the group's behalf by NGDC since 1977.
MGI is the international database resource for the laboratory mouse, providing integrated genetic, genomic, and biological data to facilitate the study of human health and disease. The projects contributing to this resource are: Mouse Genome Database (MGD) Project, Gene Expression Database (GXD) Project, Mouse Tumor Biology (MTB) Database Project, Gene Ontology (GO) Project at MGI, MouseMine Project, MouseCyc Project at MGI
The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc).
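As a hedged illustration of programmatic access (nothing in the entry above specifies an API; the REST endpoint pattern and the example accession are assumptions), a minimal Python sketch for retrieving one UniProtKB record in FASTA format:

    # Sketch: fetch one UniProtKB entry as FASTA over HTTP.
    # The rest.uniprot.org URL pattern and the accession P04637 are illustrative assumptions.
    from urllib.request import urlopen

    accession = "P04637"  # example accession; substitute any UniProtKB accession
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.fasta"

    with urlopen(url) as response:
        fasta = response.read().decode("utf-8")

    print(fasta.splitlines()[0])  # print the FASTA header line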
NCEP delivers national and global weather, water, climate and space weather guidance, forecasts, warnings and analyses to its partners and external user communities. The National Centers for Environmental Prediction (NCEP), an arm of NOAA's National Weather Service (NWS), comprises nine distinct Centers and the Office of the Director, which provide a wide variety of national and international weather guidance products to National Weather Service field offices, government agencies, emergency managers, private sector meteorologists, and meteorological organizations and societies throughout the world. NCEP is a critical national resource in national and global weather prediction. NCEP is the starting point for nearly all weather forecasts in the United States. The Centers are: Aviation Weather Center (AWC), Climate Prediction Center (CPC), Environmental Modeling Center (EMC), NCEP Central Operations (NCO), National Hurricane Center (NHC), Ocean Prediction Center (OPC), Storm Prediction Center (SPC), Space Weather Prediction Center (SWPC), Weather Prediction Center (WPC)
The Southern California Earthquake Data Center (SCEDC) operates at the Seismological Laboratory at Caltech and is the primary archive of seismological data for southern California. The 1932-to-present Caltech/USGS catalog maintained by the SCEDC is the most complete archive of seismic data for any region in the United States. Our mission is to maintain an easily accessible, well-organized, high-quality, searchable archive for research in seismology and earthquake engineering.
TERN provides open data, research and management tools, data infrastructure and site-based research equipment. The open-access ecosystem data is provided through the TERN Data Discovery Portal; see https://www.re3data.org/repository/r3d100012013
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
VertNet is an NSF-funded collaborative project that makes biodiversity data free and available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Yet, VertNet is still the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
The Maize Genetics and Genomics Database (MaizeGDB) focuses on collecting data related to the crop plant and model organism Zea mays. The project's goals are to synthesize, display, and provide access to maize genomics and genetics data, prioritizing mutant and phenotype data and tools, structural and genetic map sets, and gene models. MaizeGDB also aims to make the Maize Newsletter available and to provide support services to the community of maize researchers. MaizeGDB is working with the Schnable lab, the Panzea project, the Genome Reference Consortium, and the iPlant Collaborative to create a plan for archiving, disseminating, visualizing, and analyzing diversity data. MaizeGDB is short for Maize Genetics/Genomics Database. It is a USDA/ARS-funded project to integrate the data found in MaizeDB and ZmDB into a single schema, develop an effective interface to access these data, and develop additional tools to make data analysis easier. Our long-term goal is a true next-generation online maize database.
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a publicly accessible earth science data repository created to curate, publicly serve (publish), and archive digital data and information from biological, chemical and biogeochemical research conducted in coastal, marine, Great Lakes and laboratory environments. The BCO-DMO repository works closely with investigators funded through the NSF OCE Division’s Biological and Chemical Sections and the Division of Polar Programs Antarctic Organisms & Ecosystems. The office provides services that span the full data life cycle, from data management planning support and DOI creation to archiving with appropriate national facilities.
The Earth System Grid Federation (ESGF) is an international collaboration with a current focus on serving the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project (CMIP) and supporting climate and environmental science in general. Data is searchable and available for download at the Federated ESGF-CoG Nodes: https://esgf.llnl.gov/nodes.html
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data. It hosts data for projects, archives, researchers, journals, organizations, and institutions.
TreeGenes is a genomic, phenotypic, and environmental data resource for forest tree species. The TreeGenes database and Dendrome project provide custom informatics tools to manage the flood of information. The database contains several curated modules that support the storage of data and provide the foundation for web-based searches and visualization tools. GMOD GUI tools such as CMAP for genetic maps and GBrowse for genome and transcriptome assemblies are implemented here. A sample tracking system, known as the Forest Tree Genetic Stock Center, sits at the forefront of most large-scale projects. Barcode identifiers assigned to the trees during sample collection are maintained in the database to identify an individual through DNA extraction, resequencing, genotyping and phenotyping. DiversiTree, a user-friendly desktop-style interface, queries the TreeGenes database and is designed for bulk retrieval of resequencing data. CartograTree combines geo-referenced individuals with relevant ecological and trait databases in a user-friendly map-based interface. ---- The Conifer Genome Network (CGN) is a virtual nexus for researchers working in conifer genomics. The CGN web site is maintained by the Dendrome Project at the University of California, Davis.
The Protein Data Bank (PDB) is an archive of experimentally determined three-dimensional structures of biological macromolecules that serves a global community of researchers, educators, and students. The data contained in the archive include atomic coordinates, crystallographic structure factors and NMR experimental data. Aside from coordinates, each deposition also includes the names of molecules, primary and secondary structure information, sequence database references, where appropriate, and ligand and biological assembly information, details about data collection and structure solution, and bibliographic citations. The Worldwide Protein Data Bank (wwPDB) consists of organizations that act as deposition, data processing and distribution centers for PDB data. Members are: RCSB PDB (USA), PDBe (Europe) and PDBj (Japan), and BMRB (USA). The wwPDB's mission is to maintain a single PDB archive of macromolecular structural data that is freely and publicly available to the global community.
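For illustration only (the entry above does not describe a download interface; the files.rcsb.org URL pattern and the entry ID are assumptions), a minimal Python sketch for fetching one structure file from the archive:

    # Sketch: download a single PDB entry in the legacy PDB file format.
    # The URL pattern and the example ID 4HHB are illustrative assumptions.
    from urllib.request import urlretrieve

    pdb_id = "4HHB"  # example entry ID; substitute any released PDB ID
    urlretrieve(f"https://files.rcsb.org/download/{pdb_id}.pdb", f"{pdb_id}.pdb")
    print(f"Saved {pdb_id}.pdb")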
HITRAN is an acronym for HIgh-resolution TRANsmission molecular absorption database. The SAO's HITRAN compilation is used for predicting and simulating the transmission and emission of light in atmospheres. It is the world-standard database in molecular spectroscopy, and the journal article describing it is the most cited reference in the geosciences. There are presently about 5,000 HITRAN users worldwide. Its associated database HITEMP (high-temperature spectroscopic absorption parameters) is accessible via the HITRAN website.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest senses. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly-distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
<<<!!!<<< USHIK was archived because some of the metadata are maintained by other sites and there is no need for duplication. The USHIK metadata registry was a neutral repository of metadata from an authoritative source, used to promote interoperability and reuse of data. The registry did not attempt to change the metadata content but rather provided a structured way to view data for the technical or casual user. For complete information, see: https://www.ahrq.gov/data/ushik.html >>>!!!>>>
IEDA2 is currently undergoing a website reconstruction and will be back soon. IEDA is a community-based facility that serves to support, sustain, and advance the geosciences by providing data services for observational geoscience data from the Ocean, Earth, and Polar Sciences. IEDA welcomes and encourages investigators to contribute their data to the IEDA collections so that the data can be discovered and reused by a diverse community now and in the future. The IEDA collections are: EarthChem, Geochron, the System for Earth Sample Registration (SESAR), the Marine Geoscience Data System (MGDS), and the USAP Data Center. A meta-search across the collections is provided on the portal through the IEDA Data Browser: http://www.iedadata.org/databrowser
PhysioBank is a large and growing archive of well-characterized digital recordings of physiologic signals and related data for use by the biomedical research community.
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
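As a sketch of the access pattern described above, assuming the boto3 SDK and a public bucket name that is not mentioned in the entry, anonymous (unsigned) listing of a Registry of Open Data bucket might look like this in Python:

    # Sketch: list a few objects in a public Open Data bucket without AWS credentials.
    # The bucket name "noaa-ghcn-pds" is an illustrative assumption, not taken from the entry above.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    response = s3.list_objects_v2(Bucket="noaa-ghcn-pds", MaxKeys=5)
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"])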