Filter categories: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

Search query syntax (illustrated after this list):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
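
These operators follow the style of a Lucene-like simple query string. The short Python snippet below prints a few illustrative queries; the keywords themselves are made up and only the operator syntax matters:

    # Illustrative search queries combining the operators above.
    examples = {
        'genom*': 'wildcard: matches genome, genomics, ...',
        '"open data"': 'exact phrase',
        'climate + ocean': 'AND (also the default between terms)',
        'climate | ocean': 'OR',
        'climate - model': 'NOT: excludes results mentioning "model"',
        '(climate | ocean) + data': 'parentheses set precedence',
        'genomics~1': 'fuzzy term match within edit distance 1',
        '"marine ecosystem"~2': 'phrase match allowing a slop of 2',
    }
    for query, meaning in examples.items():
        print(f'{query:26} {meaning}')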
Found 24 result(s)
Note (as stated 2017-06-27): the website http://researchcompendia.org is no longer available; the repository software is archived on GitHub at https://github.com/researchcompendia. The ResearchCompendia platform is an attempt to use the web to enhance the reproducibility and verifiability, and thus the reliability, of scientific research. We provide the tools to publish the "actual scholarship" by hosting data, code, and methods in a form that is accessible, trackable, and persistent. Some of our short-term goals include: to expand and enhance the platform, including adding executability for a greater variety of coding languages and frameworks and enhancing output presentation; to expand usership and test the ResearchCompendia model in a number of additional fields, including computational mathematics, statistics, and biostatistics; and to pilot integration with existing scholarly platforms, enabling researchers to discover relevant Research Compendia websites when looking at online articles, code repositories, or data archives.
Note: the domain intrepidbio.com has expired. Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but cannot spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflow, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
The Federal Interagency Traumatic Brain Injury Research (FITBIR) informatics system was developed to share data across the entire TBI research field and to facilitate collaboration between laboratories, as well as interconnectivity with other informatics platforms. Sharing data, methodologies, and associated tools, rather than summaries or interpretations of this information, can accelerate research progress by allowing re-analysis of data, as well as re-aggregation, integration, and rigorous comparison with other data, tools, and methods. This community-wide sharing requires common data definitions and standards, as well as comprehensive and coherent informatics approaches.
Project Data Sphere, LLC, operates a free digital library-laboratory where the research community can broadly share, integrate and analyze historical, de-identified, patient-level data from academic and industry cancer Phase II-III clinical trials. These patient-level datasets are available through the Project Data Sphere platform to researchers affiliated with life science companies, hospitals and institutions, as well as independent researchers, at no cost and without requiring a research proposal.
WorldData.AI comes with a built-in workspace – the next-generation hyper-computing platform powered by a library of 3.3 billion curated external trends. WorldData.AI allows you to save your models in its “My Models Trained” section. You can make your models public and share them on social media with interesting images, model features, summary statistics, and feature comparisons. Empower others to leverage your models. For example, if you have discovered a previously unknown impact of interest rates on new-housing demand, you may want to share it through “My Models Trained.” Upload your data and combine it with external trends to build, train, and deploy predictive models with one click! WorldData.AI inspects your raw data, applies feature processors, chooses the best set of algorithms, trains and tunes multiple models, and then ranks model performance.
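
The workflow described here (inspect raw data, apply feature processors, train and tune several candidate models, rank performance) is essentially automated model selection. Below is a minimal sketch of that idea with scikit-learn on synthetic data; WorldData.AI's actual pipeline is proprietary and not reproduced here:

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for "your data combined with external trends".
    X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)

    # Candidate models standing in for "the best set of algorithms".
    candidates = {
        "linear": LinearRegression(),
        "random_forest": RandomForestRegressor(random_state=0),
        "gradient_boosting": GradientBoostingRegressor(random_state=0),
    }

    # Score each candidate, then rank by mean cross-validated R^2,
    # mirroring the "trains and tunes multiple models, then ranks" step.
    scores = {name: cross_val_score(m, X, y, cv=5).mean()
              for name, m in candidates.items()}
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name:18} R^2 = {score:.3f}")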
Surface air temperature change is a primary measure of global climate change. The GISTEMP project started in the late 1970s to provide an estimate of the changing global surface air temperature which could be compared with the estimates obtained from climate models simulating the effect of changes in atmospheric carbon dioxide, volcanic aerosols, and solar irradiance. The continuing analysis updates global temperature change from the late 1800s to the present.
Note: this site is going away on April 1, 2021; general access to the site has been disabled, and community users will see an error upon login. Socrata's cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
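
Datasets published on Socrata platforms are conventionally readable through the Socrata Open Data API (SODA), which serves rows as JSON from a per-dataset endpoint. A minimal sketch; the domain and dataset identifier below are placeholders, not a real dataset:

    import requests

    # SODA endpoint pattern: https://<domain>/resource/<dataset-id>.json
    # Both values are placeholders; substitute a live Socrata-hosted dataset.
    url = "https://data.example.gov/resource/abcd-1234.json"

    rows = requests.get(url, params={"$limit": 10}, timeout=30).json()
    for row in rows:  # each row is a dict keyed by column name
        print(row)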
The NCI's Genomic Data Commons (GDC) provides the cancer research community with a unified data repository that enables data sharing across cancer genomic studies in support of precision medicine. The GDC obtains validated datasets from NCI programs whose tissue-collection strategies couple quantity with high quality. Tools are provided to guide data submissions by researchers and institutions.
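
The GDC also exposes a public REST API at https://api.gdc.cancer.gov for programmatic queries. A minimal sketch listing a few projects; the response fields are taken from the GDC API documentation and should be verified there:

    import requests

    # Public endpoint; no authentication is required for open-access metadata.
    resp = requests.get("https://api.gdc.cancer.gov/projects",
                        params={"size": 5, "format": "json"}, timeout=30)
    resp.raise_for_status()

    # Hits are returned under data.hits; .get() guards against absent fields.
    for project in resp.json()["data"]["hits"]:
        print(project.get("project_id"), "-", project.get("name"))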
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
Protectedplanet.net combines crowdsourcing and authoritative sources to enrich and provide data for protected areas around the world. Data are provided in partnership with the World Database on Protected Areas (WDPA). The data include the location, designation type, status year, and size of the protected areas, as well as species information.
Originally named the Radiation Belt Storm Probes (RBSP), the mission was re-named the Van Allen Probes, following successful launch and commissioning. For simplicity and continuity, the RBSP short-form has been retained for existing documentation, file naming, and data product identification purposes. The RBSPICE investigation including the RBSPICE Instrument SOC maintains compliance with requirements levied in all applicable mission control documents.
The Fungal Genetics Stock Center has preserved and distributed strains of genetically characterized fungi since 1960. The collection includes over 20,000 accessioned strains of classical and genetically engineered mutants of key model, human, and plant pathogenic fungi. These materials are distributed as living stocks to researchers around the world.
The Google Code Archive contains the data found on the Google Code Project Hosting Service, which was turned down in early 2016. The archive contains over 1.4 million projects, 1.5 million downloads, and 12.6 million issues. Google Project Hosting powered Project Hosting on Google Code and Eclipse Labs. It provided a fast, reliable, and easy open source hosting service with the following features: instant project creation on any topic; Git, Mercurial, and Subversion code hosting with 2 gigabytes of storage space, plus download hosting with 2 gigabytes of storage space; integrated source code browsing and code review tools to make it easy to view code, review contributions, and maintain a high-quality code base; an issue tracker and project wiki that are simple yet flexible and powerful, and can adapt to any development process; and starring and update streams that make it easy to keep track of projects and developers that you care about.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access to, and sharing of, datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
In 2003, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) at NIH established Data, Biosample, and Genetic Repositories to increase the impact of current and previously funded NIDDK studies by making their data and biospecimens available to the broader scientific community. These Repositories enable scientists not involved in the original study to test new hypotheses without any new data or biospecimen collection, and they provide the opportunity to pool data across several studies to increase the power of statistical analyses. In addition, most NIDDK-funded studies are collecting genetic biospecimens and carrying out high-throughput genotyping, making it possible for other scientists to use Repository resources to match genotypes to phenotypes and to perform informative genetic analyses.
MEASURE DHS is advancing global understanding of health and population trends in developing countries through nationally-representative household surveys that provide data for a wide range of monitoring and impact evaluation indicators in the areas of population, health, HIV, and nutrition. The database collects, analyzes, and disseminates data from more than 300 surveys in over 90 countries. MEASURE DHS distributes, at no cost, survey data files for legitimate academic research.
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.
FRED is an online database consisting of hundreds of thousands of economic data time series from scores of national, international, public, and private sources. FRED, created and maintained by the Research Department at the Federal Reserve Bank of St. Louis, goes far beyond simply providing data: It combines data with a powerful mix of tools that help the user understand, interact with, display, and disseminate the data. In essence, FRED helps users tell their data stories.
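
FRED series are also retrievable programmatically through the FRED API, given a free API key from the St. Louis Fed. A minimal sketch fetching recent observations for one series; the series ID is just an example:

    import requests

    API_KEY = "your-api-key"  # free key, issued with a FRED account

    resp = requests.get(
        "https://api.stlouisfed.org/fred/series/observations",
        params={"series_id": "UNRATE",   # U.S. civilian unemployment rate
                "api_key": API_KEY,
                "file_type": "json"},
        timeout=30,
    )
    resp.raise_for_status()

    # Observations arrive as a list of {"date": ..., "value": ...} records.
    for obs in resp.json()["observations"][-5:]:
        print(obs["date"], obs["value"])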
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
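
Because many registry buckets are world-readable, they can be listed with an unsigned (anonymous) S3 client. A minimal sketch with boto3; the bucket name is one example from the registry, and any public bucket works the same way:

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Unsigned client: no AWS credentials needed for public buckets.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # Example public bucket (NOAA GHCN daily climate data on the registry).
    resp = s3.list_objects_v2(Bucket="noaa-ghcn-pds", MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])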
The Open Exoplanet Catalogue is a catalogue of all discovered extra-solar planets. It is a new kind of astronomical database. It is decentralized and completely open. We welcome contributions and corrections from both professional astronomers and the general public.
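
The catalogue itself is a set of per-system XML files maintained on GitHub, with a gzipped snapshot of the whole catalogue published alongside it. A minimal parsing sketch; the snapshot URL and element names follow the project's documentation and are assumptions here:

    import gzip
    import io
    import urllib.request
    import xml.etree.ElementTree as ET

    # Gzipped snapshot of the full catalogue (per the project's README).
    URL = ("https://github.com/OpenExoplanetCatalogue/"
           "oec_gzip/raw/master/systems.xml.gz")

    with urllib.request.urlopen(URL) as f:
        tree = ET.parse(gzip.GzipFile(fileobj=io.BytesIO(f.read())))

    # Each <system> element holds stars and planets; print a few planets.
    for planet in tree.findall(".//planet")[:5]:
        print(planet.findtext("name"), planet.findtext("mass"))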
CottonGen is a new cotton community genomics, genetics and breeding database being developed to enable basic, translational and applied research in cotton. It is being built using the open-source Tripal database infrastructure. CottonGen consolidates and expands the data from CottonDB and the Cotton Marker Database, providing enhanced tools for easy querying, visualizing and downloading research data.
Global Ocean Ecosystem Dynamics (GLOBEC) is the International Geosphere-Biosphere Programme (IGBP) core project responsible for understanding how global change will affect the abundance, diversity, and productivity of marine populations. The programme was initiated by SCOR and the IOC of UNESCO in 1991 to understand how global change will affect marine populations, which comprise a major component of oceanic ecosystems. The aim of GLOBEC is to advance our understanding of the structure and functioning of the global ocean ecosystem, its major subsystems, and its response to physical forcing, so that a capability can be developed to forecast the responses of the marine ecosystem to global change. The U.S. GLOBEC Program includes the Georges Bank / NW Atlantic Program, the Northeast Pacific Program, and the Southern Ocean Program.
PathCards is an integrated database of human biological pathways and their annotations. Human pathways were clustered into SuperPaths based on gene content similarity. Each PathCard provides information on one SuperPath which represents one or more human pathways.
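
Clustering pathways into SuperPaths by gene-content similarity can be illustrated with a small Jaccard-overlap sketch; the pathways, gene sets, and threshold below are invented, and PathCards' actual algorithm is not reproduced here:

    from itertools import combinations

    # Toy pathway -> gene-set map (hypothetical contents).
    pathways = {
        "P1": {"TP53", "MDM2", "CDKN1A"},
        "P2": {"TP53", "MDM2", "ATM"},
        "P3": {"EGFR", "KRAS", "BRAF"},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    # Union-find: merge pathways whose gene content overlaps enough.
    THRESHOLD = 0.3
    parent = {p: p for p in pathways}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    for p, q in combinations(pathways, 2):
        if jaccard(pathways[p], pathways[q]) >= THRESHOLD:
            parent[find(p)] = find(q)

    clusters = {}
    for p in pathways:
        clusters.setdefault(find(p), []).append(p)
    print(list(clusters.values()))  # -> [['P1', 'P2'], ['P3']]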