
  • * at the end of a keyword enables wildcard searches
  • " quotation marks enclose exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop (word proximity)
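To illustrate, here are some hypothetical query strings (not taken from the page itself) showing each operator in use:

```python
# Illustrative examples of the search syntax described above.
# The query strings are invented for demonstration purposes.
queries = {
    "wildcard":  "genom*",                    # matches genome, genomics, ...
    "phrase":    '"climate change"',          # exact phrase
    "and":       "ocean + biodiversity",      # both terms (the default)
    "or":        "neuroimaging | MRI",        # either term
    "not":       "data - software",           # exclude 'software'
    "grouping":  "(ocean | marine) + biodiversity",
    "fuzzy":     "colour~1",                  # edit distance 1 (also matches 'color')
    "proximity": '"data repository"~2',       # phrase words within a slop of 2
}

for name, q in queries.items():
    print(f"{name}: {q}")
```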
Found 38 result(s)
As stated on 2017-06-27, the website http://researchcompendia.org is no longer available; the repository software is archived on GitHub at https://github.com/researchcompendia. The ResearchCompendia platform is an attempt to use the web to enhance the reproducibility and verifiability (and thus the reliability) of scientific research. We provide the tools to publish the "actual scholarship" by hosting data, code, and methods in a form that is accessible, trackable, and persistent. Some of our short-term goals include: expanding and enhancing the platform, including adding executability for a greater variety of coding languages and frameworks, and enhancing output presentation; expanding the user base and testing the ResearchCompendia model in a number of additional fields, including computational mathematics, statistics, and biostatistics; and piloting integration with existing scholarly platforms, enabling researchers to discover relevant Research Compendia websites when looking at online articles, code repositories, or data archives.
“B-Clear” stands for Bloomington Clear, or Be Clear about what we’re up to. B-Clear is a one-stop place to build an ever-growing assembly of useful data. We’re organizing it as open, accessible data so everyone can see, use, and manipulate it.
HepSim is a public repository of Monte Carlo simulations for particle-collision experiments. It contains predictions from leading-order (LO) parton shower models, next-to-leading-order (NLO) calculations, and NLO calculations matched to parton showers. It also includes Monte Carlo events after fast ("parametric") and full (Geant4) detector simulations and event reconstruction.
OBIS strives to document the diversity, distribution, and abundance of ocean life. Created by the Census of Marine Life, OBIS is now part of the Intergovernmental Oceanographic Commission (IOC) of UNESCO, under its International Oceanographic Data and Information Exchange (IODE) programme.
The Purdue University Research Repository (PURR) provides a virtual research environment and a data publication and archiving platform for its campuses. PURR also supports the publication and online execution of software tools, which are assigned DataCite DOIs.
A Dataverse collection hosting follow-up observations of galaxy clusters identified in South Pole Telescope SZ surveys. This includes GMOS spectroscopy of low- to moderate-redshift galaxy clusters taken as part of NOAO Large Survey Program 11A-0034 (PI: Christopher Stubbs).
NCBI Datasets is a continually evolving platform designed to provide easy and intuitive access to NCBI’s sequence data and metadata. NCBI Datasets is part of the NIH Comparative Genomics Resource (CGR). CGR facilitates reliable comparative genomics analyses for all eukaryotic organisms through an NCBI Toolkit and community collaboration.
A data repository and social network where researchers can interact and collaborate; it also offers tutorials and datasets for learning data science. "data.world is designed for data and the people who work with data. From professional projects to open data, data.world helps you host and share your data, collaborate with your team, and capture context and conclusions as you work."
A place where researchers can publicly store and share unthresholded statistical maps, parcellations, and atlases produced by MRI and PET studies.
The Genomic Observatories Meta-Database (GEOME) is a web-based database that captures the who, what, where, and when of biological samples and associated genetic sequences. GEOME helps users with the following goals: ensure the metadata from your biological samples is findable, accessible, interoperable, and reusable; improve the quality of your data and comply with global data standards; and integrate with R, ease publication to NCBI's sequence read archive, and work with an associated LIMS. The initial use case for GEOME came from the Diversity of the Indo-Pacific Network (DIPnet) resource.
Open access to macromolecular X-ray diffraction and MicroED datasets. The repository complements the Worldwide Protein Data Bank. SBDG also hosts a reference collection of biomedical datasets contributed by members of SBGrid, Harvard, and pilot communities.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
The Henry A. Murray Research Archive is Harvard's endowed, permanent repository for quantitative and qualitative research data at the Institute for Quantitative Social Science, and provides physical storage for the entire IQSS Dataverse Network. Our collection comprises over 100 terabytes of data, audio, and video. We preserve in perpetuity all types of data of interest to the research community, including numerical, video, audio, interview notes, and other data. We accept data deposits through this website, which is powered by our Dataverse Network software.
FactGrid is a Wikibase instance designed to be used by historians with a focus on international projects. The database is hosted by the University of Erfurt and coordinated at the Gotha Research Centre. Partners in joint ventures are Wikimedia Germany as the software provider and the German National Library in a project to open the GND to international research.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participate in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data. It is hosting data for projects, archives, researchers, journals, organizations, and institutions.
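Dataverse installations such as the Harvard Dataverse expose a documented Search API. As a minimal sketch, the helper below builds a query URL for that API; the function name and default parameters are illustrative, not part of the Dataverse documentation:

```python
from urllib.parse import urlencode

def dataverse_search_url(host: str, query: str, per_page: int = 10) -> str:
    """Build a Search API URL for a Dataverse installation.

    Illustrative helper: the /api/search endpoint and its q / per_page
    parameters are documented by the Dataverse project; the host and
    query values here are examples.
    """
    params = urlencode({"q": query, "per_page": per_page})
    return f"https://{host}/api/search?{params}"

url = dataverse_search_url("dataverse.harvard.edu", "social science")
print(url)
```

Fetching the resulting URL returns JSON describing matching datasets, files, and Dataverse collections.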
A machine learning data repository with interactive visual analytic techniques. This project is the first to combine the notion of a data repository with real-time visual analytics for interactive data mining and exploratory analysis on the web. State-of-the-art statistical techniques are combined with real-time data visualization giving the ability for researchers to seamlessly find, explore, understand, and discover key insights in a large number of public donated data sets. This large comprehensive collection of data is useful for making significant research findings as well as benchmark data sets for a wide variety of applications and domains and includes relational, attributed, heterogeneous, streaming, spatial, and time series data as well as non-relational machine learning data. All data sets are easily downloaded into a standard consistent format. We also have built a multi-level interactive visual analytics engine that allows users to visualize and interactively explore the data in a free-flowing manner.
Reference anatomies of the brain and corresponding atlases play a central role in experimental neuroimaging workflows and are the foundation for reporting standardized results. The choice of such references (i.e., templates) and atlases is one relevant source of methodological variability across studies, which has recently been brought to attention as an important challenge to reproducibility in neuroscience. TemplateFlow is a publicly available framework for human and nonhuman brain models. The framework combines an open database with software for access, management, and vetting, allowing scientists to distribute their resources under FAIR (findable, accessible, interoperable, reusable) principles. TemplateFlow supports a multifaceted insight into brains across species, and enables multiverse analyses testing whether results generalize across standard references, scales, and, in the long term, species, thereby contributing to increasing the reliability of neuroimaging results.
The NIH 3D Print Exchange (the “Exchange”) is an open, comprehensive, and interactive website for searching, browsing, downloading, and sharing biomedical 3D print files, modeling tutorials, and educational material. "Biomedical" includes models of cells, bacteria, or viruses, molecules like proteins or DNA, and anatomical models of organs, tissue, and body parts. The NIH 3D Print Exchange provides models in formats that are readily compatible with 3D printers, and offers a unique set of tools to create and share 3D-printable models related to biomedical science.
The Society of American Archivists (SAA) Dataverse is an SAA data service that was established to support the needs and interests of SAA’s members and the broader archives community. The SAA Dataverse supports the reuse of datasets for purposes of fostering knowledge, insights, and a deeper understanding of archival organizations, the status of archivists, and the impact of archives and archival work on the broader society. Deposited datasets should be “actionable” in that they should support direct analysis and interpretation. The SAA Dataverse welcomes deposits of collections of quantitative or qualitative data and associated documentation. SAA membership is not required to deposit or use data in the SAA Dataverse.
The EarthChem Library is a data repository that archives, publishes, and makes accessible data and other digital content from geoscience research (analytical data, data syntheses, models, technical reports, etc.).
The OpenNeuro project (formerly known as the OpenfMRI project) was established in 2010 to provide a resource for researchers interested in making their neuroimaging data openly available to the research community. It is managed by Russ Poldrack and Chris Gorgolewski of the Center for Reproducible Neuroscience at Stanford University. The project has been developed with funding from the National Science Foundation, the National Institute on Drug Abuse, and the Laura and John Arnold Foundation.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental, and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access, and interpret complex ecological, environmental, earth science, and sociological data, along with the software used to create and manage those data. Because of the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.