  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
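To make the syntax above concrete, here is a minimal sketch of illustrative query strings; the search terms are hypothetical examples and are not tied to any particular repository in this listing.

# Illustrative queries for the Lucene-style search syntax described above.
# The terms are hypothetical; only the operators reflect the listed rules.
example_queries = [
    'repositor*',                    # wildcard: matches "repository", "repositories", ...
    '"open data"',                   # quoted phrase search
    'data + repository',             # AND search (also the default between terms)
    'genomics | proteomics',         # OR search
    'data -software',                # NOT: exclude results containing "software"
    '(sleep | circadian) + data',    # parentheses set priority
    'neuroscince~2',                 # misspelled on purpose; fuzzy match within edit distance 2
    '"research data repository"~3',  # phrase match allowing a slop of 3
]
for query in example_queries:
    print(query)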
Found 46 result(s)
“B-Clear” stands for Bloomington Clear, or Be Clear about what we’re up to. B-Clear is a one-stop place to build an ever-growing assembly of useful data. We’re organizing it as open, accessible data so everyone can see, use, and manipulate it.
The Purdue University Research Repository (PURR) provides a virtual research environment and a data publication and archiving platform for its campuses. It also supports the publication and online execution of software tools with DataCite DOIs.
The Deep Blue Data repository is a means for University of Michigan researchers to make their research data openly accessible to anyone in the world, provided the data meet the collection criteria. Submitted data sets undergo a curation review by librarians to support discovery, understanding, and reuse of the data.
A data repository and social network where researchers can interact and collaborate; it also offers tutorials and datasets for data science learning. "data.world is designed for data and the people who work with data. From professional projects to open data, data.world helps you host and share your data, collaborate with your team, and capture context and conclusions as you work."
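For readers who want to pull a hosted dataset into an analysis, the following is a minimal sketch assuming the datadotworld Python client and a hypothetical dataset key; it only illustrates the kind of programmatic access the platform offers and is not taken from data.world's own documentation.

# Minimal sketch, assuming the datadotworld Python client is installed and an
# API token has been configured. The dataset key "example-owner/example-dataset"
# is hypothetical.
import datadotworld as dw

# Load a dataset by its "owner/dataset" key; files are downloaded and cached locally.
dataset = dw.load_dataset('example-owner/example-dataset')

# Tabular files are exposed as pandas DataFrames keyed by resource name.
for name, frame in dataset.dataframes.items():
    print(name, frame.shape)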
Government of Yukon open data provides an easy way to find, access and reuse the government's public datasets. This service brings all of the government's data together in one searchable website. Our datasets are created and managed by different government departments. We cannot guarantee the quality or timeliness of all data. If you have any feedback you can get in touch with the department that produced the dataset. This is a pilot project. We are in the process of adding a quality framework to make it easier for you to access high quality, reliable data.
The National Sleep Research Resource (NSRR) is an NHLBI-supported repository for sharing large amounts of sleep data (polysomnography, actigraphy and questionnaire-based) from multiple cohorts, clinical trials, and other data sources. Launched in April 2014, the mission of the NSRR is to advance sleep and circadian science by supporting secondary data analysis, algorithmic development, and signal processing through the sharing of high-quality data sets.
An open access repository for digital research data created at the University of Minnesota. U of M researchers may deposit data to the Libraries’ Data Repository for U of M (DRUM), subject to our collection policies. All data is publicly accessible. Data sets submitted to the Data Repository are reviewed by data curation staff to ensure that data is in a format and structure that best facilitates long-term access, discovery, and reuse.
The Brown Digital Repository (BDR) is a place to gather, index, store, preserve, and make available digital assets produced via the scholarly, instructional, research, and administrative activities at Brown.
MorphoSource is a data repository specialized for 3D data representing physical objects used in research and education (e.g., from museum or laboratory collections). It allows researchers and museum collection staff to store, organize, share, and distribute their own 3D data. Furthermore, any registered user can immediately search for and download 3D morphological data sets that have been made accessible with the consent of the data authors.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses), embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique because of its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as “Open Services.”
Academic Torrents is a distributed data repository. The academic torrents network is built for researchers, by researchers. Its distributed peer-to-peer library system automatically replicates your datasets on many servers, so you don't have to worry about managing your own servers or file availability. Everyone who has data becomes a mirror for those data so the system is fault-tolerant.
FLOSSmole is a collaborative collection of free, libre, and open source software (FLOSS) data. FLOSSmole contains nearly 1 TB of data covering the period from 2004 to the present, spanning more than 500,000 different open source projects.
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data. It hosts data for projects, archives, researchers, journals, organizations, and institutions.
IDEALS is an institutional repository that collects, disseminates, and provides persistent and reliable access to the research and scholarship of faculty, staff, and students at the University of Illinois at Urbana-Champaign. Faculty, staff, graduate students, and in some cases undergraduate students, can deposit their research and scholarship directly into IDEALS. Departments can use IDEALS to distribute their working papers, technical reports, or other research material. Contact us at https://www.ideals.illinois.edu/feedback for more information.
A machine learning data repository with interactive visual analytic techniques. This project is the first to combine the notion of a data repository with real-time visual analytics for interactive data mining and exploratory analysis on the web. State-of-the-art statistical techniques are combined with real-time data visualization, enabling researchers to seamlessly find, explore, understand, and discover key insights in a large number of publicly donated data sets. This large, comprehensive collection of data is useful both for making significant research findings and as benchmark data sets for a wide variety of applications and domains; it includes relational, attributed, heterogeneous, streaming, spatial, and time series data as well as non-relational machine learning data. All data sets are easily downloaded in a standard, consistent format. We have also built a multi-level interactive visual analytics engine that allows users to visualize and interactively explore the data in a free-flowing manner.
GeneWeaver combines cross-species data and gene entity integration, scalable hierarchical analysis of user data with a community-built and curated data archive of gene sets and gene networks, and tools for data-driven comparison of user-defined biological, behavioral, and disease concepts. GeneWeaver allows users to integrate gene sets across species, tissue, and experimental platform. It differs from conventional gene set over-representation analysis tools in that it allows users to evaluate intersections among all combinations of a collection of gene sets, including, but not limited to, annotations to controlled vocabularies; a small sketch of this all-combinations idea follows below. There are numerous applications of this approach. Sets can be stored, shared, and compared privately, among user-defined groups of investigators, and across all users.
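As a rough, minimal sketch of what evaluating intersections among all combinations of gene sets means (the set labels and gene symbols below are hypothetical and are not drawn from GeneWeaver itself):

from itertools import combinations

# Hypothetical gene sets keyed by label; the symbols are illustrative only.
gene_sets = {
    "set_A": {"BDNF", "COMT", "DRD2", "SLC6A4"},
    "set_B": {"BDNF", "DRD2", "MAOA"},
    "set_C": {"COMT", "DRD2", "SLC6A4", "HTR2A"},
}

# Evaluate the intersection of every combination of two or more sets,
# not just pairwise overlaps or matches against a fixed annotation.
for r in range(2, len(gene_sets) + 1):
    for combo in combinations(sorted(gene_sets), r):
        shared = set.intersection(*(gene_sets[name] for name in combo))
        print(" & ".join(combo), "->", sorted(shared))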
UltraViolet is part of a suite of repositories at New York University that provide a home for research materials, operated as a partnership of the Division of Libraries and NYU IT's Research and Instruction Technology. UltraViolet provides faculty, students, and researchers within our university community with a place to deposit scholarly materials for open access and long-term preservation. UltraViolet also houses some NYU Libraries collections, including proprietary data collections.
The Common Cold Project began in 2011 with the aim of creating, documenting, and archiving a database that combines final research data from 5 prospective viral-challenge studies that were conducted over the preceding 25 years: the British Cold Study (BCS); the three Pittsburgh Cold Studies (PCS1, PCS2, and PCS3); and the Pittsburgh Mind-Body Center Cold Study (PMBC). These unique studies assessed predictor (and hypothesized mediating) variables in healthy adults aged 18 to 55 years, experimentally exposed them to a virus that causes the common cold, and then monitored them for development of infection and signs and symptoms of illness.
RunMyCode is a novel cloud-based platform that enables scientists to openly share the code and data that underlie their research publications. The web service only requires a web browser as all calculations are done on a dedicated cloud computer. Once the results are ready, they are automatically displayed to the user.