The keyword search supports the following query syntax (a worked example follows the list):
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
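As an illustration, the operators above can be combined into a single query string. The short Python sketch below simply assembles such a string; the helper function and the example search terms are hypothetical and are not part of the search service itself.

def build_query():
    # Wildcard: matches "climate", "climatology", etc.
    wildcard = "climat*"
    # Phrase with slop 2: the two words may appear up to 2 positions apart.
    phrase = '"research data"~2'
    # Fuzzy term with edit distance 1: matches "genomic", "genomics", etc.
    fuzzy = "genomic~1"
    # Require the phrase, require either the wildcard or the fuzzy term (grouped),
    # and exclude results mentioning "software".
    return phrase + " +(" + wildcard + " | " + fuzzy + ") -software"

print(build_query())
# "research data"~2 +(climat* | genomic~1) -software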
Found 20 result(s)
Academic Commons provides open, persistent access to the scholarship produced by researchers at Columbia University, Barnard College, Jewish Theological Seminary, Teachers College, and Union Theological Seminary. Academic Commons is a program of the Columbia University Libraries. Academic Commons accepts articles, dissertations, research data, presentations, working papers, videos, and more.
CaltechDATA is an institutional data repository for Caltech. The Caltech Library runs the repository to preserve the accomplishments of Caltech researchers and share their results with the world. Caltech-associated researchers can upload data, link data with their publications, and assign a permanent DOI so that others can reference the data set. The repository also preserves software and has automatic GitHub integration. All files present in the repository are open access or embargoed, and all metadata is always available to the public.
The Duke Research Data Repository is a service of the Duke University Libraries that provides curation, access, and preservation of research data produced by the Duke community. Duke's RDR is a discipline agnostic institutional data repository that is intended to preserve and make public data related to the teaching and research mission of Duke University including data linked to a publication, research project, and/or class, as well as supplementary software code and documentation used to provide context for the data.
The Information Marketplace for Policy and Analysis of Cyber-risk & Trust (IMPACT) program supports global cyber risk research & development by coordinating, enhancing and developing real world data, analytics and information sharing capabilities, tools, models, and methodologies. In order to accelerate solutions around cyber risk issues and infrastructure security, IMPACT makes these data sharing components broadly available as national and international resources to support the three-way partnership among cyber security researchers, technology developers and policymakers in academia, industry and the government.
The WashU Research Data repository accepts any publishable research data set, including textual, tabular, geospatial, imagery, computer code, or 3D data files, from researchers affiliated with Washington University in St. Louis. Datasets include metadata and are curated and assigned a DOI to align with FAIR data principles.
Merritt is a curation repository for the preservation of and access to the digital research data of the ten-campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity audit, complete version history, a REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, and curatorially-defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially-designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially-designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
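Because Merritt DOIs and their metadata are registered with DataCite, as noted above, DataCite's public REST API is one way to look up the record behind any Merritt DOI. The following is a minimal Python sketch; the DOI value is a placeholder to be replaced with a real one, and the field names assume DataCite's standard JSON:API response layout.

import json
import urllib.request

doi = "10.5072/placeholder"  # placeholder; substitute a real Merritt/DataCite DOI

# DataCite's public REST API serves registered DOI metadata as a JSON:API document.
url = "https://api.datacite.org/dois/" + doi
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

attributes = record["data"]["attributes"]
print(attributes.get("titles"))     # list of title objects for the dataset
print(attributes.get("publisher"))  # publisher recorded for the DOI
print(attributes.get("url"))        # landing page the DOI resolves to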
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses) embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique because of its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as “Open Services.”
FLOSSmole is a collaborative collection of free, libre, and open source software (FLOSS) data. FLOSSmole contains nearly 1 TB of data, covering the period from 2004 to the present and more than 500,000 different open source projects.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access and sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results including a link to the original dataset. For more information on CONP, please visit https://conp.ca
UltraViolet is part of a suite of repositories at New York University that provide a home for research materials, operated as a partnership of the Division of Libraries and NYU IT's Research and Instruction Technology. UltraViolet provides faculty, students, and researchers within our university community with a place to deposit scholarly materials for open access and long-term preservation. UltraViolet also houses some NYU Libraries collections, including proprietary data collections.
The DesignSafe Data Depot Repository (DDR) is the platform for curation and publication of datasets generated in the course of natural hazards research. The DDR is an open access data repository that enables data producers to safely store, share, organize, and describe research data, towards permanent publication, distribution, and impact evaluation. The DDR allows data consumers to discover, search for, access, and reuse published data in an effort to accelerate research discovery. It is a component of the DesignSafe cyberinfrastructure, which represents a comprehensive research environment that provides cloud-based tools to manage, analyze, curate, and publish critical data for research to understand the impacts of natural hazards. DesignSafe is part of the NSF-supported Natural Hazards Engineering Research Infrastructure (NHERI), and aligns with its mission to provide the natural hazards research community with open access, shared-use scholarship, education, and community resources aimed at supporting civil and social infrastructure prior to, during, and following natural disasters. It serves a broad national and international audience of natural hazard researchers (both engineers and social scientists), students, practitioners, policy makers, as well as the general public. It has been in operation since 2016, and also provides access to legacy data dating from about 2005. These legacy data were generated as part of the NSF-supported Network for Earthquake Engineering Simulation (NEES), a predecessor to NHERI. Legacy data and metadata belonging to NEES were transferred to the DDR for continuous preservation and access.
SourceForge is dedicated to making open source projects successful. We thrive on community collaboration to help us create the leading resource for open source software development and distribution. IT professionals come to SourceForge to develop, download, review, and publish open source software. SourceForge is the largest, most trusted destination for open source software discovery and development on the web.
The Extreme Science and Engineering Discovery Environment (XSEDE) is a single virtual system that scientists can use to interactively share computing resources, data, and expertise. People around the world use these resources and services, such as supercomputers, collections of data, and new tools, to improve our planet. XSEDE is the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and customize energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX’s Collaborative Workspaces.
Data products developed and distributed by the National Institute of Standards and Technology span multiple disciplines of research and are widely used in research and development programs by industry and academia. NIST's publicly available data sets showcase its commitment to providing accurate, well-curated measurements of physical properties, exemplified by the Standard Reference Data program, as well as its commitment to advancing basic research. In accordance with U.S. Government Open Data Policy and the NIST Plan for providing public access to the results of federally funded research data, NIST maintains a publicly accessible listing of available data, the NIST Public Dataset List (json). Additionally, these data are assigned a Digital Object Identifier (DOI) to increase discovery of and access to research output; these DOIs are registered with DataCite and provide globally unique persistent identifiers. The NIST Science Data Portal provides a user-friendly discovery and exploration tool for publicly available datasets at NIST. This portal is designed and developed with data.gov Project Open Data standards and principles. The portal software is hosted in the usnistgov GitHub repository.
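Since the listing above follows data.gov's Project Open Data conventions, a catalog published in that form (a data.json file) can be read with a few lines of Python. The sketch below assumes the standard Project Open Data schema; the catalog URL is a stand-in to be replaced with the actual address of the NIST Public Dataset List.

import json
import urllib.request

CATALOG_URL = "https://example.gov/data.json"  # stand-in; use the real NIST catalog URL

with urllib.request.urlopen(CATALOG_URL) as resp:
    catalog = json.load(resp)

# A Project Open Data catalog keeps its records under the "dataset" key;
# each entry carries at least a title and an identifier (often a DOI).
for entry in catalog.get("dataset", [])[:10]:
    print(entry.get("title"), "->", entry.get("identifier"))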
California Digital Library (CDL) seeks to be a catalyst for deeply collaborative solutions providing a rich, intuitive and seamless environment for publishing, sharing and preserving our scholars’ increasingly diverse outputs, as well as for acquiring and accessing information critical to the University of California’s scholarly enterprise. University of California Curation Center (UC3) is the digital curation program within CDL. The mission of UC3 is to provide transformative preservation, curation, and research data management systems, services, and initiatives that sustain and promote open scholarship.