  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
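
Combining these operators, a query might look like the following (the search terms here are purely illustrative):

climat* +("sea level" | "ocean temperature") -model
repositor~1 +"open data"~2

The first query matches records containing a word beginning with "climat" and at least one of the two quoted phrases, while excluding records that contain "model"; the second tolerates one spelling edit in "repositor" and up to two intervening words within the quoted phrase.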
Found 14 result(s)
A data repository and social network where researchers can interact and collaborate; it also offers tutorials and datasets for data science learning. "data.world is designed for data and the people who work with data. From professional projects to open data, data.world helps you host and share your data, collaborate with your team, and capture context and conclusions as you work."
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
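As a concrete illustration of such a query, the sketch below asks DBpedia's public SPARQL endpoint for the English label and abstract of a single Wikipedia-derived resource; the choice of resource and the result handling are assumptions made for illustration, not part of the description above.

import requests

# Minimal sketch: query DBpedia's public SPARQL endpoint for the English
# label and abstract of one resource (dbr:Berlin is an arbitrary example).
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbo:  <http://dbpedia.org/ontology/>
SELECT ?label ?abstract WHERE {
  <http://dbpedia.org/resource/Berlin> rdfs:label ?label ;
                                       dbo:abstract ?abstract .
  FILTER (lang(?label) = "en" && lang(?abstract) = "en")
}
LIMIT 1
"""

resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["label"]["value"])
    print(row["abstract"]["value"][:200], "...")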
OEDI is a centralized repository of high-value energy research datasets aggregated from the U.S. Department of Energy’s Programs, Offices, and National Laboratories. Built to enable data discoverability, OEDI facilitates access to a broad network of findings, including the data available in technology-specific catalogs like the Geothermal Data Repository and Marine Hydrokinetic Data Repository.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
IEEE DataPort™ is a universally accessible online data repository created, owned, and supported by IEEE, the world's largest technical professional organization. It enables all researchers and data owners to upload their datasets at no cost. IEEE DataPort makes data available in three ways: standard datasets, open access datasets, and data competition datasets. By default, all "standard" datasets that are uploaded are accessible to paid IEEE DataPort subscribers. Data owners have the option to pay a fee to make their dataset "open access", so it is available to all IEEE DataPort users (no subscription required). The third option is to host a "data competition" and make a dataset accessible for free for a specific duration, with instructions for the data competition and how to participate. IEEE DataPort provides workflows for uploading data, searching and accessing data, and initiating or participating in data competitions. All datasets are stored on Amazon AWS S3, and each dataset uploaded by an individual can be up to 2 TB in size. Institutional subscriptions are available, making it easy for all members of a given institution to use the platform and upload datasets.
The Lens is building an open platform for Innovation Cartography. Specifically, the Lens serves nearly all of the patent documents in the world as open, annotatable digital public goods that are integrated with scholarly and technical literature along with regulatory and business data.
The Google Code Archive contains the data found on the Google Code Project Hosting Service, which was turned down in early 2016. The archive contains over 1.4 million projects, 1.5 million downloads, and 12.6 million issues. Google Project Hosting powered Project Hosting on Google Code and Eclipse Labs. It provided a fast, reliable, and easy open source hosting service with the following features: instant project creation on any topic; Git, Mercurial, and Subversion code hosting with 2 gigabytes of storage space, plus download hosting with 2 gigabytes of storage space; integrated source code browsing and code review tools to make it easy to view code, review contributions, and maintain a high-quality code base; an issue tracker and project wiki that are simple, yet flexible and powerful, and can adapt to any development process; and starring and update streams that make it easy to keep track of projects and developers that you care about.
myExperiment is a collaborative environment where scientists can safely publish their workflows and in silico experiments, share them with groups and find those of others. Workflows, other digital objects and bundles (called Packs) can now be swapped, sorted and searched like photos and videos on the Web. Unlike Facebook or MySpace, myExperiment fully understands the needs of the researcher and makes it really easy for the next generation of scientists to contribute to a pool of scientific methods, build communities and form relationships — reducing time-to-experiment, sharing expertise and avoiding reinvention. myExperiment is now the largest public repository of scientific workflows.
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
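For example, many datasets in the registry live in S3 buckets that can be read without AWS credentials. A minimal sketch of anonymous access with boto3 might look like the following; the bucket name is hypothetical, and each dataset's detail page in the registry lists its real bucket or ARN.

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Create an S3 client that sends unsigned (anonymous) requests,
# which is sufficient for publicly readable open-data buckets.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a few objects in a (hypothetical) public dataset bucket.
response = s3.list_objects_v2(Bucket="example-open-dataset", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])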
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies. Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where they were put.
  • Share and find materials. With a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contributions. Assign citable contributor credit to any research material: tools, analysis scripts, methods, measures, data.
  • Increase transparency. Make as much of the scientific workflow public as desired, as it is developed or after publication of reports.
  • Registration. Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow. A structured, flexible system provides efficiency gains to the workflow and clarity to project objectives.
The UCD Digital Library is a platform for exploring cultural heritage, engaging with digital scholarship, and accessing research data. The UCD Digital Library allows you to search, browse, and explore a growing collection of historical materials, photographs, art, interviews, letters, and other exciting content that has been digitised and made freely available.
>>>!!!<<< As stated 2017-05-16: the BIRN project ended a few years ago, and the web portal is no longer live. >>>!!!<<< BIRN was a national initiative to advance biomedical research through data sharing and online collaboration. It supported multi-site and/or multi-institutional teams by enabling researchers to share significant quantities of data across geographic distance and/or incompatible computing systems. BIRN offered a library of data-sharing software tools specific to biomedical research, best practice references, expert advice, and other resources.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.
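Much of that metadata is also reachable programmatically. A minimal sketch, assuming Data.gov's catalog exposes the standard CKAN search API at catalog.data.gov (an assumption worth checking against the current Data.gov developer documentation), might look like this:

import requests

# Search the (assumed) CKAN catalog behind Data.gov for datasets matching
# a keyword and print the titles of the first few hits.
resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "climate", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])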