
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
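This operator set matches the common Lucene/Elasticsearch query-string syntax, so search strings can be composed programmatically. A minimal sketch in Python; the helper names are illustrative, not part of the site's API:

```python
def phrase(text, slop=None):
    """Quote a phrase; an optional ~N slop tolerates N extra words between terms."""
    q = f'"{text}"'
    return f"{q}~{slop}" if slop is not None else q

def fuzzy(word, distance=1):
    """~N after a word tolerates up to N character edits (fuzziness)."""
    return f"{word}~{distance}"

def group(*terms, op="+"):
    """Join terms with AND (+), OR (|); wrap in ( ) to make precedence explicit."""
    return "(" + f" {op} ".join(terms) + ")"

# geology* OR geophysic*, excluding the phrase "marine geology",
# plus a fuzzy match on 'climat' (edit distance 2):
query = group(group("geology*", "geophysic*", op="|"),
              f'-{phrase("marine geology")}',
              fuzzy("climat", 2))
print(query)  # ((geology* | geophysic*) + -"marine geology" + climat~2)
```

The wildcard, NOT, and fuzziness operators combine freely; parentheses are only needed when mixing AND and OR in one query.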
Found 16 result(s)
A community platform to share data, publish data with a DOI, and get citations. Advancing spinal cord injury research through sharing of data from basic and clinical research.
D-PLACE contains cultural, linguistic, environmental and geographic information for over 1400 human ‘societies’. A ‘society’ in D-PLACE represents a group of people in a particular locality, who often share a language and cultural identity. All cultural descriptions are tagged with the date to which they refer and with the ethnographic sources that provided the descriptions. The majority of the cultural descriptions in D-PLACE are based on ethnographic work carried out in the 19th and early-20th centuries (pre-1950).
A data repository and social network where researchers can interact and collaborate; it also offers tutorials and datasets for data science learning. "data.world is designed for data and the people who work with data. From professional projects to open data, data.world helps you host and share your data, collaborate with your team, and capture context and conclusions as you work."
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
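Such queries are typically issued as SPARQL against DBpedia's public endpoint at https://dbpedia.org/sparql. A minimal sketch using only the Python standard library; the specific query is illustrative, and the endpoint's predefined `dbo:` prefix is assumed:

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://dbpedia.org/sparql"  # DBpedia's public SPARQL endpoint

# Illustrative query: countries with more than 100 million inhabitants
# and their capitals, via the DBpedia ontology (dbo:) properties.
QUERY = """
SELECT ?country ?capital WHERE {
  ?country a dbo:Country ;
           dbo:capital ?capital ;
           dbo:populationTotal ?pop .
  FILTER (?pop > 100000000)
} LIMIT 10
"""

def run_query(query):
    """POST a SPARQL query and return the parsed JSON result bindings."""
    data = urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"}
    ).encode()
    with urllib.request.urlopen(urllib.request.Request(ENDPOINT, data=data)) as resp:
        return json.load(resp)["results"]["bindings"]

# Example usage (requires network access):
# for row in run_query(QUERY):
#     print(row["country"]["value"], "->", row["capital"]["value"])
```

The JSON results format and the `query`/`format` request parameters follow the SPARQL 1.1 Protocol, so the same helper works against other public endpoints.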
EarthWorks is a discovery tool for geospatial (a.k.a. GIS) data. It allows users to search and browse the GIS collections owned by Stanford University Libraries, as well as data collections from many other institutions. Data can be searched spatially, by manipulating a map; by keyword search; by selecting search limiting facets (e.g., limit to a given format type); or by combining these options.
TERN provides open data, research and management tools, data infrastructure, and site-based research equipment. The open-access ecosystem data is provided via the TERN Data Discovery Portal; see https://www.re3data.org/repository/r3d100012013
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
This centre receives and archives precipitation chemistry data and complementary information from stations around the world. Data archived by this centre are accessible via connections with the WDCPC database. Freely available data from regional and national programmes with their own Web sites are accessible via links to these sites. The WDCPC is one of six World Data Centres in the World Meteorological Organization Global Atmosphere Watch (GAW). The focus on precipitation chemistry is described in the GAW Precipitation Chemistry Programme. Guidance on all aspects of collecting precipitation for chemical analysis is provided in the Manual for the GAW Precipitation Chemistry Programme (WMO-GAW Report No. 160).
IEEE DataPort™ is a universally accessible online data repository created, owned, and supported by IEEE, the world’s largest technical professional organization. It enables all researchers and data owners to upload their dataset without cost. IEEE DataPort makes data available in three ways: standard datasets, open access datasets, and data competition datasets. By default, all "standard" datasets that are uploaded are accessible to paid IEEE DataPort subscribers. Data owners have an option to pay a fee to make their dataset “open access”, so it is available to all IEEE DataPort users (no subscription required). The third option is to host a "data competition" and make a dataset accessible for free for a specific duration with instructions for the data competition and how to participate. IEEE DataPort provides workflows for uploading data, searching, and accessing data, and initiating or participating in data competitions. All datasets are stored on Amazon AWS S3, and each dataset uploaded by an individual can be up to 2TB in size. Institutional subscriptions are available to the platform to make it easy for all members of a given institution to utilize the platform and upload datasets.
The Google Code Archive contains the data found on the Google Code Project Hosting Service, which was turned down in early 2016. The archive contains over 1.4 million projects, 1.5 million downloads, and 12.6 million issues. Google Project Hosting powered Project Hosting on Google Code and Eclipse Labs. It provided a fast, reliable, and easy open source hosting service with the following features: instant project creation on any topic; Git, Mercurial, and Subversion code hosting with 2 gigabytes of storage space and download hosting support with 2 gigabytes of storage space; integrated source code browsing and code review tools to make it easy to view code, review contributions, and maintain a high-quality code base; an issue tracker and project wiki that are simple, yet flexible and powerful, and can adapt to any development process; and starring and update streams that make it easy to keep track of projects and developers that you care about.
The GSA Data Repository is an open file in which authors of articles in our journals can place information that supplements and expands on their article. These supplements will not appear in print but may be obtained from GSA.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest senses. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly-distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies: move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where you put them.
  • Share and find materials: with a click, make study materials public so that other researchers can find, use, and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contribution: assign citable contributor credit to any research material - tools, analysis scripts, methods, measures, data.
  • Increase transparency: make as much of the scientific workflow public as desired - as it is developed or after publication of reports.
  • Registration: registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow: a structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.
<<<!!!<<< As stated on 2017-05-16, the BIRN project ended a few years ago; the web portal is no longer live. >>>!!!>>> BIRN was a national initiative to advance biomedical research through data sharing and online collaboration. It supported multi-site and/or multi-institutional teams by enabling researchers to share significant quantities of data across geographic distance and/or incompatible computing systems. BIRN offered a library of data-sharing software tools specific to biomedical research, best-practice references, expert advice, and other resources.
<<<!!!<<< As of 2017-05-17 the data catalog is no longer available >>>!!!>>> DataFed is web-services-based software that non-intrusively mediates between autonomous, distributed data providers and users. The main goals of DataFed are to: aid air quality management and science through effective use of relevant data; facilitate the access and flow of atmospheric data from providers to users; and support the development of user-driven data processing value chains. The DataFed Catalog links searchable DataFed applications worldwide.