
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
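
For example, combining the operators above (an illustrative query, not one taken from this page):

  +"open data"~1 repositor* -software

This requires the phrase "open data" with a slop of 1, matches any term beginning with "repositor", and excludes records containing "software".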
Found 23 result(s)
Note: CRAWDAD has moved to IEEE DataPort (https://www.re3data.org/repository/r3d100012569). The datasets in the Community Resource for Archiving Wireless Data at Dartmouth (CRAWDAD) repository are now hosted as the CRAWDAD Collection on IEEE DataPort. After nearly two decades as a stand-alone archive at crawdad.org, the migration of the collection to IEEE DataPort provides permanence and new visibility.
nanoHUB.org is the premier place for computational nanotechnology research, education, and collaboration. Our site hosts a rapidly growing collection of Simulation Programs for nanoscale phenomena that run in the cloud and are accessible through a web browser. In addition to simulation programs, nanoHUB provides Online Presentations, Courses, Learning Modules, Podcasts, Animations, Teaching Materials, and more. These resources help users learn about our simulation programs and about nanotechnology in general. Our site also offers researchers a venue to explore, collaborate, and publish content. Many of these collaborative efforts occur via Workspaces and User groups.
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository are a set of corpora of the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia and web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and part of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
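
The REST-based web services mentioned above are not enumerated in this record, so the following is only a minimal Python sketch of how such a service is typically called; the base URL, path layout, and payload shape are placeholders (assumptions), not the repository's documented API.

  import requests

  # Placeholder base URL: real endpoints must be taken from the
  # repository's own API documentation.
  BASE_URL = "https://example.saw-leipzig.de/api"

  def call_nlp_service(text, task="tokenize"):
      """POST text to a (hypothetical) REST-based NLP web service."""
      response = requests.post(
          f"{BASE_URL}/{task}",      # assumed path layout
          json={"text": text},       # assumed payload shape
          timeout=30,
      )
      response.raise_for_status()    # surface HTTP errors early
      return response.json()

  print(call_nlp_service("Leipzig liegt in Sachsen."))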
The UK Data Archive, based at the University of Essex, is the curator of the largest collection of digital data in the social sciences and humanities in the United Kingdom. With several thousand datasets relating to society, both historical and contemporary, our Archive is a vital resource for researchers, teachers and learners. We are an internationally acknowledged centre of expertise in acquiring, curating and providing access to data. We are the lead partner in the UK Data Service (https://service.re3data.org/repository/r3d100010230), through which data users can browse collections online and register to analyse and download them. Open Data collections are available for anyone to use. The UK Data Archive is a Trusted Digital Repository (TDR) certified against the CoreTrustSeal (https://www.coretrustseal.org/) and against ISO 27001 for information security (https://www.iso.org/isoiec-27001-information-security.html).
The Information Marketplace for Policy and Analysis of Cyber-risk & Trust (IMPACT) program supports global cyber risk research & development by coordinating, enhancing and developing real world data, analytics and information sharing capabilities, tools, models, and methodologies. In order to accelerate solutions around cyber risk issues and infrastructure security, IMPACT makes these data sharing components broadly available as national and international resources to support the three-way partnership among cyber security researchers, technology developers and policymakers in academia, industry and the government.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses) embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique because of its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as “Open Services.”
Note: The repository is no longer available; this record is outdated. GEON was an open collaborative project that developed cyberinfrastructure for the integration of 3- and 4-dimensional earth science data. GEON developed services for data integration and model integration, and associated model execution and visualization. The Mid-Atlantic test bed focused on tectonothermal, paleogeographic, and biotic history from the late Proterozoic to mid-Paleozoic. The Rockies test bed focused on the integration of data with dynamic models, to better understand deformation history. GEON developed the most comprehensive regional datasets in the test bed areas.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access and sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
Note: All user content from this site has been deleted; see the SeedMeLab project (https://seedmelab.org/) as a new option for data hosting. SeedMe is the result of a decade of onerous experience in preparing and sharing visualization results from supercomputing simulations with many researchers at different geographic locations using different operating systems. It has been a labor-intensive process, unsupported by useful tools and procedures for sharing information. SeedMe provides secure and easy-to-use functionality for efficiently and conveniently sharing results, and aims to create transformative impact across many scientific domains.
Ocean Networks Canada maintains several observatories installed in three different regions of the world's oceans. All three observatories are cabled systems that can provide power and high-bandwidth communication paths to sensors in the ocean. The infrastructure supports near real-time observations from multiple instruments and locations distributed across the Arctic, NEPTUNE and VENUS observatory networks. These observatories collect data on physical, chemical, biological, and geological aspects of the ocean over long time periods, supporting research on complex Earth processes in ways not previously possible.
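
As an illustration of what programmatic access to such an observatory network looks like, the sketch below queries ONC's Oceans API for instrument locations using Python's requests; the endpoint, token parameter, and response fields are modeled on ONC's public API but should be verified against the current documentation before use.

  import requests

  # Endpoint and parameter names modeled on ONC's public Oceans API;
  # a personal token is issued with a free ONC account.
  API_URL = "https://data.oceannetworks.ca/api/locations"
  params = {"method": "get", "token": "YOUR-ONC-TOKEN"}

  resp = requests.get(API_URL, params=params, timeout=30)
  resp.raise_for_status()
  for loc in resp.json():  # assumed: a JSON list of location records
      print(loc.get("locationCode"), loc.get("locationName"))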
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich’s Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
LINDAT/CLARIN is designed as a Czech “node” of Clarin ERIC (Common Language Resources and Technology Infrastructure). It also supports the goals of the META-NET language technology network. Both networks aim at collection, annotation, development and free sharing of language data and basic technologies between institutions and individuals both in science and in all types of research. The Clarin ERIC infrastructural project is more focused on humanities, while META-NET aims at the development of language technologies and applications. The data stored in the repository are already being used in scientific publications in the Czech Republic. In 2019 LINDAT/CLARIAH-CZ was established as a unification of two research infrastructures, LINDAT/CLARIN and DARIAH-CZ.
The Informatics Research Data Repository is a Japanese data repository that collects data on disciplines within informatics, with sub-categories such as consumerism and information diffusion. The primary data within these datasets comes from experiments run by IDR on how one group is linked to another.
Polish CLARIN node – CLARIN-PL Language Technology Centre – is being built at Wrocław University of Technology. The LTC is addressed to scholars in the humanities and social sciences. Registered users are granted free access to digital language resources and advanced tools to explore them. They can also archive and share their own language data (in written, spoken, video or multimodal form).
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies: move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or just forgetting where you put the damn thing.
  • Share and find materials: with a click, make study materials public so that other researchers can find, use and cite them. Find materials by other researchers to avoid reinventing something that already exists.
  • Detail individual contribution: assign citable contributor credit to any research material, such as tools, analysis scripts, methods, measures, and data.
  • Increase transparency: make as much of the scientific workflow public as desired, either as it is developed or after publication of reports.
  • Registration: registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow: a structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.
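
OSF also exposes public projects through a JSON API at api.osf.io; the sketch below lists public projects ("nodes") with Python's requests. The v2 endpoint is documented, but the response shape assumed here (JSON-API records under "data") should be checked against the current OSF API documentation.

  import requests

  # OSF v2 JSON API: "nodes" are projects and their components.
  url = "https://api.osf.io/v2/nodes/"
  resp = requests.get(url, timeout=30)
  resp.raise_for_status()
  for node in resp.json()["data"]:  # JSON-API style payload
      print(node["id"], "-", node["attributes"]["title"])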
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and address energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and the tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX's Collaborative Workspaces.
The US Department of Energy's Atmospheric Radiation Measurement (ARM) Data Center is a long-term archive and distribution facility for various ground-based, aerial and model data products in support of atmospheric and climate research. The ARM facility currently operates over 400 instruments at various observatories (https://www.arm.gov/capabilities/observatories/). The ARM Data Center (ADC) archive currently holds over 11,000 data products, with total holdings of over 3 petabytes of data dating back to 1993; these include data from instruments, value-added products, model outputs, field campaigns, and PI-contributed data. The data center archive also includes data collected by ARM from related programs (e.g., external data such as NASA satellite data).
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated in today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
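
Beyond the R, KNIME, RapidMiner and WEKA integrations named above, OpenML is also reachable from client libraries; as one example, fetching a dataset with the openml Python package (the package and the dataset ID below come from OpenML's Python client, not from this description, so treat them as assumptions):

  import openml  # pip install openml

  # Dataset 61 is the classic "iris" dataset on OpenML.
  dataset = openml.datasets.get_dataset(61)
  X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)
  print(dataset.name, X.shape)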
The NASA Space Science Data Coordinated Archive serves as the permanent archive for NASA space science mission data. "Space science" means astronomy and astrophysics, solar and space plasma physics, and planetary and lunar science. As permanent archive, NSSDCA teams with NASA's discipline-specific space science "active archives", which provide access to data for researchers and, in some cases, for the general public. NSSDCA also serves as NASA's permanent archive for space physics mission data. It provides access to several geophysical models and to data from some non-NASA missions. In addition to supporting active space physics and astrophysics researchers, NSSDCA also supports the general public, both via several public-interest web-based services (e.g., the Photo Gallery) and via the offline mailing of CD-ROMs, photoprints, and other items.