
  • * at the end of a keyword enables wildcard searches
  • " double quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
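The operators above can be combined in a single query, in the style of Elasticsearch-like simple query strings. A few illustrative queries (the search terms are made-up examples, not actual facet values):

```
climate*                    wildcard: matches climate, climatology, ...
"research data"             exact phrase
ocean + sensor              explicit AND (also the default between terms)
ocean | marine              OR: either term may match
data - genomics             NOT: exclude results mentioning genomics
(ocean | marine) + sensor   parentheses set precedence
colour~1                    fuzzy term: matches words up to 1 edit away
"open data repository"~2    phrase with slop 2: words may be up to 2 positions apart
```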
Found 40 result(s)
The CLARIN/Text+ repository at the Saxon Academy of Sciences and Humanities in Leipzig offers long-term preservation of digital resources, along with their descriptive metadata. The mission of the repository is to ensure the availability and long-term preservation of resources, to preserve knowledge gained in research, to aid the transfer of knowledge into new contexts, and to integrate new methods and resources into university curricula. Among the resources currently available in the Leipzig repository is a set of corpora from the Leipzig Corpora Collection (LCC), based on newspaper, Wikipedia, and Web text. Furthermore, several REST-based web services are provided for a variety of NLP-relevant tasks. The repository is part of the CLARIN infrastructure and of the NFDI consortium Text+. It is operated by the Saxon Academy of Sciences and Humanities in Leipzig.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for zip- and tar-containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
CLARIN is a European Research Infrastructure for the Humanities and Social Sciences, focusing on language resources (data and tools). It is being implemented and constantly improved at leading institutions in a large and growing number of European countries, aiming at improving Europe's multi-linguality competence. CLARIN provides several services, such as access to language data and tools to analyze data, and offers to deposit research data, as well as direct access to knowledge about relevant topics in relation to (research on and with) language resources. The main tool is the 'Virtual Language Observatory' providing metadata and access to the different national CLARIN centers and their data.
-----<<<<< The repository is no longer available. This record is outdated. >>>>>----- GEON is an open collaborative project that is developing cyberinfrastructure for the integration of 3- and 4-dimensional earth science data. GEON will develop services for data integration and model integration, and associated model execution and visualization. The Mid-Atlantic test bed will focus on tectonothermal, paleogeographic, and biotic history from the late Proterozoic to the mid-Paleozoic. The Rockies test bed will focus on integrating data with dynamic models to better understand deformation history. GEON will develop the most comprehensive regional datasets in the test bed areas.
The Information Marketplace for Policy and Analysis of Cyber-risk & Trust (IMPACT) program supports global cyber risk research & development by coordinating, enhancing and developing real world data, analytics and information sharing capabilities, tools, models, and methodologies. In order to accelerate solutions around cyber risk issues and infrastructure security, IMPACT makes these data sharing components broadly available as national and international resources to support the three-way partnership among cyber security researchers, technology developers and policymakers in academia, industry and the government.
DataverseNO (https://dataverse.no) is a curated, FAIR-aligned national generic repository for open research data from all academic disciplines. DataverseNO commits to facilitate that published data remain accessible and (re)usable in a long-term perspective. The repository is owned and operated by UiT The Arctic University of Norway. DataverseNO accepts submissions from researchers primarily from Norwegian research institutions. Datasets in DataverseNO are grouped into institutional collections as well as special collections. The technical infrastructure of the repository is based on the open source application Dataverse (https://dataverse.org), which is developed by an international developer and user community led by Harvard University.
ROSA P is the United States Department of Transportation (US DOT) National Transportation Library's (NTL) Repository and Open Science Access Portal (ROSA P). The name ROSA P was chosen to honor the role public transportation played in the civil rights movement, along with one of the important figures, Rosa Parks. To meet the requirements outlined in its legislative mandate, NTL collects research and resources across all modes of transportation and related disciplines, with specific focus on research, data, statistics, and information produced by USDOT, state DOTs, and other transportation organizations. Content types found in ROSA P include textual works, datasets, still image works, moving image works, other multimedia, and maps. These resources have value to federal, state, and local transportation decision makers, transportation analysts, and researchers.
Lab Notes Online presents historic scientific data from the Caltech Archives' collections in digital facsimile. Beginning in the fall of 2008, the first publication in the series is Robert A. Millikan's notebooks for his oil drop experiments to measure the charge of the electron, dating from October 1911 to April 1912. Other laboratory, field, or research notes will be added to the archive over time.
FLOSSmole is a collaborative collection of free, libre, and open source software (FLOSS) data. FLOSSmole contains nearly 1 TB of data covering the period 2004 until now, about more than 500,000 different open source projects.
-----<<<<< The repository is no longer available. This record is outdated. The Matter Lab provides the archived 2012 and 2013 database versions at https://www.matter.toronto.edu/basic-content-page/data-download. Data linked from the World Community Grid - The Clean Energy Project: see https://www.worldcommunitygrid.org/research/cep1/overview.do and figshare at https://figshare.com/articles/dataset/moldata_csv/9640427 >>>>>----- The Clean Energy Project Database (CEPDB) is a massive reference database for organic semiconductors with a particular emphasis on photovoltaic applications. It was created to store and provide access to data from computational as well as experimental studies, on both known and virtual compounds. It is a free and open resource designed to support researchers in the field of organic electronics in their scientific pursuits. The CEPDB was established as part of the Harvard Clean Energy Project (CEP), a virtual high-throughput screening initiative to identify promising new candidates for the next generation of carbon-based solar cell materials.
A machine learning data repository with interactive visual analytic techniques. This project is the first to combine the notion of a data repository with real-time visual analytics for interactive data mining and exploratory analysis on the web. State-of-the-art statistical techniques are combined with real-time data visualization, enabling researchers to seamlessly find, explore, understand, and discover key insights in a large number of publicly donated data sets. This large, comprehensive collection is useful both for making significant research findings and as benchmark data sets for a wide variety of applications and domains; it includes relational, attributed, heterogeneous, streaming, spatial, and time series data as well as non-relational machine learning data. All data sets are easily downloaded in a standard, consistent format. We have also built a multi-level interactive visual analytics engine that allows users to visualize and interactively explore the data in a free-flowing manner.
NKN is now Research Computing and Data Services (RCDS)! We provide data management support for UI researchers and their regional, national, and international collaborators. This support keeps researchers at the cutting-edge of science and increases our institution's competitiveness for external research grants. Quality data and metadata developed in research projects and curated by RCDS (formerly NKN) is a valuable, long-term asset upon which to develop and build new research and science.
CLARIN-LV is a national node of CLARIN ERIC (Common Language Resources and Technology Infrastructure). The mission of the repository is to ensure the availability and long-term preservation of language resources. The data stored in the repository are actively used and cited in scientific publications.
The figshare service for The Open University was launched in 2016 and allows researchers to store, share, and publish research data. It makes research data more accessible by storing metadata alongside datasets. Additionally, every uploaded item receives a Digital Object Identifier (DOI), which makes the data citable and sustainable. If there are ethical or copyright concerns about publishing a certain dataset, it is possible to publish only the metadata associated with the dataset to aid discoverability, while sharing the data itself via a private channel with manual approval.
The UK Data Archive, based at the University of Essex, is curator of the largest collection of digital data in the social sciences and humanities in the United Kingdom. With several thousand datasets relating to society, both historical and contemporary, our Archive is a vital resource for researchers, teachers and learners. We are an internationally acknowledged centre of expertise in the areas of acquiring, curating and providing access to data. We are the lead partner in the UK Data Service (https://service.re3data.org/repository/r3d100010230) through which data users can browse collections online and register to analyse and download them. Open Data collections are available for anyone to use. The UK Data Archive is a Trusted Digital Repository (TDR) certified against the CoreTrustSeal (https://www.coretrustseal.org/) and certified against ISO27001 for Information Security (https://www.iso.org/isoiec-27001-information-security.html).
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press; GigaScience and GigaByte (both are online, open-access journals). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit-of-work (article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.
Ocean Networks Canada maintains several observatories installed in three different regions in the world's oceans. All three observatories are cabled systems that can provide power and high-bandwidth communication paths to sensors in the ocean. The infrastructure supports near real-time observations from multiple instruments and locations distributed across the Arctic, NEPTUNE, and VENUS observatory networks. These observatories collect data on physical, chemical, biological, and geological aspects of the ocean over long time periods, supporting research on complex Earth processes in ways not previously possible.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a "dark archive" without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
Version 1.0 of the open database contains 1,151,268 brain signals of 2 seconds each, captured with the stimulus of seeing a digit (from 0 to 9) and thinking about it, over the course of almost 2 years between 2014 and 2015, from a single test subject, David Vivancos. All the signals were captured using commercial (not medical-grade) EEGs: NeuroSky MindWave, Emotiv EPOC, Interaxon Muse, and Emotiv Insight, covering a total of 19 brain locations (10/20 system). Vivancos started capturing brain signals in 2014 and released the first versions of the "MNIST" of brain digits; in 2018 he released another open dataset with a subset of the "IMAGENET" of the Brain. Version 0.05 (last update 09/28/2021) of that open database contains 24,000 brain signals of 2 seconds each, captured with the stimulus of seeing a real MNIST digit (from 0 to 9) and thinking about it (6,000 digits so far), plus the same amount of signals from another 2 seconds of seeing a black screen shown between the digits, from the same test subject in a controlled, still experiment designed to reduce EMG noise and avoid blinks.