  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping (precedence)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
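For example, an illustrative query combining several of these operators (the search terms are invented for illustration, not drawn from the results below) might be:

  climate* +("sea ice" | "surface temperature") -model tempratur~2 "data record"~3

This matches entries containing a word that starts with "climate", requires either of the two quoted phrases, excludes entries containing "model", tolerates up to two character edits when matching "tempratur", and allows the two words of "data record" to appear up to three positions apart.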
Found 56 result(s)
A Climate Data Record (CDR) is a time series of measurements of sufficient length, consistency and continuity to determine climate variability and change. The fundamental CDRs include sensor data, such as calibrated radiances and brightness temperatures, that scientists have improved and quality-controlled, along with the data used to calibrate them. The thematic CDRs include geophysical variables derived from the fundamental CDRs, such as sea surface temperature and sea ice concentration, and they are specific to various disciplines.
Note: openresearchdata.ch has been discontinued. Openresearchdata.ch (ORD@CH) was developed as a publication platform for open research data in Switzerland. It offers a metadata catalogue of the data available at the participating institutions (ETH Zurich Scientific IT Services, FORS Lausanne, Digital Humanities Lab at the University of Basel). In addition, metadata from other institutions is continuously added, with the goal of developing a comprehensive metadata infrastructure for open research data in Switzerland. The ORD@CH project is part of the program "Scientific information: access, processing and safeguarding", initiated by the Rectors' Conference of Swiss Universities (Program SUC 2013-2016 P-2). The portal is hosted and developed by ETH Zurich Scientific IT Services.
DataverseNO is an archive platform for open research data, owned and operated by UiT The Arctic University of Norway. DataverseNO is open for researchers and organizations associated with Norwegian universities and research institutions, as well as independent researchers from Norway. All kinds of open research data from all academic disciplines may be archived.
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for zip- and tar-containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
Research Data Australia is the data discovery service of the Australian National Data Service (ANDS). We do not store the data itself here but provide descriptions of, and links to, the data from our data publishing partners. ANDS is funded by the Australian Government through the National Collaborative Research Infrastructure Strategy (NCRIS).
Academic Commons provides open, persistent access to the scholarship produced by researchers at Columbia University, Barnard College, Jewish Theological Seminary, Teachers College, and Union Theological Seminary. Academic Commons is a program of the Columbia University Libraries. Academic Commons accepts articles, dissertations, research data, presentations, working papers, videos, and more.
The edoc-Server, launched in 1998, is the institutional repository of the Humboldt-Universität zu Berlin and offers the possibility of publishing texts and data. Every item is published Open Access, with an optional embargo period of up to five years. Data publications have been accepted since 1 January 2018.
Monash.figshare is Monash University’s institutional data repository. It allows researchers to store, manage and showcase their data while retaining control over access rights and re-use conditions. Monash.figshare offers the latest in cloud-based technology, ensures valuable research data is stored securely, and supports long-term citations with Digital Object Identifiers (DOIs).
BExIS is the online data repository and information system of the Biodiversity Exploratories Project (BE). The BE is a German network of biodiversity-related working groups from areas such as vegetation and soil science, zoology and forestry. For up to three years after data acquisition, use of the data is restricted to members of the BE. Thereafter, the data is usually publicly available (https://www.bexis.uni-jena.de/PublicData/PublicDataDefault.aspx).
The Duke Research Data Repository is a service of the Duke University Libraries that provides curation, access, and preservation of research data produced by the Duke community. Duke's RDR is a discipline-agnostic institutional data repository that is intended to preserve and make public data related to the teaching and research mission of Duke University, including data linked to a publication, research project, and/or class, as well as supplementary software code and documentation used to provide context for the data.
Research Data Finder is QUT’s discovery service for research data created or collected by QUT researchers. Designed to promote the visibility of QUT research datasets, Research Data Finder provides descriptions about shareable, reusable datasets available via open or mediated access.
LINDAT/CLARIN is designed as a Czech “node” of Clarin ERIC (Common Language Resources and Technology Infrastructure). It also supports the goals of the META-NET language technology network. Both networks aim at collection, annotation, development and free sharing of language data and basic technologies between institutions and individuals both in science and in all types of research. The Clarin ERIC infrastructural project is more focused on humanities, while META-NET aims at the development of language technologies and applications. The data stored in the repository are already being used in scientific publications in the Czech Republic.
Thousands of circular RNAs (circRNAs) have recently been shown to be expressed in eukaryotic cells [Salzman et al. 2012, Jeck et al. 2013, Memczak et al. 2013, Salzman et al. 2013]. Here you can explore public circRNA datasets and download the custom Python scripts needed to discover circRNAs in your own (ribominus) RNA-seq data.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, documents or images. It serves as the backbone of data curation and, for most of its content, it is a "dark archive" without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publishing data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. A direct upload into the ETH Data Archive should be considered in the following cases:
  • Software code has to be uploaded and registered according to ETH transfer's requirements for Software Disclosure.
  • A substantial number of files has to be submitted regularly for long-term archiving and/or publishing and browser-based upload is not an option; the ETH Data Archive may offer automated data and metadata transfers from source applications (e.g. from a LIMS) via API.
  • Files for a project have to be collected on a local computer and metadata has to be added before the data is uploaded to the ETH Data Archive; for this, the local file editor docuteam packer is provided. Docuteam packer allows depositors to structure, describe and organise data for upload into the ETH Data Archive, and the depositor decides when submission is due.
HIstome: The Histone Infobase is a database of human histones, their post-translational modifications and modifying enzymes. HIstome is a combined effort of researchers from two institutions, Advanced Center for Treatment, Research and Education in Cancer (ACTREC), Navi Mumbai and Center of Excellence in Epigenetics, Indian Institute of Science Education and Research (IISER), Pune.
The Earth System Grid Federation (ESGF) is an international collaboration with a current focus on serving the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project (CMIP) and supporting climate and environmental science in general. Data are searchable and available for download at the federated ESGF-CoG nodes (https://esgf.llnl.gov/nodes.html).
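As a minimal sketch of how such a search could be scripted (assuming the publicly documented ESGF search REST API and the LLNL index node URL, neither of which is stated in the entry above), a query might look like this in Python:

  # Minimal sketch: query an ESGF index node's search REST API.
  # The node URL, facet names and facet values are assumptions based on
  # public ESGF documentation, not taken from the repository entry above.
  import requests

  ESGF_SEARCH = "https://esgf-node.llnl.gov/esg-search/search"  # assumed index node

  params = {
      "project": "CMIP6",                 # example facet value (assumed)
      "variable": "tas",                  # near-surface air temperature (assumed)
      "limit": 5,                         # return at most five dataset records
      "format": "application/solr+json",  # request a JSON (Solr-style) response
  }

  response = requests.get(ESGF_SEARCH, params=params, timeout=30)
  response.raise_for_status()
  for doc in response.json()["response"]["docs"]:
      print(doc.get("id"), doc.get("title"))

The same kind of query can also be issued through higher-level client libraries, but the plain HTTP form keeps the sketch self-contained.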
The Linguistic Data Consortium (LDC) is an open consortium of universities, libraries, corporations and government research laboratories. It was formed in 1992 to address the critical data shortage then facing language technology research and development. Initially, LDC's primary role was as a repository and distribution point for language resources. Since that time, and with the help of its members, LDC has grown into an organization that creates and distributes a wide array of language resources. LDC also supports sponsored research programs and language-based technology evaluations by providing resources and contributing organizational expertise. LDC is hosted by the University of Pennsylvania and is a center within the University’s School of Arts and Sciences.
OFFLINE: A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years, and it has come to the end of its useful life.
The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and was expected to take about three years.
BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility either to use ready-made workflows or to create one's own. BioVeL workflows are stored in MyExperiment (BioVeL group: http://www.myexperiment.org/groups/643/content). They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
The University of Reading Research Data Archive (the Archive) is a multidisciplinary online service for the registration, preservation and publication of research datasets produced or collected at the University of Reading.