Filter
Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotation marks can be used to search for phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping and precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
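The operators listed above can be combined into a single query string. The snippet below is an illustrative sketch only: the example query is made up, and it assumes (without confirming) that the public re3data search page accepts a "query" URL parameter.

```python
# Illustrative sketch: build a query string using the operators described above
# and fetch the re3data search page. The "query" URL parameter is an assumption
# about how the public search page works, not a documented API.
import urllib.parse
import urllib.request

# phrase search, plus a mandatory wildcard term, excluding another term
query = '"research data" +climat* -cancer'
url = "https://www.re3data.org/search?" + urllib.parse.urlencode({"query": query})

with urllib.request.urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

print(f"Fetched {len(html)} characters for query: {query}")
```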
Found 162 result(s)
The Research Data Gouv platform is the French national federated platform for open and shared research data serving the national scientific community. This platform was an integral part of the Second National Plan for Open Science (PNSO) and offers a multidisciplinary data repository, a registry which reports data hosted in other repositories and a web portal. The multidisciplinary repository is a sovereign publishing solution for sharing and opening up data for communities which are yet to set up their own recognised thematic repository.
PsychArchives is a disciplinary repository for psychological science and neighboring disciplines. Accommodating 20 different digital research object (DRO) types, including articles, preprints, research data, code, supplements, preregistrations, tests and multimedia objects, PsychArchives provides a digital space that integrates all research-related content relevant to psychology. PsychArchives is committed to the FAIR principles, facilitating the findability, accessibility, interoperability and reusability of research and research data.
The main goal of the CLUES-project is to provide constrained simulations of the local universe designed to be used as a numerical laboratory of the current paradigm. The simulations will be used for unprecedented analysis of the complex dark matter and gasdynamical processes which govern the formation of galaxies. The predictions of these experiments can be easily compared with the detailed observations of our galactic neighborhood. Some of the CLUES data is now publicly available via the CosmoSim database (https://www.cosmosim.org/). This includes AHF halo catalogues from the Box 64, WMAP3 resimulations of the Local Group with 4096³ particle resolution.
PISCO researchers collect biological, chemical, and physical data about ocean ecosystems in the nearshore portions of the California Current Large Marine Ecosystem. Data are archived and used to create summaries and graphics, in order to ensure that the data can be used and understood by a diverse audience of managers, policy makers, scientists and the general public.
The TRY database is a global archive of plant traits. TRY is a network of vegetation scientists headed by DIVERSITAS, IGBP, iDiv, the Max Planck Institute for Biogeochemistry and an international Advisory Board. About half of the data are geo-referenced, providing a global coverage of more than 8000 measurement sites.
The GML contributes to the continual improvement of access to and information about official microdata; provides a service and research infrastructure for these data; acts as an intermediary between the Federal Statistical Office and empirical research; and conducts exemplary research based upon official data. The GML is an integral part of the German data infrastructure and features as one of six institutions funded by the German Council of Social and Economic Data.
The Cancer Cell Line Encyclopedia project is a collaboration between the Broad Institute and the Novartis Institutes for Biomedical Research and its Genomics Institute of the Novartis Research Foundation to conduct a detailed genetic and pharmacologic characterization of a large panel of human cancer models, to develop integrated computational analyses that link distinct pharmacologic vulnerabilities to genomic patterns, and to translate cell line integrative genomics into cancer patient stratification. The CCLE provides public access to genomic data, analysis and visualization for about 1000 cell lines.
The main focus of tambora.org is Historical Climatology. Years of meticulous work in this field in research groups around the world have resulted in large data collections on climatic parameters such as temperature, precipitation, storms, floods, etc. with different regional, temporal and thematic foci. tambora.org enables researchers to collaboratively interpret the information derived from historical sources. It provides a database for original text quotations together with bibliographic references and the extracted places, dates and coded climate and environmental information.
Europeana is the trusted source of cultural heritage brought to you by the Europeana Foundation and a large number of European cultural institutions, projects and partners. It’s a real piece of teamwork. Ideas and inspiration can be found within the millions of items on Europeana. These objects include: images (paintings, drawings, maps, photos and pictures of museum objects); texts (books, newspapers, letters, diaries and archival papers); sounds (music and spoken word from cylinders, tapes, discs and radio broadcasts); and videos (films, newsreels and TV broadcasts). All texts are CC BY-SA; images and media are licensed individually.
The Atmospheric Science Data Center (ASDC) at NASA Langley Research Center is responsible for processing, archiving, and distributing NASA Earth science data in the areas of radiation budget, clouds, aerosols, and tropospheric chemistry. The ASDC specializes in atmospheric data important to understanding the causes and processes of global climate change and the consequences of human activities on the climate.
NAKALA is a repository dedicated to SSH research data in France. Given its generalist and multi-disciplinary nature, all types of data are accepted, although certain formats are recommended to ensure long-term data preservation. It has been developed and is hosted by Huma-Num, the French national research infrastructure for digital humanities.
The CCHDO provides access to standard, well-described datasets from reference-quality repeat hydrography expeditions. It curates high-quality full water column Conductivity-Temperature-Depth (CTD), hydrographic, carbon and tracer data from over 2,500 cruises from ~30 countries. It is the official data center for CTD and water sample profile data from the Global Ocean Ship-Based Hydrographic Investigations Program (GO-SHIP), as well as for WOCE, US Hydro, and other high-quality repeat hydrography lines (e.g. SOCCOM, HOT, BATS, WOCE, CARINA).
The aim of the Freshwater Biodiversity Data Portal is to integrate and provide open and free access to freshwater biodiversity data from all possible sources. To this end, we offer tools and support for scientists interested in documenting/advertising their dataset in the metadatabase, in submitting or publishing their primary biodiversity data (i.e. species occurrence records), or in having their dataset linked to the Freshwater Biodiversity Data Portal. This information portal serves as a data discovery tool, and allows scientists and managers to complement, integrate, and analyse distribution data to elucidate patterns in freshwater biodiversity. The Freshwater Biodiversity Data Portal was initiated under the EU FP7 BioFresh project and continued through the Freshwater Information Platform (http://www.freshwaterplatform.eu). To ensure the broad availability of biodiversity data and integration in the global GBIF index, we strongly encourage scientists to submit any primary biodiversity data published in a scientific paper to national nodes of GBIF or to thematic initiatives such as the Freshwater Biodiversity Data Portal.
mediaTUM – the media and publications repository of the Technical University of Munich: mediaTUM supports the publication of digital documents and research data as well as the use of multimedia content in research and teaching.
LSE Research Online is the institutional repository for the London School of Economics and Political Science. LSE Research Online contains research produced by LSE staff, including journal articles, book chapters, books, working papers, conference papers and more.
FRED is an online database consisting of hundreds of thousands of economic data time series from scores of national, international, public, and private sources. FRED, created and maintained by the Research Department at the Federal Reserve Bank of St. Louis, goes far beyond simply providing data: It combines data with a powerful mix of tools that help the user understand, interact with, display, and disseminate the data. In essence, FRED helps users tell their data stories.
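As a minimal sketch of programmatic access, the snippet below pulls one series from the documented FRED web service; the series identifier "UNRATE" is just an example and "YOUR_API_KEY" is a placeholder for a free API key.

```python
# Minimal sketch: fetch observations for one FRED series via the FRED web
# service. "UNRATE" is an example series; "YOUR_API_KEY" is a placeholder
# that must be replaced with a real (free) API key for the request to succeed.
import json
import urllib.parse
import urllib.request

params = {
    "series_id": "UNRATE",      # U.S. civilian unemployment rate (example)
    "api_key": "YOUR_API_KEY",  # placeholder
    "file_type": "json",
}
url = ("https://api.stlouisfed.org/fred/series/observations?"
       + urllib.parse.urlencode(params))

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(data["observations"][:3])  # first few date/value pairs
```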
Modern signal processing and machine learning methods have exciting potential to generate new knowledge that will impact both physiological understanding and clinical care. Access to data - particularly detailed clinical data - is often a bottleneck to progress. The overarching goal of PhysioNet is to accelerate research progress by freely providing rich archives of clinical and physiological data for analysis. The PhysioNet resource has three closely interdependent components: an extensive archive ("PhysioBank"), a large and growing library of software ("PhysioToolkit"), and a collection of popular tutorials and educational materials.
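As a minimal sketch of how archived waveform data can be read programmatically, the snippet below uses the third-party wfdb Python package; the record ("100") and database directory ("mitdb", the MIT-BIH Arrhythmia Database) are only examples, and network access to PhysioNet is assumed.

```python
# Minimal sketch: read a short excerpt of one PhysioBank waveform record with
# the wfdb Python package (pip install wfdb). Record "100" from the MIT-BIH
# Arrhythmia Database ("mitdb") is used purely as an example.
import wfdb

record = wfdb.rdrecord("100", pn_dir="mitdb", sampto=1000)  # first 1000 samples

print(record.sig_name, record.fs)  # channel names and sampling frequency (Hz)
print(record.p_signal.shape)       # NumPy array of shape (samples, channels)
```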
OpenKIM is an online suite of open source tools for molecular simulation of materials. These tools help to make molecular simulation more accessible and more reliable. Within OpenKIM, you will find an online resource for standardized testing and long-term warehousing of interatomic models and data, and an application programming interface (API) standard for coupling atomistic simulation codes and interatomic potential subroutines.
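As a hedged illustration of the KIM API in use, the sketch below drives an OpenKIM interatomic model through ASE's KIM calculator; it assumes ASE, the kim-api and kimpy are installed, and "EXAMPLE_KIM_MODEL_ID" is a placeholder rather than a real model identifier.

```python
# Hedged sketch: evaluate an OpenKIM interatomic model via ASE's KIM calculator.
# Assumes ASE, the kim-api and kimpy are installed and that a KIM model has been
# installed locally. "EXAMPLE_KIM_MODEL_ID" is a placeholder, not a real model.
from ase.build import bulk
from ase.calculators.kim.kim import KIM

atoms = bulk("Ar", "fcc", a=5.26)          # simple FCC argon cell (example)
atoms.calc = KIM("EXAMPLE_KIM_MODEL_ID")   # replace with an installed KIM model

print(atoms.get_potential_energy())        # energy as computed by the KIM model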
This hub supports the geospatial modeling, data analysis and visualization needs of the broad research and education communities through hosting of groups, datasets, tools, training materials, and educational content.
The DesignSafe Data Depot Repository (DDR) is the platform for curation and publication of datasets generated in the course of natural hazards research. The DDR is an open access data repository that enables data producers to safely store, share, organize, and describe research data, towards permanent publication, distribution, and impact evaluation. The DDR allows data consumers to discover, search for, access, and reuse published data in an effort to accelerate research discovery. It is a component of the DesignSafe cyberinfrastructure, which represents a comprehensive research environment that provides cloud-based tools to manage, analyze, curate, and publish critical data for research to understand the impacts of natural hazards. DesignSafe is part of the NSF-supported Natural Hazards Engineering Research Infrastructure (NHERI), and aligns with its mission to provide the natural hazards research community with open access, shared-use scholarship, education, and community resources aimed at supporting civil and social infrastructure prior to, during, and following natural disasters. It serves a broad national and international audience of natural hazard researchers (both engineers and social scientists), students, practitioners, policy makers, as well as the general public. It has been in operation since 2016, and also provides access to legacy data dating from about 2005. These legacy data were generated as part of the NSF-supported Network for Earthquake Engineering Simulation (NEES), a predecessor to NHERI. Legacy data and metadata belonging to NEES were transferred to the DDR for continuous preservation and access.
The World Wide Molecular Matrix (WWMM) is an electronic repository for unpublished chemical data. WWMM is an open collection of information on small molecules. The "Matrix" in WWMM is influenced by William Gibson's vision of a cyberinfrastructure where all knowledge is accessible. The WWMM is an experiment to see how far this can be taken for chemical compounds. Although much of the information for a given compound has been Openly published, very little is available in Open electronic collections. The WWMM is aimed at catalysing this approach for chemistry, and the current collection is made available under the Budapest Open Access Initiative (http://www.budapestopenaccessinitiative.org/read).
As stated 2017-05-23: Cancer GEnome Mine is no longer available. Cancer GEnome Mine was a public database for storing clinical information about tumor samples and microarray data, with emphasis on array comparative genomic hybridization (aCGH) and data mining of gene copy number changes.
caArray Retirement Announcement: The National Cancer Institute (NCI) Center for Biomedical Informatics and Information Technology (CBIIT) instance of the caArray database was retired on March 31st, 2015. All publicly-accessible caArray data and annotations will be archived and will remain available via FTP download (https://wiki.nci.nih.gov/x/UYHeDQ) and are also available at GEO (http://www.ncbi.nlm.nih.gov/geo/). While NCI will not be able to provide technical support for the caArray software after the retirement, the source code is available on GitHub (https://github.com/NCIP/caarray), and we encourage continued community development. Molecular Analysis of Brain Neoplasia (Rembrandt fine-00037) gene expression data has been loaded into ArrayExpress: http://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-3073. caArray is an open-source, web and programmatically accessible microarray data management system that supports the annotation of microarray data using MAGE-TAB and web-based forms. Data and annotations may be kept private to the owner, shared with user-defined collaboration groups, or made public. The NCI instance of caArray hosts many cancer-related public datasets available for download.
The changing demographic composition has expanded the scope of the U.S. racial and ethnic mosaic. As a result, interest and research on race and ethnicity have become more complex and expansive. RCMD seeks to assist in the public dissemination and preservation of quality data to generate more "good science" for years to come. RCMD also aims to be part of an interactive community of people interested and involved in minority-related issues and investigations, in order to make possible the broadest scope of research endeavors and examinations.
Research data management is a general term covering how you organize, structure, store, and care for the information used or generated during a research project. The University of Oxford policy mandates the preservation of research data and records for a minimum of 3 years after publication. ORA-Data is the University of Oxford’s research data archive (https://www.re3data.org/repository/r3d100011230): a place to securely hold digital research materials (data) of any sort, along with documentation that helps explain what they are and how to use them (metadata). The application of consistent archiving policies, preservation techniques and discovery tools further increases the long-term availability and usefulness of the data. This is the main difference between storage and archiving of data.