Filter by: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
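These operators resemble Lucene/Elasticsearch query-string syntax. As a minimal sketch of how such queries might be composed and URL-encoded, assuming a search endpoint and a "query" parameter that are not documented on this page, consider the following Python snippet:

```python
# Illustrative only: composing search strings with the operators listed above
# and URL-encoding them for a hypothetical search endpoint. The endpoint URL
# and the "query" parameter name are assumptions, not taken from this page.
from urllib.parse import urlencode

SEARCH_URL = "https://www.re3data.org/search"  # assumed endpoint, adjust as needed

examples = {
    "wildcard": "plankton*",                        # * at the end of a keyword
    "phrase":   '"cancer genome"',                  # quotes search an exact phrase
    "and":      "salmon + Pacific",                 # + is an AND search (the default)
    "or":       "turtle | tortoise",                # | is an OR search
    "not":      "genome - cancer",                  # - excludes a term
    "grouping": "(marine | polar) + data",          # parentheses set priority
    "fuzzy":    "plancton~2",                       # ~N after a word: edit distance
    "slop":     '"research data repository"~3',    # ~N after a phrase: slop amount
}

for name, query in examples.items():
    print(f"{name:10s} {SEARCH_URL}?{urlencode({'query': query})}")
```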
Found 19 result(s)
SAHFOS is an internationally funded, independent, non-profit research organisation responsible for the operation of the Continuous Plankton Recorder (CPR) Survey. As a large-scale global survey, it provides the scientific and policy communities with a basin-wide, long-term measure of the ecological health of marine plankton. Established in 1931, the CPR Survey is the longest-running and most geographically extensive marine ecological survey in the world. It maintains a considerable database of marine plankton and associated metadata that researchers and policy makers use to examine strategically important science pillars such as climate change, human health, fisheries, biodiversity, pathogens, invasive species, ocean acidification and natural capital. The CPR Survey has merged with the Marine Biological Association and is today operated by the Association, based in Plymouth, UK.
State of the Salmon provides data on the abundance, diversity, and ecosystem health of wild salmon populations in the Pacific Ocean, northwestern North America, and Asia. Data downloads are available using two geographic frameworks: Salmon Ecoregions or Hydro 1K.
Copernicus is a European system for monitoring the Earth. Copernicus consists of a complex set of systems which collect data from multiple sources: Earth observation satellites and in situ sensors such as ground stations, airborne and sea-borne sensors. It processes these data and provides users with reliable and up-to-date information through a set of services related to environmental and security issues. The services address six thematic areas: land monitoring, marine monitoring, atmosphere monitoring, climate change, emergency management and security. The main users of Copernicus services are policymakers and public authorities who need the information to develop environmental legislation and policies or to take critical decisions in the event of an emergency, such as a natural disaster or a humanitarian crisis. Based on the Copernicus services and on the data collected through the Sentinels and the contributing missions, many value-added services can be tailored to specific public or commercial needs, resulting in new business opportunities. In fact, several economic studies have already demonstrated a huge potential for job creation, innovation and growth.
Reactome is a manually curated, peer-reviewed pathway database. Its aim is to provide visual representations of biological pathways in a computationally accessible format. Pathway annotations are authored by expert biologists in collaboration with the Reactome editorial staff and cross-referenced to many bioinformatics databases, including the NCBI Gene, Ensembl and UniProt databases, the UCSC and HapMap genome browsers, the KEGG Compound and ChEBI small-molecule databases, PubMed, and Gene Ontology.
Note: This repository is no longer available. A human interactome map. The sequencing of the human genome has provided a surprisingly small number of genes, indicating that the complex organization of life is not reflected in the gene number but, rather, in the gene products, that is, in the proteins. These macromolecules regulate the vast majority of cellular processes by their ability to communicate with each other and to assemble into larger functional units. Therefore, the systematic analysis of protein-protein interactions is fundamental for the understanding of protein function, cellular processes and, ultimately, the complexity of life. Moreover, interactome maps are particularly needed to link new proteins to disease pathways and the identification of novel drug targets.
The Cancer Genome Atlas (TCGA) Data Portal provides a platform for researchers to search, download, and analyze data sets generated by TCGA. It contains clinical information, genomic characterization data, and high-level sequence analysis of the tumor genomes. The Data Coordinating Center (DCC) is the central provider of TCGA data. The DCC standardizes data formats and validates submitted data.
The Brown Digital Repository (BDR) is a place to gather, index, store, preserve, and make available digital assets produced via the scholarly, instructional, research, and administrative activities at Brown.
The CardioVascular Research Grid (CVRG) project is creating an infrastructure for secure, seamless access to study data and analysis tools. CVRG tools are developed using the Software as a Service model, allowing users to access tools through their browser, thus eliminating the need to install and maintain complex software.
This site provides access to over 210,000 digitized or born-digital images in the Koç University collections featuring prints, photographs, slides, maps, newspapers, posters, postcards, manuscripts, streaming video, and more. The collections consist of the materials of the Koç University Libraries and Archives (AKMED, ANAMED, SKL, and VEKAM), Koç University Faculty and Departments, and projects carried out in partnership with the Koç University Libraries. It includes the Koç University Institutional Repository (KU-IR).
Note: The Cancer Genomics Hub mission is now completed. The Cancer Genomics Hub was established in August 2011 to provide a repository for The Cancer Genome Atlas, the childhood cancer initiative Therapeutically Applicable Research to Generate Effective Treatments, and the Cancer Genome Characterization Initiative. CGHub rapidly grew to be the largest database of cancer genomes in the world, storing more than 2.5 petabytes of data and serving downloads of nearly 3 petabytes per month. As the central repository for the foundational genome files, CGHub streamlined team science efforts as data became as easy to obtain as downloading from a hard drive. The convenient access to Big Data, and the collaborations that CGHub made possible, are now essential to cancer research. That work continues at the NCI's Genomic Data Commons, where all files previously stored at CGHub can be found. The website for the Genomic Data Commons is: https://gdc.nci.nih.gov/ The Cancer Genomics Hub (CGHub) is a secure repository for storing, cataloging, and accessing cancer genome sequences, alignments, and mutation information from the Cancer Genome Atlas (TCGA) consortium and related projects. Access to CGHub data: all researchers using CGHub must meet the access and use criteria established by the National Institutes of Health (NIH) to ensure the privacy, security, and integrity of participant data. CGHub also hosts some publicly available data, in particular data from the Cancer Cell Line Encyclopedia. All metadata is publicly available, and the catalog of metadata and associated BAMs can be explored using the CGHub Data Browser.
depositar, taking its name from the Portuguese/Spanish verb for "to deposit", is an online repository for research data. The site is built by researchers, for researchers. You are free to deposit, discover, and reuse datasets on depositar for all your research purposes.
The Polar Data Catalogue is an online database of metadata and data that describes, indexes and provides access to diverse data sets generated by polar researchers. These records cover a wide range of disciplines, from the natural sciences and policy to health, the social sciences, and more.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a "dark archive" without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The OpenNeuro project (formerly known as the OpenfMRI project) was established in 2010 to provide a resource for researchers interested in making their neuroimaging data openly available to the research community. It is managed by Russ Poldrack and Chris Gorgolewski of the Center for Reproducible Neuroscience at Stanford University. The project has been developed with funding from the National Science Foundation, the National Institute on Drug Abuse, and the Laura and John Arnold Foundation.
TurtleSAT is a new website where communities are mapping the location of freshwater turtles in waterways and wetlands across the country. Australia's unique freshwater turtles are in crisis - their numbers are declining and your help is needed to record where you see turtles in your local area.
LONI's Image and Data Archive (IDA) is a secure data archiving system. The IDA uses a robust infrastructure to provide researchers with a flexible and simple interface for de-identifying, searching, retrieving, converting, and disseminating their biomedical data. With thousands of investigators across the globe and more than 21 million data downloads to date, the IDA guarantees reliability with a fault-tolerant network comprising multiple switches, routers, and Internet connections to prevent system failure.
The National Population Health Data Center (NPHDC) is one of 20 national science data centers approved by the Ministry of Science and Technology and the Ministry of Finance. The Population Health Data Archive (PHDA) is developed by the NPHDC, based at the Institute of Medical Information, Chinese Academy of Medical Sciences. PHDA mainly receives scientific data from science and technology projects supported by the national budget, and also collects data from multiple other sources such as medical and health institutions, research institutions, and individuals, in line with the national big data strategy and the Healthy China strategy. The data resources cover basic medicine, clinical medicine, public health, traditional Chinese medicine and pharmacy, pharmacy, and population and reproduction. PHDA supports data collection, archiving, processing, storage, curation, verification, certification and release in the field of population health. It provides multiple types of data sharing and application services for different tiers of users and helps them find, access, interoperate with, and reuse data in a safe and controlled environment. PHDA provides important support for promoting the open sharing of population health scientific data and for domestic and international cooperation.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.
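For programmatic access, the catalog behind Data.gov follows CKAN conventions. The sketch below is an assumption based on the standard CKAN package_search action at catalog.data.gov, not something stated in the text above, and shows one way dataset metadata might be searched in Python:

```python
# Minimal sketch: searching the Data.gov catalog via the standard CKAN
# package_search action. The endpoint follows CKAN conventions and is an
# assumption; it is not documented in the description above.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

CKAN_SEARCH = "https://catalog.data.gov/api/3/action/package_search"  # assumed endpoint

params = urlencode({"q": "salmon population", "rows": 5})
with urlopen(f"{CKAN_SEARCH}?{params}") as resp:
    result = json.load(resp)["result"]

print(f"{result['count']} matching datasets")
for pkg in result["results"]:
    # Each package carries metadata plus links to downloadable resources.
    first_url = pkg["resources"][0]["url"] if pkg["resources"] else "no resource"
    print(pkg["title"], "-", first_url)
```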