
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
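As a rough illustration, the sketch below builds a handful of example query strings from these operators and URL-encodes them. The search endpoint in the helper is a hypothetical placeholder, not a documented API of this registry.

```python
# Example query strings using the search operators listed above.
# The operator syntax follows the help text; the endpoint in as_url_param()
# is a HYPOTHETICAL placeholder, not a documented API.
import urllib.parse

queries = {
    "wildcard": "genom*",                            # matches genome, genomics, ...
    "phrase":   '"climate change"',                  # exact phrase
    "and":      "ecology + biodiversity",            # AND (also the default)
    "or":       "proteomics | metabolomics",         # OR
    "not":      "imaging - histopathology",          # exclude a term
    "grouping": '(ecology | evolution) + "long-term data"',
    "fuzzy":    "immunolgy~2",                       # up to 2 edits from the typo
    "slop":     '"barcode life"~3',                  # phrase words up to 3 positions apart
}

def as_url_param(query: str, base: str = "https://example.org/search") -> str:
    """Return a GET URL with the query encoded; the base URL is a placeholder."""
    return f"{base}?query={urllib.parse.quote(query)}"

for name, q in queries.items():
    print(f"{name:10s} {q:45s} -> {as_url_param(q)}")
```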
Found 60 result(s)
Funded by the National Science Foundation (NSF) and proudly operated by Battelle, the National Ecological Observatory Network (NEON) program provides open, continental-scale data across the United States that characterize and quantify complex, rapidly changing ecological processes. The Observatory’s comprehensive design supports greater understanding of ecological change and enables forecasting of future ecological conditions. NEON collects and processes data from field sites located across the continental U.S., Puerto Rico, and Hawaii over a 30-year timeframe. NEON provides free and open data that characterize plants, animals, soil, nutrients, freshwater, and the atmosphere. These data may be combined with external datasets or data collected by individual researchers to support the study of continental-scale ecological change.
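NEON also exposes its catalogue programmatically through a public Data API. The sketch below is a minimal example, assuming the commonly documented v0 endpoint at data.neonscience.org and the productCode/productName response fields; verify both against NEON's current API documentation.

```python
# Hedged sketch: listing NEON data products via the public Data API.
# ASSUMPTION: the v0 endpoint and the response layout ("data", "productCode",
# "productName") follow NEON's published API docs; verify before relying on them.
import requests

BASE = "https://data.neonscience.org/api/v0"  # assumed API root

def list_products(limit: int = 5) -> None:
    resp = requests.get(f"{BASE}/products", timeout=30)
    resp.raise_for_status()
    products = resp.json().get("data", [])  # assumed top-level "data" key
    for product in products[:limit]:
        print(product.get("productCode"), "-", product.get("productName"))

if __name__ == "__main__":
    list_products()
```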
From April 2020 to March 2023, the COVID-19 Immunity Task Force (CITF) supported 120 studies to generate knowledge about immunity to SARS-CoV-2. The subjects addressed by these studies include the extent of SARS-CoV-2 infection in Canada, the nature of immunity, vaccine effectiveness and safety, and the need for booster shots among different communities and priority populations in Canada. The CITF Databank was developed to further enhance the impact of CITF-funded studies by allowing additional research using the data collected from CITF-supported studies. The CITF Databank centralizes and harmonizes individual-level data from CITF-funded studies that have met all ethical requirements to deposit data in the CITF Databank and have completed a data sharing agreement. The CITF Databank is an internationally unique resource for sharing epidemiological and laboratory data from studies about SARS-CoV-2 immunity in different populations. The types of research that are possible with data from the CITF Databank include observational epidemiological studies, mathematical modelling research, and comparative evaluation of surveillance and laboratory methods.
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press: GigaScience and GigaByte (both online, open-access journals). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit of work (article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, it is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open-source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The OpenNeuro project (formerly known as the OpenfMRI project) was established in 2010 to provide a resource for researchers interested in making their neuroimaging data openly available to the research community. It is managed by Russ Poldrack and Chris Gorgolewski of the Center for Reproducible Neuroscience at Stanford University. The project has been developed with funding from the National Science Foundation, the National Institute on Drug Abuse, and the Laura and John Arnold Foundation.
TCIA is a service that de-identifies and hosts a large archive of medical images of cancer, accessible for public download. The data are organized as “collections”: typically patients' imaging related by a common disease (e.g., lung cancer), image modality or type (MRI, CT, digital histopathology, etc.), or research focus. Supporting data related to the images, such as patient outcomes, treatment details, genomics, and expert analyses, are also provided when available.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
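Because KNB assigns DOIs, a dataset citation can be dereferenced like any other DOI through the standard doi.org resolver. The sketch below shows that resolution step; the DOI value is a made-up placeholder, not a real KNB identifier.

```python
# Minimal sketch: resolving a dataset DOI to its landing page via doi.org.
# The DOI below is a PLACEHOLDER; substitute a real dataset DOI before running.
import requests

def resolve_doi(doi: str) -> str:
    """Follow the doi.org redirect chain and return the landing-page URL."""
    resp = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=30)
    resp.raise_for_status()
    return resp.url

if __name__ == "__main__":
    print(resolve_doi("10.1234/EXAMPLE-DATASET"))  # placeholder DOI
```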
The project brings together national key players providing environmentally related biological data and services to develop the 'German Federation for Biological Data' (GFBio). The overall goal is to provide a sustainable, service-oriented, national data infrastructure facilitating data sharing and stimulating data-intensive science in the fields of biological and environmental research.
PDBe is the European resource for the collection, organisation and dissemination of data on biological macromolecular structures. In collaboration with the other worldwide Protein Data Bank (wwPDB) partners - the Research Collaboratory for Structural Bioinformatics (RCSB) and BioMagResBank (BMRB) in the USA and the Protein Data Bank of Japan (PDBj) - we work to collate, maintain and provide access to the global repository of macromolecular structure data. We develop tools, services and resources to make structure-related data more accessible to the biomedical community.
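PDBe's structure records can also be retrieved programmatically via its REST API. The sketch below assumes the entry-summary endpoint and JSON layout described in PDBe's API documentation; treat the path and field names as assumptions to check before use.

```python
# Hedged sketch: fetching an entry summary from the PDBe REST API.
# ASSUMPTION: the endpoint path and JSON layout follow PDBe's documented API;
# confirm against https://www.ebi.ac.uk/pdbe/api/ before use.
import requests

API = "https://www.ebi.ac.uk/pdbe/api/pdb/entry/summary"

def entry_summary(pdb_id: str) -> dict:
    resp = requests.get(f"{API}/{pdb_id.lower()}", timeout=30)
    resp.raise_for_status()
    # Assumed shape: response keyed by PDB ID, each value a list of summary records.
    return resp.json()[pdb_id.lower()][0]

if __name__ == "__main__":
    summary = entry_summary("1cbs")  # example PDB entry ID
    print(summary.get("title"))
```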
The Barcode of Life Data Systems (BOLD) provides DNA barcode data. BOLD's online workbench supports data validation, annotation, and publication for specimen, distributional, and molecular data. The platform consists of four main modules: a data portal, a database of barcode clusters, an educational portal, and a data collection workbench. BOLD is the go-to site for DNA-based identification. As the central informatics platform for DNA barcoding, BOLD plays a crucial role in assimilating and organizing data gathered by the international barcode research community. Two iBOL (International Barcode of Life) Working Groups are supporting the ongoing development of BOLD.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists, and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. GRIIDC is the vehicle through which GoMRI is fulfilling this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico ecosystem.
The Australian Drosophila Ecology and Evolution Resource (ADEER) from the Hoffmann lab and other contributors is a nationally significant life science collection. The Drosophila Clinal Data Collection contains data on populations along the eastern coast of Australia and remains an excellent resource for understanding past and future evolutionary responses to climate change. The Drosophila Genomic Data Collection hosts Drosophila genomes sequenced as part of the Genomic Basis for Adaptation to Climate Change Project; 23 genomes have been sequenced under this project, and assemblies and annotations are currently available for Drosophila birchii, D. bunnanda, D. hydei, and D. repleta. The Drosophila Species Distribution Data Collection contains distribution data for nine drosophilid species collected in Australia by the Hoffmann lab and other research groups between 1924 and 2005. More than 300 drosophilid species have been identified in the tropical and temperate forests located on the east coast of Australia. Many species are restricted to the tropics, a few are temperate specialists, and some have broad distributions across climatic regions. Their varied distribution along the tropical-temperate cline provides a powerful tool for studying climate adaptation and species distribution limits.
LIVIVO is an interdisciplinary search engine for literature and information in the field of life sciences. It is run by ZB MED – Information Centre for Life Sciences. LIVIVO automatically searches a central index of all its constituent databases for the terms you enter. The ZB MED Searchportal already provides a large amount of research data from DataCite data centres (e.g. Beijing Genomics Institute, Natural Environment Research Council) in the field of life sciences. These can be searched directly using the "Documenttype=research data" filter. A further integration of data from life science data repositories is planned.
Synapse is an open source software platform that clinical and biological data scientists can use to carry out, track, and communicate their research in real time. Synapse enables co-location of scientific content (data, code, results) and narrative descriptions of that work.
The FAIRDOMHub is built upon the SEEK software suite, an open-source web platform for sharing scientific research assets, processes, and outcomes. FAIRDOM will establish a support and service network for European Systems Biology. It will serve projects in standardizing, managing, and disseminating data and models in a FAIR manner: Findable, Accessible, Interoperable, and Reusable. FAIRDOM is an initiative to develop a community and establish an internationally sustained Data and Model Management service for the European Systems Biology community. FAIRDOM is a joint action of ERA-Net EraSysAPP and the European Research Infrastructure ISBE.
Project Tycho is a repository for global health data, particularly disease surveillance data. Project Tycho currently includes data for 92 notifiable disease conditions in the US, and up to three dengue-related conditions for 99 countries. Project Tycho has compiled data from reputable sources such as the US Centers for Disease Control and Prevention, the World Health Organization, and national health agencies for countries around the world. Project Tycho datasets are highly standardized and have rich metadata to improve access, interoperability, and reuse of global health data for research and innovation.
The Immunology Database and Analysis Portal (ImmPort) archives clinical study and trial data generated by NIAID/DAIT-funded investigators. Data types housed in ImmPort include subject assessments (i.e., medical history, concomitant medications, and adverse events) as well as mechanistic assay data such as flow cytometry, ELISA, and ELISPOT. You won't need an ImmPort account to search for compelling studies or to peruse study demographics, interventions, and mechanistic assays. But why stop there? What you really want to do is download the study and look at each experiment in detail, including individual ELISA results and flow cytometry files. Perhaps you want to take those flow cytometry files for a test drive using FLOCK in the ImmPort flow cytometry module. To download all that interesting data you will need to register for ImmPort access.
The BULERIA Institutional Repository is an open access digital archive that houses the full text of documents generated by members of the University of León in the development of their academic and research activity. The repository also holds the University's institutional documentation. In accordance with the principles of the Open Science movement, the aim is to facilitate the recovery, reuse and preservation of research results, in addition to promoting the dissemination and visibility of the scientific production of the Institution, effectively guaranteeing the advancement of science.
BioMemory is the network of biological collections of the Department of Biology, Agriculture and Food Sciences (DiSBA) of the National Research Council of Italy (CNR) for bio-monitoring, biodiversity conservation, agri-food and environmental sustainability, and human well-being. The project aims to create a network of biobanks (i.e., scientific research collections) in which data and metadata associated with biological samples of different kinds are collected and stored in a systematic, well-organized way. Maintaining the existing collections will allow their future use for a number of purposes, from the genetic improvement of organisms to withstand environmental changes (climate-ready organisms) to the fight against epidemics and pandemics affecting humans, animals, and plants.