  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
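As an illustration of how these operators combine, here are a few hypothetical example queries (the terms are made up, not taken from the site):

```
genom*                      wildcard: matches genome, genomics, ...
"data repository"           exact phrase
ocean + carbon              AND search (the default)
plasmid | clone             OR search
genome -cancer              exclude results containing "cancer"
(fungi | fungal) + strain   parentheses set priority
genme~1                     fuzzy match within edit distance 1
"ocean carbon cycle"~2      phrase match with slop of 2
```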
Found 32 result(s)
The Ningaloo Atlas was created in response to the need for more comprehensive and accessible environmental and socio-economic data on the greater Ningaloo region. As such, the Ningaloo Atlas is a web portal not only to access and share information, but also to celebrate and promote the biodiversity, heritage, value, and way of life of the greater Ningaloo region.
DNASU is a central repository for plasmid clones and collections. Currently we store and distribute over 200,000 plasmids, including 75,000 human and mouse plasmids, full genome collections, the protein expression plasmids from the Protein Structure Initiative as the PSI:Biology Material Repository (PSI:Biology-MR), and both small and large collections from individual researchers. We are also a founding member and distributor of the ORFeome Collaboration plasmid collection.
The Federal Interagency Traumatic Brain Injury Research (FITBIR) informatics system was developed to share data across the entire TBI research field and to facilitate collaboration between laboratories, as well as interconnectivity with other informatics platforms. Sharing data, methodologies, and associated tools, rather than summaries or interpretations of this information, can accelerate research progress by allowing re-analysis of data, as well as re-aggregation, integration, and rigorous comparison with other data, tools, and methods. This community-wide sharing requires common data definitions and standards, as well as comprehensive and coherent informatics approaches.
The Wolfram Data Repository is a public resource that hosts an expanding collection of computable datasets, curated and structured to be suitable for immediate use in computation, visualization, analysis and more. Building on the Wolfram Data Framework and the Wolfram Language, the Wolfram Data Repository provides a uniform system for storing data and making it immediately computable and useful. With datasets of many types and from many sources, the Wolfram Data Repository is built to be a global resource for public data and data-backed publication.
Datatang is a professional data pre-processing company. We are engaged in data collecting, annotating, and customizing to meet our clients' various needs. We help our clients, from university research labs to company R&D departments, offload trivial yet necessary data-processing procedures and reach the highest-value data more efficiently.
The NCI's Genomic Data Commons (GDC) provides the cancer research community with a unified data repository that enables data sharing across cancer genomic studies in support of precision medicine. The GDC obtains validated datasets from NCI programs in which the strategies for tissue collection couple quantity with high quality. Tools are provided to guide data submissions by researchers and institutions.
The Organic Chemistry Portal offers an overview of recent topics, interesting reactions, and information on important chemicals for organic chemists. It provides a searchable index of citations, chemical syntheses, and chemical products, and publishes 1,000 additional citations per year. A German version is available at https://www.organische-chemie.ch/
THIN is a medical data collection scheme that collects anonymised patient data from its members through the healthcare software Vision. The UK Primary Care database contains longitudinal patient records for approximately 6% of the UK population. The anonymised data collection, which goes back to 1994, is nationally representative of the UK population.
The Fungal Genetics Stock Center has preserved and distributed strains of genetically characterized fungi since 1960. The collection includes over 20,000 accessioned strains of classical and genetically engineered mutants of key model, human, and plant pathogenic fungi. These materials are distributed as living stocks to researchers around the world.
Junar provides a cloud-based open data platform that enables innovative organizations worldwide to quickly, easily and affordably make their data accessible to all. In just a few weeks, your initial datasets can be published, providing greater transparency, encouraging collaboration and citizen engagement, and freeing up precious staff resources.
The Restriction Enzyme Database (REBASE) is a collection of information about restriction enzymes, methylases, the microorganisms from which they have been isolated, recognition sequences, cleavage sites, methylation specificity, the commercial availability of the enzymes, and references, both published and unpublished observations (dating back to 1952). REBASE is updated daily and is constantly expanding.
The Million Song Dataset is a freely-available collection of audio features and metadata for a million contemporary popular music tracks. The core of the dataset is the feature analysis and metadata for one million songs, provided by The Echo Nest. The dataset does not include any audio, only the derived features. Note, however, that sample audio can be fetched from services like 7digital, using code we provide.
Novartis provides the technical results and trial summaries for patients from Phase 1 through 4 interventional trials for innovative products within one year of trial completion. A trial summary for patients is a trial result written in easier-to-understand language than the technical results.
A planetary-scale platform for Earth science data & analysis. Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities. Scientists, researchers, and developers use Earth Engine to detect changes, map trends, and quantify differences on the Earth's surface.
As 3D and reality capture strategies for heritage documentation become more widespread and available, a growing need has emerged to guide and facilitate access to data while maintaining scientific rigor, cultural and ethical sensitivity, discoverability, and archival standards. In response, the Open Heritage 3D Alliance (OHA) was formed as an advisory group governing the Open Heritage 3D initiative. This collaborative advisory group comprises some of the earliest adopters of 3D heritage documentation technologies and offers first-hand guidance on best practices for data management, sharing, and dissemination in 3D cultural heritage projects. The founding members of the OHA consist of experts and organizational leaders from CyArk, Historic Environment Scotland, and the University of South Florida Libraries, who together maintain significant repositories of legacy and ongoing 3D research and documentation projects. These groups offer unique insight not only into best practices for 3D data capture and sharing, but also into shared concerns about standards, formats, approach, ethics, and archival commitment. The OHA has begun the journey to provide open access to cultural heritage 3D data while maintaining integrity, security, and standards for discoverable dissemination, and will work to provide democratized access to primary heritage 3D data submitted by donors and organizations, helping to facilitate an operational platform, archive, and organization of resources into the future.
The U.S. launched the Joint Global Ocean Flux Study (JGOFS) in the late 1980s to study the ocean carbon cycle. An ambitious goal was set to understand the controls on the concentrations and fluxes of carbon and associated nutrients in the ocean. A new field of ocean biogeochemistry emerged, with an emphasis on quality measurements of carbon system parameters and interdisciplinary field studies of the biological, chemical, and physical processes that control the ocean carbon cycle. As we studied ocean biogeochemistry, we learned that our simple views of carbon uptake and transport were severely limited, and a new "wave" of ocean science was born. U.S. JGOFS was supported primarily by the U.S. National Science Foundation in collaboration with the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, the Department of Energy, and the Office of Naval Research. U.S. JGOFS ended in 2005 with the conclusion of the Synthesis and Modeling Project (SMP).
Complete Genomics provides free public access to a variety of whole human genome data sets generated from Complete Genomics' sequencing service. The research community can explore and familiarize themselves with the quality of these data sets, review the data formats provided from our sequencing service, and augment their own research with additional summaries of genomic variation across a panel of diverse individuals. The quality of these data sets is representative of what a customer can expect to receive for their own samples. This public genome repository comprises genome results from both our Standard Sequencing Service (69 standard, non-diseased samples) and the Cancer Sequencing Service (two matched tumor and normal sample pairs). In March 2013, Complete Genomics was acquired by BGI-Shenzhen, the world's largest genomics services company. BGI is a company headquartered in Shenzhen, China, that provides comprehensive sequencing and bioinformatics services for commercial science, medical, agricultural, and environmental applications. Complete Genomics is now focused on building a new generation of high-throughput sequencing technology and developing new and exciting research, clinical, and consumer applications.
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press: GigaScience and GigaByte (both online, open-access journals). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit of work (article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.
The Data Hub is a community-run catalogue of useful sets of data on the Internet. You can collect links here to data from around the web for yourself and others to use, or search for data that others have collected. Depending on the type of data (and its conditions of use), the Data Hub may also be able to store a copy of the data or host it in a database, and provide some basic visualisation tools.
GWAS Central (previously the Human Genome Variation database of Genotype-to-Phenotype information) is a database of summary level findings from genetic association studies, both large and small. We actively gather datasets from public domain projects, and encourage direct data submission from the community.
The DSMZ is the most comprehensive biological resource center worldwide. Being one of the world's largest collections, the DSMZ currently comprises more than 73,700 items, including about 31,900 different bacterial and 6,600 fungal strains, 840 human and animal cell lines, 1,500 plant viruses and antisera, 700 bacteriophages, and 19,000 different types of bacterial genomic DNA. All biological materials accepted into the DSMZ collection are subject to extensive quality control and physiological and molecular characterization by our central services. In addition, the DSMZ provides extensive documentation and detailed diagnostic information on the biological materials. The unprecedented diversity and quality management of its bioresources render the DSMZ an internationally renowned supplier for science, diagnostic laboratories, national reference centers, as well as industrial partners.
The OFA databases are core to the organization’s objective of establishing control programs to lower the incidence of inherited disease. Responsible breeders have an inherent responsibility to breed healthy dogs. The OFA databases serve all breeds of dogs and cats, and provide breeders a means to respond to the challenge of improving the genetic health of their breed through better breeding practices. The testing methodology and the criteria for evaluating the test results for each database were independently established by veterinary scientists from their respective specialty areas, and the standards used are generally accepted throughout the world.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. The result is one unified platform for users to access, present, and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.