
Filter facets: Subjects, Content Types, Countries, AID systems, API, Certificates, Data access, Data access restrictions, Database access, Database access restrictions, Database licenses, Data licenses, Data upload, Data upload restrictions, Enhanced publication, Institution responsibility type, Institution type, Keywords, Metadata standards, PID systems, Provider types, Quality management, Repository languages, Software, Syndications, Repository types, Versioning

Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
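For illustration, a few sample queries (the search terms are hypothetical; the operators are assumed to behave as described above):
  • climat* matches climate, climatology, climatic, etc.
  • "carbon cycle" + (ocean | marine) - model finds the exact phrase together with either ocean or marine, excluding results containing model
  • genomics~2 tolerates up to two character edits (fuzziness); "data sharing"~3 allows up to three intervening words (slop)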
Found 60 result(s)
Note: intrepidbio.com has expired. Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but can't spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
DNASU is a central repository for plasmid clones and collections. Currently we store and distribute over 200,000 plasmids, including 75,000 human and mouse plasmids, full genome collections, the protein expression plasmids from the Protein Structure Initiative as the PSI:Biology Material Repository (PSI:Biology-MR), and both small and large collections from individual researchers. We are also a founding member and distributor of the ORFeome Collaboration plasmid collection.
The Federal Interagency Traumatic Brain Injury Research (FITBIR) informatics system was developed to share data across the entire TBI research field and to facilitate collaboration between laboratories, as well as interconnectivity with other informatics platforms. Sharing data, methodologies, and associated tools, rather than summaries or interpretations of this information, can accelerate research progress by allowing re-analysis of data, as well as re-aggregation, integration, and rigorous comparison with other data, tools, and methods. This community-wide sharing requires common data definitions and standards, as well as comprehensive and coherent informatics approaches.
Project Data Sphere, LLC, operates a free digital library-laboratory where the research community can broadly share, integrate and analyze historical, de-identified, patient-level data from academic and industry cancer Phase II-III clinical trials. These patient-level datasets are available through the Project Data Sphere platform to researchers affiliated with life science companies, hospitals and institutions, as well as independent researchers, at no cost and without requiring a research proposal.
WorldData.AI comes with a built-in workspace – the next-generation hyper-computing platform powered by a library of 3.3 billion curated external trends. WorldData.AI allows you to save your models in its “My Models Trained” section. You can make your models public and share them on social media with interesting images, model features, summary statistics, and feature comparisons. Empower others to leverage your models. For example, if you have discovered a previously unknown impact of interest rates on new-housing demand, you may want to share it through “My Models Trained.” Upload your data and combine it with external trends to build, train, and deploy predictive models with one click! WorldData.AI inspects your raw data, applies feature processors, chooses the best set of algorithms, trains and tunes multiple models, and then ranks model performance.
Note: The database is no longer available as of 1st July 2018. CRYSTMET was previously included in the NCDS as part of CrystalWorks. Unfortunately we are no longer able to license the CRYSTMET database for access through the NCDS; therefore the database will no longer be accessible from 1st July 2018. CRYSTMET contains chemical, crystallographic and bibliographic data together with associated comments regarding experimental details for each study. It is a database of critically evaluated crystallographic data for metals, including alloys, intermetallics and minerals. Using these data, a number of associated files are derived, a major one being a parallel file of calculated powder patterns. These derived data are included within the CRYSTMET product.
The National Science Foundation (NSF) Ultraviolet (UV) Monitoring Network provides data on ozone depletion and the associated effects on terrestrial and marine systems. Data are collected from 7 sites in Antarctica, Argentina, United States, and Greenland. The network is providing data to researchers studying the effects of ozone depletion on terrestrial and marine biological systems. Network data is also used for the validation of satellite observations and for the verification of models describing the transfer of radiation through the atmosphere.
The CancerData site is an effort of the Medical Informatics and Knowledge Engineering team (MIKE for short) of Maastro Clinic, Maastricht, The Netherlands. Our activities in the field of medical image analysis and data modelling are visible in a number of projects we are running. CancerData is offering several datasets. They are grouped in collections and can be public or private. You can search for public datasets in the NBIA (National Biomedical Imaging Archive) image archives without logging in.
Surface air temperature change is a primary measure of global climate change. The GISTEMP project started in the late 1970s to provide an estimate of the changing global surface air temperature which could be compared with the estimates obtained from climate models simulating the effect of changes in atmospheric carbon dioxide, volcanic aerosols, and solar irradiance. The continuing analysis updates global temperature change from the late 1800s to the present.
The NCI's Genomic Data Commons (GDC) provides the cancer research community with a unified data repository that enables data sharing across cancer genomic studies in support of precision medicine. The GDC obtains validated datasets from NCI programs in which the strategies for tissue collection couple quantity with high quality. Tools are provided to guide data submissions by researchers and institutions.
Survey of India, the National Survey and Mapping Organization of the country under the Department of Science & Technology, is the oldest scientific department of the Government of India. It was set up in 1767 and has evolved rich traditions over the years. In its assigned role as the nation's principal mapping agency, Survey of India bears a special responsibility to ensure that the country's domain is explored and mapped suitably, to provide base maps for expeditious and integrated development, and to ensure that all resources contribute their full measure to the progress, prosperity and security of the country now and for generations to come. The history of the Survey of India dates back to the 18th century. The forerunners of the army of the East India Company and its surveyors had the onerous task of exploring the unknown. Bit by bit, the tapestry of the Indian terrain was completed through the painstaking efforts of a distinguished line of surveyors such as Mr. Lambton and Sir George Everest. It is a tribute to the foresight of such surveyors that at the time of independence the country inherited a survey network built on scientific principles. The Great Trigonometric series spanning the country from north to south and east to west are some of the best geodetic control series available in the world. The scientific principles of surveying have since been augmented by the latest technology to meet the multidisciplinary data requirements of planners and scientists. Organized into only 5 Directorates in 1950, mainly to look after the mapping needs of the defense forces in the North West and North East, the Department has now grown to 22 Directorates spread across nearly all states of the country to provide the basic map coverage required for the development of the country. Its technology, among the latest in the world, has been oriented to meet the needs of defense forces, planners and scientists in the fields of geo-sciences, land and resource management. Its expert advice is utilized by various Ministries and undertakings of the Government of India in many sensitive areas, including the settlement of international borders and state boundaries, and in assisting the planned development of hitherto under-developed areas. Faced with the requirement for digital topographical data, the department created three Digital Centers during the late eighties to generate a Digital Topographical Data Base for the entire country for use in various planning processes and in the creation of geographic information systems. Its specialized Directorates, such as the Geodetic and Research Branch and the Indian Institute of Surveying & Mapping (erstwhile Survey Training Institute), have been further strengthened to meet the growing requirements of the user community. The department also assists in many national scientific programs related to geo-physics, remote sensing and digital data transfer.
The Virtual Research Environment (VRE) is an open-source data management platform that enables medical researchers to store, process and share data in compliance with the European Union (EU) General Data Protection Regulation (GDPR). The VRE addresses the present lack of digital research data infrastructures that fulfill the need for (a) data protection for sensitive data, (b) the capability to process complex data such as radiologic imaging, (c) flexibility to create one's own processing workflows, and (d) access to high-performance computing. The platform promotes the FAIR data principles and reduces barriers to biomedical research and innovation. The VRE offers a web portal with graphical and command-line interfaces, segregated data zones and organizational measures for lawful data onboarding, isolated computing environments where large teams can collaboratively process sensitive data privately, analytics workbench tools for processing, analyzing, and visualizing large datasets, automated ingestion of hospital data sources, project-specific data warehouses for structured storage and retrieval, graph databases to capture and query ontology-based metadata, provenance tracking, version control, and support for automated data extraction and indexing. The VRE is built on a modular and extendable, state-of-the-art cloud computing framework with a RESTful API, and is supported by open developer meetings, hackathons, and comprehensive documentation for users, developers, and administrators. With its concerted technical and organizational measures, the VRE can be adopted by other research communities and thus facilitates the development of a co-evolving, interoperable platform ecosystem with an active research community.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
Note: This record is merged into Continental Scientific Drilling Facility (https://www.re3data.org/repository/r3d100012874). LacCore curates cores and samples from continental coring and drilling expeditions around the world, and also archives metadata and contact information for cores stored at other institutions.
THIN is a medical data collection scheme that collects anonymised patient data from its members through the healthcare software Vision. The UK Primary Care database contains longitudinal patient records for approximately 6% of the UK Population. The anonymised data collection, which goes back to 1994, is nationally representative of the UK population.
Protectedplanet.net combines crowdsourcing and authoritative sources to enrich and provide data for protected areas around the world. Data are provided in partnership with the World Database on Protected Areas (WDPA). The data include the location, designation type, status year, and size of the protected areas, as well as species information.
Originally named the Radiation Belt Storm Probes (RBSP), the mission was renamed the Van Allen Probes following successful launch and commissioning. For simplicity and continuity, the RBSP short form has been retained for existing documentation, file naming, and data product identification purposes. The RBSPICE investigation, including the RBSPICE Instrument SOC, maintains compliance with the requirements levied in all applicable mission control documents.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses), embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique because of its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as “Open Services.”
The Fungal Genetics Stock Center has preserved and distributed strains of genetically characterized fungi since 1960. The collection includes over 20,000 accessioned strains of classical and genetically engineered mutants of key model, human, and plant pathogenic fungi. These materials are distributed as living stocks to researchers around the world.
Note: The pages were merged. Please use "Forschungsdaten- und Servicezentrum der Bundesbank" (https://www.re3data.org/repository/r3d100012252).
The Restriction Enzyme Database is a collection of information about restriction enzymes, methylases, the microorganisms from which they have been isolated, recognition sequences, cleavage sites, methylation specificity, the commercial availability of the enzymes, and references - both published and unpublished observations (dating back to 1952). REBASE is updated daily and is constantly expanding.
TIW’s Warehouse is a centralized, electronic database holding the most current details on the official, or “gold,” record for virtually all cleared and bilateral credit default swap (CDS) contracts outstanding in the global marketplace. The Warehouse contains more than 50,000 accounts representing derivatives counterparties across 95 countries.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access to and sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
In 2003, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) at NIH established Data, Biosample, and Genetic Repositories to increase the impact of current and previously funded NIDDK studies by making their data and biospecimens available to the broader scientific community. These Repositories enable scientists not involved in the original study to test new hypotheses without any new data or biospecimen collection, and they provide the opportunity to pool data across several studies to increase the power of statistical analyses. In addition, most NIDDK-funded studies are collecting genetic biospecimens and carrying out high-throughput genotyping making it possible for other scientists to use Repository resources to match genotypes to phenotypes and to perform informative genetic analyses.
The U.S. launched the Joint Global Ocean Flux Study (JGOFS) in the late 1980s to study the ocean carbon cycle. An ambitious goal was set to understand the controls on the concentrations and fluxes of carbon and associated nutrients in the ocean. A new field of ocean biogeochemistry emerged with an emphasis on quality measurements of carbon system parameters and interdisciplinary field studies of the biological, chemical and physical processes which control the ocean carbon cycle. As we studied ocean biogeochemistry, we learned that our simple views of carbon uptake and transport were severely limited, and a new "wave" of ocean science was born. U.S. JGOFS has been supported primarily by the U.S. National Science Foundation in collaboration with the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, the Department of Energy and the Office of Naval Research. U.S. JGOFS ended in 2005 with the conclusion of the Synthesis and Modeling Project (SMP).