  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) implies priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
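Taken together, these operators resemble a Lucene-style query string syntax. The sketch below is a minimal illustration of how such queries might be composed and URL-encoded by a client; the `query` parameter name and the idea of a search endpoint are assumptions for illustration, not the registry's documented API.

```python
from urllib.parse import urlencode

# Illustrative query strings using the operators listed above.
queries = [
    "geno*",                        # wildcard: genome, genomics, genotype, ...
    '"research data"',              # exact phrase
    "genome +human",                # AND (the default)
    "genome | proteome",            # OR
    "genome -cancer",               # NOT
    "(genome | proteome) +human",   # parentheses set priority
    "genomics~2",                   # fuzzy term, edit distance 2
    '"data repository"~3',          # phrase with a slop of 3
]

# The "query" parameter name is a placeholder, not the registry's actual API.
for q in queries:
    print("?" + urlencode({"query": q}))
```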
Found 43 result(s)
Note: intrepidbio.com has expired. Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data without spending tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
BCCM/ITM is a collection of well documented mycobacteria, characterized by phenotypic and/or genotypic tests. While having an emphasis on (drug-resistant) M. tuberculosis complex, BCCM/ITM comprises more than 90 mycobacterial species from human, animal and environmental origin from all continents.
The figshare service for the University of Sheffield allows researchers to store, share, and publish research data. It makes research data more accessible by storing metadata alongside datasets. Additionally, every uploaded item receives a Digital Object Identifier (DOI), which makes the data citable and sustainable. If there are ethical or copyright concerns about publishing a certain dataset, it is possible to publish the metadata associated with the dataset to aid discoverability while sharing the data itself via a private channel subject to manual approval.
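Items published on an institutional figshare instance are also exposed through figshare's public API. The sketch below lists public articles with their DOIs; the v2 endpoint and paging parameters belong to figshare's general public API, and the query shown is a generic assumption rather than Sheffield-specific guidance.

```python
import requests

# Query the public figshare API (v2) for one page of public articles.
# Treat the exact filters as an assumption, not documented institutional usage.
resp = requests.get(
    "https://api.figshare.com/v2/articles",
    params={"page": 1, "page_size": 10},
    timeout=30,
)
resp.raise_for_status()

for article in resp.json():
    # Each record carries a DOI once the item has been published.
    print(article.get("doi"), "-", article.get("title"))
```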
GTS AI is an artificial intelligence company that offers data services to its clients. We use high-definition images and high-quality data to support analysis and machine learning. We are a dataset provider and collect data for artificial intelligence applications.
The National Science Foundation (NSF) Ultraviolet (UV) Monitoring Network provides data on ozone depletion and the associated effects on terrestrial and marine systems. Data are collected from seven sites in Antarctica, Argentina, the United States, and Greenland. The network provides data to researchers studying the effects of ozone depletion on terrestrial and marine biological systems. Network data are also used for the validation of satellite observations and for the verification of models describing the transfer of radiation through the atmosphere.
The CancerData site is an effort of the Medical Informatics and Knowledge Engineering (MIKE) team of Maastro Clinic, Maastricht, The Netherlands. Our activities in the field of medical image analysis and data modelling are visible in a number of projects we are running. CancerData offers several datasets, grouped in collections, which can be public or private. You can search for public datasets in the NBIA (National Biomedical Imaging Archive) image archives without logging in.
The Wolfram Data Repository is a public resource that hosts an expanding collection of computable datasets, curated and structured to be suitable for immediate use in computation, visualization, analysis and more. Building on the Wolfram Data Framework and the Wolfram Language, the Wolfram Data Repository provides a uniform system for storing data and making it immediately computable and useful. With datasets of many types and from many sources, the Wolfram Data Repository is built to be a global resource for public data and data-backed publication.
The Cooperative Association for Internet Data Analysis (CAIDA) is a collaborative undertaking among organizations in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is an independent analysis and research group focused on the collection, curation, analysis, visualization, and dissemination of the best available Internet data; providing macroscopic insight into the behavior of Internet infrastructure worldwide; improving the integrity of the field of Internet science; improving the integrity of operational Internet measurement and management; and informing science, technology, and communications public policies.
Datatang is a professional data pre-processing company engaged in data collection, annotation, and customization to meet our clients' various needs. We help our clients, from university research labs to company R&D departments, offload trivial yet necessary data processing procedures and reach the highest-value data more efficiently.
The repository is no longer available. 2018-09-14: no more access to GIS Data Depot.
Survey of India, the national survey and mapping organization under the Department of Science & Technology, is the oldest scientific department of the Government of India. It was set up in 1767 and has evolved rich traditions over the years. In its assigned role as the nation's principal mapping agency, Survey of India bears a special responsibility to ensure that the country's domain is explored and mapped suitably, to provide base maps for expeditious and integrated development, and to ensure that all resources contribute their full measure to the progress, prosperity, and security of the country, now and for generations to come.

The history of the Survey of India dates back to the 18th century. Forerunners of the army of the East India Company and its surveyors had the onerous task of exploring the unknown. Bit by bit, the tapestry of the Indian terrain was completed by the painstaking efforts of a distinguished line of surveyors such as Mr. Lambton and Sir George Everest. It is a tribute to the foresight of such surveyors that, at the time of independence, the country inherited a survey network built on scientific principles. The Great Trigonometric Series spanning the country from north to south and east to west are some of the best geodetic control series available in the world. The scientific principles of surveying have since been augmented by the latest technology to meet the multidisciplinary data requirements of planners and scientists.

Organized into only 5 Directorates in 1950, mainly to serve the mapping needs of the defense forces in the northwest and northeast, the department has now grown into 22 Directorates spread across nearly all states of the country to provide the basic map coverage required for national development. Its technology has been oriented to meet the needs of defense forces, planners, and scientists in the fields of geosciences and land and resource management. Its expert advice is used by various ministries and undertakings of the Government of India in many sensitive areas, including the settlement of international borders and state boundaries, and in assisting the planned development of hitherto underdeveloped areas. Faced with the requirement for digital topographical data, the department created three Digital Centers during the late eighties to generate a digital topographical database of the entire country for use in various planning processes and in the creation of geographic information systems. Its specialized directorates, such as the Geodetic and Research Branch and the Indian Institute of Surveying & Mapping (erstwhile Survey Training Institute), have been further strengthened to meet the growing requirements of the user community. The department also assists in many national scientific programs related to geophysics, remote sensing, and digital data transfer.
The Virtual Research Environment (VRE) is an open-source data management platform that enables medical researchers to store, process, and share data in compliance with the European Union (EU) General Data Protection Regulation (GDPR). The VRE addresses the present lack of digital research data infrastructures fulfilling the need for (a) data protection for sensitive data, (b) the capability to process complex data such as radiologic imaging, (c) flexibility for creating custom processing workflows, and (d) access to high-performance computing. The platform promotes the FAIR data principles and reduces barriers to biomedical research and innovation. The VRE offers a web portal with graphical and command-line interfaces; segregated data zones and organizational measures for lawful data onboarding; isolated computing environments where large teams can collaboratively process sensitive data privately; analytics workbench tools for processing, analyzing, and visualizing large datasets; automated ingestion of hospital data sources; project-specific data warehouses for structured storage and retrieval; graph databases to capture and query ontology-based metadata; provenance tracking; version control; and support for automated data extraction and indexing. The VRE is based on a modular and extendable state-of-the-art cloud computing framework, a RESTful API, open developer meetings, hackathons, and comprehensive documentation for users, developers, and administrators. With its concerted technical and organizational measures, the VRE can be adopted by other research communities and thus facilitates the development of a co-evolving, interoperable platform ecosystem with an active research community.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
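Kaggle's datasets and competitions can also be accessed programmatically through the official `kaggle` Python package. The sketch below assumes an API token has already been configured in ~/.kaggle/kaggle.json; the dataset slug is a placeholder rather than a reference to a specific dataset.

```python
# A minimal sketch using the official `kaggle` package; it assumes an API
# token is configured in ~/.kaggle/kaggle.json. The dataset slug below is a
# placeholder.
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads credentials from ~/.kaggle/kaggle.json

# Download and unzip a public dataset into ./data/
api.dataset_download_files("owner/dataset-name", path="data/", unzip=True)

# List currently active competitions.
for competition in api.competitions_list():
    print(competition.ref)
```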
Science Photo Library (SPL) provides creative professionals with striking specialist imagery, unrivalled in quality, accuracy and depth of information. We have more than 600,000 images and 40,000 clips to choose from, with hundreds of new submissions uploaded to the website each week.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses), embedded in a cloud computing environment and referenced by a single Digital Object Identifier (DOI). The platform is unique because of its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as "Open Services."
Jülich DATA is a registry service to index all research data created at or in the context of Forschungszentrum Jülich. As an institutional repository, it may also be used for data and software publications.
The Fungal Genetics Stock Center has preserved and distributed strains of genetically characterized fungi since 1960. The collection includes over 20,000 accessioned strains of classical and genetically engineered mutants of key model, human, and plant pathogenic fungi. These materials are distributed as living stocks to researchers around the world.
Launched in November 1995, RADARSAT-1 provided Canada and the world with an operational radar satellite system capable of timely delivery of large amounts of data. Equipped with a powerful synthetic aperture radar (SAR) instrument, it acquired images of the Earth day or night, in all weather and through cloud cover, smoke, and haze. RADARSAT-1 was a Canadian-led project involving the Canadian federal government, the Canadian provinces, the United States, and the private sector. It provided useful information to both commercial and scientific users in fields such as disaster management, interferometry, agriculture, cartography, hydrology, forestry, oceanography, ice studies, and coastal monitoring. In 2007, RADARSAT-2 was launched and has since produced over 75,000 images per year. In 2019, the RADARSAT Constellation Mission was deployed, using its three-satellite configuration for all-condition coverage. For more information about RADARSAT-2, see https://mda.space/en/geo-intelligence/; the RADARSAT-2 portal is available at https://gsiportal.mda.space/gc_cp/#/map
In addition to the institutional repository, current St. Edward's faculty have the option of uploading their work directly to their own SEU accounts on stedwards.figshare.com. Projects created on Figshare will automatically be published on this website as well. For more information, please see the documentation.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access to, and sharing of, datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing or published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
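CONP datasets are commonly distributed through DataLad, which lets a researcher install the dataset index first and fetch file content on demand. The sketch below assumes the public CONP dataset index on GitHub and uses DataLad's Python API; the specific path fetched is illustrative only.

```python
# A minimal sketch, assuming CONP datasets are distributed via DataLad.
# The repository URL is the public CONP dataset index; the "projects" path
# below is illustrative, actual dataset layouts vary.
import datalad.api as dl

# Install the dataset index without downloading file content yet.
ds = dl.install(
    source="https://github.com/CONP-PCNO/conp-dataset.git",
    path="conp-dataset",
)

# Resolve subdatasets along a path on demand (set get_data=True to pull files).
dl.get(dataset=ds, path="projects", get_data=False, recursive=False)
```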
EMSC collects real-time parametric data (source parameters and phase picks) provided by 65 seismological networks of the Euro-Mediterranean region. These data are provided to the EMSC either by email or via QWIDS (Quake Watch Information Distribution System, developed by ISTI). The collected data are automatically archived in a database, made available via an autoDRM, and displayed on the website. They are also automatically merged to produce automatic locations, which are sent to several seismological institutes in order to perform quick moment tensor determination.
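The merged event locations are also exposed publicly through EMSC's Seismic Portal, which offers an FDSN-style event web service. The sketch below queries that service with standard fdsnws-event parameters; the JSON property names are assumptions based on typical GeoJSON output, so they are read defensively.

```python
import requests

# Query the EMSC Seismic Portal's FDSN-style event service for recent events.
# Parameter names follow the fdsnws-event convention; treat the JSON property
# names below as assumptions and adjust to the actual response if needed.
resp = requests.get(
    "https://www.seismicportal.eu/fdsnws/event/1/query",
    params={"format": "json", "limit": 10, "minmagnitude": 5.0},
    timeout=30,
)
resp.raise_for_status()

for feature in resp.json().get("features", []):
    props = feature.get("properties", {})
    print(props.get("time"), props.get("mag"), props.get("flynn_region"))
```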
The figshare service for the University of Auckland, New Zealand was launched in January 2015 and allows researchers to store, share, and publish research data. It makes research data more accessible by storing metadata alongside datasets. Additionally, every uploaded item receives a Digital Object Identifier (DOI), which allows the data to be cited. If there are ethical or copyright concerns about publishing a certain dataset, it is possible to publish the metadata associated with the dataset to aid discoverability while sharing the data itself via a private channel subject to manual approval.