Search syntax:
  • * at the end of a keyword enables wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
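These operators can be combined; the following queries are hypothetical illustrations, not examples from the registry itself:
  • climat* matches climate, climatic, climatology, and so on
  • "open data" +repository -software requires the phrase "open data" and the term repository while excluding software
  • (genome | proteome) +archive evaluates the OR before the AND
  • managment~1 matches management (one edit away)
  • "data repository"~2 matches the two words even if they appear up to two positions apart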
Found 37 results
The African Development Bank Group (AfDB) is committed to supporting statistical development in Africa as a sound basis for designing and managing effective development policies for reducing poverty on the continent. Reliable and timely data are critical to setting goals and targets and to evaluating project impact. Reliable data are also the most convincing way of informing people about what their leaders and institutions are doing, and of involving them in the development process, giving them a sense of ownership of it. The AfDB has a large team of researchers who focus on producing statistical data on economic and social conditions. The data produced by the institution's statistics department constitute the background information for the Bank's flagship development publications. Besides its own publications, the AfDB also finances studies in collaboration with its partners. The Statistics Department aims to be the primary source of relevant, reliable and timely data on African development processes, starting with the data generated from its current management of the Africa component of the International Comparison Program (ICP-Africa). The Department discharges its responsibilities through two divisions: the Economic and Social Statistics Division (ESTA1) and the Statistical Capacity Building Division (ESTA2).
Established in 1998, we have been the authoritative source of spatial data and imagery in Alberta for over 20 years. We have a joint venture agreement with Alberta Data Partnerships Ltd. (ADP) and are responsible for the day-to-day management and distribution of the digital data sets they manage. As the agent for ADP, we are responsible for making mapping products available, accessible, accurate and affordable. We are the leading data management, maintenance, and distribution company in Alberta, and ensure the continued updating, re-engineering, storage, distribution, value-added redistribution, and general management of primary provincial mapping datasets. Our webstore, Altalis.com, enables customers to explore, view, and acquire spatial data products, both paid and open, with the click of a button. We take pride in providing exceptional customer service and building long-term relationships with our clients. Our experienced customer service team is available to answer any questions you may have about finding the right data to meet your needs.
Alzforum is an independent research project to develop an online community resource to manage scientific knowledge, information, and data about Alzheimer disease (AD).
BCCM/ITM is a collection of well documented mycobacteria, characterized by phenotypic and/or genotypic tests. While having an emphasis on (drug-resistant) M. tuberculosis complex, BCCM/ITM comprises more than 90 mycobacterial species from human, animal and environmental origin from all continents.
BOARD (Bicocca Open Archive Research Data) is the institutional data repository of the University of Milano-Bicocca. BOARD is an open, free-to-use research data repository which enables members of the University of Milano-Bicocca to make their research data publicly available. By depositing their research data in BOARD, researchers can:
  • make their research data citable
  • share their data privately or publicly
  • ensure long-term storage for their data
  • keep access to all versions
  • link their article to their data
The Virtual Research Environment (VRE) is an open-source data management platform that enables medical researchers to store, process and share data in compliance with the European Union (EU) General Data Protection Regulation (GDPR). The VRE addresses the present lack of digital research data infrastructures that fulfil the need for (a) data protection for sensitive data, (b) the capability to process complex data such as radiologic imaging, (c) the flexibility to create one's own processing workflows, and (d) access to high-performance computing. The platform promotes the FAIR data principles and reduces barriers to biomedical research and innovation. The VRE offers:
  • a web portal with graphical and command-line interfaces
  • segregated data zones and organizational measures for lawful data onboarding
  • isolated computing environments where large teams can collaboratively process sensitive data privately
  • analytics workbench tools for processing, analyzing, and visualizing large datasets
  • automated ingestion of hospital data sources
  • project-specific data warehouses for structured storage and retrieval
  • graph databases to capture and query ontology-based metadata
  • provenance tracking, version control, and support for automated data extraction and indexing
The VRE is based on a modular and extendable state-of-the-art cloud computing framework, a RESTful API, open developer meetings, hackathons, and comprehensive documentation for users, developers, and administrators. With its concerted technical and organizational measures, the VRE can be adopted by other research communities and thus facilitates the development of a co-evolving, interoperable platform ecosystem with an active research community.
A consolidated feed from 35 million instruments provides sophisticated normalized data, streamlining analysis and decisions from front office to operations. With flexible delivery options, including cloud and API, timely and accurate data enables the enterprise to capture opportunities, evaluate risk and ensure compliance in fast-moving markets.
The Cooperative Association for Internet Data Analysis (CAIDA) is a collaborative undertaking among organizations in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is an independent analysis and research group with a particular focus on: the collection, curation, analysis, visualization, and dissemination of the best available Internet data sets; providing macroscopic insight into the behavior of Internet infrastructure worldwide; improving the integrity of the field of Internet science; improving the integrity of operational Internet measurement and management; and informing science, technology, and communications public policies.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access to, and sharing of, datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results, including a link to the original dataset. For more information on CONP, please visit https://conp.ca
CRAWDAD has moved to IEEE DataPort: https://www.re3data.org/repository/r3d100012569 The datasets in the Community Resource for Archiving Wireless Data at Dartmouth (CRAWDAD) repository are now hosted as the CRAWDAD Collection on IEEE DataPort. After nearly two decades as a stand-alone archive at crawdad.org, the migration of the collection to IEEE DataPort provides permanence and new visibility.
Datatang is a professional data pre-processing company. We are engaged in data collecting, annotating, and customizing to meet our clients' various needs. We help our clients, from university research labs to company R&D departments, offload trivial yet necessary data processing procedures and reach the highest-value data more efficiently.
TIW’s Warehouse is a centralized, electronic database holding the most current details on the official, or “gold,” record for virtually all cleared and bilateral credit default swap (CDS) contracts outstanding in the global marketplace. The Warehouse contains more than 50,000 accounts representing derivatives counterparties across 95 countries.
DNASU is a central repository for plasmid clones and collections. Currently we store and distribute over 200,000 plasmids, including 75,000 human and mouse plasmids, full genome collections, the protein expression plasmids from the Protein Structure Initiative as the PSI:Biology Material Repository (PSI:Biology-MR), and both small and large collections from individual researchers. We are also a founding member and distributor of the ORFeome Collaboration plasmid collection.
A planetary-scale platform for Earth science data & analysis. Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities. Scientists, researchers, and developers use Earth Engine to detect changes, map trends, and quantify differences on the Earth's surface.
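To illustrate the kind of server-side analysis the platform supports, here is a minimal sketch using the Earth Engine Python client. The dataset ID, band names, and coordinates are example assumptions for illustration, not details taken from the description above:

    import ee

    # One-time authentication, then initialize the client session.
    ee.Authenticate()
    ee.Initialize()

    # Assumed example dataset: Landsat 8 Collection 2 surface reflectance.
    point = ee.Geometry.Point(-122.26, 37.87)  # example location
    composite = (
        ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
        .filterDate('2020-01-01', '2020-12-31')
        .filterBounds(point)
        .median()
    )

    # NDVI from the near-infrared (SR_B5) and red (SR_B4) bands.
    ndvi = composite.normalizedDifference(['SR_B5', 'SR_B4']).rename('NDVI')

    # The computation runs on Google's servers; getInfo() fetches the small result.
    mean = ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(1000), 30)
    print(mean.getInfo())

Note that the heavy lifting (filtering and compositing a petabyte-scale catalog) happens server-side; only the final reduced value is transferred to the client.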
The repository is no longer available. For the print version, see: https://www.taylorfrancis.com/books/mono/10.1201/9781003220435/encyclopedia-astronomy-astrophysics-murdin This unique resource covers the entire field of astronomy and astrophysics, and this online version includes the full text of over 2,750 articles, plus sophisticated search and retrieval functionality and links to the primary literature, and is frequently updated with new material. An active editorial team, headed by the Encyclopedia's editor-in-chief, Paul Murdin, oversees the continual commissioning, reviewing and loading of new and revised content. In a unique collaboration, Nature Publishing Group and Institute of Physics Publishing published the most extensive and comprehensive reference work in astronomy and astrophysics in both print and online formats. First published as a four-volume print edition in 2001, the initial Web version went live in 2002; it contained the original print material and was rapidly supplemented with numerous updates and newly commissioned material. Since July 2006 the Encyclopedia has been published solely by Taylor & Francis.
EMSC collects real-time parametric data (source parameters and phase picks) provided by 65 seismological networks of the Euro-Mediterranean region. These data are provided to the EMSC either by email or via QWIDS (Quake Watch Information Distribution System, developed by ISTI). The collected data are automatically archived in a database, made available via an autoDRM, and displayed on the web site. They are also automatically merged to produce automatic locations, which are sent to several seismological institutes so that quick moment tensor determination can be performed.
FactSage is a fully integrated Canadian thermochemical database system which couples proven software with self-consistent critically assessed thermodynamic data. It currently contains data on over 5000 chemical substances as well as solution databases representing over 1000 non-ideal multicomponent solutions (oxides, salts, sulfides, alloys, aqueous, etc.). FactSage is available for use with Windows.
INDI was formed as a next generation FCP effort. INDI aims to provide a model for the broader imaging community while simultaneously creating a public dataset capable of dwarfing those that most groups could obtain individually.
intrepidbio.com has expired. Intrepid Bioinformatics serves as a community for genetic researchers and scientific programmers who need to achieve meaningful use of their genetic research data but can't spend tremendous amounts of time or money in the process. The Intrepid Bioinformatics system automates time-consuming manual processes, shortens workflows, and eliminates the threat of lost data in a faster, cheaper, and better environment than existing solutions. The system also provides the functionality and community features needed to analyze the large volumes of Next Generation Sequencing and Single Nucleotide Polymorphism data generated for a wide range of purposes, from disease tracking and animal breeding to medical diagnosis and treatment.
The ISRCTN registry is a primary clinical trial registry recognised by WHO and ICMJE that accepts all clinical research studies (whether proposed, ongoing or completed), providing content validation and curation and the unique identification number necessary for publication. All study records in the database are freely accessible and searchable. ISRCTN supports transparency in clinical research, helps reduce selective reporting of results and ensures an unbiased and complete evidence base. ISRCTN accepts all studies involving human subjects or populations with outcome measures assessing effects on human health and well-being, including studies in healthcare, social care, education, workplace safety and economic development.
Junar provides a cloud-based open data platform that enables innovative organizations worldwide to quickly, easily and affordably make their data accessible to all. In just a few weeks, your initial datasets can be published, providing greater transparency, encouraging collaboration and citizen engagement, and freeing up precious staff resources.
Jülich DATA is a registry service to index all research data created at or in the context of Forschungszentrum Jülich. As an institutional repository, it may also be used for data and software publications.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
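As a practical aside, competition data can be fetched programmatically with the official kaggle Python package; a minimal sketch, assuming an API token is already configured (the 'titanic' competition is just a well-known example):

    from kaggle.api.kaggle_api_extended import KaggleApi

    # Requires an API token at ~/.kaggle/kaggle.json (from Kaggle account settings).
    api = KaggleApi()
    api.authenticate()

    # Download all data files for an example competition into ./data/.
    api.competition_download_files('titanic', path='data/')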
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. As a result, we end up with one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.