  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to control evaluation priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
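The operators above can be combined in a single query. The following sketch collects one hypothetical example per operator (the query strings are illustrative, not taken from the site itself):

```python
# Illustrative search queries using the operators listed above.
# All query strings are hypothetical examples for demonstration.
example_queries = {
    "wildcard": "climat*",                   # matches climate, climatology, ...
    "phrase":   '"research data"',           # exact phrase
    "and":      "data + repository",         # both terms must match (default)
    "or":       "genomics | proteomics",     # either term may match
    "not":      "biology -marine",           # exclude a term
    "grouping": "(soil | water) + quality",  # parentheses set priority
    "fuzzy":    "repositry~1",               # allow edit distance of 1
    "slop":     '"data sharing"~2',          # phrase with slop of 2
}

for name, query in example_queries.items():
    print(f"{name:>8}: {query}")
```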
Found 277 result(s)
As one of the cornerstones of the U.S. Geological Survey's (USGS) National Geospatial Program, The National Map is a collaborative effort among the USGS and other Federal, State, and local partners to improve and deliver topographic information for the Nation. It has many uses ranging from recreation to scientific analysis to emergency response. The National Map is easily accessible for display on the Web, as products and services, and as downloadable data. The geographic information available from The National Map includes orthoimagery (aerial photographs), elevation, geographic names, hydrography, boundaries, transportation, structures, and land cover. Other types of geographic information can be added within the viewer or brought in with The National Map data into a Geographic Information System to create specific types of maps or map views.
The Research Data Center (RDC) “International Survey Programs” provides researchers with data, services, and consultation on a number of important international study series that are under intensive curation by GESIS. They all cover numerous countries and, quite often, substantial time spans. The RDC provides optimal data preparation and access to a wide range of data and topics for comparative analysis.
The National High Energy Physics Science Data Center (NHEPSDC) is a repository for high-energy physics. In 2019, it was designated as a national-level scientific data center by the Ministry of Science and Technology of China (MOST). NHEPSDC is constructed and operated by the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences (CAS). It consists of a main data center in Beijing, a branch center in the Guangdong-Hong Kong-Macao Greater Bay Area, and a branch center in the Huairou District of Beijing. The mission of NHEPSDC is to provide services for data collection, archiving, long-term preservation, access and sharing, software tools, and data analysis, mainly for high-energy physics and related scientific research activities. The collected data fall roughly into two categories: raw data from large scientific facilities, and data generated by general scientific and technological projects (usually supported by government funding), hereafter referred to as generic data. More than 70 people currently work at NHEPSDC: 18 in high-energy physics, 17 in computer science, 15 in software engineering, 20 in data management, and several other operations engineers. NHEPSDC is equipped with a hierarchical storage system, high-performance computing resources, high-bandwidth domestic and international network links, and a professional service support system. Over the past three years, data holdings have grown by about 10 PB per year. By integrating its data resources with this IT environment, NHEPSDC provides users with a state-of-the-art data processing platform for scientific research; more than 400 PB of data are accessed every year, across more than 10 million visits.
This project is an open invitation to anyone and everyone to participate in a decentralized effort to explore the opportunities of open science in neuroimaging. We aim to document how much (scientific) value can be generated from a data release: from the publication of scientific findings derived from this dataset, algorithms and methods evaluated on this dataset, and/or extensions of this dataset by acquisition and incorporation of new data. The project involves the processing of acoustic stimuli. In this study, the scientists presented an audio description of the classic movie "Forrest Gump" to subjects while capturing their brain activity with functional magnetic resonance imaging (fMRI) during the processing of language, music, emotions, memories, and pictorial representations. In collaboration with various labs in Magdeburg we acquired and published what is probably the most comprehensive sample of brain activation patterns of natural language processing. Volunteers listened to a two-hour audio movie version of the Hollywood feature film "Forrest Gump" in a 7T MRI scanner. High-resolution brain activation patterns and physiological measurements were recorded continuously. These data have been placed into the public domain and are freely available to the scientific community and the general public.
The ASEP Data Repository is an institutional multidisciplinary online repository that stores scientific outputs (bibliographic records, full texts, and datasets) of institutional authors from the Czech Academy of Sciences. The repository is hosted by the Library of the Czech Academy of Sciences. Data stored in the database are accessible through the online catalogue. Each dataset has its own description and metadata conforming to international standards.
The HSRC Research Data Service provides a digital repository facility for the HSRC's research data in support of evidence-based human and social development in South Africa and the broader region. It includes both quantitative and qualitative data. Access to data depends on ethical requirements for protecting research participants, on legal agreements with the owners and funders, and, in the case of data owned by the HSRC, on the requirements of the depositors of the data.
LAUDATIO has developed an open access research data repository for historical corpora. For the access and (re-)use of historical corpora, the LAUDATIO repository uses a flexible and appropriate documentation schema with a subset of TEI customized by TEI ODD. The extensive metadata schema contains information about the preparation and checking methods applied to the data, tools, formats and annotation guidelines used in the project, as well as bibliographic metadata, and information on the research context (e.g. the research project). To provide complex and comprehensive search in the annotation data, the search and visualization tool ANNIS is integrated in the LAUDATIO-Repository.
STOREDB is a platform for the archiving and sharing of primary data and outputs of all kinds, including epidemiological and experimental data, from research on the effects of radiation. It also provides a directory of bioresources and databases containing information and materials that investigators are willing to share. STORE supports the creation of a radiation research commons.
CERIC Data Portal allows users to consult and manage data related to experiments carried out at CERIC (Central European Research Infrastructure Consortium) partner facilities. Data made available include scientific datasets collected during experiments, experiment proposals, samples used, and publications, if any. Users can search for data based on related metadata (both their own data and other people's public data).
ICRISAT performs crop improvement research, using conventional methods as well as methods derived from biotechnology, on the following crops: chickpea, pigeonpea, groundnut, pearl millet, sorghum, and small millets. ICRISAT's data repository collects, preserves, and facilitates access to the datasets produced by ICRISAT researchers for all interested users. Data include phenotypic, genotypic, social science, and spatial data, as well as soil and weather data.
AmeriFlux is a network of PI-managed sites measuring ecosystem CO2, water, and energy fluxes in North, Central and South America. It was established to connect research on field sites representing major climate and ecological biomes, including tundra, grasslands, savanna, crops, and conifer, deciduous, and tropical forests. As a grassroots, investigator-driven network, the AmeriFlux community has tailored instrumentation to suit each unique ecosystem. This “coalition of the willing” is diverse in its interests, use of technologies and collaborative approaches. As a result, the AmeriFlux Network continually pioneers new ground.
Swedish National Data Service (SND) is a research data infrastructure designed to assist researchers in preserving, maintaining, and disseminating research data in a secure and sustainable manner. The SND Search function makes it easy to find, use, and cite research data from a variety of scientific disciplines. Together with an extensive network of almost 40 Swedish higher education institutions and other research organisations, SND works for increased access to research data, nationally as well as internationally.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
The SURF Data Repository is a user-friendly web-based data publication platform that allows researchers to store, annotate and publish research datasets of any size to ensure long-term preservation and availability of their data. The service allows any dataset to be stored, independent of volume, number of files and structure. A published dataset is enriched with complex metadata, unique identifiers are added and the data is preserved for an agreed-upon period of time. The service is domain-agnostic and supports multiple communities with different policy and metadata requirements.
Academic Torrents is a distributed data repository. The academic torrents network is built for researchers, by researchers. Its distributed peer-to-peer library system automatically replicates your datasets on many servers, so you don't have to worry about managing your own servers or file availability. Everyone who has data becomes a mirror for those data so the system is fault-tolerant.
Tropicos® was originally created for internal research but has since been made available to the world’s scientific community. All of the nomenclatural, bibliographic, and specimen data accumulated in MBG’s electronic databases during the past 30 years are publicly available here.
The mission of the World Data Center for Climate (WDCC) is to provide central support for the German and European climate research community. The WDCC is a member of the ISC's World Data System. Emphasis is on the development and implementation of best-practice methods for Earth system data management. Data for and from climate research are collected, stored, and disseminated. The WDCC is restricted to data products. Cooperation exists with thematically related data centres in, e.g., Earth observation, meteorology, oceanography, paleoclimate, and environmental sciences. The services of the WDCC are also available to external users at cost price. A special service for the direct integration of research data into scientific publications has been developed. The editorial process at the WDCC ensures the quality of metadata and research data in collaboration with the data producers. A citation code and a digital identifier (DOI) are provided and registered, together with citation information, at the DOI registration agency DataCite.
CINES is the French national long-term digital preservation service provider for Higher Education and Research: more than 20 institutions (universities, libraries, labs) archive their digital heritage at CINES so that it is preserved over time in a secure, dedicated environment. This includes documents such as PhD theses and publications, digitized ancient or rare books, satellite imagery, 3D models, videos, image galleries, datasets, etc.
WikiPathways was established to facilitate the contribution and maintenance of pathway information by the biology community. WikiPathways is an open, collaborative platform dedicated to the curation of biological pathways. WikiPathways thus presents a new model for pathway databases that enhances and complements ongoing efforts, such as KEGG, Reactome and Pathway Commons. Building on the same MediaWiki software that powers Wikipedia, we added a custom graphical pathway editing tool and integrated databases covering major gene, protein, and small-molecule systems. The familiar web-based format of WikiPathways greatly reduces the barrier to participate in pathway curation. More importantly, the open, public approach of WikiPathways allows for broader participation by the entire community, ranging from students to senior experts in each field. This approach also shifts the bulk of peer review, editorial curation, and maintenance to the community.
Bioinformatics.org serves the scientific and educational needs of bioinformatic practitioners and the general public. We develop and maintain computational resources to facilitate world-wide communications and collaborations between people of all educational and professional levels. We provide and promote open access to the materials and methods required for, and derived from, research, development and education.
FLOSSmole is a collaborative collection of free, libre, and open source software (FLOSS) data. FLOSSmole contains nearly 1 TB of data, covering the period from 2004 to the present and more than 500,000 different open source projects.
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data. It is hosting data for projects, archives, researchers, journals, organizations, and institutions.
ScienceBase provides access to aggregated information derived from many data and information domains, including feeds from existing data systems, metadata catalogs, and scientists contributing new and original content. ScienceBase architecture is designed to help science teams and data practitioners centralize their data and information resources to create a foundation needed for their work. ScienceBase, both original software and engineered components, is released as an open source project to promote involvement from the larger scientific programming community both inside and outside the USGS.