  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
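As a sketch of how these operators compose, the snippet below builds one example query string per operator. The search terms themselves (e.g. "climate", "ocean") are hypothetical illustrations, not taken from the results on this page:

```python
# Illustrative query strings using the search operators listed above.
# All terms are hypothetical examples chosen only to show the syntax.
queries = {
    "wildcard": "climat*",                 # matches climate, climatology, ...
    "phrase":   '"marine data"',           # exact phrase search
    "and":      "ocean + temperature",     # both terms required (AND is the default)
    "or":       "ocean | atmosphere",      # either term
    "not":      "ocean - model",           # exclude results containing "model"
    "grouped":  "(ocean | marine) + data", # parentheses set precedence
    "fuzzy":    "oceanografy~2",           # word within edit distance 2
    "slop":     '"marine data"~3',         # phrase terms within a slop of 3
}

for name, query in queries.items():
    print(f"{name:>8}: {query}")
```

Operators can be combined freely, e.g. `("marine data" | ocean*) - model` would match records containing the exact phrase or any word starting with "ocean", while excluding "model".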
Found 384 result(s)
Country
The Marine Data Portal is a product of the "Underway" Data initiative of the German Marine Research Alliance (Deutsche Allianz Meeresforschung, DAM) and is supported by the marine science centers AWI, GEOMAR, and Hereon of the Helmholtz Association. The initiative aims to improve and standardize systematic data collection and evaluation for expeditions with German research vessels and for marine observation. It supports scientists in their data management duties and fosters (data) science through FAIR and open access to marine research data. AWI, GEOMAR, and Hereon are developing this marine data hub (Marehub) to build a decentralized data infrastructure for processing, long-term archiving, and dissemination of marine observation and model data and data products. The Marine Data Portal provides user-friendly, centralized access to marine research data, reports, and publications from a wide range of data repositories and libraries in the context of German marine research and its international collaborations. Developed by scientists for scientists, it facilitates the findability, accessibility, and reuse of marine research data and supports machine-readable, data-driven science. Please note that the quality of the data may vary depending on the purpose for which it was originally collected.
UCLA Library is adopting Dataverse, the open-source web application designed for sharing, preserving, and using research data. UCLA Dataverse will allow data, text, software, scripts, data visualizations, etc., created from research projects at UCLA to be made publicly available, widely discoverable, linkable, and, ultimately, reusable.
Attention: the institute evidently no longer exists, and the links no longer work. Our center is devoted to: the collection, compilation, evaluation, and dissemination of scientific information required for fusion research, and the investigation of problems arising in the course of the development of fusion research. There are atomic and molecular (A & M) numerical databases and bibliographic databases on plasma physics and atomic physics.
The repository is no longer available. 2018-08-29: GAPHYOR is no longer accessible. Important note: the database was no longer fed with data or updated in the years 2005-2007, as financial support for the project had been stopped a few years before that. Maintenance of the IT system could no longer be ensured, and the system was shut down in 2015. Please see the other databases in the field.
The AOML Environmental Data Server (ENVIDS) provides interactive, online access to various oceanographic and atmospheric datasets residing at AOML. The in-house datasets include Atlantic Expendable Bathythermograph (XBT), Global Lagrangian Drifting Buoy, Hurricane Flight Level, and Atlantic Hurricane Tracks (North Atlantic Best Track and Synoptic). Other available datasets include Pacific Conductivity/Temperature/Depth Recorder (CTD) and World Ocean Atlas 1998.
The repository is no longer available; this record is outdated. GEON is an open collaborative project that is developing cyberinfrastructure for the integration of 3- and 4-dimensional earth science data. GEON will develop services for data integration and model integration, and associated model execution and visualization. The Mid-Atlantic test bed will focus on tectonothermal, paleogeographic, and biotic history from the late Proterozoic to mid-Paleozoic. The Rockies test bed will focus on the integration of data with dynamic models, to better understand deformation history. GEON will develop the most comprehensive regional datasets in the test bed areas.
NACDA acquires and preserves data relevant to gerontological research, processes them as needed to promote effective research use, disseminates them to researchers, and facilitates their use. By preserving and making available the largest library of electronic data on aging in the United States, NACDA offers opportunities for secondary analysis on major issues of scientific and policy relevance.
WDC for Meteorology, Asheville acquires, catalogues, and archives data and makes them available to requesters in the international scientific community. Data are exchanged with counterparts, WDC for Meteorology, Obninsk and WDC for Meteorology, Beijing as necessary to improve access. Special research data sets prepared under international programs such as the IGY, World Climate Program (WCP), Global Atmospheric Research Program (GARP), etc., are archived and made available to the research community. All data and special data sets contributed to the WDC are available to scientific investigators without restriction. Data are available from 1755 to 2015.
The Data Center at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) is responsible for the access, maintenance, and distribution of real-time and archived weather satellite data.
This is a database for vegetation data from West Africa, i.e. phytosociological and dendrometric relevés as well as floristic inventories. The West African Vegetation Database has been developed in the framework of the projects “SUN - Sustainable Use of Natural Vegetation in West Africa” and “Biodiversity Transect Analysis in Africa” (BIOTA, https://www.biota-africa.org/).
BIBB has a strong tradition of survey-based research. It initiates and realises the collection of individual- and firm-level data on crucial positions and transitions in the education and labour market system. The BIBB-FDZ covers a variety of data deploying different units of analysis and temporal designs and focusing on various thematic issues. Its services include:
  • standard access to well-prepared firm- and individual-level data on the attainment and utilization of vocational education and training;
  • documentation of these data sets, i.e. a description of their central characteristics, main issues and variables, data collection, anonymisation, weighting, recoding, etc.;
  • an advisory service on data choice, data access and handling, and the research potential, scope, and validity of the data;
  • a range of data tools, such as standard measures and classifications in the fields of education, occupations, industries, and regions (where possible also including cross-national fields), formally anonymous data for remote data access, and references to publications using the data.
The Cooperative Association for Internet Data Analysis (CAIDA) is a collaborative undertaking among organizations in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It is an independent analysis and research group with a particular focus on: the collection, curation, analysis, visualization, and dissemination of the best available Internet data sets; providing macroscopic insight into the behavior of Internet infrastructure worldwide; improving the integrity of the field of Internet science and of operational Internet measurement and management; and informing science, technology, and communications public policies.
As of March 28, 2016, the NSF Arctic Data Center serves as the current repository for NSF-funded Arctic data. The ACADIS Gateway (http://www.aoncadis.org) is no longer accepting data submissions; all data and metadata in the ACADIS system have been transferred to the NSF Arctic Data Center, so there is no need to resubmit existing data. ACADIS is a repository for Arctic research data, providing data archival, preservation, and access for all projects funded by NSF's Arctic Science Program (ARC). Data include long-term observational time series and local, regional, and system-scale research from many diverse domains. The Advanced Cooperative Arctic Data and Information Service (ACADIS) program includes data management services.
Greengenes is an Earth Sciences website that assists clinical and environmental microbiologists from around the globe in classifying microorganisms from their local environments. A 16S rRNA gene database addresses limitations of public repositories by providing chimera screening, standard alignment, and taxonomic classification using multiple published taxonomies.
The Cancer Genome Atlas (TCGA) Data Portal provides a platform for researchers to search, download, and analyze data sets generated by TCGA. It contains clinical information, genomic characterization data, and high level sequence analysis of the tumor genomes. The Data Coordinating Center (DCC) is the central provider of TCGA data. The DCC standardizes data formats and validates submitted data.
The Brain Transcriptome Database (BrainTx) project aims to create an integrated platform to visualize and analyze its original transcriptome data and publicly accessible transcriptome data related to the genetics underlying the stages and states of brain development, function, and dysfunction.
2018-01-18: no data or programs can be found. These archives contain public-domain programs for calculations in physics, as well as other programs expected to be helpful for computer-based work. Physical constants and experimental or theoretical data necessary for physical calculations, such as cross sections, rate constants, and swarm parameters, are stored here as well. The programs are mainly written for IBM PC-compatible computers. Programs that do not use graphics units can also be run on other computers; otherwise, the graphics parts of the programs must be reprogrammed.
NCEP delivers national and global weather, water, climate, and space weather guidance, forecasts, warnings, and analyses to its partners and external user communities. The National Centers for Environmental Prediction (NCEP), an arm of NOAA's National Weather Service (NWS), comprises nine distinct Centers and the Office of the Director, which provide a wide variety of national and international weather guidance products to National Weather Service field offices, government agencies, emergency managers, private-sector meteorologists, and meteorological organizations and societies throughout the world. NCEP is a critical national resource in national and global weather prediction and the starting point for nearly all weather forecasts in the United States. The Centers are: Aviation Weather Center (AWC), Climate Prediction Center (CPC), Environmental Modeling Center (EMC), NCEP Central Operations (NCO), National Hurricane Center (NHC), Ocean Prediction Center (OPC), Storm Prediction Center (SPC), Space Weather Prediction Center (SWPC), and Weather Prediction Center (WPC).
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts – which were formerly sent based only on event magnitude and location, or population exposure to shaking – now will also be generated based on the estimated range of fatalities and economic losses. PAGER uses these earthquake parameters to calculate estimates of ground shaking by using the methodology and software developed for ShakeMaps. ShakeMap sites provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. These maps are used by federal, state, and local organizations, both public and private, for post-earthquake response and recovery, public and scientific information, as well as for preparedness exercises and disaster planning.
Strong-motion data of engineering and scientific importance from the United States and other seismically active countries are served through the Center for Engineering Strong Motion Data (CESMD). The CESMD now automatically posts strong-motion data from an increasing number of seismic stations in California within a few minutes following an earthquake as an Internet Quick Report (IQR). As appropriate, IQRs are updated by more comprehensive Internet Data Reports that include reviewed versions of the data and maps showing, for example, the finite fault rupture along with the distribution of recording stations. Automated processing of strong-motion data will be extended to post the strong-motion records of the regional seismic networks of the Advanced National Seismic System (ANSS) outside California.
The repository is no longer available. 2018-09-14: GIS Data Depot is no longer accessible.
Clinical Genomic Database (CGD) is a manually curated database of conditions with known genetic causes, focusing on medically significant genetic data with available interventions.
CODEX is a database of NGS mouse and human experiments. Although the main focus of CODEX is haematopoietic and embryonic systems, the database includes a large variety of cell types. In addition to the publicly available data, CODEX also includes a private site hosting non-published data. CODEX provides access to processed and curated NGS experiments. To use CODEX: (i) select a specialized repository (HAEMCODE or ESCODE) or choose the whole compendium (CODEX), then (ii) filter by organism, and (iii) choose how to explore the database.