  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
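As a sketch, the operators above combine like Elasticsearch-style simple query strings. The search terms below are hypothetical examples, not queries from the source:

```python
# Hypothetical example queries illustrating each operator above.
queries = {
    "climate*": "wildcard: matches climate, climatology, climatic, ...",
    '"marine data"': "exact phrase search",
    "ocean + biology": "AND: results must contain both terms (default)",
    "ocean | atmosphere": "OR: results may contain either term",
    "genome -human": "NOT: excludes results containing 'human'",
    "(ocean | atmosphere) + data": "parentheses set priority",
    "climat~2": "fuzzy match within edit distance 2",
    '"climate data"~3': "phrase match allowing a slop of 3 words",
}
for query, meaning in queries.items():
    print(f"{query:30} -> {meaning}")
```

Operators can be nested freely, so a query like `("crop yield" | harvest*) -rice` finds records about yields or harvesting while excluding rice.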
Found 874 result(s)
Historical Climate Data (formerly: The National Climate Data and Information Archive - NCDC) is a web-based resource maintained by Environment and Climate Change Canada and the Meteorological Service of Canada. The site houses a number of statistical and data compilation and viewing tools. From the website, users can access historical climate data for Canadian locations and dates, and view climate normals, climate averages and climate summaries. A list of Canadian air stations and their meteorological reports and activities is also available, along with rainfall statistics for more than 500 locations across Canada. Users can also download the Canadian daily climate data for 2006/07. In addition, technical documentation is offered to help data users interpret the available data. Other documentation includes a catalogue of Canadian weather reporting stations, a glossary, and a calculation of the 1971 to 2000 climate normals for Canada. This resource carries authority and accuracy because it is maintained by a national government department and service. While the majority of the statistics are historical, the data is kept up to date and includes current and fairly recent climate data. This government resource is intended for an audience that has the ability and knowledge to interpret climatological data.
The European Centre for Medium-Range Weather Forecasts (ECMWF) is an independent intergovernmental organisation supported by 34 states. ECMWF is both a research institute and a 24/7 operational service, producing and disseminating numerical weather predictions to its Member States. This data is fully available to the national meteorological services in the Member States. The Centre also offers a catalogue of forecast data that can be purchased by businesses worldwide and other commercial customers. Forecasts, analyses, climate re-analyses, reforecasts and multi-model data are available from its archive (MARS), via dedicated data servers, or via point-to-point dissemination.
mzCloud is an extensively curated database of high-resolution tandem mass spectra that are arranged into spectral trees. MS/MS and multi-stage MSn spectra were acquired at various collision energies, precursor m/z, and isolation widths using collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD). Each raw mass spectrum was filtered and recalibrated, giving rise to additional filtered and recalibrated spectral trees that are fully searchable. Besides the experimental and processed data, each database record contains the compound name with synonyms, the chemical structure, computationally and manually annotated fragments (peaks), identified adducts and multiply charged ions, molecular formulas, predicted precursor structures, detailed experimental information, peak accuracies, mass resolution, InChI, InChIKey, and other identifiers. mzCloud is a fully searchable library that allows spectra searches, tree searches, structure and substructure searches, monoisotopic mass searches, peak (m/z) searches, precursor searches, and name searches. mzCloud is free and available for public use online.
SeaDataNet is a standardized system for managing the large and diverse data sets collected by oceanographic fleets and automatic observation systems. The SeaDataNet infrastructure networks and enhances the currently existing infrastructures, which are the national oceanographic data centres of 35 countries active in data collection. The networking of these professional data centres in a unique virtual data management system provides integrated data sets of standardized quality online. As a research infrastructure, SeaDataNet contributes to building research excellence in Europe.
The Digital Repository of Ireland (DRI) is a national trusted digital repository (TDR) for Ireland’s social and cultural data. We preserve, curate, and provide sustained access to a wealth of Ireland’s humanities and social sciences data through a single online portal. The repository houses unique and important collections from a variety of organisations including higher education institutions, cultural institutions, government agencies, and specialist archives. DRI has staff members from a wide variety of backgrounds, including software engineers, designers, digital archivists and librarians, data curators, policy and requirements specialists, educators, project managers, social scientists and humanities scholars. DRI is certified by the CoreTrustSeal, the current TDR standard widely recommended for best practice in Open Science. In addition to providing trusted digital repository services, the DRI is also Ireland’s research centre for best practices in digital archiving, repository infrastructures, preservation policy, research data management and advocacy at the national and European levels. DRI contributes to policy making nationally (e.g. via the National Open Research Forum and the IRC), and internationally, including European Commission expert groups, the DPC, RDA and the OECD.
The CDC Data Catalogue describes the Climate Data of the DWD and provides access to data, descriptions and access methods. Climate Data refers to observations, statistical indices and spatial analyses. CDC comprises Climate Data for Germany, but also global Climate Data, which were collected and processed in the framework of international co-operation. The CDC Data Catalogue is under construction and not yet complete. The purposes of the CDC Data Catalogue are: to provide uniform access to climate data centres and climate datasets of the DWD; to describe the climate data according to international metadata standards; to make the catalogue information available on the Internet; to support the search for climate data; and to facilitate the access to climate data and climate data descriptions.
IDOC-DATA is a department of IDOC. IDOC (Integrated Data & Operation Center) has existed since 2003 as a satellite operations center and data center for the Institute of Space Astrophysics (IAS) in Orsay, France. Since then, it has operated within OSUPS (Observatoire des Sciences de l'Univers de l'Université Paris-Saclay, the first French university in the Shanghai ranking), which includes three institutes: IAS, AIM (Astrophysique, Interprétation, Modélisation - IRFU, CEA) and GEOPS (Geosciences Paris-Saclay). IDOC participates in the space missions of OSUPS and its partners, from mission design to long-term scientific data archiving. For each phase of the missions, IDOC offers services in the scientific themes of OSUPS, and its activities are accordingly divided into three departments: IDOC-INSTR (instrument design and testing), IDOC-OPE (instrument operations), and IDOC-DATA (data management and the data value chain), which produces the different levels of data constructed from observations of these instruments and makes them available to users for ergonomic and efficient scientific interpretation. Its responsibilities include: building access to these datasets; offering the corresponding services such as catalogue management, visualization tools and software pipeline automation; and preserving the availability and reliability of this hardware and software infrastructure, its confidentiality where applicable, and its security.
The IMEx consortium is an international collaboration between a group of major public interaction data providers who have agreed to: share curation effort; develop and work to a single set of curation rules when capturing data both from direct depositions and from publications in peer-reviewed journals; capture full details of an interaction in a "deep" curation model; perform a complete curation of all protein-protein interactions experimentally demonstrated within a publication; make these interactions available in a single search interface on a common website; provide the data in standards-compliant download formats; and make all IMEx records freely accessible under the Creative Commons Attribution License.
Our knowledge of the many life-forms on Earth - of animals, plants, fungi, protists and bacteria - is scattered around the world in books, journals, databases, websites, specimen collections, and in the minds of people everywhere. Imagine what it would mean if this information could be gathered together and made available to everyone – anywhere – at a moment’s notice. This dream is becoming a reality through the Encyclopedia of Life.
The Department of Energy (DOE) Joint Genome Institute (JGI) is a national user facility with massive-scale DNA sequencing and analysis capabilities dedicated to advancing genomics for bioenergy and environmental applications. Beyond generating tens of trillions of DNA bases annually, the Institute develops and maintains data management systems and specialized analytical capabilities to manage and interpret complex genomic data sets, and to enable an expanding community of users around the world to analyze these data in different contexts over the web. The JGI Genome Portal provides a unified access point to all JGI genomic databases and analytical tools. A user can find all DOE JGI sequencing projects and their status, search for and download assemblies and annotations of sequenced genomes, and interactively explore those genomes and compare them with other sequenced microbes, fungi, plants or metagenomes using specialized systems tailored to each particular class of organisms. Databases: Genomes Online Database (GOLD), Integrated Microbial Genomes (IMG), MycoCosm, Phytozome
As the third center for oceanography of the World Data Center system, following WDC-A of the United States and WDC-B of Russia, WDC-D for Oceanography boasts long-term and stable sources of domestic marine basic data. The State Oceanic Administration now has long-term observations obtained from fixed coastal ocean stations, offshore and oceanic research vessels, and moored and drifting buoys. More and more marine data have become available from Chinese-foreign marine cooperative surveys, analysis and measurement of laboratory samples, reception by the satellite ground station, aerial telemetry and remote sensing, the GOOS program, global ships-of-opportunity reports, etc. Further marine data are being and will be obtained from the ongoing "863" program, one of the state key projects of the Ninth Five-Year Plan, and from the Seasat No. 1 satellite, which is scheduled to be launched next year. Through many years' effort, WDC-D for Oceanography has established formal marine data exchange relationships with over 130 marine institutions in more than 60 countries, and maintains close data exchange with over 30 major national oceanographic data centers. The established China Oceanic Information Network has joined the international marine data exchange system via the Internet. Through these channels, a large amount of data has been acquired through international exchange which, together with the marine data collected at home over many years, gives WDC-D for Oceanography over 100 years of global marine data, amounting in total to more than 10 billion bytes. In the meantime, a vast amount of work has been done on the standardized and normalized processing and management of the data, and a series of national and professional standards have been formulated and implemented successively. Moreover, appropriate standards and norms are being formulated as required.
The Whitehall II study was established to explore the relationship between socio-economic status, stress and cardiovascular disease. A cohort of 10,308 participants aged 35-55, of whom 3,413 were women and 6,895 men, was recruited from the British Civil Service in 1985. Since this first wave of data collection, self-completion questionnaires and clinical data have been collected from the cohort every two to five years with a high level of participation. Data collection is intended to continue until 2030.
<<<!!!<<< The repository is no longer available. You can find the data using https://www.re3data.org/repository/r3d100010199 and searching for WATCH. For further information see: https://catalogue.ceh.ac.uk/documents/ba6e8ddd-22a9-457d-acf4-d63cd34f2dda >>>!!!>>>
Unidata – Bicocca Data Archive is an interdepartmental center of the University of Milan-Bicocca, established in 2015. The center is the Italian point of reference for research data archiving and dissemination, modelled on the national archives of major European countries and beyond. UniData continues the long-standing work of the ADPSS-Sociodata Data Archive, founded in 1999 in the Department of Sociology and Social Research at the same university. Here you can find only individual-level data from 2010 onwards. For older surveys please visit ADPSS Sociodata, Data Archive for Social Sciences - Archivio Dati e Programmi per le Scienze Sociali: https://www.unidata.unimib.it/old/ and the ADPSS-SOCIODATA Archivio Dati e Programmi per le Scienze Sociali Dataverse: https://dataverse.harvard.edu/dataverse/adpss
The project was set up to improve the infrastructure for text-based linguistic research and development by building a huge, automatically annotated German text corpus and the corresponding tools for corpus annotation and exploitation. DeReKo constitutes the largest linguistically motivated collection of contemporary German texts: it contains fictional, scientific and newspaper texts, as well as several other text types; contains only licensed texts; is encoded with rich meta-textual information; is fully annotated morphosyntactically (three concurrent annotations); is continually expanded, with a focus on size and stratification of data; may be analyzed free of charge via the query system COSMAS II; and serves as a 'primordial sample' from which users may draw specialized sub-samples (so-called 'virtual corpora') to represent the language domain they wish to investigate. !!! Access to data of Das Deutsche Referenzkorpus is also provided by: IDS Repository https://www.re3data.org/repository/r3d100010382 !!!
The Joint Information Systems Committee (JISC)-funded Landmap service, which ran from 2001 to July 2014, collected, modified and hosted a large amount of earth observation data for the majority of the UK, including imagery from ERS satellites, ENVISAT and ALOS, high-resolution Digital Elevation Models (DEMs) and Digital Terrain Models (DTMs), and aerial photography dating back to 1930. After removal of JISC funding in 2013, the Landmap service is no longer operational, with the data now held at the NEODC. Aside from the thermal imagery data, which stands alone, the data reside in four collections: optical, elevation, radar and feature.
To target the multidisciplinary, broad scale nature of empirical educational research in the Federal Republic of Germany, a networked research data infrastructure is required which brings together disparate services from different research data providers, delivering services to researchers in a usable, needs-oriented way. The Verbund Forschungsdaten Bildung (Educational Research Data Alliance, VFDB) therefore aims to cooperate with relevant actors from science, politics and research funding institutes to set up a powerful infrastructure for empirical educational research. This service is meant to adequately capture specific needs of the scientific communities and support empirical educational research in carrying out excellent research.
The National Human Brain Tissue Bank for Health and Disease (the Brain Bank) was built to meet the needs of scientific research by integrating experts and forces from neuroscience, human anatomy, pathology and other related disciplines. The Brain Bank collects and stores post-mortem brain tissue donated by patients with various neuropsychiatric disorders and by normal controls, as well as their life histories, in accordance with international standards, and provides a detailed and accurate neuropathological diagnosis of these brain tissue samples (also known as the "final diagnosis"). The aim is to discover and elucidate the causes of human neuropsychiatric diseases such as Alzheimer's disease, Parkinson's disease, depression and schizophrenia, and to provide scientists with the most direct and effective means of finding the relevant pathogenesis and establishing effective treatments. The goal of the National Human Brain Tissue Repository for Health and Disease is to integrate collection, diagnosis, storage and utilisation, and to build a first-class human resource preservation infrastructure in China that is in line with international standards and supports neuroscience research. In 2020, the National Brain Bank established three branches in the major cities of Hefei (Anhui), Nanjing (Jiangsu) and Shanghai.
Merritt is a curation repository for the preservation of, and access to, the digital research data of the ten-campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity audit, complete version history, a REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, and curatorially defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
This interface provides access to several types of data related to the Chesapeake Bay. Bay Program databases can be queried based upon user-defined inputs such as geographic region and date range. Each query results in a downloadable, tab- or comma-delimited text file that can be imported to any program (e.g., SAS, Excel, Access) for further analysis. Comments regarding the interface are encouraged. Questions in reference to the data should be addressed to the contact provided on subsequent pages.
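Since each query returns a tab- or comma-delimited text file, the download can be parsed with standard tooling before analysis. The file contents and column names in this sketch are hypothetical; real downloads vary with the query's geographic region and date range:

```python
import csv
import io

# Hypothetical tab-delimited query result, stood in for a real
# Bay Program download (column names are illustrative only).
raw = (
    "Station\tDate\tParameter\tValue\n"
    "CB1.1\t2006-07-01\tSalinity\t12.4\n"
    "CB1.1\t2006-07-02\tSalinity\t12.9\n"
)

# csv.DictReader parses the tab-delimited variant; pass
# delimiter="," instead for comma-delimited downloads.
rows = list(csv.DictReader(io.StringIO(raw), delimiter="\t"))
for row in rows:
    print(row["Station"], row["Date"], row["Value"])
```

The same delimited files load directly into SAS, Excel, or Access as the text above notes; the snippet simply shows the equivalent programmatic route.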
DEIMS-SDR (Dynamic Ecological Information Management System - Site and dataset registry) is an information management system that allows you to discover long-term ecosystem research sites around the globe, along with the data gathered at those sites and the people and networks associated with them. DEIMS-SDR describes a wide range of sites, providing a wealth of information, including each site’s location, ecosystems, facilities, parameters measured and research themes. It is also possible to access a growing number of datasets and data products associated with the sites. All sites and dataset records can be referenced using unique identifiers that are generated by DEIMS-SDR. It is possible to search for sites via keyword, predefined filters or a map search. By including accurate, up to date information in DEIMS, site managers benefit from greater visibility for their LTER site, LTSER platform and datasets, which can help attract funding to support site investments. The aim of DEIMS-SDR is to be the globally most comprehensive catalogue of environmental research and monitoring facilities, featuring foremost but not exclusively information about all LTER sites on the globe and providing that information to science, politics and the public in general.
The ACTRIS DC is designed to assist scientists with discovering and accessing atmospheric data and contains an up-to-date catalogue of available datasets in a number of databases distributed throughout the world. A site like this can never be complete, but we have aimed at including datasets from the databases most relevant to the ACTRIS project, also building on the work and experiences achieved in the EU FP6 research project Global Earth Observation and Monitoring. The focus of the web portal is validated data, but it is also possible to browse the ACTRIS data server for preliminary data (rapid delivery data) through this site. The website allows you to search a local metadata catalogue that contains information on actual datasets archived in external archives. It is set up so that you can search for data by selecting the chemical/physical variable, the data location, the database that holds the data, the type of data, the data acquisition platform, and the data matrix.
The eCUDO system is carried out by a consortium of partners that involves various research units, scientific institutes, and universities, brought together by a common field of scientific interest: the study of the seas and oceans. The system publishes oceanographic data as Open Access for a wide range of recipients, both in the research and industrial sectors, but also for regular citizens interested in the subject. The database prepared by the consortium covers the widest possible spectrum of information on the environment of the Baltic Sea and other marine areas. This database, along with dedicated tools for data exploration, contributes to the development of environmental awareness, the economy, and sustainable exploitation of marine resources.
In keeping with the open data policies of the U.S. Agency for International Development (USAID) and the Bill & Melinda Gates Foundation, the Cereal Systems Initiative for South Asia (CSISA) has launched the CSISA Data Repository to ensure public accessibility to key data sets, including crop cut data (directly observed crop yield estimates), on-station and on-farm research trial data, and socioeconomic surveys. CSISA is a science-driven and impact-oriented regional initiative for increasing the productivity of cereal-based cropping systems in Bangladesh, India and Nepal, thus improving food security and farmers' livelihoods. CSISA generates data that is of value and interest to a diverse audience of researchers, policymakers and the public. CSISA's data repository is hosted on Dataverse, an open source web application developed at Harvard University to share, preserve, cite, explore and analyze research data. CSISA's repository contains rich datasets, including on-station trial data from 2009–17 about crop and resource management practices for sustainable future cereal-based cropping systems. Collection of this data occurred during the long-term, on-station research trials conducted at the Indian Council of Agricultural Research – Research Complex for the Eastern Region in Bihar, India. The data include information on agronomic management for the sustainable intensification of cropping systems, mechanization, diversification, futuristic approaches to sustainable intensification, long-term effects of conservation agriculture practices on soil health, and the pest spectrum. Additional trial data in the repository includes nutrient omission plot technique trials from Bihar, eastern Uttar Pradesh and Odisha, India, covering 2012–15, which help determine the indigenous nutrient-supplying ability of the soil. This data helps develop precision nutrient management approaches that would be most effective in different types of soils.
CSISA’s most popular dataset thus far includes crop cut data on maize in Odisha, India and rice in Nepal. Crop cut datasets provide ground-truthed yield estimates, as well as valuable information on relevant agronomic and socioeconomic practices affecting production practices and yield. A variety of research data on wheat systems are also available from Bangladesh and India. Additional crop cut data will also be coming online soon. Cropping system-related data and socioeconomic data are in the repository, some of which are cross-listed with a Dataverse run by the International Food Policy Research Institute. The socioeconomic datasets contain baseline information that is crucial for technology targeting, as well as to assess the adoption and performance of CSISA-supported technologies under smallholder farmers’ constrained conditions, representing the ultimate litmus test of their potential for change at scale. Other highly interesting datasets include farm composition and productive trajectory information, based on a 20-year panel dataset, and numerous wheat crop cut and maize nutrient omission trial data from across Bangladesh.