  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for exact phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
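As an illustration, the wildcard and phrase operators above can be sketched with a toy matcher. This is a minimal approximation for illustration only; the site's actual search engine, and its handling of the remaining operators, is not shown here:

```python
import re

def matches(doc: str, term: str) -> bool:
    """Toy approximation of two of the search operators above:
    - a term ending in '*' does a prefix (wildcard) match
    - a term in double quotes matches the exact phrase
    - any other term matches as a whole word
    """
    if term.startswith('"') and term.endswith('"'):
        # quoted phrase: match the literal phrase
        pattern = re.escape(term.strip('"'))
    elif term.endswith('*'):
        # trailing '*': match any word starting with the prefix
        pattern = re.escape(term[:-1]) + r'\w*'
    else:
        # plain keyword: whole-word match
        pattern = re.escape(term) + r'\b'
    return re.search(r'\b' + pattern, doc, re.IGNORECASE) is not None

# matches("geomagnetic data", "geo*")                  -> True
# matches("geomagnetic data", '"data geo"')            -> False
```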
Found 68 result(s)
The International Ocean Discovery Program’s (IODP) Gulf Coast Repository (GCR) is located in the Research Park on the Texas A&M University campus in College Station, Texas. This repository stores DSDP, ODP, and IODP cores from the Pacific Ocean, the Caribbean Sea and Gulf of Mexico, and the Southern Ocean. A satellite repository at Rutgers University houses New Jersey/Delaware land cores 150X and 174AX.
The Analytical Geomagnetic Data Center of the Trans-Regional INTERMAGNET Segment is operated by the Geophysical Center of the Russian Academy of Sciences (GC RAS). Geomagnetic data are transmitted from observatories and stations located in Russia and neighboring countries. The Center also provides access to spaceborne data products. The MAGNUS hardware-software system underlies the operation of the Center; its particular feature is the automated real-time recognition of artificial (anthropogenic) disturbances in incoming data. Based on a fuzzy logic approach, this quality-control service facilitates the preparation of definitive magnetograms from preliminary records, a task otherwise carried out manually by data experts. The MAGNUS system also performs on-the-fly multi-criteria estimation of geomagnetic activity using several indicators and provides online tools for modeling electromagnetic parameters in near-Earth space. The collected geomagnetic data are stored in a relational database management system. The geomagnetic database is intended to store both 1-minute and 1-second data, along with the results of anthropogenic and natural disturbance recognition.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website over the past five years, and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints.
The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data, offering the possibility either to use ready-made workflows or to create one's own. BioVeL workflows are stored in the MyExperiment BioVeL Group (http://www.myexperiment.org/groups/643/content). They are underpinned by a range of analytical and data-processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. The Web Services are catalogued in the BiodiversityCatalogue.
STRENDA DB is a storage and search platform supported by the Beilstein-Institut that incorporates the STRENDA Guidelines in a user-friendly, web-based system. If you are an author who is preparing a manuscript containing functional enzymology data, STRENDA DB provides you the means to ensure that your data sets are complete and valid before you submit them as part of a publication to a journal. Data entered in the STRENDA DB submission form are automatically checked for compliance with the STRENDA Guidelines; users receive warnings informing them when necessary information is missing.
The Australian National University undertakes work to collect and publish metadata about research data held by ANU and, in four discipline areas (Earth Sciences, Astronomy, Phenomics, and Digital Humanities), to develop pipelines and tools that enable the publication of research data using a common and repeatable approach. Aims and outcomes: to identify and describe research data held at ANU; to develop a consistent approach to the publication of metadata on the University's data holdings; to identify and curate significant orphan data sets that might otherwise be lost or inadvertently destroyed; and to develop a culture of data sharing and data re-use.
EMSC collects real-time parametric data (source parameters and phase pickings) provided by 65 seismological networks of the Euro-Mediterranean region. These data are provided to the EMSC either by email or via QWIDS (Quake Watch Information Distribution System, developed by ISTI). The collected data are automatically archived in a database, made available via an autoDRM, and displayed on the web site. They are also automatically merged to produce automatic locations, which are sent to several seismological institutes for quick moment tensor determination.
The ProteomeXchange consortium has been set up to provide a single point of submission of MS proteomics data to the main existing proteomics repositories, and to encourage data exchange between them for optimal data dissemination. Current members accepting submissions are the PRIDE PRoteomics IDEntifications database at the European Bioinformatics Institute, focusing mainly on shotgun mass spectrometry proteomics data, and PeptideAtlas/PASSEL, focusing on SRM/MRM datasets.
The main goal of the ECCAD project is to provide scientific and policy users with datasets of surface emissions of atmospheric compounds, and ancillary data, i.e. data required to estimate or quantify surface emissions. The supply of ancillary data - such as maps of population density, maps of fire spots, burnt areas, and land cover - could help improve and encourage the development of new emissions datasets. ECCAD offers:
  • Access to global and regional emission inventories and ancillary data, in a standardized format
  • Quick visualization of emission and ancillary data
  • Rationalization of the use of input data in algorithms or emission models
  • Analysis and comparison of emissions datasets and ancillary data
  • Tools for the evaluation of emissions and ancillary data
ECCAD is a dynamic and interactive database, providing the most up-to-date datasets, including data used within ongoing projects. Users are welcome to add their own datasets, or to have their regional masks included in order to use ECCAD tools.
The EarthChem Library is a data repository that archives, publishes and makes accessible data and other digital content from geoscience research (analytical data, data syntheses, models, technical reports, etc.)
InnateDB is a publicly available database of the genes, proteins, experimentally-verified interactions and signaling pathways involved in the innate immune response of humans, mice and bovines to microbial infection. The database captures an improved coverage of the innate immunity interactome by integrating known interactions and pathways from major public databases together with manually-curated data into a centralised resource. The database can be mined as a knowledgebase or used with our integrated bioinformatics and visualization tools for the systems level analysis of the innate immune response.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
IBICT provides a research data repository that takes care of long-term preservation and archiving following good practices, so that researchers can share their data, maintain control over them, and get recognition for them. The repository supports research data sharing with persistent data citation, allowing data to be reused. Dataverse is a large open data repository for all disciplines, created by the Institute for Quantitative Social Science at Harvard University. The IBICT Dataverse repository provides a free means to deposit and find datasets stored by members of the institutions participating in the Cariniana network.
Provides quick, uncluttered access to information about Heliophysics research data that have been described with SPASE resource descriptions.
The EPN (or EUREF Permanent Network) is a voluntary organization of several European agencies and universities that pool resources and permanent GNSS station data to generate precise GNSS products. The EPN was created under the umbrella of the International Association of Geodesy, and more precisely by its sub-commission EUREF. The European Terrestrial Reference System 89 (ETRS89) is used as the standard precise GPS coordinate system throughout Europe. Supported by EuroGeographics and endorsed by the EU, this reference system forms the backbone for all geographic and geodynamic projects on European territory, at both the national and the international level.
Nuclear Data Services contains atomic, molecular and nuclear data sets for the development and maintenance of nuclear technologies. It includes energy-dependent reaction probabilities (cross sections), the energy and angular distributions of reaction products for many combinations of target and projectile, and the atomic and nuclear properties of excited states and their radioactive decay data. Its main concern is providing the data required to design a modern nuclear reactor for electricity production. Approximately 11.5 million nuclear data points have been measured and compiled into computerized form.
Welcome to the home page of the Rutgers/New Jersey Geological and Water Survey Core Repository. We are an official repository of the International Ocean Discovery Program (IODP), hosting Legs 150X and 174AX onshore cores drilled as part of the NJ/Mid-Atlantic Transect, and the New Jersey Geological and Water Survey (NJGWS). Cores from other ODP/IODP repositories are available through ODP. In addition to ODP/IODP cores, we are the repository for:
1. 6668 m of Newark Basin Drilling Project Triassic cores (e.g., Olsen, Kent, et al. 1996)
2. More than 10,000 m of the Army Corps of Engineers Passaic Tunnel Project Triassic and Jurassic cores
3. 1947 m of core from the Chesapeake Bay Impact Structure Deep Hole
4. Cores obtained from the Northern North Atlantic as part of IODP Expedition 303/306
5. Cores from various rift and drift basins on the eastern and Gulf Coasts of the U.S.
6. Geological samples from the New Jersey Geological and Water Survey (NJGWS) and United States Geological Survey (USGS), including 304 m of continuous NJGWS/USGS NJ coastal plain cores
A Research Data Repository (RDR) for researchers in India. Any registered researcher at an Indian university can manage their research data on eSHODHMANTHAN-RDR free of cost. This research data repository is configured to provide free research data management services to existing and forthcoming researchers throughout their research life. eSHODHMANTHAN-RDR is powered by the Dataverse project of Harvard University.
The WDC is concerned with the collection, management, distribution and utilization of data from Chinese provinces, autonomous regions and counties, including: resource data (management, distribution and utilization of land, water, climate, forest, grassland, minerals, energy, etc.); environmental data (pollution, environmental quality and change, natural disasters, soil erosion, etc.); biological resources (animals, plants, wildlife); social economy (agriculture, industry, transport, commerce, infrastructure, etc.); population and labor; and geographic background data on scales of 1:4M, 1:1M, 1:(1/2)M, 1:2500, etc.
Numerical database of atomic and molecular processes and particle-surface interactions. ALADDIN has formatted data on atomic structure and spectra (energy levels, wavelengths, and transition probabilities); electron and heavy-particle collisions with atoms, ions, and molecules (cross sections and/or rate coefficients, including, in most cases, analytic fits to the data); sputtering of surfaces by impact of main plasma constituents and self-sputtering; particle reflection from surfaces; and thermophysical and thermomechanical properties of beryllium and pyrolytic graphites.
The Barcode of Life Data Systems (BOLD) provides DNA barcode data. BOLD's online workbench supports data validation, annotation, and publication for specimen, distributional, and molecular data. The platform consists of four main modules: a data portal, a database of barcode clusters, an educational portal, and a data collection workbench. BOLD is the go-to site for DNA-based identification. As the central informatics platform for DNA barcoding, BOLD plays a crucial role in assimilating and organizing data gathered by the international barcode research community. Two iBOL (International Barcode of Life) Working Groups are supporting the ongoing development of BOLD.
The objective of this database is to stimulate the exchange of information and collaboration between researchers within the ChArMEx community. This community is not exclusive, however, and researchers not directly involved in ChArMEx who wish to contribute to ChArMEx scientific and/or educational goals are welcome to join in. The database is a repository for all the data collected during the various projects that contribute to the ChArMEx coordinated program. It aims at documenting, storing and distributing the data produced or used by the project community. It is also intended to host datasets that were produced outside the ChArMEx program but that are meaningful to ChArMEx scientific and/or educational goals. Any data owner who wishes to add or link a dataset to the ChArMEx database is welcome to contact the database manager for help and support. The ChArMEx database includes past and recent geophysical in situ observations, satellite products and model outputs. The database organizes the data management and provides data services to end-users of ChArMEx data. The database system provides a detailed description of the products and uses standardized formats wherever possible. It defines the access rules to the data and details the mutual rights and obligations of data providers and users (see the ChArMEx data and publication policy). The database is being developed jointly by SEDOO (OMP, Toulouse), ICARE (Lille) and ESPRI (IPSL, Paris).
The aim of the EPPO Global Database is to provide, in a single portal, all pest-specific information that has been produced or collected by EPPO. The full database is available via the Internet; when no Internet connection is available, a subset of the database called ‘EPPO GD Desktop’ can be run as standalone software (now replacing PQR).