
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) implies priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
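The operators above can be combined into a single query string. Below is a minimal sketch of a helper that composes such queries; the function name and parameter names are assumptions for illustration, not part of the search service itself.

```python
# Hypothetical helper for composing queries with the documented operators:
# * wildcard, "..." phrases, + AND (default), | OR, - NOT, ~N fuzziness/slop.
def build_query(must=(), should=(), must_not=(), phrase=None, slop=None):
    """Compose a query string using the documented operator syntax."""
    parts = []
    parts += [f"+{t}" for t in must]       # AND terms (the default behaviour)
    parts += [f"|{t}" for t in should]     # OR terms
    parts += [f"-{t}" for t in must_not]   # NOT terms
    if phrase:
        p = f'"{phrase}"'                  # quoted phrase search
        if slop is not None:
            p += f"~{slop}"                # ~N after a phrase sets the slop
        parts.append(p)
    return " ".join(parts)

q = build_query(must=["ocean*"], must_not=["atmosphere"],
                phrase="marine biodiversity", slop=2)
print(q)  # +ocean* -atmosphere "marine biodiversity"~2
```

For example, the query above matches records containing any term starting with "ocean", excludes records mentioning "atmosphere", and allows the phrase "marine biodiversity" to match with up to two intervening words.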
Found 42 result(s)
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously-published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
The CIARD RING is a global directory of web-based information services and datasets for agricultural research for development (ARD). It is the principal tool created through the CIARD initiative to allow information providers to register their services and datasets in various categories and so facilitate the discovery of sources of agriculture-related information across the world. The RING aims to provide an infrastructure to improve the accessibility of the outputs of agricultural research and of information relevant to agriculture.
OBIS strives to document the diversity, distribution and abundance of life in the ocean. Created by the Census of Marine Life, OBIS is now part of the Intergovernmental Oceanographic Commission (IOC) of UNESCO, under its International Oceanographic Data and Information Exchange (IODE) programme.
<<<!!!<<< This repository is no longer available. >>>!!!>>> The programme "International Oceanographic Data and Information Exchange" (IODE) of the "Intergovernmental Oceanographic Commission" (IOC) of UNESCO was established in 1961. Its purpose is to enhance marine research, exploitation and development, by facilitating the exchange of oceanographic data and information between participating Member States, and by meeting the needs of users for data and information products.
The HUGO Gene Nomenclature Committee (HGNC) has assigned unique gene symbols and names to more than 35,000 human loci, of which around 19,000 are protein coding. This curated online repository of HGNC-approved gene nomenclature and associated resources includes links to genomic, proteomic and phenotypic information, as well as dedicated gene family pages.
The EUROLAS Data Center (EDC) is one of the two data centers of the International Laser Ranging Service (ILRS). It collects, archives and distributes tracking data, predictions and other tracking-relevant information from the global SLR network. Additionally, EDC holds a mirror of the official web pages of the ILRS at Goddard Space Flight Center (GSFC). As a result of the activities of the Analysis Working Group (AWG) of the ILRS, DGFI has been selected as an analysis center (AC) and as a backup combination center (CC). This task includes the weekly processing of SLR observations to LAGEOS-1/2 and ETALON-1/2 to compute station coordinates and earth orientation parameters, as well as the combination of SLR solutions from the various analysis centres into a combined ILRS SLR solution.
The WorldWide Antimalarial Resistance Network (WWARN) is a collaborative platform generating innovative resources and reliable evidence to inform the malaria community on the factors affecting the efficacy of antimalarial medicines. Access to data is provided through diverse Tools and Resources: WWARN Explorer, Molecular Surveyor K13 Methodology, Molecular Surveyor pfmdr1 & pfcrt, Molecular Surveyor dhfr & dhps.
Galaxies, made up of billions of stars like our Sun, are the beacons that light up the structure of even the most distant regions in space. Not all galaxies are alike, however. They come in very different shapes and have very different properties; they may be large or small, old or young, red or blue, regular or confused, luminous or faint, dusty or gas-poor, rotating or static, round or disky, and they live either in splendid isolation or in clusters. In other words, the universe contains a very colourful and diverse zoo of galaxies. For almost a century, astronomers have been discussing how galaxies should be classified and how they relate to each other in an attempt to attack the big question of how galaxies form. Galaxy Zoo (Lintott et al. 2008, 2011) pioneered a novel method for performing large-scale visual classifications of survey datasets. This webpage allows anyone to download the resulting GZ classifications of galaxies in the project.
The Reciprocal Net is a distributed database used by research crystallographers to store information about molecular structures; much of the data is available to the general public. The Reciprocal Net project is still under development. Currently, there are 18 participating crystallography laboratories online. The project is funded by the National Science Foundation (NSF) and part of the National Science Digital Library. The contents of this collection will come principally from structures contributed by participating crystallography laboratories, thus providing a means for teachers, students, and the general public to connect better with current chemistry research. The Reciprocal Net's emphasis is on obtaining structures of general interest and usefulness to those several classes of digital library users.
The Brain Transcriptome Database (BrainTx) project aims to create an integrated platform to visualize and analyze our original transcriptome data and publicly accessible transcriptome data related to the genetics that underlie the development, function, and dysfunction stages and states of the brain.
The global data compilation consisting of ca. 60,000 data points may be downloaded in csv/xml format. This compilation does not contain the descriptive codes relating to metadata that were included in the previous compilations. Users are advised to consult the references and make their own interpretations as to the quality of the data.
The EPN (or EUREF Permanent Network) is a voluntary organization of several European agencies and universities that pool resources and permanent GNSS station data to generate precise GNSS products. The EPN was created under the umbrella of the International Association of Geodesy, and more precisely by its sub-commission EUREF. The European Terrestrial Reference System 89 (ETRS89) is used as the standard precise GPS coordinate system throughout Europe. Supported by EuroGeographics and endorsed by the EU, this reference system forms the backbone for all geographic and geodynamic projects on the European territory, at both the national and the international level.
GeneCards is a searchable, integrative database that provides comprehensive, user-friendly information on all annotated and predicted human genes. It automatically integrates gene-centric data from ~125 web sources, including genomic, transcriptomic, proteomic, genetic, clinical and functional information.
Antarctic marine and terrestrial biodiversity data is widely scattered, patchy and often not readily accessible. In many cases the data is in danger of being irretrievably lost. Biodiversity.aq establishes and supports a distributed system of interoperable databases, giving easy access through a single internet portal to a set of resources relevant to research, conservation and management pertaining to Antarctic biodiversity. biodiversity.aq provides access to both marine and terrestrial Antarctic biodiversity data.
The World Data Center for Oceanography serves to store and provide to users data on physical, chemical and dynamical parameters of the global ocean, as well as oceanography-related papers and publications, which either come from other countries through the international exchange or are provided to the international exchange by organizations of the Russian Federation.
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
The International Ocean Discovery Program’s (IODP) Gulf Coast Repository (GCR) is located in the Research Park on the Texas A&M University campus in College Station, Texas. This repository stores DSDP, ODP, and IODP cores from the Pacific Ocean, the Caribbean Sea and Gulf of Mexico, and the Southern Ocean. A satellite repository at Rutgers University houses New Jersey/Delaware land cores 150X and 174AX.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1,000 genomes (1KG) project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset and website with its available resources over the past five years, and it has come to the end of its useful life.

The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs.

In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints.
The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
<<<!!!<<< This repository is no longer available. >>>!!!>>> BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility to either use ready-made workflows or create their own. BioVeL workflows are stored in myExperiment - BioVeL Group http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. The Web Services are catalogued in the BiodiversityCatalogue.
The DBCP is an international program coordinating the use of autonomous data buoys to observe atmospheric and oceanographic conditions, over ocean areas where few other measurements are taken.
The MPC is responsible for the designation of minor bodies in the solar system: minor planets; comets, in conjunction with the Central Bureau for Astronomical Telegrams (CBAT); and natural satellites (also in conjunction with CBAT). The MPC is also responsible for the efficient collection, computation, checking and dissemination of astrometric observations and orbits for minor planets and comets.
The U.S. launched the Joint Global Ocean Flux Study (JGOFS) in the late 1980s to study the ocean carbon cycle. An ambitious goal was set to understand the controls on the concentrations and fluxes of carbon and associated nutrients in the ocean. A new field of ocean biogeochemistry emerged, with an emphasis on quality measurements of carbon system parameters and interdisciplinary field studies of the biological, chemical and physical processes that control the ocean carbon cycle. As we studied ocean biogeochemistry, we learned that our simple views of carbon uptake and transport were severely limited, and a new "wave" of ocean science was born. U.S. JGOFS has been supported primarily by the U.S. National Science Foundation in collaboration with the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, the Department of Energy and the Office of Naval Research. U.S. JGOFS ended in 2005 with the conclusion of the Synthesis and Modeling Project (SMP).
EMSC collects real-time parametric data (source parameters and phase picks) provided by 65 seismological networks of the Euro-Mediterranean region. These data are provided to the EMSC either by email or via QWIDS (Quake Watch Information Distribution System, developed by ISTI). The collected data are automatically archived in a database, made available via an autoDRM, and displayed on the web site. The collected data are automatically merged to produce automatic locations, which are sent to several seismological institutes in order to perform quick moment tensor determination.