Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
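For example, these operators can be combined; the following are hypothetical sample queries illustrating the syntax, not queries taken from this page:
  • ocean* +temperature  (wildcard term AND "temperature")
  • "sea surface temperature" | salinity  (exact phrase OR single term)
  • genome -cancer  (matches "genome" but excludes "cancer")
  • (proteomics | genomics) +human  (parentheses apply the OR before the AND)
  • climte~1  (fuzzy match within edit distance 1, so "climate" would still match)
  • "ocean temperature"~2  (phrase match allowing a slop of up to 2 words)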
Found 17 result(s)
The World Ocean Database (WOD) is a collection of scientifically quality-controlled ocean profile and plankton data that includes measurements of temperature, salinity, oxygen, phosphate, nitrate, silicate, chlorophyll, alkalinity, pH, pCO2, TCO2, Tritium, Δ13Carbon, Δ14Carbon, Δ18Oxygen, Freon, Helium, Δ3Helium, Neon, and plankton. WOD contains all data of "World Data Service Oceanography" (WDS-Oceanography).
The Global Hydrology Resource Center (GHRC) provides both historical and current Earth science data, information, and products from satellite, airborne, and surface-based instruments. GHRC acquires basic data streams and produces derived products from many instruments spread across a variety of instrument platforms.
Specification Patterns is an online repository for information about property specification for finite-state verification. The intent of this repository is to collect patterns that occur commonly in the specification of concurrent and reactive systems.
The range of CIRAD's research has given rise to numerous datasets and databases associating various types of data: primary (collected), secondary (analysed, aggregated, used for scientific articles, etc.), qualitative and quantitative. These "collections" of research data are used for comparisons, to study processes, and to analyse change. They include genetics and genomics data, data generated by trials and measurements (using laboratory instruments), data generated by modelling (interpolations, predictive models), long-term observation data (remote sensing, observatories, etc.), and data from surveys, cohorts, and interviews with stakeholders.
HIstome: The Histone Infobase is a database of human histones, their post-translational modifications, and modifying enzymes. HIstome is a combined effort of researchers from two institutions: the Advanced Center for Treatment, Research and Education in Cancer (ACTREC), Navi Mumbai, and the Center of Excellence in Epigenetics, Indian Institute of Science Education and Research (IISER), Pune.
<<<!!!<<< OFFLINE >>>!!!>>> A recent computer security audit has revealed security flaws in the legacy HapMap site that require NCBI to take it down immediately. We regret the inconvenience, but we are required to do this. That said, NCBI was planning to decommission this site in the near future anyway (although not quite so suddenly), as the 1000 Genomes (1KG) Project has established itself as a research standard for population genetics and genomics. NCBI has observed a decline in usage of the HapMap dataset, website, and available resources over the past five years, and it has come to the end of its useful life. The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States. All of the information generated by the Project will be released into the public domain. The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs. In the initial phase of the Project, genetic data are being gathered from four populations with African, Asian, and European ancestry. Ongoing interactions with members of these populations are addressing potential ethical issues and providing valuable experience in conducting research with identified populations. Public and private organizations in six countries are participating in the International HapMap Project. Data generated by the Project can be downloaded with minimal constraints. The Project officially started with a meeting in October 2002 (https://www.genome.gov/10005336/) and is expected to take about three years.
depositar, taking its name from the Portuguese/Spanish verb meaning "to deposit", is an online repository for research data. The site is built by researchers, for researchers. You are free to deposit, discover, and reuse datasets on depositar for all your research purposes.
QSAR DataBank (QsarDB) is a repository for (Quantitative) Structure-Activity Relationships ((Q)SAR) data and models. It also provides open, domain-specific digital data exchange standards and associated tools that enable research groups, project teams, and institutions to share and represent predictive in silico models.
The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership based at the NASA Goddard Space Flight Center in Greenbelt, Maryland, and a component of the National Space Weather Program. The CCMC provides the international research community with access to modern space science simulations. In addition, the CCMC supports the transition of modern space research models to space weather operations.
GigaDB primarily serves as a repository to host data and tools associated with articles published by GigaScience Press; GigaScience and GigaByte (both are online, open-access journals). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support a unit-of-work (article or study). GigaDB allows the integration of manuscript publication with supporting data and tools.
Swedish National Data Service (SND) is a research data infrastructure designed to assist researchers in preserving, maintaining, and disseminating research data in a secure and sustainable manner. The SND Search function makes it easy to find, use, and cite research data from a variety of scientific disciplines. Together with an extensive network of almost 40 Swedish higher education institutions and other research organisations, SND works for increased access to research data, nationally as well as internationally.
This website is a portal that enables access to multi-Terabyte turbulence databases. The data reside on several nodes and disks on our database cluster computer and are stored in small 3D subcubes. Positions are indexed using a Z-curve for efficient access.
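The Z-curve mentioned above is a Morton-order index: the bits of a subcube's integer coordinates are interleaved into a single number so that subcubes close in 3D space tend to receive nearby index values. The following Python sketch illustrates the general idea only; the coordinate ranges, bit width, and exact indexing scheme used by these turbulence databases are assumptions, not details taken from this description.

def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    # Interleave the bits of x, y, z into one Z-order (Morton) index:
    # bit i of x goes to position 3*i, of y to 3*i + 1, of z to 3*i + 2.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Neighbouring subcubes usually map to nearby Morton indices, which improves
# locality when the subcubes are laid out sequentially on disk.
print(morton3d(3, 5, 7))  # prints 431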
TUdatalib is the institutional repository of TU Darmstadt for research data. It enables structured storage of research data and descriptive metadata, long-term archiving (at least 10 years), and, if desired, publication of data including DOI assignment. In addition, it offers fine-grained rights and role management.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and customize energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX's Collaborative Workspaces.
REDU is the institutional open research data repository of the University of Campinas, Brazil. It contains research data produced by all research groups of the University, in a wide range of scientific domains, indexed by DataCite DOIs. Created at the end of 2020, it is coordinated by a scientific and technical committee composed of data librarians, IT professionals, and scientists representing user groups. Implemented on top of Dataverse, it exports metadata using OAIS. Files with sensitive content (due to ethical or legal constraints) are not stored there; instead, only their metadata is recorded in REDU, along with contact information so that interested researchers can contact the persons responsible for the files for conditional subsequent access. It is gradually being populated, following the University's Open Science policies.