  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to indicate precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
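The operators above can be combined in a single query; a few illustrative examples (the keywords are hypothetical, chosen only to show the syntax):

```
climat*                  wildcard: matches climate, climatology, ...
"open data" +canada      phrase search AND-ed with a keyword
genome | proteome        matches records containing either term
maize -rice              matches maize but excludes rice
(bird | avian) +database grouping to control precedence
marine~2                 fuzzy match within edit distance 2
"data archive"~3         phrase match allowing a slop of 3
```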
The World Atlas of Language Structures (WALS) is a large database of structural (phonological, grammatical, lexical) properties of languages gathered from descriptive materials (such as reference grammars) by a team of 55 authors (many of them the leading authorities on the subject).
The tree of life links all biodiversity through a shared evolutionary history. This project will produce the first online, comprehensive first-draft tree of all 1.8 million named species, accessible to both the public and scientific communities. Assembly of the tree will incorporate previously-published results, with strong collaborations between computational and empirical biologists to develop, test and improve methods of data synthesis. This initial tree of life will not be static; instead, we will develop tools for scientists to update and revise the tree as new data come in. Early release of the tree and tools will motivate data sharing and facilitate ongoing synthesis of knowledge.
The OpenMadrigal project develops and supports an online database for geospace data. The project has been led by MIT Haystack Observatory since 1980, but now has active support from Jicamarca Observatory and other community members. Madrigal is a robust, web-based system capable of managing and serving archival and real-time data, in a variety of formats, from a wide range of ground-based instruments. Madrigal is installed at a number of sites around the world. Data at each Madrigal site is locally controlled and can be updated at any time, but metadata shared between Madrigal sites allows searching all Madrigal sites at once from any single site. Data is local; metadata is shared.
The EUROLAS Data Center (EDC) is one of the two data centers of the International Laser Ranging Service (ILRS). It collects, archives and distributes tracking data, predictions and other tracking-relevant information from the global SLR network. Additionally, EDC holds a mirror of the official web pages of the ILRS at Goddard Space Flight Center (GSFC). As a result of the activities of the Analysis Working Group (AWG) of the ILRS, DGFI has been selected as an analysis center (AC) and as a backup combination center (CC). This task includes the weekly processing of SLR observations to LAGEOS-1/2 and ETALON-1/2 to compute station coordinates and Earth orientation parameters, as well as the combination of SLR solutions from the various analysis centers into a combined ILRS SLR solution.
Avibase is an extensive database information system about all birds of the world, containing over 60 million records about 10,000 species and 22,000 subspecies of birds, including distribution information, taxonomy, synonyms in several languages and more. The site is managed by Denis Lepage and hosted by Bird Studies Canada, the Canadian co-partner of BirdLife International. Avibase has been a work in progress since 1992 and is now offered as a service to the bird-watching and scientific community.
As with most biomedical databases, the first step is to identify relevant data from the research community. The Monarch Initiative is focused primarily on phenotype-related resources. We bring in data associated with those phenotypes so that our users can begin to make connections among other biological entities of interest. We import data from a variety of data sources. With many resources integrated into a single database, we can join across the various data sources to produce integrated views. We have started with the big players including ClinVar and OMIM, but are equally interested in boutique databases. You can learn more about the sources of data that populate our system from our data sources page https://monarchinitiative.org/about/sources.
ClinVar is a freely accessible, public archive of reports of the relationships among human variations and phenotypes, with supporting evidence. ClinVar thus facilitates access to and communication about the relationships asserted between human variation and observed health status, and the history of that interpretation. ClinVar processes submissions reporting variants found in patient samples, assertions made regarding their clinical significance, information about the submitter, and other supporting data. The alleles described in submissions are mapped to reference sequences and reported according to the HGVS standard. ClinVar then presents the data for interactive users as well as those wishing to use ClinVar in daily workflows and other local applications. ClinVar works in collaboration with interested organizations to meet the needs of the medical genetics community as efficiently and effectively as possible.
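As an illustration of the HGVS notation mentioned above, a variant description combines a reference sequence, a coordinate system, a position, and the change itself (the specific variant shown here is for illustration only, not taken from a ClinVar record):

```
NM_000518.5:c.92+1G>A
│            │  │    └─ substitution of G by A
│            │  └────── position: first base of the intron after coding position 92
│            └───────── "c." = coding-DNA coordinate system
└────────────────────── versioned reference sequence (a RefSeq mRNA accession)
```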
The Canada Open Data Project provides Government of Canada data to the public as a potential driver of economic innovation. Searchable and browsable raw data is available for download, and the public can recommend specific data be made available.
The Maize Genetics and Genomics Database (MaizeGDB) focuses on collecting data related to the crop plant and model organism Zea mays. The project's goals are to synthesize, display, and provide access to maize genomics and genetics data, prioritizing mutant and phenotype data and tools, structural and genetic map sets, and gene models. MaizeGDB also aims to make the Maize Newsletter available and to provide support services to the community of maize researchers. MaizeGDB is working with the Schnable lab, the Panzea project, the Genome Reference Consortium, and the iPlant Collaborative to create a plan for archiving, disseminating, visualizing, and analyzing diversity data. MaizeGDB is a USDA/ARS-funded project to integrate the data found in MaizeDB and ZmDB into a single schema, develop an effective interface to access these data, and develop additional tools to make data analysis easier. The long-term goal is a true next-generation online maize database.
Junar provides a cloud-based open data platform that enables innovative organizations worldwide to quickly, easily and affordably make their data accessible to all. In just a few weeks, your initial datasets can be published, providing greater transparency, encouraging collaboration and citizen engagement, and freeing up precious staff resources.
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied in biology, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing worm behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
The CATH database is a hierarchical domain classification of protein structures in the Protein Data Bank. Protein structures are classified using a combination of automated and manual procedures. There are four major levels in the CATH hierarchy; Class, Architecture, Topology and Homologous superfamily.
Welcome to the transparency portal of the city of Karlsruhe, your central contact point for open data and documents of the city of Karlsruhe. On this portal you will find documents and reports as well as machine-readable data sets ("open data"). You may - under a few conditions - distribute, edit and also commercially use this information free of charge. We are happy if interesting projects arise from this - and if you tell us about your project. The information offered is constantly being expanded.
The web service correspSearch aggregates metadata of letters from printed and digital scholarly editions and publications. It offers the aggregated correspondence metadata both via a feature-rich interface and via an API. The letter metadata are provided by scholarly projects of different institutions in a standardised, TEI-XML-based exchange format, using IDs from authority files (GeoNames, GND, VIAF etc.). The web service itself does not set a spatial or temporal collection focus. Currently, the time frame of the aggregated correspondence data ranges from 1500 to the 20th century.
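The TEI-based exchange format referred to above is the Correspondence Metadata Interchange Format (CMIF), in which each letter is described by a `correspDesc` element with `correspAction` children for sender and addressee. A minimal sketch of reading such a record with Python's standard library; the sample record, names, and IDs are invented for illustration:

```python
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

# A minimal, invented CMIF-style record for illustration only.
sample = """
<correspDesc xmlns="http://www.tei-c.org/ns/1.0">
  <correspAction type="sent">
    <persName ref="https://viaf.org/viaf/000000">Example Sender</persName>
    <placeName ref="https://www.geonames.org/0000000">Berlin</placeName>
    <date when="1820-05-14"/>
  </correspAction>
  <correspAction type="received">
    <persName ref="https://viaf.org/viaf/111111">Example Addressee</persName>
  </correspAction>
</correspDesc>
"""

def summarize(desc_xml: str) -> dict:
    """Extract sender, addressee, and sent date from one correspDesc record."""
    desc = ET.fromstring(desc_xml)
    out = {}
    for action in desc.findall(f"{TEI_NS}correspAction"):
        kind = action.get("type")                # "sent" or "received"
        pers = action.find(f"{TEI_NS}persName")
        if pers is not None:
            out[kind] = pers.text
        date = action.find(f"{TEI_NS}date")
        if date is not None:
            out[f"{kind}_date"] = date.get("when")
    return out

print(summarize(sample))
```

A real client would fetch such records from the correspSearch API rather than from an inline string; the parsing logic stays the same.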
NMDB is a real-time database for high-resolution neutron monitor measurements. It provides access to measurements, both real-time and historical, from neutron monitor stations around the world through an easy-to-use interface.
The National Practitioner Data Bank (NPDB), or "the Data Bank," is a confidential information clearinghouse created by Congress with the primary goals of improving health care quality, protecting the public, and reducing health care fraud and abuse in the U.S.
The Access to Archival Databases (AAD) resource provides online access to records in a small selection of historic databases preserved permanently by NARA. Out of the nearly 200,000 data files in its holdings, NARA has selected approximately 475 for public searching through AAD. These data were selected because the records identify specific persons, geographic areas, organizations, and dates. The records cover a wide variety of civilian and military functions and have many genealogical, social, political, and economic research uses. AAD provides:
  • access to over 85 million historic electronic records created by more than 30 agencies of the U.S. federal government and from collections of donated historical materials
  • both free-text and fielded searching options
  • the ability to retrieve, print, and download records with the specific information that you seek
  • information to help you find and understand the records
The IUCN Red List of Threatened Species provides taxonomic, conservation status and distribution data on plants and animals that are critically endangered, endangered and vulnerable. Data are available in Esri File Geodatabase format, Esri Shapefile format, and Excel format.
The Coastal Data Information Program (CDIP) is an extensive network for monitoring waves and beaches along the coastlines of the United States. Since its inception in 1975, the program has produced a vast database of publicly-accessible environmental data for use by coastal engineers and planners, scientists, mariners, and marine enthusiasts. The program has also remained at the forefront of coastal monitoring, developing numerous innovations in instrumentation, system control and management, computer hardware and software, field equipment, and installation techniques.
This web site is provided by the United States Geological Survey’s (USGS) Earthquake Hazards Program as part of our effort to reduce earthquake hazard in the United States. We are part of the USGS Hazards Mission Area and are the USGS component of the congressionally established, multi-agency National Earthquake Hazards Reduction Program (NEHRP).
The Met Office is the UK's National Weather Service. We have a long history of weather forecasting and have been working in the area of climate change for more than two decades. As a world leader in providing weather and climate services, we employ more than 1,800 people at 60 locations throughout the world. We are recognised as one of the world's most accurate forecasters, using more than 10 million weather observations a day, an advanced atmospheric model and a high-performance supercomputer to create 3,000 tailored forecasts and briefings a day. These are delivered to a huge range of customers, from the Government to businesses, the general public, the armed forces, and other organisations.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools. The result is one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
This is CSDB version 1, merged from the Bacterial (BCSDB) and Plant & Fungal (PFCSDB) Carbohydrate Structure Databases. The database aims to provide structural, bibliographic, taxonomic, NMR-spectroscopic and other information on glycan and glycoconjugate structures of prokaryotic, plant and fungal origin. The key points of this service are:
  • High coverage. The coverage for bacteria (up to 2016) and archaea (up to 2016) is above 80%. Similar coverage for plants and fungi is expected in the future; the database is close to complete up to 1998 for plants and up to 2006 for fungi.
  • Data quality. High data quality is achieved by manual curation using original publications, assisted by multiple automatic procedures for error control. Errors present in publications are reported and corrected where possible, and data from other databases are verified on import.
  • Detailed annotations. Structural data are supplied with extended bibliography, assigned NMR spectra, taxon identification including strains and serogroups, and other information if available in the original publication.
  • Services. CSDB serves as a platform for a number of computational services tuned for glycobiology, such as NMR simulation, automated structure elucidation, taxon clustering, 3D molecular modeling, and statistical processing of data.
  • Integration. CSDB is cross-linked to other glycoinformatics projects and NCBI databases. The data are exportable in various formats, including the most widespread encoding schemes and records using the GlycoRDF ontology.
  • Free web access. Users can access the database for free via its web interface (see Help).
The main source of data is retrospective literature analysis. About 20% of the data were imported from CCSD (CarbBank, University of Georgia, Athens; structures published before 1996) with subsequent manual curation and approval.
The current coverage is displayed in red at the top of the left menu. The time lag between the publication of new data and their deposition into CSDB is ca. 1 year. In the scope of bacterial carbohydrates, CSDB covers nearly all structures of this origin published up to 2016. "Prokaryotic, plant and fungal" means that a glycan was found in organism(s) belonging to these taxonomic domains or was obtained by modification of those found in them. "Carbohydrate" means a structure composed of any residues linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds in which at least one residue is a sugar or its derivative.