  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) can be used to group terms and set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
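For illustration, here are a few example queries combining the operators above. This is a minimal sketch assuming Lucene-style query-string semantics (which this syntax resembles); the search terms themselves are arbitrary examples, not queries taken from the registry.

    # Example query strings exercising the operators described above.
    # The terms are arbitrary; the syntax is assumed to follow
    # Lucene-style query strings.
    example_queries = [
        'climat*',                        # wildcard: climate, climatology, ...
        '"research data repository"',     # quoted phrase search
        'genome + annotation',            # AND (the default)
        'ocean | marine',                 # OR
        'archaeology - isotopes',         # NOT
        '(corpus | treebank) + Swedish',  # parentheses group terms
        'archeology~1',                   # fuzzy term, edit distance 1
        '"climate data"~2',               # phrase with slop 2
    ]
    for query in example_queries:
        print(query)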
Found 524 result(s)
The KRISHI Portal is an initiative of the Indian Council of Agricultural Research (ICAR) to bring its knowledge resources to all stakeholders in one place. The portal is being developed as a centralized data repository system of ICAR, consisting of technology, data generated through experiments, surveys and observational studies, geo-spatial data, publications, learning resources, etc. To implement electronic research data management in ICAR institutes and to support the digitization of agricultural research, the KRISHI (Knowledge based Resources Information Systems Hub for Innovations in Agriculture) Portal has been developed as the ICAR Research Data Repository for knowledge management. The Data Inventory Repository aims at creating a metadata inventory from information on data availability at the institute level. The portal consists of six repositories, viz. technology, publication, experimental data, observational data, survey data and the geo-portal, and can be accessed at http://krishi.icar.gov.in. During 2016-17, input data on the latitude and longitude of all KVKs under this Zone were submitted to the concerned authority for inclusion in the geo-portal. A brainstorming session on using the portal and uploading information to it was organized at this institute for all scientists. As per the guidelines of the council, various kinds of publications pertaining to this institute were also uploaded to the portal.
As a member of SWE-CLARIN, the Humanities Lab will provide tools and expertise related to language archiving, corpus and (meta)data management, with a continued emphasis on multimodal corpora, many of which contain Swedish resources, but also other (often endangered) languages, multilingual or learner corpora. As a CLARIN K-centre we provide advice on multimodal and sensor-based methods, including EEG, eye-tracking, articulography, virtual reality, motion capture and AV recording. Current work targets automatic data retrieval from multimodal data sets, as well as the linking of measurement data (e.g. EEG, fMRI) or geo-demographic data (GIS, GPS) to language data (audio, video, text, annotations). We also provide assistance with speech and language technology-related matters to various projects. A primary resource in the Lab is the Humanities Lab corpus server, containing a varied set of multimodal language corpora with standardised metadata and linked layers of annotations and other resources.
Språkbanken was established in 1975 as a national center located in the Faculty of Arts, University of Gothenburg. Sture Allén's groundbreaking corpus-linguistic research resulted in the creation of one of the first large electronic text corpora in a language other than English, comprising one million words of newspaper text. The task of Språkbanken is to collect, develop, and store (Swedish) text corpora, and to make linguistic data extracted from the corpora available to researchers and to the public.
IsoArcH is an open-access isotope web database for bioarchaeological samples from prehistoric and historical periods all over the world. With 40,000+ isotope-related data points obtained on 13,000+ specimens (i.e., humans, animals, plants and organic residues) coming from 500+ archaeological sites, IsoArcH is now one of the world's largest repositories for isotopic data and metadata deriving from archaeological contexts. IsoArcH makes it possible to launch big-data initiatives, and it also highlights research gaps for certain regions or time periods. Among other things, it supports the creation of sound baselines, the undertaking of multi-scale analyses, and the realization of extensive studies and syntheses on various research issues such as paleodiet, food production, resource management, migrations, paleoclimate and paleoenvironmental changes.
SeaBASS is the publicly shared archive of in situ oceanographic and atmospheric data maintained by the NASA Ocean Biology Processing Group (OBPG). High-quality in situ measurements are a prerequisite for satellite data product validation, algorithm development, and many climate-related inquiries. As such, the OBPG maintains a local repository of in situ oceanographic and atmospheric data to support its regular scientific analyses. The SeaWiFS Project originally developed this system, SeaBASS, to catalog the radiometric and phytoplankton pigment data used in its calibration and validation activities. To facilitate the assembly of a global data set, SeaBASS was expanded with oceanographic and atmospheric data collected by participants in the SIMBIOS Program, under NASA Research Announcements NRA-96 and NRA-99, which has aided considerably in minimizing spatial bias and maximizing data acquisition rates. Archived data include measurements of apparent and inherent optical properties, phytoplankton pigment concentrations, and other related oceanographic and atmospheric data, such as water temperature, salinity, stimulated fluorescence, and aerosol optical thickness. Data are collected using a number of different instrument packages from various manufacturers, such as profilers, buoys, and hand-held instruments, deployed on a variety of platforms, including ships and moorings.
The mission of the GO Consortium is to develop a comprehensive, computational model of biological systems, ranging from the molecular to the organism level, across the multiplicity of species in the tree of life. The Gene Ontology (GO) knowledgebase is the world’s largest source of information on the functions of genes. This knowledge is both human-readable and machine-readable, and is a foundation for computational analysis of large-scale molecular biology and genetics experiments in biomedical research.
OMIM is a comprehensive, authoritative compendium of human genes and genetic phenotypes that is freely available and updated daily. OMIM is authored and edited at the McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, under the direction of Dr. Ada Hamosh. Its official home is omim.org.
Jason is a remote-controlled deep-diving vessel that gives shipboard scientists immediate, real-time access to the sea floor. Instead of making short, expensive dives in a submarine, scientists can stay on deck and guide Jason as deep as 6,500 meters (4 miles) to explore for days on end. Jason is a type of remotely operated vehicle (ROV), a free-swimming vessel connected by a long fiberoptic tether to its research ship. The 10-km (6 mile) tether delivers power and instructions to Jason and fetches data from it.
Seafloor Sediments Data Collection is a collection of more than 14,000 archived marine geological samples recovered from the seafloor. The inventory includes long, stratified sediment cores, as well as rock dredges, surface grabs, and samples collected by the submersible Alvin.
The Digital Archaeological Record (tDAR) is an international digital repository for the digital records of archaeological investigations. tDAR’s use, development, and maintenance are governed by Digital Antiquity, an organization dedicated to ensuring the long-term preservation of irreplaceable archaeological data and to broadening access to these data.
NED is a comprehensive database of multiwavelength data for extragalactic objects, providing a systematic, ongoing fusion of information integrated from hundreds of large sky surveys and tens of thousands of research publications. The contents and services span the entire observed spectrum from gamma rays through radio frequencies. As new observations are published, they are cross-identified or statistically associated with previous data and integrated into a unified database to simplify queries and retrieval. Seamless connectivity is also provided to data in NASA astrophysics mission archives (IRSA, HEASARC, MAST), to the astrophysics literature via ADS, and to other data centers around the world.
The CDC Data Catalogue describes the Climate Data of the DWD and provides access to data, descriptions and access methods. Climate Data refers to observations, statistical indices and spatial analyses. CDC comprises Climate Data for Germany, but also global Climate Data, which were collected and processed in the framework of international co-operation. The CDC Data Catalogue is under construction and not yet complete. The purposes of the CDC Data Catalogue are: to provide uniform access to the climate data centres and climate datasets of the DWD; to describe the climate data according to international metadata standards; to make the catalogue information available on the Internet; to support the search for climate data; and to facilitate access to climate data and climate data descriptions.
The Atlantic Canada Conservation Data Centre (ACCDC) maintains comprehensive lists of plant and animal species. The Atlantic CDC has geo-located records of species occurrences and records of extremely rare to uncommon species in the Atlantic region, including New Brunswick, Nova Scotia, Prince Edward Island, Newfoundland, and Labrador. The Atlantic CDC also maintains biological and other types of data in a variety of linked databases.
ARCHE (A Resource Centre for the HumanitiEs) is a service aimed at offering stable and persistent hosting as well as dissemination of digital research data and resources for the Austrian humanities community. ARCHE welcomes data from all humanities fields. ARCHE is the successor of the Language Resources Portal (LRP) and acts as Austria’s connection point to the European network of CLARIN Centres for language resources.
dbEST is a division of GenBank that contains sequence data and other information on "single-pass" cDNA sequences, or "Expressed Sequence Tags", from a number of organisms. Expressed Sequence Tags (ESTs) are short (usually about 300-500 bp), single-pass sequence reads from mRNA (cDNA). Typically they are produced in large batches. They represent a snapshot of genes expressed in a given tissue and/or at a given developmental stage. They are tags (some coding, others not) of expression for a given cDNA library. Most EST projects develop large numbers of sequences. These are commonly submitted to GenBank and dbEST as batches of dozens to thousands of entries, with a great deal of redundancy in the citation, submitter and library information. To improve the efficiency of the submission process for this type of data, we have designed a special streamlined submission process and data format. dbEST also includes sequences that are longer than the traditional ESTs, or are produced as single sequences or in small batches. Among these sequences are products of differential display experiments and RACE experiments. The thing that these sequences have in common with traditional ESTs, regardless of length, quality, or quantity, is that there is little information that can be annotated in the record. If a sequence is later characterized and annotated with biological features such as a coding region, 5'UTR, or 3'UTR, it should be submitted through the regular GenBank submissions procedure (via BankIt or Sequin), even if part of the sequence is already in dbEST. dbEST is reserved for single-pass reads. Assembled sequences should not be submitted to dbEST. GenBank will accept assembled EST submissions for the forthcoming TSA (Transcriptome Shotgun Assembly) division. The individual reads which make up the assembly should be submitted to dbEST, the Trace archive or the Short Read Archive (SRA) prior to the submission of the assemblies.
The EUROLAS Data Center (EDC) is one of the two data centers of the International Laser Ranging Service (ILRS). It collects, archives and distributes tracking data, predictions and other tracking-relevant information from the global SLR network. Additionally, EDC holds a mirror of the official web pages of the ILRS at the Goddard Space Flight Center (GSFC). As a result of the activities of the Analysis Working Group (AWG) of the ILRS, DGFI has been selected as an analysis center (AC) and as a backup combination center (CC). This task includes the weekly processing of SLR observations to LAGEOS-1/2 and ETALON-1/2 to compute station coordinates and Earth orientation parameters, as well as the combination of SLR solutions from the various analysis centres into a combined ILRS SLR solution.
The USDA Economics, Statistics and Market Information System contains reports and datasets of multiple agencies within the United States Department of Agriculture, including the Agricultural Marketing Service, the Economic Research Service, the Foreign Agricultural Service, the National Agricultural Statistics Service, and the World Agricultural Outlook Board. Historical and current reports and datasets are included.
Climate4impact is a dedicated interface to ESGF for the climate impact community. The portal, part of the ENES Data Infrastructure, provides access to data and quick looks of global and regional climate models and downscaled higher-resolution climate data. It also provides data transformation tooling, mapping and plotting capabilities, guidance, documentation, FAQs and examples. The Climate4impact portal will be further developed during the IS-ENES3 project (2019-2023) and moved to a different environment. Meanwhile, the portal at https://climate4impact.eu will remain available, but no new information or processing options will be included. When the new portal becomes available, this will be announced on https://is.enes.org/.
The Objectively Analyzed air-sea Fluxes (OAFlux) project is a research and development project focusing on global air-sea heat, moisture, and momentum fluxes. The project is committed to producing high-quality, long-term, global ocean surface forcing datasets from the late 1950s to the present to serve the needs of the ocean and climate communities in the characterization, attribution, modeling, and understanding of variability and long-term change in the atmosphere and the oceans.
OrthoMCL is a genome-scale algorithm for grouping orthologous protein sequences. It provides not only groups shared by two or more species/genomes, but also groups representing species-specific gene expansion families, so it serves as an important utility for automated eukaryotic genome annotation. OrthoMCL starts with reciprocal best hits within each genome as potential in-paralog/recent paralog pairs and reciprocal best hits across any two genomes as potential ortholog pairs. Related proteins are interlinked in a similarity graph. Then MCL (the Markov Clustering algorithm, Van Dongen 2000; www.micans.org/mcl) is invoked to split mega-clusters. This process is analogous to the manual review in COG construction. MCL clustering is based on weights between each pair of proteins, so to correct for differences in evolutionary distance the weights are normalized before running MCL.
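To make the first step concrete, the sketch below shows how reciprocal best hits between two genomes could be identified from pairwise similarity scores before the similarity graph is built. This is a minimal illustration under stated assumptions, not the OrthoMCL implementation: the score table, protein names, and the use of raw scores (rather than normalized BLAST-derived weights) are all hypothetical.

    # Minimal sketch of reciprocal-best-hit (RBH) detection between two genomes,
    # the pairing step described above. `scores` (hypothetical) maps
    # (query_protein, subject_protein) to a similarity score, e.g. a BLAST bit score.

    def best_hits(scores, from_proteins, to_proteins):
        """For each protein, return its best-scoring hit in the other genome."""
        best = {}
        for q in from_proteins:
            ranked = [(scores.get((q, s), 0.0), s) for s in to_proteins]
            score, subject = max(ranked)
            if score > 0.0:
                best[q] = subject
        return best

    def reciprocal_best_hits(scores, genome_a, genome_b):
        """Return (a, b) pairs that are each other's best hit across the two genomes."""
        a_to_b = best_hits(scores, genome_a, genome_b)
        b_to_a = best_hits(scores, genome_b, genome_a)
        return [(a, b) for a, b in a_to_b.items() if b_to_a.get(b) == a]

    # Toy example with made-up scores: P1/Q1 and P2/Q2 are reciprocal best hits
    # and would become candidate ortholog pairs (edges in the similarity graph).
    genome_a = ["P1", "P2"]
    genome_b = ["Q1", "Q2"]
    scores = {("P1", "Q1"): 200.0, ("Q1", "P1"): 190.0,
              ("P1", "Q2"): 50.0,  ("Q2", "P1"): 45.0,
              ("P2", "Q2"): 150.0, ("Q2", "P2"): 160.0,
              ("P2", "Q1"): 30.0,  ("Q1", "P2"): 20.0}
    print(reciprocal_best_hits(scores, genome_a, genome_b))  # [('P1', 'Q1'), ('P2', 'Q2')]

In the full pipeline described above, within-genome reciprocal best hits would be collected in the same way to seed in-paralog pairs, and the resulting graph edges would then be weighted, normalized, and clustered with MCL.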
The DOE Data Explorer (DDE) is an information tool to help you locate DOE's collections of data and non-text information and, at the same time, retrieve individual datasets within some of those collections. It includes collection citations prepared by the Office of Scientific and Technical Information, as well as citations for individual datasets submitted from DOE Data Centers and other organizations.
Lithuanian Data Archive for Social Sciences and Humanities (LiDA) is a virtual digital infrastructure for SSH data and research resources acquisition, long-term preservation and dissemination. All the data and research resources are documented in both English and Lithuanian according to international standards. Access to the resources is provided via Dataverse repository. LiDA curates different types of resources and they are published into catalogues according to the type: Survey Data, Aggregated Data (including Historical Statistics), Encoded Data (including News Media Studies), and Textual Data. Also, LiDA holds collections of social sciences and humanities data deposited by Lithuanian science and higher education institutions and Lithuanian state institutions (Data of Other Institutions). LiDA is hosted by the Centre for Data Analysis and Archiving of Kaunas University of Technology (data.ktu.edu).
This Animal Quantitative Trait Loci (QTL) database (Animal QTLdb) is designed to house all publicly available QTL and trait mapping data (i.e. trait and genome location association data, collectively called "QTL data" on this site) on livestock animal species, for easily locating and making comparisons within and between species. New database tools are continually added to align the QTL and association data to other types of genome information, such as annotated genes, RH / SNP markers, and human genome maps. Besides the QTL data from the species listed below, the QTLdb is open to housing QTL/association data from other animal species where feasible. Note that JAS, along with other journals, now requires that new QTL/association data be entered into a QTL database as part of its publication requirements.