Search tips:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
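For example, a hypothetical query such as protein* + ("amino acid"~2 | enzyme~1) - plant would match records containing a term beginning with "protein" and either the phrase "amino acid" (with up to two intervening words) or a term within edit distance 1 of "enzyme", while excluding records that mention "plant".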
Found 130 result(s)
The UniProt Knowledgebase (UniProtKB) is the central hub for the collection of functional information on proteins, with accurate, consistent and rich annotation. In addition to capturing the core data mandatory for each UniProtKB entry (mainly, the amino acid sequence, protein name or description, taxonomic data and citation information), as much annotation information as possible is added. This includes widely accepted biological ontologies, classifications and cross-references, and clear indications of the quality of annotation in the form of evidence attribution of experimental and computational data. The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc). The UniProt Metagenomic and Environmental Sequences (UniMES) database is a repository specifically developed for metagenomic and environmental data. The UniProt Knowledgebase is an expertly and richly curated protein database, consisting of two sections called UniProtKB/Swiss-Prot and UniProtKB/TrEMBL.
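As an illustration of how such an entry can be retrieved programmatically, here is a minimal Python sketch that downloads a single UniProtKB record in FASTA format. It assumes the public REST endpoint at rest.uniprot.org and uses the accession P69905 (human hemoglobin subunit alpha) purely as an example; neither detail appears in the description above.

    # Minimal sketch: fetch one UniProtKB entry as FASTA and print
    # its description line and sequence length.
    # Assumption: the public REST endpoint https://rest.uniprot.org
    # and the example accession P69905 (hemoglobin subunit alpha).
    import urllib.request

    accession = "P69905"
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.fasta"

    with urllib.request.urlopen(url) as response:
        fasta = response.read().decode("utf-8")

    header, *sequence_lines = fasta.strip().splitlines()
    sequence = "".join(sequence_lines)
    print(header)                      # e.g. ">sp|P69905|HBA_HUMAN ..."
    print(f"{len(sequence)} residues")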
The ILO Department of Statistics is the focal point to the United Nations on labour statistics. They develop international standards for better measurement of labour issues and enhanced international comparability; provide relevant, timely and comparable labour statistics; and help Member States develop and improve their labour statistics.
The Universal Protein Resource (UniProt) is a comprehensive resource for protein sequence and annotation data. The UniProt databases are the UniProt Knowledgebase (UniProtKB), the UniProt Reference Clusters (UniRef), and the UniProt Archive (UniParc).
>>>!!!<<< This site is going away on April 1, 2021. General access to the site has been disabled and community users will see an error upon login. >>>!!!<<< Socrata’s cloud-based solution allows government organizations to put their data online, make data-driven decisions, operate more efficiently, and share insights with citizens.
The IDR makes publicly available datasets that have never previously been accessible, allowing the community to search, view, mine and even process and analyze large, complex, multidimensional life sciences image data. Sharing data promotes the validation of experimental methods and scientific conclusions and the comparison with new data obtained by the global scientific community, and enables data reuse by developers of new analysis and processing tools.
The Lens is building an open platform for Innovation Cartography. Specifically, the Lens serves nearly all of the patent documents in the world as open, annotatable digital public goods that are integrated with scholarly and technical literature along with regulatory and business data.
The ZBW Journal Data Archive is a service for editors of journals in economics and management. The Journal Data Archive offers authors of papers that contain empirical work, simulations, or experimental work the possibility to store the data, programs, and other details of their computations, to make these files publicly available, and to support the confirmability and replicability of their published research papers.
In the digital collections, you can view the digitized prints from the holdings of the GWLB Hannover free of charge. In its special collections, the GWLB unites rare, valuable and unique holdings that are maintained as an ensemble. Deposits, unpublished works, donations, acquisitions of rare books, etc. were and are an important source for the constant growth of the library. These treasures and specialties - beyond their academic value - also contribute substantially to the profile of the GWLB.
The Media Archive of the Zurich University of the Arts is the platform for collaborative work, sharing and archiving of media at the ZHdK. It is available to students, lecturers, researchers and staff. The areas of application of the media archive are mainly focused on teaching and research, but the ZHdK's archive and university communication departments also benefit. The media archive manages a wide range of visual and audiovisual content and supports collaborative forms of working. It serves as an institutional repository for research data management and as a platform for hybrid publications.
EarthWorks is a discovery tool for geospatial (a.k.a. GIS) data. It allows users to search and browse the GIS collections owned by Stanford University Libraries, as well as data collections from many other institutions. Data can be searched spatially, by manipulating a map; by keyword search; by selecting search limiting facets (e.g., limit to a given format type); or by combining these options.
coastMap offers campaign data, model analysis and thematic maps, predominantly in the biogeosciences. Spotlights explain important topics of the research in a nutshell for the interested public. The portal offers applications to visualise and download field and laboratory work and to connect the information with interactive maps. Filter functions allow the user to search for general topics, such as a marine field of interest, or single criteria, for example a specific ship campaign or one of 1,000 measured parameters.
LinkedEarth is an EarthCube-funded project aiming to better organize and share Earth Science data, especially paleoclimate data. LinkedEarth facilitates the work of scientists by empowering them to curate their own data and to build new tools centered around those data.
The Leibniz Data Manager (LDM) is a scientific repository for research data from the fields of science and technology. The service supports better re-usability of research data for scientific projects. The LDM fosters the management of and access to heterogeneous research data publications and assists researchers in the selection of relevant data sets for their respective disciplines. The LDM currently offers the following functions for the visualization of research data:
  • support for data collections and publications in different formats
  • different views of the same data set (2D and 3D support)
  • visualization of AutoCAD files
  • Jupyter Notebooks for demonstrating live code
  • RDF descriptions of data collections
OCTOPUS is an Open Geospatial Consortium (OGC) compliant web-enabled database that allows users to visualise, query, and download cosmogenic ¹⁰Be and ²⁶Al, luminescence, and radiocarbon ages and denudation rates associated with erosional landscapes, Quaternary depositional landforms and archaeological records, along with associated geospatial (vector and raster) data layers.
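Because OCTOPUS is OGC-compliant, its layers can in principle be queried with any standard WFS client. The following Python sketch uses the third-party OWSLib package against a placeholder endpoint; the service URL and the layer (feature type) name below are assumptions for illustration, not documented OCTOPUS values.

    # Hedged sketch: list feature types from an OGC WFS service and
    # download one layer as GML. Requires OWSLib (pip install OWSLib).
    # The endpoint URL and layer name are placeholders, not documented
    # OCTOPUS values.
    from owslib.wfs import WebFeatureService

    WFS_URL = "https://example.org/geoserver/wfs"  # placeholder endpoint
    wfs = WebFeatureService(url=WFS_URL, version="1.1.0")

    # Print the feature types (layers) the service advertises.
    for name in list(wfs.contents):
        print(name)

    # Fetch one (hypothetical) layer and save the GML response.
    response = wfs.getfeature(typename=["octopus:be10_basins"])  # hypothetical layer
    with open("be10_basins.gml", "wb") as f:
        f.write(response.read())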
TERN provides open data, research and management tools, data infrastructure and site-based research equipment. The open-access ecosystem data is provided via the TERN Data Discovery Portal; see https://www.re3data.org/repository/r3d100012013.
The Illinois Data Bank is a public access data repository that collects, disseminates, and provides persistent and reliable access to the research data of faculty, staff, and students at the University of Illinois at Urbana-Champaign. Faculty, staff, and graduate students can deposit their research data directly into the Illinois Data Bank and receive a DOI for citation purposes.
The Brown Digital Repository (BDR) is a place to gather, index, store, preserve, and make available digital assets produced via the scholarly, instructional, research, and administrative activities at Brown.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
VertNet is an NSF-funded collaborative project that makes biodiversity data free and available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Yet, VertNet is still the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
Data are the key to successful scientific work. Sophisticated data management guarantees the long-term availability of observational data and metadata, allows for easy data search and retrieval, supports international data exchange, and provides data products for scientific, political, industrial and public stakeholders.
The Maize Genetics and Genomics Database (MaizeGDB) focuses on collecting data related to the crop plant and model organism Zea mays. The project's goals are to synthesize, display, and provide access to maize genomics and genetics data, prioritizing mutant and phenotype data and tools, structural and genetic map sets, and gene models. MaizeGDB also aims to make the Maize Newsletter available and to provide support services to the community of maize researchers. MaizeGDB is working with the Schnable lab, the Panzea project, The Genome Reference Consortium, and iPlant Collaborative to create a plan for archiving, disseminating, visualizing, and analyzing diversity data. MaizeGDB is short for Maize Genetics/Genomics Database. It is a USDA/ARS-funded project to integrate the data found in MaizeDB and ZmDB into a single schema, develop an effective interface to access these data, and develop additional tools to make data analysis easier. The long-term goal is a true next-generation online maize database.
The Répertoire International des Sources Musicales (RISM) - International Inventory of Musical Sources - is an international, non-profit organization that aims to comprehensively document extant musical sources worldwide. These primary sources are music manuscripts or printed music editions, writings on music theory, and libretti. They are preserved in libraries, archives, churches, schools and private collections. RISM was founded in Paris in 1952 and is the largest and only international organization that documents written musical sources. RISM records what exists and where it can be found. As a result, by virtue of being cataloged in a comprehensive inventory, music traditions are protected while also being made available to musicologists and musicians alike. Such work is thus not an end in itself, but leads directly to practical applications.
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full texts are indexed linguistically, and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitisation was made from the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice (‘double keying’). To represent the structure of the text, the electronic full text was encoded in conformity with the XML standard TEI P5. The next stages complete the linguistic analysis: the text is tokenised, lemmatised, and the parts of speech are annotated. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA corpus, it also offers valuable source texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
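To make the annotation layers more concrete, here is a small Python sketch that extracts token, lemma, and part-of-speech information from a TEI P5 file. It assumes tokens are encoded as <w> elements carrying lemma and pos attributes, which is one common TEI convention; the DTA's actual serialization of these layers may differ, and the file name is a placeholder.

    # Hedged sketch: read word tokens with lemma and POS attributes
    # from a TEI P5 document. Assumes <w lemma="..." pos="..."> markup,
    # a common TEI convention (not necessarily the DTA's exact format).
    # Requires lxml (pip install lxml).
    from lxml import etree

    TEI_NS = "http://www.tei-c.org/ns/1.0"

    tree = etree.parse("example_tei_document.xml")  # placeholder file name
    for w in tree.iter(f"{{{TEI_NS}}}w"):
        token = "".join(w.itertext())
        lemma = w.get("lemma", "")
        pos = w.get("pos", "")
        print(f"{token}\t{lemma}\t{pos}")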