  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
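The operators above resemble the "simple query string" syntax common to full-text search engines. As a sketch only, a small hypothetical Python helper (none of these names come from the site itself) can compose a query string using these rules:

```python
def build_query(*, all_of=(), any_of=(), none_of=(), phrase=None, slop=None):
    """Compose a search query string from the operators described above.

    all_of  -> terms joined with + (AND, the default)
    any_of  -> terms grouped with ( ) and joined with | (OR)
    none_of -> terms prefixed with - (NOT)
    phrase  -> wrapped in double quotes; slop adds ~N after the phrase
    """
    parts = [f"+{term}" for term in all_of]          # AND terms
    if any_of:
        parts.append("(" + " | ".join(any_of) + ")") # OR group
    parts += [f"-{term}" for term in none_of]        # NOT terms
    if phrase:
        quoted = f'"{phrase}"'
        if slop is not None:
            quoted += f"~{slop}"                     # phrase slop
        parts.append(quoted)
    return " ".join(parts)

print(build_query(all_of=["biodiversity*"],
                  any_of=["archaeology", "heritage"],
                  none_of=["software"]))
# → +biodiversity* (archaeology | heritage) -software
```

Note the trailing `*` on `biodiversity*`, which triggers the wildcard search described in the first bullet.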
Found 11 result(s)
VertNet is an NSF-funded collaborative project that makes biodiversity data freely available on the web. VertNet is a tool designed to help people discover, capture, and publish biodiversity data. It is also the core of a collaboration between hundreds of biocollections that contribute biodiversity data and work together to improve it. VertNet is an engine for training current and future professionals to use and build upon best practices in data quality, curation, research, and data publishing. Above all, VertNet is the aggregate of all of the information that it mobilizes. To us, VertNet is all of these things and more.
Open Context is a free, open access resource for the electronic publication of primary field research from archaeology and related disciplines. It emerged as a means for scholars and students to easily find and reuse content created by others, which is key to advancing research and education. Open Context's technologies focus on ease of use, open licensing frameworks, informal data integration and, most importantly, data portability. Open Context currently publishes 132 projects.
The UCD Digital Library is a platform for exploring cultural heritage, engaging with digital scholarship, and accessing research data. The UCD Digital Library allows you to search, browse and explore a growing collection of historical materials, photographs, art, interviews, letters, and other content that has been digitised and made freely available.
ShareGeo Open was a repository of geospatial data, previously hosted by EDINA. ShareGeo Open has been discontinued, so its datasets have been migrated to this Edinburgh DataShare Collection, for preservation, in accordance with the agreement signed by all ShareGeo depositors.
Maddison's work contains the Project Dataset with estimates of GDP per capita for all countries in the world between 1820 and 2010, in a format amenable to analysis in R. The database was last updated in January 2013. The update incorporates much of the latest research in the field and presents new estimates of economic growth in the world economy between AD 1 and 2010. The Maddison Project database builds on Angus Maddison's original dataset. The original estimates are kept intact, and are revised or adjusted only when more and better information becomes available. Angus Maddison's unaltered final dataset remains available on the Original Maddison Homepage: https://www.rug.nl/ggdc/historicaldevelopment/maddison/original-maddison
The ADS is an accredited digital repository for heritage data that supports research, learning and teaching with freely available, high quality and dependable digital resources by preserving and disseminating digital data in the long term. The ADS also promotes good practice in the use of digital data, provides technical advice to the heritage community, and supports the deployment of digital technologies.
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full texts are indexed linguistically, and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitisation was based on the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice ('double keying'). To represent the structure of the text, the electronic full text was encoded in conformity with the XML standard TEI P5. Subsequent stages complete the linguistic analysis: the text is tokenised, lemmatised, and annotated for parts of speech. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA corpus, it also offers valuable source texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
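To illustrate the encoding step described above, a minimal TEI P5 fragment (invented here for illustration, not taken from an actual DTA file) might mark a tokenised word with its lemma and part-of-speech annotation like this:

```xml
<!-- Illustrative TEI P5 sketch: one annotated token in a paragraph -->
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <p>
        <!-- <w> carries the lemma and POS tag produced by the analysis -->
        <w lemma="gehen" pos="VVFIN">ging</w>
      </p>
    </body>
  </text>
</TEI>
```

The `<w>` element with `lemma` and `pos` attributes is standard TEI; the specific tagset and file layout used by the DTA may differ.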