Found 20 result(s)
The KiezDeutsch-Korpus (KiDKo) has been developed by project B6 (PI: Heike Wiese) of the collaborative research centre Information Structure (SFB 632) at the University of Potsdam from 2008 to 2015. KiDKo is a multi-modal digital corpus of spontaneous discourse data from informal, oral peer group situations in multi- and monoethnic speech communities. KiDKo contains audio data from self-recordings, with aligned transcriptions (i.e., at every point in a transcript, one can access the corresponding area in the audio file). The corpus provides parts-of-speech tags as well as an orthographically normalised layer (Rehbein & Schalowski 2013). Another annotation level provides information on syntactic chunks and topological fields. There are several complementary corpora: KiDKo/E (Einstellungen - "attitudes") captures spontaneous data from the public discussion on Kiezdeutsch: it assembles emails and readers' comments posted in reaction to media reports on Kiezdeutsch. By doing so, KiDKo/E provides data on language attitudes, language perceptions, and language ideologies, which became apparent in the context of the debate on Kiezdeutsch, but which frequently related to such broader domains as multilingualism, standard language, language prestige, and social class. KiDKo/LL ("Linguistic Landscape") assembles photos of written language productions in public space from the context of Kiezdeutsch, for instance love notes on walls, park benches, and playgrounds, graffiti in house entrances, and scribbled messages on toilet walls. The corpus contains materials in the following languages: Spanish, Italian, Greek, Kurdish, Swedish, French, Croatian, Arabic, Turkish. The corpus is available online via the Hamburger Zentrum für Sprachkorpora (HZSK): https://corpora.uni-hamburg.de/secure/annis-switch.php?instance=kidko
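The audio alignment described above amounts to a mapping from transcript tokens (with their annotation layers) to time spans in a recording. The following Python sketch is purely illustrative; the class, field names, and sample values are assumptions, not KiDKo's actual data model.

```python
from dataclasses import dataclass

# Hypothetical time-aligned, multi-layer transcript token; the field
# names are illustrative, not KiDKo's real format.
@dataclass
class AlignedToken:
    form: str        # transcribed surface form
    norm: str        # orthographically normalised layer
    pos: str         # part-of-speech tag
    start_ms: int    # start of the aligned audio span (milliseconds)
    end_ms: int      # end of the aligned audio span (milliseconds)

# Made-up example tokens for illustration only.
tokens = [
    AlignedToken("ischwör", "ich schwöre", "VVFIN", 1200, 1650),
    AlignedToken("lassma", "lass uns mal", "VVIMP", 1650, 2100),
]

def audio_span(tok: AlignedToken) -> tuple[int, int]:
    """Jump from a transcript position to the matching audio region."""
    return (tok.start_ms, tok.end_ms)
```

With such a structure, "at every point in a transcript, one can access the corresponding area in the audio file" reduces to a constant-time field lookup per token.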
EDINA delivers online services and tools to benefit students, teachers and researchers in UK Higher and Further Education and beyond.
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
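The "sophisticated queries" DBpedia supports are typically posed through its public SPARQL endpoint at https://dbpedia.org/sparql, using the standard SPARQL protocol. The sketch below only builds such a request URL; the query itself is an illustrative example, not an official one.

```python
from urllib.parse import urlencode

# DBpedia's public SPARQL endpoint (standard SPARQL-protocol parameters).
DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"

def dbpedia_query_url(sparql: str) -> str:
    """Build a GET URL asking the endpoint for JSON-formatted results."""
    params = {
        "query": sparql,
        "format": "application/sparql-results+json",
    }
    return f"{DBPEDIA_ENDPOINT}?{urlencode(params)}"

# Illustrative query: people whose birthplace Wikipedia records as Berlin.
query = """
SELECT ?person ?name WHERE {
  ?person dbo:birthPlace dbr:Berlin ;
          rdfs:label ?name .
  FILTER (lang(?name) = "en")
} LIMIT 5
"""
url = dbpedia_query_url(query)
```

Fetching `url` with any HTTP client returns structured JSON extracted from Wikipedia infoboxes, which is the linkable, machine-queryable layer the project describes.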
Merritt is a curation repository for the preservation of and access to the digital research data of the ten-campus University of California system and external project collaborators. Merritt is supported by the University of California Curation Center (UC3) at the California Digital Library (CDL). While Merritt itself is content agnostic, accepting digital content regardless of domain, format, or structure, it is being used for management of research data, and it forms the basis for a number of domain-specific repositories, such as the ONEShare repository for earth and environmental science and the DataShare repository for life sciences. Merritt provides persistent identifiers, storage replication, fixity audit, complete version history, a REST API, a comprehensive metadata catalog for discovery, ATOM-based syndication, curatorially-defined collections, access control rules, and data use agreements (DUAs). Merritt content upload and download may each be curatorially designated as public or restricted. Merritt DOIs are provided by UC3's EZID service, which is integrated with DataCite. All DOIs and associated metadata are automatically registered with DataCite and are harvested by Ex Libris PRIMO and Thomson Reuters Data Citation Index (DCI) for high-level discovery. Merritt is also a member node in the DataONE network; curatorially-designated data submitted to Merritt are automatically registered with DataONE for additional replication and federated discovery through the ONEMercury search/browse interface.
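A fixity audit of the kind Merritt lists among its services boils down to periodically recomputing a file's checksum and comparing it with the digest recorded at deposit time. The generic Python sketch below illustrates the idea; it is not Merritt's implementation, and the function names are invented.

```python
import hashlib

def compute_fixity(path: str, algorithm: str = "sha256") -> str:
    """Recompute a file's checksum in chunks (safe for large files)."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(path: str, recorded_digest: str) -> bool:
    """True iff the stored file still matches the deposit-time digest."""
    return compute_fixity(path) == recorded_digest
```

If `audit` returns False, the stored copy has been corrupted or altered, and a repository with storage replication can restore it from a replica.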
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full-texts are indexed linguistically and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitalisation was made from the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice (‘double keying’). To represent the structure of the text, the electronic full-text was encoded in conformity with the XML standard TEI P5. The next stages complete the linguistic analysis, i.e. the text is tokenised, lemmatised, and the parts of speech are annotated. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA Corpus, it also offers valuable source-texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
B2SHARE is a user-friendly, reliable and trustworthy way for researchers, scientific communities and citizen scientists to store and share small-scale research data from diverse contexts and disciplines. B2SHARE adds value to your research data via (domain-tailored) metadata and by assigning citable Persistent Identifiers (PIDs; Handles) to ensure long-lasting access and references. B2SHARE is one of the B2 services developed via EUDAT; long-tail data deposits are free of charge. Special arrangements such as branding and special metadata elements can be made on request.
Phaidra, the whole-university digital asset management system of the Universität Wien with long-term archiving functions, offers the possibility to archive valuable data university-wide with permanent security and systematic input. It provides multilingual access using metadata (data about data), and thus worldwide availability around the clock. As a constant data pool for administration, research and teaching, resources can be used flexibly; continual citability allows the exact location and retrieval of prepared digital objects.
The National Archives of the Netherlands (Nationaal Archief), which is situated in The Hague, holds over 3.5 million records that have been created by the central government, organisations and individuals and are of national significance. Many records relate to the colonial and trading history of the Netherlands in the period from 1600 to 1975. The Dutch presence in countries in North and South America, Africa and Asia is reflected within these collections.
B2SAFE is a robust, safe and highly available service which allows community and departmental repositories to implement data management policies on their research data across multiple administrative domains in a trustworthy manner. It provides an abstraction layer which virtualizes large-scale data resources, guards against data loss in long-term archiving and preservation, optimizes access for users from different regions, and brings data closer to powerful computers for compute-intensive analysis.
The Universidad del Rosario Research Data Repository is an institutional initiative launched in 2019 to preserve, provide access to, and promote the use of data resulting from Universidad del Rosario research projects. The repository aims to consolidate an online, collaborative working space and data-sharing platform to support Universidad del Rosario researchers and their collaborators, and to ensure that research data is available to the community, in order to support further research and contribute to the democratization of knowledge. The repository is the heart of an institutional strategy that seeks to ensure the generation of Findable, Accessible, Interoperable and Reusable (FAIR) data, with the aim of increasing its impact and visibility. This strategy follows the international philosophy of making research data "as open as possible and as closed as necessary", in order to foster the expansion, valuation, acceleration and reusability of scientific research, but at the same time safeguard the privacy of the subjects. The platform stores, preserves and facilitates the management of research data from all disciplines, generated by the researchers of all the schools and faculties of the University, who work together to ensure research with the highest standards of quality and scientific integrity, encouraging innovation for the benefit of society.
Regionaal Archief Alkmaar (RAA) is a joint arrangement that operates within a large region in the province of Noord-Holland. The first purpose of this arrangement is to fulfill the function of a regional knowledge and information center through the acquisition and preservation of a broad collection of historical sources. The second purpose is to make these sources actively available. It does so according to the Dutch Public Records Act (Archiefwet 1995). At the time of writing, the joint arrangement services include 9 municipalities, namely: Alkmaar, Bergen, Castricum, Den Helder, Heiloo, Hollands Kroon, Schagen, Dijk en Waard and Texel. The arrangement also includes other joint arrangements. These are the GGD Hollands Noorden and Veiligheidsregio Noord-Holland Noord. Also, the RAA keeps the archives of the water authority Hoogheemraadschap Hollands Noorderkwartier and its predecessors. This is being done on the basis of a service agreement. Finally many archives of families, persons of interest, companies and non-governmental organizations are being collected and managed. This is a secondary task of the RAA, but these archives are also being managed on the ground of the Dutch Public Records Act.
REDU is the institutional open research data repository of the University of Campinas, Brazil. It contains research data produced by all research groups of the University, in a wide range of scientific domains, indexed with DataCite DOIs. Created at the end of 2020, it is coordinated by a scientific and technical committee composed of data librarians, IT professionals, and scientists representing user groups. Implemented on top of Dataverse, it exports metadata using OAIS. Files with sensitive content (due to ethical or legal constraints) are not stored therein; rather, only their metadata is recorded in REDU, along with contact information so that interested researchers can reach the persons responsible for the files for conditional subsequent access. It is gradually being populated, following the University's Open Science policies.
DBT is the institutional repository of the FSU Jena, the TU Ilmenau and the University of Erfurt; members of the other Thuringian universities and colleges can also publish scientific documents in the DBT. In individual cases, state users (via the ThULB Jena) can also archive documents in the DBT.
This is the KONECT project, a project in the area of network science with the goal of collecting network datasets, analysing them, and making all analyses available online. KONECT stands for Koblenz Network Collection, as the project has its roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as code to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
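KONECT's own analysis toolbox targets GNU Octave, and its datasets are typically distributed as plain edge lists. Purely as an illustration of the simplest statistic such a toolbox computes, here is a degree count over a toy undirected edge list in Python (the data is made up):

```python
from collections import Counter

# Toy undirected edge list: each pair is one edge between two node ids.
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]

# In an undirected network, each edge contributes 1 to both endpoints.
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# The node with the most connections (a hub, in network-science terms).
max_degree_node = max(degree, key=degree.get)
```

Degree distributions like this one are the starting point for the statistics, plots, and link-prediction features the project describes.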
The Arctic Data Center is the primary data and software repository for the Arctic section of NSF Polar Programs. The Center helps the research community to reproducibly preserve and discover all products of NSF-funded research in the Arctic, including data, metadata, software, documents, and provenance that links these together. The repository is open to contributions from NSF Arctic investigators, and data are released under an open license (CC-BY or CC0, depending on the choice of the contributor). All science, engineering, and education research supported by the NSF Arctic research program is included, such as Natural Sciences (Geoscience, Earth Science, Oceanography, Ecology, Atmospheric Science, Biology, etc.) and Social Sciences (Archeology, Anthropology, Social Science, etc.). Key to the initiative is the partnership between NCEAS at UC Santa Barbara, DataONE, and NOAA's NCEI, each of which brings critical capabilities to the Center. Infrastructure from the successful NSF-sponsored DataONE federation of data repositories enables data replication to NCEI, providing both offsite and institutional diversity that are critical to long-term preservation.
The ZBW Digital Long-Term Archive is a dark archive whose sole purpose is to guarantee the long-term availability of the objects stored in it. The storage for the ZBW's digital objects and their representation platforms is maintained by the ZBW division IT-Infrastructures and is not part of the tasks of the group Digital Preservation. The content that the ZBW provides is accessible via special representation platforms. These platforms are: EconStor, an open access publication server for literature on business and economics; the ZBW DIGITAL ARCHIVE, which contains born-digital material from the domains of business and economics, accessible in open access via EconBiz, the ZBW's subject portal for business and economics; National and Alliance Licenses, under which the ZBW negotiates and curates licenses for electronic products on a national level, processed under the framework of the German Research Foundation as well as the Alliance of Science Associations, partly with third-party funding and partly solely funded by the ZBW (a part of these electronic products is already hosted by the ZBW and counts among the items preserved in the digital archive); and the 20th Century Press Archive, a portal with access to archival material consisting of press clippings from newspapers covering the time period from the beginning of the 20th century to the year 1949.