  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) signify precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
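As a rough illustration, here is a minimal sketch (in Python) of how these operators might be combined into a single query string and URL-encoded for submission; the endpoint URL and parameter name are hypothetical placeholders, not part of the registry's documented API.

  # Minimal sketch, assuming only the operator syntax listed above.
  # The endpoint and parameter name are hypothetical placeholders.
  from urllib.parse import urlencode

  # wildcard, phrase with slop, grouping, OR, AND, NOT, and fuzziness combined
  query = '(corpus* | "research data"~2) + repositor* - biology~1'

  print("https://example.org/search?" + urlencode({"q": query}))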
Found 18 result(s)
The KiezDeutsch-Korpus (KiDKo) was developed by project B6 (PI: Heike Wiese) of the collaborative research centre Information Structure (SFB 632) at the University of Potsdam from 2008 to 2015. KiDKo is a multi-modal digital corpus of spontaneous discourse data from informal, oral peer group situations in multi- and monoethnic speech communities. KiDKo contains audio data from self-recordings, with aligned transcriptions (i.e., at every point in a transcript, one can access the corresponding area in the audio file). The corpus provides part-of-speech tags as well as an orthographically normalised layer (Rehbein & Schalowski 2013). Another annotation level provides information on syntactic chunks and topological fields. There are several complementary corpora: KiDKo/E (Einstellungen - "attitudes") captures spontaneous data from the public discussion on Kiezdeutsch: it assembles emails and readers' comments posted in reaction to media reports on Kiezdeutsch. By doing so, KiDKo/E provides data on language attitudes, language perceptions, and language ideologies, which became apparent in the context of the debate on Kiezdeutsch, but which frequently related to such broader domains as multilingualism, standard language, language prestige, and social class. KiDKo/LL ("Linguistic Landscape") assembles photos of written language productions in public space from the context of Kiezdeutsch, for instance love notes on walls, park benches, and playgrounds, graffiti in house entrances, and scribbled messages on toilet walls. The corpus contains materials in the following languages: Spanish, Italian, Greek, Kurdish, Swedish, French, Croatian, Arabic, Turkish. It is available online via the Hamburger Zentrum für Sprachkorpora (HZSK): https://corpora.uni-hamburg.de/secure/annis-switch.php?instance=kidko
Avibase is an extensive database information system about all birds of the world, containing over 60 million records about 10,000 species and 22,000 subspecies of birds, including distribution information, taxonomy, synonyms in several languages and more. This site is managed by Denis Lepage and hosted by Bird Studies Canada, the Canadian co-partner of BirdLife International. Avibase has been a work in progress since 1992 and I am now pleased to offer it as a service to the bird-watching and scientific community.
The EUDAT project aims to contribute to the production of a Collaborative Data Infrastructure (CDI). The project's target is to provide a pan-European solution to the challenge of data proliferation in Europe's scientific and research communities. The EUDAT vision is to support a Collaborative Data Infrastructure which will allow researchers to share data within and between communities and enable them to carry out their research effectively. EUDAT aims to provide a solution that will be affordable, trustworthy, robust, persistent and easy to use. EUDAT comprises 26 European partners, including data centres, technology providers, research communities and funding agencies from 13 countries. B2FIND is the EUDAT metadata service allowing users to discover what kind of data is stored through the B2SAFE and B2SHARE services, which collect a large number of datasets from various disciplines. EUDAT will also harvest metadata from communities that have stable metadata providers to create a comprehensive joint catalogue to help researchers find interesting data objects and collections.
The Common Research Data Repository (Deposita Dados) is a database for archiving, publishing, disseminating, preserving and sharing digital research data. Its mission is to promote, support and facilitate the adoption of open access to the datasets of Brazilian researchers linked to scientific institutions that do not yet have their own research data repositories, and/or of Brazilian researchers who have produced their datasets through scientific collaboration with foreign teaching and research institutions.
Kaggle is a platform for predictive modelling and analytics competitions in which statisticians and data miners compete to produce the best models for predicting and describing the datasets uploaded by companies and users. This crowdsourcing approach relies on the fact that there are countless strategies that can be applied to any predictive modelling task and it is impossible to know beforehand which technique or analyst will be most effective.
The Goethe University Data Repository (GUDe) provides a platform for its members to electronically archive, share, and publish their research data. GUDe is jointly operated by the University Library and the University Data Center of the Goethe University. The metadata of all public content is freely available and indexed by search engines as well as scientific web services. GUDe follows the FAIR principles for long-term accessibility (minimum 10 years), allows for reliable citation via DOIs as well as cooperative access to non-public data and operates on DSpace-CRIS v7. If you have any questions regarding the use of GUDe, please consult the user documentation.
B2SHARE is a user-friendly, reliable and trustworthy way for researchers, scientific communities and citizen scientists to store and share small-scale research data from diverse contexts and disciplines. B2SHARE adds value to your research data via (domain-tailored) metadata and by assigning citable Persistent Identifiers (PIDs, Handles) to ensure long-lasting access and references. B2SHARE is one of the B2 services developed via EUDAT; long-tail data deposits are free of charge. Special arrangements such as branding and special metadata elements can be made on request.
The CONP portal is a web interface for the Canadian Open Neuroscience Platform (CONP) to facilitate open science in the neuroscience community. CONP simplifies global researcher access and sharing of datasets and tools. The portal internalizes the cycle of a typical research project: starting with data acquisition, followed by processing using already existing/published tools, and ultimately publication of the obtained results including a link to the original dataset. For more information on CONP, please visit https://conp.ca
Arca Data is Fiocruz's official repository for archiving, publishing, disseminating, preserving and sharing digital research data produced by the Fiocruz community or in partnership with other research institutes or bodies, with the aim of promoting new research, ensuring the reproducibility or replicability of existing research and promoting Open and Citizen Science. Its objective is to stimulate the wide circulation of scientific knowledge, strengthening the institutional commitment to Open Science and free access to health information, providing transparency, and fostering collaboration between researchers, educators, academics, managers and graduate students for the advancement of knowledge and the creation of solutions that meet the demands of society.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and customize energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and the tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX's Collaborative Workspaces.
APID Interactomes is a database that provides a comprehensive collection of protein interactomes for more than 400 organisms based on the integration of known experimentally validated protein-protein physical interactions (PPIs). Construction of the interactomes is done with a methodological approach to report quality levels and coverage over the proteomes for each organism included. In this way, APID provides interactomes from specific organisms that in 25 cases have more than 500 proteins. As a whole, APID includes a comprehensive compendium of 90,379 distinct proteins and 678,441 singular interactions. The analytical and integrative effort done in APID unifies PPIs from primary databases of molecular interactions (BIND, BioGRID, DIP, HPRD, IntAct, MINT) and also from experimentally resolved 3D structures (PDB) where more than two distinct proteins have been identified. In this way, 8,388 structures have been analyzed to find specific protein-protein interactions reported with details of their molecular interfaces. APID also includes a new data visualization web-tool that allows the construction of sub-interactomes using query lists of proteins of interest and the visual exploration of the corresponding networks, including an interactive selection of the properties of the interactions (i.e. the reliability of the "edges" in the network) and an interactive mapping of the functional environment of the proteins (i.e. the functional annotations of the "nodes" in the network).
Trove helps you find and use resources relating to Australia. It’s more than a search engine. Trove brings together content from libraries, museums, archives and other research organisations and gives you tools to explore and build.
This is the KONECT project, a project in the area of network science with the goal of collecting network datasets, analysing them, and making all analyses available online. KONECT stands for Koblenz Network Collection, as the project has roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as code to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
The Universidad del Rosario Research data repository is an institutional initiative launched in 2019 to preserve, provide access to, and promote the use of data resulting from Universidad del Rosario research projects. The Repository aims to consolidate an online, collaborative working space and data-sharing platform to support Universidad del Rosario researchers and their collaborators, and to ensure that research data is available to the community in order to support further research and contribute to the democratization of knowledge. The Research data repository is the heart of an institutional strategy that seeks to ensure the generation of Findable, Accessible, Interoperable and Reusable (FAIR) data, with the aim of increasing its impact and visibility. This strategy follows the international philosophy of making research data “as open as possible and as closed as necessary”, in order to foster the expansion, valuation, acceleration and reusability of scientific research while at the same time safeguarding the privacy of the subjects. The platform stores, preserves and facilitates the management of research data from all disciplines, generated by researchers across all the schools and faculties of the University, who work together to ensure research with the highest standards of quality and scientific integrity, encouraging innovation for the benefit of society.