  • * at the end of a keyword enables wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
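The operators above match the common "simple query string" syntax used by Elasticsearch-backed search pages. As a minimal sketch, the following hypothetical query strings (the search terms themselves are invented examples, not from this page) illustrate each operator:

```python
# Illustrative query strings for the search syntax described above.
# The terms ("soil", "weather", etc.) are hypothetical examples.
queries = {
    "wildcard": "climat*",                    # climate, climatology, ...
    "phrase":   '"research data"',            # exact phrase match
    "and":      "soil + weather",             # both terms (the default)
    "or":       "survey | interview",         # either term
    "not":      "data - genomic",             # exclude a term
    "grouping": "(soil | weather) + africa",  # parentheses set precedence
    "fuzzy":    "archeology~2",               # edit distance of 2
    "slop":     '"social data"~3',            # phrase terms within 3 positions
}

for name, q in queries.items():
    print(f"{name:9s} {q}")
```

Fuzziness (~N after a word) tolerates up to N character edits, which helps with spelling variants; slop (~N after a phrase) allows the quoted words to appear up to N positions apart.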
Found 30 result(s)
The TRR228DB is the project database of the Collaborative Research Centre 228 "Future Rural Africa: Future-making and social-ecological transformation" (CRC/Transregio 228, https://www.crc228.de), funded by the German Research Foundation (DFG, project number 328966760). The project database is a new implementation of the TR32DB and has been online since 2018. It handles all data, including metadata, created by the project participants from several institutions (e.g. the Universities of Cologne and Bonn) and research fields (e.g. anthropology, agroeconomics, ecology, ethnology, geography, politics and soil sciences). The data result from field campaigns, interviews, surveys, remote sensing, laboratory studies and modelling approaches. In addition, outcomes of the scientists such as publications, conference contributions, PhD reports and corresponding images are collected.
Welcome to the largest bibliographic database dedicated to Economics and freely available on the Internet. This site is part of a large volunteer effort to enhance the free dissemination of research in Economics, RePEc, which includes bibliographic metadata from over 1,800 participating archives, including all the major publishers and research outlets. IDEAS is just one of several services that use RePEc data. Authors are invited to register with RePEc to create an online profile. Then, anyone finding some of your research here can find your latest contact details and a listing of your other research. You will also receive a monthly mailing about the popularity of your works, your ranking and newly found citations. Besides that, IDEAS provides software and publicly accessible data from the Federal Reserve Bank.
A research data repository for the education and developmental sciences.
Edmond is the institutional repository of the Max Planck Society for public research data. It enables Max Planck scientists to create citable scientific assets by describing, enriching, sharing, exposing, linking, publishing and archiving research data of all kinds. Furthermore, all objects within Edmond have a unique identifier and can therefore be clearly referenced in publications or reused in other contexts.
ICRISAT performs crop improvement research, using conventional methods as well as methods derived from biotechnology, on the following crops: chickpea, pigeonpea, groundnut, pearl millet, sorghum and small millets. ICRISAT's data repository collects, preserves and facilitates access to the datasets produced by ICRISAT researchers for all interested users. Data include phenotypic, genotypic, social science, spatial, soil and weather data.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses), embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique in its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as "Open Services."
The Humanitarian Data Exchange (HDX) is an open platform for sharing data across crises and organisations. Launched in July 2014, the goal of HDX is to make humanitarian data easy to find and use for analysis. HDX is managed by OCHA's Centre for Humanitarian Data, which is located in The Hague. OCHA is part of the United Nations Secretariat and is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies. The HDX team includes OCHA staff and a number of consultants who are based in North America, Europe and Africa.
GESIS preserves (mainly quantitative) social research data to make it available to the scientific research community. The data is described in a standardized way, secured for the long term, provided with a permanent identifier (DOI), and can be easily found and reused through browser-optimized catalogs (https://search.gesis.org/).
Sikt archives research data on people and society to make sure the data can be shared and made available for reuse. We continuously enrich our data collections to provide a richer basis for research. Sikt's main focus is quantitative data matrices on individuals, organisations, and administrative, political and geographical actors. The archive specialises in survey data, which undergoes extensive curation at the variable level; detailed metadata are produced and published in Norwegian and English.
The purpose of the Social Data Repository (RDS) is to make social data, consisting of data sets and accompanying technical or methodological documentation, available on the Internet. Use of the repository is open to everyone. The repository is operated by the University of Warsaw (Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw). Individual collections in the Social Data Repository are subject to editorial review by the University of Warsaw or collection administrators, under separate rules for a given collection. In particular, the supervising editor for the collection "Archive of Quantitative Social Data" is the Team of the Archive of Quantitative Social Data.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, digitised content, archival records, or images. It serves as the backbone of data curation and, for most of its content, is a "dark archive" without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich's Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. An automated connection to the ETH Data Archive in the background ensures the medium- to long-term preservation of all publications and research data. Direct access to the ETH Data Archive is intended only for customers who need to deposit software source code within the framework of ETH transfer Software Registration. Open Source code packages and other content from legacy workflows can be accessed via ETH Library @ swisscovery (https://library.ethz.ch/en/).
The KPDL covers cultural heritage, scientific and regional collections: digital copies of different forms of publications, including books, journals, graphics, articles, leaflets, posters, playbills, photographs, invitations, maps, and catalogues of exhibitions and trade fairs of the region. The Kujawsko-Pomorska Digital Library is intended to serve scientists, students, schoolchildren and all citizens of the region.
Pandora is an open data platform devoted to the study of the human story. Data may be deposited from various disciplines and research topics that investigate humans from their early beginnings until the present, as well as their environmental context (e.g. archaeology, anthropology, history, ancient DNA, isotopes, zooarchaeology, archaeobotany, and paleoenvironmental and paleoclimatic studies). Pandora allows autonomous data communities to self-manage their webspace and community membership. Data communities self-curate their data and other supporting resources. Datasets may be assigned a new DOI, and schema markup is employed to improve data findability. Pandora also allows links to datasets stored externally that have previously assigned DOIs. Through this, it becomes possible to establish data networks devoted to specific topics that combine datasets stored either within Pandora or externally.
PubData is Leuphana's institutional research data repository for the long-term preservation, documentation and publication of research data from scientific projects. PubData is maintained by Leuphana's Media and Information Centre (MIZ) and is free of charge. The service is primarily aimed at Leuphana employees and additionally at researchers from cooperation partners contractually associated with Leuphana.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. GRIIDC is the vehicle through which GoMRI is fulfilling this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico ecosystem.
Polish CLARIN node – CLARIN-PL Language Technology Centre – is being built at Wrocław University of Technology. The LTC is addressed to scholars in the humanities and social sciences. Registered users are granted free access to digital language resources and advanced tools to explore them. They can also archive and share their own language data (in written, spoken, video or multimodal form).
The DesignSafe Data Depot Repository (DDR) is the platform for curation and publication of datasets generated in the course of natural hazards research. The DDR is an open access data repository that enables data producers to safely store, share, organize, and describe research data, towards permanent publication, distribution, and impact evaluation. The DDR allows data consumers to discover, search for, access, and reuse published data in an effort to accelerate research discovery. It is a component of the DesignSafe cyberinfrastructure, which represents a comprehensive research environment that provides cloud-based tools to manage, analyze, curate, and publish critical data for research to understand the impacts of natural hazards. DesignSafe is part of the NSF-supported Natural Hazards Engineering Research Infrastructure (NHERI), and aligns with its mission to provide the natural hazards research community with open access, shared-use scholarship, education, and community resources aimed at supporting civil and social infrastructure prior to, during, and following natural disasters. It serves a broad national and international audience of natural hazard researchers (both engineers and social scientists), students, practitioners, policy makers, as well as the general public. It has been in operation since 2016, and also provides access to legacy data dating from about 2005. These legacy data were generated as part of the NSF-supported Network for Earthquake Engineering Simulation (NEES), a predecessor to NHERI. Legacy data and metadata belonging to NEES were transferred to the DDR for continuous preservation and access.
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices. Document and archive studies: move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members; web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or simply forgetting where you put them. Share and find materials: with a click, make study materials public so that other researchers can find, use and cite them, and find materials by other researchers to avoid reinventing something that already exists. Detail individual contribution: assign citable contributor credit for any research material - tools, analysis scripts, methods, measures, data. Increase transparency: make as much of the scientific workflow public as desired, as it is developed or after publication of reports, and find public projects here. Registration: registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle, such as manuscript submission or the onset of data collection; discover public registrations here. Manage scientific workflow: a structured, flexible system can provide efficiency gains to workflow and clarity to project objectives.