
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
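The operators above follow Lucene-style query-string syntax. As a sketch of how such queries could be composed programmatically (all helper names here are illustrative, not part of any site API):

```python
from typing import Optional

# Small helpers for composing search queries with the operator
# syntax listed above (Lucene-style query strings).

def phrase(text: str, slop: Optional[int] = None) -> str:
    """Quote a phrase; optional ~N sets the allowed slop."""
    q = f'"{text}"'
    return f"{q}~{slop}" if slop is not None else q

def fuzzy(word: str, distance: int = 1) -> str:
    """Append ~N to a word for fuzzy (edit-distance) matching."""
    return f"{word}~{distance}"

def wildcard(prefix: str) -> str:
    """A trailing * enables wildcard (prefix) search."""
    return f"{prefix}*"

def all_of(*parts: str) -> str:
    """AND search (the default), written with +."""
    return "(" + " + ".join(parts) + ")"

def any_of(*parts: str) -> str:
    """OR search, written with |."""
    return "(" + " | ".join(parts) + ")"

def exclude(part: str) -> str:
    """NOT operation, written with a leading -."""
    return f"-{part}"

# Example: repositories about social science data, excluding genealogy
query = all_of(phrase("social science"), wildcard("repositor"), exclude("genealogy"))
print(query)  # ("social science" + repositor* + -genealogy)
```

A query built this way can be pasted directly into the search box; the grouping parentheses make the intended precedence explicit.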
Found 23 result(s)
The centerpiece of the Global Trade Analysis Project is a global database describing bilateral trade patterns, production, consumption, and intermediate use of commodities and services. The GTAP Data Base consists of bilateral trade, transport, and protection matrices that link individual country/regional economic databases. The regional databases are derived from individual country input-output tables, from varying years.
Welcome to the largest bibliographic database dedicated to Economics and freely available on the Internet. This site is part of a large volunteer effort to enhance the free dissemination of research in Economics, RePEc, which includes bibliographic metadata from over 1,800 participating archives, including all the major publishers and research outlets. IDEAS is just one of several services that use RePEc data. Authors are invited to register with RePEc to create an online profile. Then, anyone finding some of your research here can find your latest contact details and a listing of your other research. You will also receive a monthly mailing about the popularity of your works, your ranking, and newly found citations. IDEAS also provides software and publicly accessible data from the Federal Reserve Bank.
The National Sleep Research Resource (NSRR) is an NHLBI-supported repository for sharing large amounts of sleep data (polysomnography, actigraphy and questionnaire-based) from multiple cohorts, clinical trials, and other data sources. Launched in April 2014, the mission of the NSRR is to advance sleep and circadian science by supporting secondary data analysis, algorithmic development, and signal processing through the sharing of high-quality data sets.
Edmond is the institutional repository of the Max Planck Society for public research data. It enables Max Planck scientists to create citable scientific assets by describing, enriching, sharing, exposing, linking, publishing and archiving research data of all kinds. Further on, all objects within Edmond have a unique identifier and therefore can be clearly referenced in publications or reused in other contexts.
Kinsources is an open and interactive platform to archive, share, analyze, and compare kinship data used in scientific research. Kinsources is not just another genealogy website, but a peer-reviewed repository designed for comparative and collaborative research. The aim of Kinsources is to provide kinship studies with a large and solid empirical base. Kinsources combines the functionality of a communal data repository with a toolbox providing researchers with advanced software for analyzing kinship data. The software Puck (Program for the Use and Computation of Kinship data) is integrated in the statistical package and the search engine of the Kinsources website. Kinsources is part of a research perspective that seeks to understand the interaction between genealogy, terminology, and space in the emergence of kinship structures. Hosted by the TGIR HumaNum, the platform ensures both the security of, and free access to, scientific data validated by the research community.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses), embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique because of its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as "Open Services."
The Australian Data Archive (ADA) provides a national service for the collection and preservation of digital research data, and makes these data available for secondary analysis by academic researchers and other users. Data are stored in seven sub-archives: Social Science, Historical, Indigenous, Longitudinal, Qualitative, Crime & Justice, and International. Along with Australian data, ADA International is also a repository for studies by Australian researchers conducted in other countries, particularly throughout the Asia-Pacific region. The ADA International data catalogue includes links to studies from countries including New Zealand, Bangladesh, Cambodia, China, Indonesia, and several other countries. In 2017 the archive systems moved from the existing Nesstar platform to the new ADA Dataverse platform https://dataverse.ada.edu.au/
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data. It is hosting data for projects, archives, researchers, journals, organizations, and institutions.
Databrary is a data library for researchers to share research data and analytical tools with other investigators. It is a web-based repository for open sharing and preservation of video data and associated metadata in the area of behavioral sciences. The project aims to increase openness in scientific research and is dedicated to transforming the culture of science by building a community of researchers and empowering them with an unprecedented set of tools for discovery. Databrary is complemented by Datavyu (an open source video-coding software).
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is the vehicle through which GoMRI is fulfilling this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico Ecosystem.
The Government is releasing public data to help people understand how government works and how policies are made. Some of this data is already available, but data.gov.uk brings it together in one searchable website. Making this data easily available means it will be easier for people to make decisions and suggestions about government policies based on detailed information.
Polish CLARIN node – CLARIN-PL Language Technology Centre – is being built at Wrocław University of Technology. The LTC is addressed to scholars in the humanities and social sciences. Registered users are granted free access to digital language resources and advanced tools to explore them. They can also archive and share their own language data (in written, spoken, video or multimodal form).
The DesignSafe Data Depot Repository (DDR) is the platform for curation and publication of datasets generated in the course of natural hazards research. The DDR is an open access data repository that enables data producers to safely store, share, organize, and describe research data, towards permanent publication, distribution, and impact evaluation. The DDR allows data consumers to discover, search for, access, and reuse published data in an effort to accelerate research discovery. It is a component of the DesignSafe cyberinfrastructure, which represents a comprehensive research environment that provides cloud-based tools to manage, analyze, curate, and publish critical data for research to understand the impacts of natural hazards. DesignSafe is part of the NSF-supported Natural Hazards Engineering Research Infrastructure (NHERI), and aligns with its mission to provide the natural hazards research community with open access, shared-use scholarship, education, and community resources aimed at supporting civil and social infrastructure prior to, during, and following natural disasters. It serves a broad national and international audience of natural hazard researchers (both engineers and social scientists), students, practitioners, policy makers, as well as the general public. It has been in operation since 2016, and also provides access to legacy data dating from about 2005. These legacy data were generated as part of the NSF-supported Network for Earthquake Engineering Simulation (NEES), a predecessor to NHERI. Legacy data and metadata belonging to NEES were transferred to the DDR for continuous preservation and access.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated in today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
The United States Census Bureau (officially the Bureau of the Census, as defined in Title 13 U.S.C. § 11) is the government agency that is responsible for the United States Census. It also gathers other national demographic and economic data. As a part of the United States Department of Commerce, the Census Bureau serves as a leading source of data about America's people and economy. The most visible role of the Census Bureau is to perform the official decennial (every 10 years) count of people living in the U.S. The most important result is the reallocation of the number of seats each state is allowed in the House of Representatives, but the results also affect a range of government programs received by each state. The agency director is a political appointee selected by the President of the United States.
Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets.
The Cornell Center for Social Sciences (CCSS) houses an extensive collection of research data files in the social sciences with particular emphasis on data that matches the interests of Cornell University researchers. CCSS intentionally uses a broad definition of social sciences in recognition of the interdisciplinary nature of Cornell research. CCSS collects and maintains digital research data files in the social sciences, with a current emphasis on Cornell-based social science research, Results Reproduction packages, and potentially at-risk datasets. Our archive historically has focused on a broad range of social science data, including data on demography, economics and labor, political and social behavior, family life, and health. You can search our holdings or browse studies by subject area.
This is the KONECT project, a project in the area of network science with the goal to collect network datasets, analyse them, and make all analyses available online. KONECT stands for Koblenz Network Collection, as the project has roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as code to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed, and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks, and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots, and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.