  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount (example queries are sketched below)
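A few illustrative query strings combining the operators above; the search terms are made up for demonstration and do not refer to actual records:

    # Illustrative query strings for the search syntax described above.
    queries = [
        'climat*',                      # wildcard: climate, climatology, ...
        '"language data"',              # phrase search
        'genome + sequencing',          # AND (also the default between terms)
        'physics | chemistry',          # OR
        'data - software',              # NOT: match data, exclude software
        '(genome | proteome) + human',  # parentheses set precedence
        'linguistcs~2',                 # fuzzy match within edit distance 2
        '"open data"~3',                # phrase with a slop of 3
    ]
    for q in queries:
        print(q)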
Found 18 result(s)
The Eurac Research CLARIN Centre (ERCC) is a dedicated repository for language data. It is hosted by the Institute for Applied Linguistics (IAL) at Eurac Research, a private research centre based in Bolzano, South Tyrol. The Centre is part of the Europe-wide CLARIN infrastructure, which means that it follows well-defined international standards for (meta)data and procedures and is well-embedded in the wider European Linguistics infrastructure. The repository hosts data collected at the IAL, but is also open for data deposits from external collaborators.
Stanford Network Analysis Platform (SNAP) is a general-purpose network analysis and graph mining library. It is written in C++ and easily scales to massive networks with hundreds of millions of nodes and billions of edges. It efficiently manipulates large graphs, calculates structural properties, generates regular and random graphs, and supports attributes on nodes and edges. SNAP is also available through NodeXL, a graphical front-end that integrates network analysis into Microsoft Office and Excel. The SNAP library has been actively developed since 2004 and is growing organically as a result of our research on large social and information networks. The largest network we have analyzed so far with the library is the Microsoft Instant Messenger network from 2006, with 240 million nodes and 1.3 billion edges. The datasets available on the website were mostly collected (scraped) for the purposes of our research. The website was launched in July 2009.
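As a rough sketch of the kind of analysis SNAP supports, the following uses the snap.py Python bindings (assumed here to be installed as the snap-stanford package); the randomly generated graph stands in for a real dataset:

    import snap

    # Generate a random undirected Erdos-Renyi graph: 1000 nodes, 5000 edges.
    G = snap.GenRndGnm(snap.PUNGraph, 1000, 5000)

    # Basic structural properties.
    print("nodes:", G.GetNodes(), "edges:", G.GetEdges())
    print("average clustering coefficient:", snap.GetClustCf(G))
    print("approximate diameter:", snap.GetBfsFullDiam(G, 100))

    # Per-node statistics via the node iterator, e.g. the maximum degree.
    print("max degree:", max(NI.GetDeg() for NI in G.Nodes()))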
The Alternative Fuels Data Center (AFDC) is a comprehensive clearinghouse of information about advanced transportation technologies. The AFDC offers transportation decision makers unbiased information, data, and tools related to the deployment of alternative fuels and advanced vehicles. The AFDC launched in 1991 in response to the Alternative Motor Fuels Act of 1988 and the Clean Air Act Amendments of 1990. It originally served as a repository for alternative fuel performance data. The AFDC has since evolved to offer a broad array of information resources that support efforts to reduce petroleum use in transportation. The AFDC serves Clean Cities stakeholders, fleets regulated by the Energy Policy Act, businesses, policymakers, government agencies, and the general public.
>>>!!!<<< 2018-01-18: neither data nor programs can be found >>>!!!<<< These archives contain public-domain programs for calculations in physics, as well as other programs we expect to be useful in computing work. Physical constants and experimental or theoretical data, such as cross sections, rate constants and swarm parameters, that are needed for physical calculations are stored here, too. The programs are mainly written for IBM PC-compatible computers; programs that do not use graphics can also be run on other computers, while in the remaining cases the graphics parts need to be reprogrammed.
To meet Peking University's research data management needs, the PKU Library cooperates with the NSFC-PKU Data Center for Management Science, the PKU Science and Research Department, and the PKU Social Sciences Department to jointly launch the Peking University Open Research Data Platform. PKU Open Research Data provides preservation, management and distribution services for research data. It encourages data owners to share data and data users to reuse it.
Academic Torrents is a distributed data repository. The academic torrents network is built for researchers, by researchers. Its distributed peer-to-peer library system automatically replicates your datasets on many servers, so you don't have to worry about managing your own servers or file availability. Everyone who has data becomes a mirror for those data so the system is fault-tolerant.
The UA Campus Repository is an institutional repository that facilitates access to the research, creative works, publications and teaching materials of the University by collecting, sharing and archiving content selected and deposited by faculty, researchers, staff and affiliated contributors.
sciencedata.dk is a research data store provided by DTU, the Technical University of Denmark, aimed specifically at researchers and scientists at Danish academic institutions. The service is intended for working with and sharing active research data as well as for safekeeping of large datasets, and it allows both private sharing and sharing via links / persistent URLs. The data can be accessed and manipulated via a web interface, synchronization clients, file transfer clients or the command line. The service is built from the ground up on open-source software: FreeBSD, ZFS, Apache, PHP, ownCloud/Nextcloud. DTU is actively engaged in community efforts to develop research-specific functionality for data stores. Our servers are attached directly to the 10-gigabit backbone of "Forskningsnettet" (the National Research and Education Network of Denmark), which means that upload and download speeds from Danish academic institutions are in principle comparable to those of an external USB hard drive.
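As a purely hypothetical illustration of command-line style access, the sketch below uploads a file over WebDAV using Python's requests library; the endpoint path, filename and credentials are assumptions based on the ownCloud/Nextcloud stack mentioned above, not documented service details:

    import requests

    # Hypothetical WebDAV upload to sciencedata.dk; the path and the
    # authentication scheme are assumptions - consult the service
    # documentation for the actual endpoint.
    url = "https://sciencedata.dk/files/example_dataset.csv"

    with open("example_dataset.csv", "rb") as fh:
        resp = requests.put(url, data=fh, auth=("username", "password"))
    print(resp.status_code)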
The Unidata community of over 260 universities is building a system for disseminating near real-time earth observations via the Internet. Unlike other systems, which are based on data centers where the information can be accessed, the Unidata IDD is designed so a university can request that certain data sets be delivered to computers at their site as soon as they are available from the observing system. The IDD system also allows any site with access to specialized observations to inject the dataset into the IDD for delivery to other interested sites.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and customize energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts, and tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: Coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; Providing external access to technical products and data published by NETL-affiliated research teams; Collaborating with a variety of organizations and institutions in a secure environment through EDX's Collaborative Workspaces.
This repository aims to be a location from which a wide variety of well-analysed IFC-based data files can be sourced. It is planned that, over time, the number of data files will expand to provide significant coverage of the major aspects that need to be tested for interoperability.
For datasets from individual researchers or research groups affiliated with Stockholm University who do not want to set up a separate Dataverse for a project or institution. Metadata provisions for Geospatial, Social Science, Humanities, Astronomy, Astrophysics, Life Sciences and Journals (all optional, by choice) are included. Data curation help from Stockholm University Library is possible on request.
This is the KONECT project, a project in the area of network science whose goal is to collect network datasets, analyse them, and make all analyses available online. KONECT stands for Koblenz Network Collection, as the project has its roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as the code used to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
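As a small sketch of working with such datasets, the snippet below reads a KONECT-style edge list and reports basic statistics; it assumes the common KONECT layout ('%'-prefixed comment lines followed by one whitespace-separated edge per line), and the filename is illustrative:

    from collections import defaultdict

    def read_edges(path):
        # Read a KONECT-style edge list, skipping comment lines and any
        # extra columns (weights, timestamps).
        edges = []
        with open(path) as fh:
            for line in fh:
                if line.startswith("%") or not line.strip():
                    continue
                u, v = line.split()[:2]
                edges.append((int(u), int(v)))
        return edges

    edges = read_edges("out.example-network")  # illustrative filename
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    print("nodes:", len(degree), "edges:", len(edges))
    print("max degree:", max(degree.values()))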
Research Data Repository of the Instituto Federal Goiano - Campus Urutaí, a Brazilian public institution of the Ministry of Education. The project is an initiative of the Directorate of Post-Graduate Studies, Research and Innovation of the Federal Institute of Goiás - Campus Urutaí and follows the philosophy of Open Science, with the aim of expanding and adding value to scientific research. It provides data from technical-scientific observations and experiments while ensuring that their authors, researchers and students receive all the credit they deserve as the agents who generated the data. At the same time, appropriate reuse of the data is envisaged, whether in didactic-pedagogical activities or in new research.