  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) indicate grouping priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
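As a brief, hypothetical illustration of how these operators combine (the search terms below are made up, and the exact semantics depend on the search backend), a few example queries written with this syntax:

    # Example query strings using the operators listed above.
    # Terms are placeholders; exact behaviour depends on the search backend.
    example_queries = [
        'climat*',                                  # wildcard suffix
        '"research data"',                          # exact phrase
        'repository + genomics',                    # AND (also the default)
        'archive | repository',                     # OR
        'repository - software',                    # NOT
        '(archive | repository) + "open access"',   # parentheses set priority
        'metdata~2',                                # fuzzy term, edit distance 2
        '"data repository"~3',                      # phrase with slop 3
    ]
    for query in example_queries:
        print(query)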
Found 77 result(s)
USHIK was archived because some of the metadata are maintained by other sites and there is no need for duplication. The USHIK metadata registry was a neutral repository of metadata from an authoritative source, used to promote interoperability and reuse of data. The registry did not attempt to change the metadata content but rather provided a structured way to view data for the technical or casual user. For complete information see: https://www.ahrq.gov/data/ushik.html
The National Center for Education Statistics (NCES) is responsible for collecting and analyzing data related to education, including assessments of student performance from early childhood through secondary education, adult literacy, and post-secondary education surveys. Users can access data on public and private schools as well as public libraries, and a college navigator tool containing information on over 7,000 post-secondary institutions.
The Social Computing Data Repository hosts data from a collection of many different social media sites, most of which have blogging capacity. Some of the prominent social media sites included in this repository are BlogCatalog, Twitter, MyBlogLog, Digg, StumbleUpon, del.icio.us, MySpace, LiveJournal, The Unofficial Apple Weblog (TUAW), Reddit, and others. The repository contains various facets of blog data, including blog site metadata such as user-defined tags, predefined categories and blog site descriptions; blog post-level metadata such as user-defined tags and date and time of posting; blog posts; blog post mood (defined as the blogger's emotions when writing the post); blogger names; blog post comments; and blogger social networks.
DEPOD - the human DEPhOsphorylation Database (version 1.1) is a manually curated database collecting active human phosphatases, their experimentally verified protein and non-protein substrates, dephosphorylation site information, and the pathways in which they are involved. It also provides links to popular kinase databases and protein-protein interaction databases for these phosphatases and substrates. DEPOD aims to be a valuable resource for studying human phosphatases and their substrate specificities and molecular mechanisms; for phosphatase-targeted drug discovery and development; for connecting phosphatases with kinases through their common substrates; and for completing the human phosphorylation/dephosphorylation network.
Entry will be updated within the next weeks. In the meantime, some information is available at https://www.klimadiagramme.de/ and https://www.klimadiagramme.de/Europa/Karlsruhe/ka_klima.htm. The repository provides daily averages, maxima and minima, and monthly sums of precipitation for about 70 German stations, with an archive going back to 2008.
B2SAFE is a robust, safe and highly available service which allows community and departmental repositories to implement data management policies on their research data across multiple administrative domains in a trustworthy manner. It provides an abstraction layer which virtualizes large-scale data resources, guards against data loss in long-term archiving and preservation, optimizes access for users from different regions, and brings data closer to powerful computers for compute-intensive analysis.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and address custom energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and the tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX’s Collaborative Workspaces.
APID Interactomes is a database that provides a comprehensive collection of protein interactomes for more than 400 organisms, based on the integration of known, experimentally validated protein-protein physical interactions (PPIs). Construction of the interactomes follows a methodological approach that reports quality levels and coverage over the proteomes for each organism included. In this way, APID provides interactomes for specific organisms that in 25 cases have more than 500 proteins. As a whole, APID includes a comprehensive compendium of 90,379 distinct proteins and 678,441 singular interactions. The analytical and integrative effort done in APID unifies PPIs from primary databases of molecular interactions (BIND, BioGRID, DIP, HPRD, IntAct, MINT) and also from experimentally resolved 3D structures (PDB) where more than two distinct proteins have been identified. In this way, 8,388 structures have been analyzed to find specific protein-protein interactions reported with details of their molecular interfaces. APID also includes a new data visualization web tool that allows the construction of sub-interactomes using query lists of proteins of interest and the visual exploration of the corresponding networks, including an interactive selection of the properties of the interactions (i.e. the reliability of the "edges" in the network) and an interactive mapping of the functional environment of the proteins (i.e. the functional annotations of the "nodes" in the network).
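The sub-interactomes APID lets users build are, in effect, small graphs whose nodes carry functional annotations and whose edges carry reliability information. A minimal sketch of that data structure, assuming Python with networkx and using made-up protein identifiers, annotations and evidence counts rather than actual APID records:

    # Sketch of a protein-protein interaction sub-network of the kind APID
    # lets users build from a query list of proteins. Identifiers, annotations
    # and reliability scores below are placeholders, not actual APID data.
    import networkx as nx

    g = nx.Graph()
    # nodes: proteins, annotated with (hypothetical) functional terms
    g.add_node("P1", go_terms=["kinase activity"])
    g.add_node("P2", go_terms=["phosphatase activity"])
    g.add_node("P3", go_terms=["scaffold"])
    # edges: physical interactions, weighted by a reliability score
    # (e.g. number of experimental evidences backing the interaction)
    g.add_edge("P1", "P2", evidences=3)
    g.add_edge("P2", "P3", evidences=1)

    # keep only interactions supported by at least two experiments
    reliable = [(u, v) for u, v, d in g.edges(data=True) if d["evidences"] >= 2]
    print(reliable)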
Regionaal Archief Alkmaar (RAA) is a joint arrangement that operates within a large region in the province of Noord-Holland. The first purpose of this arrangement is to fulfill the function of a regional knowledge and information center through the acquisition and preservation of a broad collection of historical sources. The second purpose is to make these sources actively available. It does so in accordance with the Dutch Public Records Act (Archiefwet 1995). At the time of writing, the joint arrangement serves nine municipalities: Alkmaar, Bergen, Castricum, Den Helder, Heiloo, Hollands Kroon, Schagen, Dijk en Waard and Texel. The arrangement also includes other joint arrangements, namely the GGD Hollands Noorden and Veiligheidsregio Noord-Holland Noord. The RAA also keeps the archives of the water authority Hoogheemraadschap Hollands Noorderkwartier and its predecessors, on the basis of a service agreement. Finally, many archives of families, persons of interest, companies and non-governmental organizations are collected and managed. This is a secondary task of the RAA, but these archives are also managed under the Dutch Public Records Act.
ZENODO builds and operates a simple and innovative service that enables researchers, scientists, EU projects and institutions to share and showcase multidisciplinary research results (data and publications) that are not part of the existing institutional or subject-based repositories of the research communities. ZENODO enables researchers, scientists, EU projects and institutions to: easily share the long tail of small research results in a wide variety of formats, including text, spreadsheets, audio, video and images, across all fields of science; display their research results and get credited by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission; and easily access and reuse shared research results.
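ZENODO records can also be retrieved programmatically through its public REST API, which the entry above does not cover. A hedged sketch, assuming the /api/records search endpoint and its usual JSON layout; check the current Zenodo API documentation before relying on either:

    # Hedged illustration: searching Zenodo records via its public REST API.
    # Endpoint and response layout (hits -> hits -> metadata) are assumptions;
    # consult the current Zenodo API docs before use.
    import requests

    resp = requests.get(
        "https://zenodo.org/api/records",
        params={"q": "research data management", "size": 5},
        timeout=30,
    )
    resp.raise_for_status()
    for hit in resp.json().get("hits", {}).get("hits", []):
        meta = hit.get("metadata", {})
        print(meta.get("title"), "-", hit.get("doi", "no DOI"))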
Trove helps you find and use resources relating to Australia. It’s more than a search engine. Trove brings together content from libraries, museums, archives and other research organisations and gives you tools to explore and build.
York Digital Library (YODL) is a University-wide Digital Library service for multimedia resources used in or created through teaching, research and study at the University of York. YODL complements the University's research publications, held in White Rose Research Online and PURE, and the digital teaching materials in the University's Yorkshare Virtual Learning Environment. YODL contains a range of collections, including images, past exam papers, masters dissertations and audio. Some of these are available only to members of the University of York, whilst other material is available to the public. YODL is expanding, with more content being added all the time.
The United States Census Bureau (officially the Bureau of the Census, as defined in Title 13 U.S.C. § 11) is the government agency that is responsible for the United States Census. It also gathers other national demographic and economic data. As a part of the United States Department of Commerce, the Census Bureau serves as a leading source of data about America's people and economy. The most visible role of the Census Bureau is to perform the official decennial (every 10 years) count of people living in the U.S. The most important result is the reallocation of the number of seats each state is allowed in the House of Representatives, but the results also affect a range of government programs received by each state. The agency director is a political appointee selected by the President of the United States.
DBT is the institutional repository of the FSU Jena, the TU Ilmenau and the University of Erfurt; members of the other Thuringian universities and colleges can also publish scientific documents in the DBT. In individual cases, users from the state of Thuringia can also archive documents in the DBT (via the ThULB Jena).
The Integrated Catalogue (InK) of the Mediathek of the Basel Academy of Art and Design (Hochschule für Gestaltung und Kunst Basel, HGK) hosts, collects, archives and makes available digital resources of the HGK and its digital special collections. It is available both to members of the University of Applied Sciences and Arts Northwestern Switzerland (Fachhochschule Nordwestschweiz, FHNW), to which the HGK belongs, and to the general public. In addition to data for internal university use (login area), there is a large amount of unrestricted, freely accessible content. The thematic focus is on contemporary art and design, art and design research, and topics related to the HGK. The sources cover a wide range of media: in addition to theses and PDF-based documents, there are cluster objects, which assign several image, video, audio and/or text files to a defined data set. The InK serves as an institutional repository for research data management and as a platform for hybrid publications.
This is the KONECT project, a project in the area of network science with the goal of collecting network datasets, analysing them, and making all analyses available online. KONECT stands for Koblenz Network Collection, as the project has its roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as code to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
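As a rough illustration of the kind of statistics KONECT reports for each dataset, the sketch below loads a plain edge list and computes node count, edge count, average degree and average clustering. The file name and comment character are placeholders, and KONECT's actual download format may differ:

    # Hedged sketch: simple network statistics for an undirected edge list
    # (one "source target" pair per line). File name and format are assumed.
    import networkx as nx

    g = nx.read_edgelist("example_network.tsv", comments="%", nodetype=int)

    print("nodes:", g.number_of_nodes())
    print("edges:", g.number_of_edges())
    print("average degree:", 2 * g.number_of_edges() / g.number_of_nodes())
    print("average clustering coefficient:", nx.average_clustering(g))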
California Digital Library (CDL) seeks to be a catalyst for deeply collaborative solutions providing a rich, intuitive and seamless environment for publishing, sharing and preserving our scholars’ increasingly diverse outputs, as well as for acquiring and accessing information critical to the University of California’s scholarly enterprise. University of California Curation Center (UC3) is the digital curation program within CDL. The mission of UC3 is to provide transformative preservation, curation, and research data management systems, services, and initiatives that sustain and promote open scholarship.
The Arctic Data Center is the primary data and software repository for the Arctic section of NSF Polar Programs. The Center helps the research community to reproducibly preserve and discover all products of NSF-funded research in the Arctic, including data, metadata, software, documents, and provenance that links these together. The repository is open to contributions from NSF Arctic investigators, and data are released under an open license (CC-BY or CC0, depending on the choice of the contributor). All science, engineering, and education research supported by the NSF Arctic research program is included, such as Natural Sciences (Geoscience, Earth Science, Oceanography, Ecology, Atmospheric Science, Biology, etc.) and Social Sciences (Archeology, Anthropology, Social Science, etc.). Key to the initiative is the partnership between NCEAS at UC Santa Barbara, DataONE, and NOAA’s NCEI, each of which brings critical capabilities to the Center. Infrastructure from the successful NSF-sponsored DataONE federation of data repositories enables data replication to NCEI, providing both the offsite and institutional diversity that are critical to long-term preservation.
The ZBW Digital Long-Term Archive is a dark archive whose sole purpose is to guarantee the long term availability of the objects stored in it. The storage for the ZBW’s digital objects and their representation platforms is maintained by the ZBW division IT-Infrastructures and is not part of the tasks of the group Digital Preservation. The content that the ZBW provides is accessible via special representation platforms. The special representation platforms are: EconStor: an open access publication server for literature on business and economics. ZBW DIGITAL ARCHIVE: it contains born digital material from the domains of business and economics. The content of this archive is accessible in open access via EconBiz, the subject portal for business and economics of the ZBW. National and Alliance Licenses: the ZBW negotiates and curates licenses for electronic products on a national level. This is processed under the framework of the German Research Foundation as well as the Alliance of Science Associations, partly with third party funding, partly solely funded by the ZBW. A part of these electronic products is already hosted by the ZBW and counts among the items that are preserved in the digital archive. 20th Century Press Archive: a portal with access to archival material consisting of press clippings from newspapers covering the time period from the beginning of the 20th century to the year 1949.