  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
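The operators above can be combined in a single query. As an illustration (the search terms below are hypothetical examples, not actual repository names), a few sample queries and what they match:

```
climate*                     wildcard: matches climate, climatology, ...
"water quality"              exact phrase
ocean + data                 AND (the default): both terms must occur
ocean | marine               OR: either term may occur
data - genomics              NOT: excludes results containing genomics
(ocean | marine) + archive   parentheses set precedence
oceon~1                      fuzzy term: within edit distance 1
"marine data"~2              phrase match with slop up to 2 words
```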
Found 66 result(s)
virus mentha archives evidence about viral interactions collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that adhere to the IMEx consortium. virus mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles; however, no single database has sufficient literature coverage to offer a complete resource for investigating "the interactome". virus mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns each interaction a reliability score that takes into account all the supporting evidence. virus mentha offers direct access to viral families such as Orthomyxoviridae, Orthoretrovirinae and Herpesviridae, and it offers the unique possibility of searching by host organism. The website and the graphical application are designed to make the data stored in virus mentha accessible and analysable to all users. virus mentha supersedes VirusMINT. The source databases are MINT, DIP, IntAct, MatrixDB and BioGRID.
DataStream is an open access platform for sharing information on freshwater health. It currently allows users to access, visualize, and download full water quality datasets collected by Indigenous Nations, community groups, researchers and governments throughout five regional hubs: Atlantic Canada, the Great Lakes and Saint Lawrence region, the Lake Winnipeg Basin, the Mackenzie River Basin and the Pacific region. DataStream was developed by The Gordon Foundation and is carried out in collaboration with regional monitoring networks.
<<<!!!<<< Entry will be updated within the next weeks. --- In the meantime, look for some information at: https://www.klimadiagramme.de/ and https://www.klimadiagramme.de/Europa/Karlsruhe/ka_klima.htm >>>!!!>>> Wetter, Wolken, Klima is a collection of current and archived climate data for Germany since 2004. Based at the KIT Meteorological Institute, it includes special cloud images from Karlsruhe, current weather records from 70 German stations, average snowfall and precipitation for Germany, worldwide weather warnings with an archive, worldwide satellite images, current worldwide weather radar, analyses and forecasts, and precipitation rates for Baden-Württemberg.
As the third World Data Center for oceanography, following WDC-A in the United States and WDC-B in Russia, WDC-D for Oceanography has long-term, stable sources of domestic marine basic data. The State Oceanic Administration holds long-term observations obtained from fixed coastal ocean stations, offshore and oceanic research vessels, and moored and drifting buoys. More and more marine data have become available from Chinese-foreign marine cooperative surveys, analysis and measurement of laboratory samples, reception by the satellite ground station, aerial telemetry and remote sensing, the GOOS program, global ships-of-opportunity reports, etc. Further marine data are being and will be obtained from the ongoing “863” program, one of the state key projects of the Ninth Five-Year Plan, and from the Seasat No. 1, which is scheduled to be launched next year. Through many years’ effort, WDC-D for Oceanography has established formal marine data exchange relationships with over 130 marine institutions in more than 60 countries and maintains close data exchange relationships with over 30 major national oceanographic data centers. The established China Oceanic Information Network has joined the international marine data exchange system via the Internet. Through these channels, a large amount of data has been acquired through international exchange, which, together with the marine data collected at home over many years, gives WDC-D for Oceanography more than 100 years of global marine data amounting to over 10 billion bytes. In the meantime, a vast amount of work has been done on the standardized and normalized processing and management of the data, and a series of national and professional standards have been formulated and implemented successively; further standards and norms are being formulated as required.
The miRBase database is a searchable database of published miRNA sequences and annotation. Each entry in the miRBase Sequence database represents a predicted hairpin portion of a miRNA transcript (termed mir in the database), with information on the location and sequence of the mature miRNA sequence (termed miR). Both hairpin and mature sequences are available for searching and browsing, and entries can also be retrieved by name, keyword, references and annotation. All sequence and annotation data are also available for download. The miRBase Registry provides miRNA gene hunters with unique names for novel miRNA genes prior to publication of results.
B2SAFE is a robust, safe and highly available service that allows community and departmental repositories to implement data management policies on their research data across multiple administrative domains in a trustworthy manner. It is a solution to: provide an abstraction layer that virtualizes large-scale data resources; guard against data loss in long-term archiving and preservation; optimize access for users from different regions; and bring data closer to powerful computers for compute-intensive analysis.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance research and address energy-related needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by: coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and collaborating with a variety of organizations and institutions in a secure environment through EDX’s Collaborative Workspaces.
The Alvin Frame-Grabber system provides the NDSF community on-line access to Alvin's video imagery co-registered with vehicle navigation and attitude data for shipboard analysis, planning deep submergence research cruises, and synoptic review of data post-cruise. The system is built upon the methodology and technology developed for the JasonII Virtual Control Van and a prototype system that was deployed on 13 Alvin dives in the East Pacific Rise and the Galapagos (AT7-12, AT7-13). The deployed prototype system was extremely valuable in facilitating real-time dive planning, review, and shipboard analysis.
The NASA Exoplanet Archive collects and serves public data to support the search for and characterization of extra-solar planets (exoplanets) and their host stars. The data include published light curves, images, spectra and parameters, and time-series data from surveys that aim to discover transiting exoplanets. Tools are provided to work with the data, particularly the display and analysis of transit data sets from Kepler and CoRoT. All data are validated by the Exoplanet Archive science staff and traced to their sources. The Exoplanet Archive is the U.S. data portal for the CoRoT mission.
The WDC Geomagnetism, Edinburgh has a comprehensive set of digital geomagnetic data as well as indices of geomagnetic activity supplied from a worldwide network of magnetic observatories. The data and services at the WDC are available for scientific use without restrictions.
The United States Census Bureau (officially the Bureau of the Census, as defined in Title 13 U.S.C. § 11) is the government agency that is responsible for the United States Census. It also gathers other national demographic and economic data. As a part of the United States Department of Commerce, the Census Bureau serves as a leading source of data about America's people and economy. The most visible role of the Census Bureau is to perform the official decennial (every 10 years) count of people living in the U.S. The most important result is the reallocation of the number of seats each state is allowed in the House of Representatives, but the results also affect a range of government programs received by each state. The agency director is a political appointee selected by the President of the United States.
This is the KONECT project, a project in the area of network science with the goal of collecting network datasets, analysing them, and making all analyses available online. KONECT stands for Koblenz Network Collection, as the project has roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as code to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
The Arctic Data Center is the primary data and software repository for the Arctic section of NSF Polar Programs. The Center helps the research community to reproducibly preserve and discover all products of NSF-funded research in the Arctic, including data, metadata, software, documents, and provenance that links these together. The repository is open to contributions from NSF Arctic investigators, and data are released under an open license (CC-BY, CC0, depending on the choice of the contributor). All science, engineering, and education research supported by the NSF Arctic research program are included, such as Natural Sciences (Geoscience, Earth Science, Oceanography, Ecology, Atmospheric Science, Biology, etc.) and Social Sciences (Archeology, Anthropology, Social Science, etc.). Key to the initiative is the partnership between NCEAS at UC Santa Barbara, DataONE, and NOAA’s NCEI, each of which bring critical capabilities to the Center. Infrastructure from the successful NSF-sponsored DataONE federation of data repositories enables data replication to NCEI, providing both offsite and institutional diversity that are critical to long term preservation.
The Universidad del Rosario Research Data Repository is an institutional initiative launched in 2019 to preserve, provide access to, and promote the use of data resulting from Universidad del Rosario research projects. The repository aims to consolidate an online, collaborative working space and data-sharing platform to support Universidad del Rosario researchers and their collaborators, and to ensure that research data are available to the community in order to support further research and contribute to the democratization of knowledge. The repository is the heart of an institutional strategy that seeks to ensure the generation of Findable, Accessible, Interoperable and Reusable (FAIR) data, with the aim of increasing their impact and visibility. This strategy follows the international philosophy of making research data “as open as possible and as closed as necessary”, in order to foster the expansion, valuation, acceleration and reusability of scientific research while safeguarding the privacy of the subjects. The platform stores, preserves and facilitates the management of research data from all disciplines, generated by researchers across all of the University’s schools and faculties, who work together to ensure research with the highest standards of quality and scientific integrity, encouraging innovation for the benefit of society.