Filter
  • Subjects
  • Content Types
  • Countries
  • AID systems
  • API
  • Certificates
  • Data access
  • Data access restrictions
  • Database access
  • Database access restrictions
  • Database licenses
  • Data licenses
  • Data upload
  • Data upload restrictions
  • Enhanced publication
  • Institution responsibility type
  • Institution type
  • Keywords
  • Metadata standards
  • PID systems
  • Provider types
  • Quality management
  • Repository languages
  • Software
  • Syndications
  • Repository types
  • Versioning

  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
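
For illustration, the short Python snippet below prints a few hypothetical query strings (the search terms are invented, not drawn from the registry itself) together with the operator each one exercises:

# Illustrative (hypothetical) query strings for the search syntax described
# above; the terms are made up for demonstration only.
examples = [
    ("climat*",                     "wildcard: matches climate, climatology, ..."),
    ('"water quality"',             "quotes: searches the exact phrase"),
    ("ocean + temperature",         "AND (default): both terms must occur"),
    ("ocean | marine",              "OR: either term may occur"),
    ("ocean -salinity",             "NOT: excludes results containing 'salinity'"),
    ("(ocean | marine) + sensor",   "parentheses: group terms and set priority"),
    ("oceanografy~2",               "fuzziness: up to 2 character edits"),
    ('"sea surface temperature"~3', "slop: up to 3 words between phrase terms"),
]
for query, meaning in examples:
    print(f"{query:32} {meaning}")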
Found 25 result(s)
The repository is no longer available; as of 2021-01-25 there is no more access to California Water CyberInfrastructure.
Jason is a remote-controlled deep-diving vessel that gives shipboard scientists immediate, real-time access to the sea floor. Instead of making short, expensive dives in a submarine, scientists can stay on deck and guide Jason as deep as 6,500 meters (4 miles) to explore for days on end. Jason is a type of remotely operated vehicle (ROV), a free-swimming vessel connected by a long fiberoptic tether to its research ship. The 10-km (6 mile) tether delivers power and instructions to Jason and fetches data from it.
IDOC-DATA is a department of IDOC. IDOC (Integrated Data & Operation Center) has existed since 2003 as a satellite operations center and data center for the Institute of Space Astrophysics (IAS) in Orsay, France. Since then, it has operated within OSUPS (Observatoire des Sciences de l'Univers de l'Université Paris-Saclay, the first French university in the Shanghai ranking), which includes three institutes: IAS, AIM (Astrophysique, Interprétation, Modélisation - IRFU, CEA) and GEOPS (Geosciences Paris-Saclay). IDOC participates in the space missions of OSUPS and its partners, from mission design to long-term scientific data archiving. For each phase of the missions, IDOC offers services in the scientific themes of OSUPS, and its activities are divided into three departments: IDOC-INSTR (instrument design and testing), IDOC-OPE (instrument operations) and IDOC-DATA (data management and the data value chain: producing the different levels of data constructed from observations of these instruments and making them available to users for ergonomic and efficient scientific interpretation). IDOC-DATA's responsibilities include building access to these datasets; offering the corresponding services such as catalogue management, visualization tools and software pipeline automation; and preserving the availability and reliability of this hardware and software infrastructure, its confidentiality where applicable, and its security.
STRING is a database of known and predicted protein interactions. The interactions include direct (physical) and indirect (functional) associations; they are derived from four sources: genomic context, high-throughput experiments, (conserved) co-expression and previous knowledge. STRING quantitatively integrates interaction data from these sources for a large number of organisms, and transfers information between these organisms where applicable.
NC OneMap is a public service providing comprehensive discovery and access to North Carolina's geospatial data resources. NC OneMap, the State's Clearinghouse for geospatial information, relies on data sharing and partnerships.
The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent set of protein-protein interactions. The data stored within the DIP database were curated both manually by expert curators and automatically, using computational approaches that utilize the knowledge about protein-protein interaction networks extracted from the most reliable, core subset of the DIP data. Please check the reference page to find articles describing the DIP database in greater detail. The Database of Ligand-Receptor Partners (DLRP) is a subset of DIP (Database of Interacting Proteins). The DLRP is a database of protein ligand and protein receptor pairs that are known to interact with each other. By interact we mean that the ligand and receptor are members of a ligand-receptor complex and, unless otherwise noted, transduce a signal. In some instances the ligand and/or receptor may form a heterocomplex with other ligands/receptors in order to be functional. The majority of interactions in DLRP have been entered as full DIP entries, with links to references and additional information.
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
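
As a rough sketch of what such a query can look like in practice, the snippet below sends an illustrative SPARQL query (the five most populous cities DBpedia knows about) to DBpedia's public endpoint at https://dbpedia.org/sparql; the query and result handling are examples, not taken from the project's own documentation:

# Minimal sketch: query DBpedia's public SPARQL endpoint with the
# third-party 'requests' package. The query itself is illustrative.
import requests

query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?city ?population WHERE {
    ?city a dbo:City ;
          dbo:populationTotal ?population .
}
ORDER BY DESC(?population)
LIMIT 5
"""

resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])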
Government of Yukon open data provides an easy way to find, access and reuse the government's public datasets. This service brings all of the government's data together in one searchable website. Our datasets are created and managed by different government departments. We cannot guarantee the quality or timeliness of all data. If you have any feedback you can get in touch with the department that produced the dataset. This is a pilot project. We are in the process of adding a quality framework to make it easier for you to access high quality, reliable data.
The Nuclear Data Portal is a new generation of nuclear data services using modern and powerful DELL servers, Sybase relational database software, the Linux operating system and programming in Java. The Portal includes nuclear structure, decay and reaction data, as well as literature information. Data can be searched for using optimized query forms; results are presented in tables and interactive plots. Additionally, a number of nuclear science tools, codes, applications, and links are provided. The databases included are: CINDA - Computer Index of Nuclear Reaction Data, CSISRS alias EXFOR - Experimental nuclear reaction data, ENDF - Evaluated Nuclear Data File, ENSDF - Evaluated Nuclear Structure Data File, MIRD - Medical Internal Radiation Dose, NSR - Nuclear Science References, NuDat - Nuclear Structure & Decay Data, XUNDL - Experimental Unevaluated Nuclear Data List, and the Chart of Nuclides. The Nuclear Data Portal is a web service of the National Nuclear Data Center.
Rodare is the institutional research data repository at HZDR (Helmholtz-Zentrum Dresden-Rossendorf). Rodare allows HZDR researchers to upload their research software and data and to enrich them with metadata to make them findable, accessible, interoperable and reusable (FAIR). By publishing all associated research software and data via Rodare, research reproducibility can be improved. Uploads receive a Digital Object Identifier (DOI) and can be harvested via an OAI-PMH interface.
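
Since the description mentions OAI-PMH harvesting, here is a minimal sketch of a standard ListRecords request at the protocol level. The endpoint URL below is an assumption, not taken from the description; check Rodare's own documentation for the actual OAI-PMH address:

# Sketch of an OAI-PMH ListRecords request. The endpoint URL is an
# assumption; resumption tokens and OAI error responses are not handled.
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://rodare.hzdr.de/oai2d"  # assumed endpoint
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

resp = requests.get(
    OAI_ENDPOINT,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)
for record in root.iter(f"{OAI_NS}record"):
    header = record.find(f"{OAI_NS}header")
    print(header.findtext(f"{OAI_NS}identifier"))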
dictyBase is an integrated genetic and literature database that contains published Dictyostelium discoideum literature, genes, expressed sequence tags (ESTs), as well as the chromosomal and mitochondrial genome sequences. Direct access to the genome browser, a BLAST search tool, the Dictyostelium Stock Center, research tools, colleague databases, and much more is just a mouse click away. dictyBase is a genome portal for the Amoebozoa. dictyBase is funded by a grant from the National Institute for General Medical Sciences.
B2SHARE is a user-friendly, reliable and trustworthy way for researchers, scientific communities and citizen scientists to store and share small-scale research data from diverse contexts and disciplines. B2SHARE adds value to your research data via (domain-tailored) metadata and by assigning citable Persistent Identifiers (PIDs; Handles) to ensure long-lasting access and references. B2SHARE is one of the B2 services developed via EUDAT, and long-tail data deposits are free of charge. Special arrangements such as branding and special metadata elements can be made on request.
The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Data arrive at ENA from a variety of sources. These include submissions of raw data, assembled sequences and annotation from small-scale sequencing efforts, data provision from the major European sequencing centres and routine and comprehensive exchange with our partners in the International Nucleotide Sequence Database Collaboration (INSDC). Provision of nucleotide sequence data to ENA or its INSDC partners has become a central and mandatory step in the dissemination of research findings to the scientific community. ENA works with publishers of scientific literature and funding bodies to ensure compliance with these principles and to provide optimal submission systems and data access tools that work seamlessly with the published literature.
InnateDB is a publicly available database of the genes, proteins, experimentally-verified interactions and signaling pathways involved in the innate immune response of humans, mice and bovines to microbial infection. The database captures an improved coverage of the innate immunity interactome by integrating known interactions and pathways from major public databases together with manually-curated data into a centralised resource. The database can be mined as a knowledgebase or used with our integrated bioinformatics and visualization tools for the systems level analysis of the innate immune response.
Biological collections are replete with taxonomic, geographic, temporal, numerical, and historical information. This information is crucial for understanding and properly managing biodiversity and ecosystems, but is often difficult to access. Canadensys, operated from the Université de Montréal Biodiversity Centre, is a Canada-wide effort to unlock the biodiversity information held in biological collections.
DataStream is an open access platform for sharing information on freshwater health. It currently allows users to access, visualize, and download full water quality datasets collected by Indigenous Nations, community groups, researchers and governments throughout four regional hubs in the Atlantic Provinces, Great Lakes and Saint Lawrence Basin, Lake Winnipeg Basin and Mackenzie River Basin. DataStream was developed by The Gordon Foundation and is carried out in collaboration with regional monitoring networks.
The Social Computing Data Repository hosts data from a collection of many different social media sites, most of which have blogging capacity. Some of the prominent social media sites included in this repository are BlogCatalog, Twitter, MyBlogLog, Digg, StumbleUpon, del.icio.us, MySpace, LiveJournal, The Unofficial Apple Weblog (TUAW), Reddit, etc. The repository contains various facets of blog data, including blog site metadata (such as user-defined tags, predefined categories and blog site descriptions); blog post-level metadata (such as user-defined tags and the date and time of posting); blog posts; blog post mood (defined as the blogger's emotions when he or she wrote the post); blogger names; blog post comments; and blogger social networks.
B2SAFE is a robust, safe and highly available service which allows community and departmental repositories to implement data management policies on their research data across multiple administrative domains in a trustworthy manner. It is a solution to: provide an abstraction layer which virtualizes large-scale data resources; guard against data loss in long-term archiving and preservation; optimize access for users from different regions; and bring data closer to powerful computers for compute-intensive analysis.
Regionaal Archief Alkmaar (RAA) is a joint arrangement that operates within a large region in the province of Noord-Holland. The first purpose of this arrangement is to fulfill the function of a regional knowledge and information center through the acquisition and preservation of a broad collection of historical sources. The second purpose is to make these sources actively available. It does so in accordance with the Dutch Public Records Act (Archiefwet 1995). At the time of writing, the joint arrangement serves 9 municipalities, namely: Alkmaar, Bergen, Castricum, Den Helder, Heiloo, Hollands Kroon, Schagen, Dijk en Waard and Texel. The arrangement also includes other joint arrangements, namely the GGD Hollands Noorden and Veiligheidsregio Noord-Holland Noord. The RAA also keeps the archives of the water authority Hoogheemraadschap Hollands Noorderkwartier and its predecessors, on the basis of a service agreement. Finally, many archives of families, persons of interest, companies and non-governmental organizations are collected and managed. This is a secondary task of the RAA, but these archives are also managed under the Dutch Public Records Act.
The DBT is the institutional repository of FSU Jena, TU Ilmenau and the University of Erfurt; members of the other Thuringian universities and colleges can also publish scientific documents in the DBT. In individual cases, users from the state (Land) of Thuringia (via the ThULB Jena) can also archive documents in the DBT.
The Integrated Catalogue (InK) of the Mediathek of the Basel Academy of Art and Design (Hochschule für Gestaltung und Kunst Basel, HGK) hosts, collects, archives and makes available digital resources of the HGK and its digital special collections. It is available both to members of the University of Applied Sciences and Arts Northwestern Switzerland (Fachhochschule Nordwestschweiz, FHNW), to which the HGK belongs, and to the general public. In addition to data for internal university use (login area), there is a large amount of unrestricted, freely accessible content. The thematic focus is on contemporary art and design, art and design research, and topics related to the HGK. The sources cover a wide range of media: in addition to theses and PDF-based documents, there are cluster objects, which assign several images, videos, audio and/or text files to a defined data set. The InK serves as an institutional repository for research data management and as a platform for hybrid publications.
The Arctic Data Center is the primary data and software repository for the Arctic section of NSF Polar Programs. The Center helps the research community to reproducibly preserve and discover all products of NSF-funded research in the Arctic, including data, metadata, software, documents, and provenance that links these together. The repository is open to contributions from NSF Arctic investigators, and data are released under an open license (CC-BY or CC0, depending on the choice of the contributor). All science, engineering, and education research supported by the NSF Arctic research program is included, such as Natural Sciences (Geoscience, Earth Science, Oceanography, Ecology, Atmospheric Science, Biology, etc.) and Social Sciences (Archeology, Anthropology, Social Science, etc.). Key to the initiative is the partnership between NCEAS at UC Santa Barbara, DataONE, and NOAA’s NCEI, each of which brings critical capabilities to the Center. Infrastructure from the successful NSF-sponsored DataONE federation of data repositories enables data replication to NCEI, providing both offsite and institutional diversity that are critical to long term preservation.
The ZBW Digital Long-Term Archive is a dark archive whose sole purpose is to guarantee the long term availability of the objects stored in it. The storage for the ZBW’s digital objects and their representation platforms is maintained by the ZBW division IT-Infrastructures and is not part of the tasks of the group Digital Preservation. The content that the ZBW provides is accessible via special representation platforms. The special representation platforms are: EconStor: an open access publication server for literature on business and economics. ZBW DIGITAL ARCHIVE: it contains born digital material from the domains of business and economics. The content of this archive is accessible in open access via EconBiz, the subject portal for business and economics of the ZBW. National and Alliance Licenses: the ZBW negotiates and curates licenses for electronic products on a national level. This is processed under the framework of the German Research Foundation as well as the Alliance of Science Associations, partly with third party funding, partly solely funded by the ZBW. A part of these electronic products is already hosted by the ZBW and counts among the items that are preserved in the digital archive. 20th Century Press Archive: a portal with access to archival material consisting of press clippings from newspapers covering the time period from the beginning of the 20th century to the year 1949.
The ArcGIS Living Atlas of the World is a unique collection of worldwide geographic information. It contains maps, apps and data layers that support you in your work. Coronavirus resources: https://coronavirus-resources.esri.com/