
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) set precedence (grouping)
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
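For illustration, here are some hypothetical queries combining these operators (the search terms are made up and not tied to any listed repository):

```
climat*                            wildcard: climate, climatic, climatology, ...
"long term preservation"           exact phrase
ocean + biology                    both terms must match (AND, the default)
ocean | marine                     either term may match (OR)
geology - mining                   matches geology but excludes mining
(ocean | marine) + biodiversity    parentheses set precedence
genome~1                           terms within edit distance 1 of "genome"
"data repository"~2                phrase match allowing a slop of 2
```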
Found 124 result(s)
See also the UniProt entry at https://www.re3data.org/repository/r3d100011521. UniProtKB/Swiss-Prot is the manually annotated and reviewed section of the UniProt Knowledgebase (UniProtKB). It is a high-quality, non-redundant protein sequence database that brings together experimental results, computed features and scientific conclusions. Since 2002 it has been maintained by the UniProt consortium and is accessible via the UniProt website.
The Tropospheric Ozone Assessment Report (TOAR) database of global surface observations is the world's most extensive collection of surface ozone measurements and also includes data on other air pollutants and on weather for some regions. Measurements from 1970 to 2019 (Version 1) have been collected in a relational database and are made available via a graphical web interface, a REST service (https://toar-data.fz-juelich.de/api/v1) and as aggregated products on PANGAEA (https://doi.pangaea.de/10.1594/PANGAEA.876108). Measurements from 1970 to the present (Version 2) are being collected in a relational database and are made available via a REST service (https://toar-data.fz-juelich.de/api/v2).
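As a minimal sketch of how a client might address the versioned REST services mentioned above (the base URLs are taken from the description; the `stations` endpoint name is an assumption for illustration only, so consult the TOAR API documentation for actual paths):

```python
from urllib.parse import urljoin

# Base URLs as given in the description above.
TOAR_V1 = "https://toar-data.fz-juelich.de/api/v1/"
TOAR_V2 = "https://toar-data.fz-juelich.de/api/v2/"

def toar_url(path: str, base: str = TOAR_V2) -> str:
    """Join an endpoint path onto a TOAR REST base URL."""
    return urljoin(base, path.lstrip("/"))

# "stations" is a hypothetical endpoint name used only for illustration.
print(toar_url("stations"))
```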
The Arctic Data archive System (ADS) collects observation data and modeling products obtained by various Japanese research projects and gives researchers access to the results. By centrally managing a wide variety of Arctic observation data, we promote the use of data across multiple disciplines. Researchers use these integrated databases to clarify the mechanisms of environmental change in the atmosphere, ocean, land surface and cryosphere. ADS is also expected to provide opportunities for collaboration between modelers and field scientists.
The Arctic Data Center is the primary data and software repository for the Arctic section of NSF Polar Programs. The Center helps the research community to reproducibly preserve and discover all products of NSF-funded research in the Arctic, including data, metadata, software, documents, and the provenance that links these together. The repository is open to contributions from NSF Arctic investigators, and data are released under an open license (CC-BY or CC0, depending on the choice of the contributor). All science, engineering, and education research supported by the NSF Arctic research program is included, spanning the natural sciences (geoscience, earth science, oceanography, ecology, atmospheric science, biology, etc.) and the social sciences (archeology, anthropology, etc.). Key to the initiative is the partnership between NCEAS at UC Santa Barbara, DataONE, and NOAA’s NCEI, each of which brings critical capabilities to the Center. Infrastructure from the successful NSF-sponsored DataONE federation of data repositories enables data replication to NCEI, providing both the offsite and institutional diversity that are critical to long-term preservation.
The ZBW Digital Long-Term Archive is a dark archive whose sole purpose is to guarantee the long-term availability of the objects stored in it. The storage for the ZBW’s digital objects and their representation platforms is maintained by the ZBW division IT-Infrastructures and is not part of the tasks of the group Digital Preservation. The content that the ZBW provides is accessible via special representation platforms:
  • EconStor: an open-access publication server for literature on business and economics.
  • ZBW DIGITAL ARCHIVE: contains born-digital material from the domains of business and economics. The content of this archive is accessible in open access via EconBiz, the subject portal for business and economics of the ZBW.
  • National and Alliance Licenses: the ZBW negotiates and curates licenses for electronic products on a national level. This is processed under the framework of the German Research Foundation as well as the Alliance of Science Associations, partly with third-party funding, partly solely funded by the ZBW. A part of these electronic products is already hosted by the ZBW and counts among the items preserved in the digital archive.
  • 20th Century Press Archive: a portal with access to archival material consisting of press clippings from newspapers, covering the period from the beginning of the 20th century to 1949.
The NDEx Project provides an open-source framework where scientists and organizations can share, store, manipulate, and publish biological network knowledge. The NDEx Project maintains a free, public website; alternatively, users can also decide to run their own copies of the NDEx Server software in cases where the stored networks must be kept in a highly secure environment (such as for HIPAA compliance) or where high application load is incompatible with a shared public resource.
virus mentha archives evidence about viral interactions collected from different sources and presents these data in a complete and comprehensive way. Its data come from manually curated protein-protein interaction databases that have adhered to the IMEx consortium. virus mentha is a resource that offers a series of tools to analyse selected proteins in the context of a network of interactions. Protein interaction databases archive protein-protein interaction (PPI) information from published articles. However, no database alone has sufficient literature coverage to offer a complete resource to investigate "the interactome". virus mentha's approach generates a consistent interactome (graph) every week. Most importantly, the procedure assigns each interaction a reliability score that takes into account all the supporting evidence. virus mentha offers direct access to viral families such as Orthomyxoviridae, Orthoretrovirinae and Herpesviridae, and it offers the unique possibility of searching by host organism. The website and the graphical application are designed to make the data stored in virus mentha accessible and analysable by all users. virus mentha supersedes VirusMINT. The source databases are: MINT, DIP, IntAct, MatrixDB, BioGRID.
California Digital Library (CDL) seeks to be a catalyst for deeply collaborative solutions providing a rich, intuitive and seamless environment for publishing, sharing and preserving our scholars’ increasingly diverse outputs, as well as for acquiring and accessing information critical to the University of California’s scholarly enterprise. University of California Curation Center (UC3) is the digital curation program within CDL. The mission of UC3 is to provide transformative preservation, curation, and research data management systems, services, and initiatives that sustain and promote open scholarship.
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full-texts are indexed linguistically and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitisation was based on the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice (‘double keying’). To represent the structure of the text, the electronic full-text was encoded in conformity with the XML standard TEI P5. The next stages complete the linguistic analysis, i.e. the text is tokenised, lemmatised, and the parts of speech are annotated. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA corpus, it also offers valuable source texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
The GSA Data Repository is an open file in which authors of articles in our journals can place information that supplements and expands on their article. These supplements will not appear in print but may be obtained from GSA.
The IMPC is a confederation of international mouse phenotyping projects working towards the agreed goals of the consortium:
  • Undertake the phenotyping of 20,000 mouse mutants over a ten-year period, providing the first functional annotation of a mammalian genome.
  • Maintain and expand a world-wide consortium of institutions with the capacity and expertise to produce germ-line transmission of targeted knockout mutations in embryonic stem cells for 20,000 known and predicted mouse genes.
  • Test each mutant mouse line through a broad-based primary phenotyping pipeline covering all the major adult organ systems and most areas of major human disease.
  • Through this activity, and employing data annotation tools, systematically aim to discover and ascribe biological function to each gene, driving new ideas and underpinning future research into biological systems.
  • Maintain and expand collaborative “networks” with specialist phenotyping consortia or laboratories, providing standardized secondary-level phenotyping that enriches the primary dataset, and end-user, project-specific tertiary-level phenotyping that adds value to the mammalian gene functional annotation and fosters hypothesis-driven research.
  • Provide a centralized data centre and portal for free, unrestricted access to primary and secondary data by the scientific community, promoting sharing of data, genotype-phenotype annotation, standard operating protocols, and the development of open-source data analysis tools.
Members of the IMPC may include research centers, funding organizations and corporations.
KONECT (Koblenz Network Collection) is a project in the area of network science with the goal of collecting network datasets, analysing them, and making all analyses available online; the project has its roots at the University of Koblenz–Landau in Germany. All source code is made available as Free Software, and includes a network analysis toolbox for GNU Octave, a network extraction library, as well as the code used to generate these web pages, including all statistics and plots. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The results of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.
PRISM Dataverse is the institutional data repository of the University of Calgary, whose purpose is the digital archiving and sharing of researchers' data. PRISM Dataverse is hosted through Borealis, a service of the Ontario Council of University Libraries, and supported by the University of Calgary's Libraries and Cultural Resources. PRISM Dataverse enables scholars to easily deposit data, create dataset-specific metadata for searchability, and publish their datasets.
The CORA. Repositori de dades de Recerca is a repository of open, curated and FAIR data that covers all academic disciplines. CORA. Repositori de dades de Recerca is a shared service provided by participating Catalan institutions (Universities and CERCA Research Centers). The repository is managed by the CSUC and technical infrastructure is based on the Dataverse application, developed by international developers and users led by Harvard University (https://dataverse.org).
The National Archives is home to millions of historical documents, known as records, which were created and collected by UK central government departments and major courts of law. Data from the former National Digital Archive of Datasets (NDAD), which was active from 1997 to 2010 and preserved and provided online access to archived digital datasets and documents from UK central government departments, has been integrated. The National Archives also provides access to records held by more than 2,500 other archives.
Research Data Unipd is a data archive that supports research produced by members of the University of Padova. The service aims to facilitate data discovery, data sharing, and reuse, as required by funding institutions (e.g. the European Commission). Datasets published in the archive carry a set of metadata that ensures proper description and discoverability.
The Solar Dynamics Observatory (SDO) studies the solar atmosphere on small scales of space and time, in multiple wavelengths. This is a searchable database of all SDO data, including citizen scientist images, space weather and near real time data, and helioseismology data.
The Registry of Open Data on AWS provides a centralized repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. AWS is hosting the public data sets at no charge to their users. Anyone can access these data sets from their Amazon Elastic Compute Cloud (Amazon EC2) instances and start computing on the data within minutes. Users can also leverage the entire AWS ecosystem and easily collaborate with other AWS users.
Network Repository is the first interactive data repository for graph and network data. It hosts graph and network datasets, containing hundreds of real-world networks and benchmark datasets. Unlike other data repositories, Network Repository provides interactive analysis and visualization capabilities to allow researchers to explore, compare, and investigate graph data in real-time on the web.
The World Glacier Monitoring Service (WGMS) collects standardized observations on changes in mass, volume, area and length of glaciers with time (glacier fluctuations), as well as statistical information on the distribution of perennial surface ice in space (glacier inventories). Such glacier fluctuation and inventory data are high priority key variables in climate system monitoring; they form a basis for hydrological modelling with respect to possible effects of atmospheric warming, and provide fundamental information in glaciology, glacial geomorphology and quaternary geology. The highest information density is found for the Alps and Scandinavia, where long and uninterrupted records are available. As a contribution to the Global Terrestrial/Climate Observing System (GTOS, GCOS), the Division of Early Warning and Assessment and the Global Environment Outlook of UNEP, and the International Hydrological Programme of UNESCO, the WGMS collects and publishes worldwide standardized glacier data.
Biological collections are replete with taxonomic, geographic, temporal, numerical, and historical information. This information is crucial for understanding and properly managing biodiversity and ecosystems, but is often difficult to access. Canadensys, operated from the Université de Montréal Biodiversity Centre, is a Canada-wide effort to unlock the biodiversity information held in biological collections.
The National Center for Education Statistics (NCES) is responsible for collecting and analyzing data related to education, including assessing the performance of students from early childhood through secondary education as well as the literacy level of adults and post-secondary education surveys. Users can access data on public and private schools as well as public libraries and a college navigator tool containing information on over 7,000 post-secondary institutions.