
Search syntax:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) imply grouping priority
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
A few example queries combining these operators are sketched below.
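The following minimal sketch (plain Python that just prints hypothetical query strings) illustrates how the operators above can be combined. The search terms and combinations are illustrative assumptions, not queries taken from the registry.

    # Hypothetical example queries built from the documented operators.
    # The search terms themselves are made up for illustration.
    example_queries = [
        'climat*',                          # wildcard: climate, climatology, ...
        '"marine geology"',                 # exact phrase
        'ocean + biology',                  # AND (also the default between terms)
        'geology | geophysics',             # OR
        'genomics - human',                 # NOT
        '(geology | geophysics) + marine',  # parentheses set priority
        'palaeo~1',                         # fuzzy term with edit distance 1
        '"data archive"~2',                 # phrase with a slop of 2
    ]
    for query in example_queries:
        print(query)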
Found 1275 result(s)
Polish Platform of Medical Research (PPM) is a digital platform presenting the scientific achievements and research potential of 8 Polish medical universities from Bialystok, Gdansk, Katowice, Lublin, Szczecin, Warsaw, Wroclaw, the Nofer Institute of Occupational Medicine in Lodz and the Jagiellonian University Medical College in Cracow that form a partnership for the PPM Project. It incorporates the features of a Current Research Information System and a consortium repository and uses OMEGA-PSIR software. It provides open access to full texts of publications, doctoral theses, research data and other documents. PPM is a central platform that aggregates data from the local platforms of the PPM Project Partners. PPM is accessible for any Internet user.
NSSDC is the national-level space science data center recognized by the Ministry of Science and Technology of China. As a repository for space science data, NSSDC is responsible for the long-term stewardship of space science data in China and for offering reliable data services. It has also been the Chinese center for space science within the World Data Center (WDC) system since 1988, and in 2013 it became a regular member of the World Data System. Data resources are concentrated in the fields of space physics and the space environment, space astronomy, lunar and planetary science, and space application and engineering. In space physics, NSSDC maintains space-based observation data and ground-based observation data of the middle and upper atmosphere, the ionosphere, and the Earth's surface from the Geo-space Double Star Exploration Program and the Meridian Project. In space astronomy, NSSDC archives pointed observation data from the Hard X-ray Modulation Telescope. In lunar and planetary science and in space application and engineering, NSSDC also collects detection data from Chang'E missions of the Lunar Exploration Program and science products of the BeiDou satellites.
PDBe is the European resource for the collection, organisation and dissemination of data on biological macromolecular structures. In collaboration with the other worldwide Protein Data Bank (wwPDB) partners - the Research Collaboratory for Structural Bioinformatics (RCSB) and BioMagResBank (BMRB) in the USA and the Protein Data Bank of Japan (PDBj) - we work to collate, maintain and provide access to the global repository of macromolecular structure data. We develop tools, services and resources to make structure-related data more accessible to the biomedical community.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists, and computer system developers who support the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. GRIIDC is the vehicle through which GoMRI fulfills this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico ecosystem.
<<<!!!<<< The repository is no longer available. For the print version see: https://www.taylorfrancis.com/books/mono/10.1201/9781003220435/encyclopedia-astronomy-astrophysics-murdin >>>!!!>>> This unique resource covers the entire field of astronomy and astrophysics, and this online version includes the full text of over 2,750 articles, plus sophisticated search and retrieval functionality and links to the primary literature, and is frequently updated with new material. An active editorial team, headed by the Encyclopedia's editor-in-chief, Paul Murdin, oversees the continual commissioning, reviewing, and loading of new and revised content. In a unique collaboration, Nature Publishing Group and Institute of Physics Publishing published the most extensive and comprehensive reference work in astronomy and astrophysics in both print and online formats. First published as a four-volume print edition in 2001, the initial web version went live in 2002; it contained the original print material and was rapidly supplemented with numerous updates and newly commissioned material. Since July 2006 the Encyclopedia has been published solely by Taylor & Francis.
The Social Science Data Archive (SSDA) is still active and maintained as part of the UCLA Library Data Science Center. SSDA Dataverse is one of several archiving options offered through SSDA; data can also be archived by SSDA itself, by ICPSR, by the UCLA Library, or by the California Digital Library. The Social Science Data Archive serves the UCLA campus as an archive of faculty and graduate student survey research. We provide long-term storage of data files and documentation, and we ensure that the data remain usable in the future by migrating files to new operating systems. We follow government standards and archival best practices. The mission of the Social Science Data Archive has been, and continues to be, to provide a foundation for social science research, with faculty support throughout an entire research project involving original data collection or the reuse of publicly available studies. Data Archive staff and researchers work as partners throughout all stages of the research process: beginning when a hypothesis or area of study is being developed, during grant and funding activities, while data collection and/or analysis is ongoing, and finally in the long-term preservation of research results. Our role is to provide a collaborative environment focused on understanding the nature and scope of the research approach and on managing research output throughout the entire life cycle of the project. Instructional support, especially support that links research with instruction, is also a mainstay of operations.
The European Genome-phenome Archive (EGA) is designed to be a repository for all types of sequence and genotype experiments, including case-control, population, and family studies. We will include SNP and CNV genotypes from array-based methods and genotyping done with re-sequencing methods. The EGA will serve as a permanent archive holding several levels of data, including the raw data (which could, for example, be re-analysed in the future by other algorithms) as well as the genotype calls provided by the submitters. We are developing data mining and access tools for the database. For controlled-access data, the EGA will provide the security required to control access and maintain patient confidentiality, while providing access to those researchers and clinicians authorised to view the data. In all cases, data access decisions will be made by the appropriate data access-granting organisation (DAO) and not by the EGA. The DAO will normally be the same organisation that approved and monitored the initial study protocol, or a designate of this approving organisation. The EGA allows you to explore datasets from genomic studies provided by a range of data providers. Access to datasets must be approved by the specified Data Access Committee (DAC).
The Polish CLARIN node – the CLARIN-PL Language Technology Centre (LTC) – is being built at Wrocław University of Technology. The LTC is aimed at scholars in the humanities and social sciences. Registered users are granted free access to digital language resources and advanced tools to explore them, and they can also archive and share their own language data (in written, spoken, video, or multimodal form).
The EZRC at KIT houses the largest experimental fish facility in Europe, with a capacity of more than 300,000 fish. Zebrafish stocks are maintained mostly as frozen sperm; frequently requested lines are also kept alive, as is a selection of wildtype strains. The collection includes several thousand mutations in protein-coding genes generated by TILLING in the Stemple lab at the Sanger Centre (Hinxton, UK), lines generated by ENU mutagenesis in the Nüsslein-Volhard lab, and transgenic lines and mutants generated by KIT groups or brought in through collaborations. We also accept submissions on an individual basis and ship fish upon request to PIs in Europe and elsewhere. EZRC also provides screening services and technologies such as imaging and high-throughput sequencing. Key areas include automation of embryo handling and automated image acquisition and processing. Our platform also involves the development of novel microscopy techniques (e.g. SPIM, DSLM, robotic macroscope) to permit high-resolution, real-time imaging in 4D. Through its association with the ComPlat platform, the EZRC can also support chemical screens and offers libraries with up to 20,000 compounds in total for external users. As another service to the community, the EZRC provides plasmids (cDNAs, transgenes, TALEN, CRISPR/Cas9) maintained by the Helmholtz Repository of Bioparts (HERBI) to the scientific community. In addition, the fish facility keeps a range of medaka stocks maintained by the Loosli group.
<<<!!!<<< The repository is no longer available. >>>!!!>>> C3-Grid is a completed project within D-Grid, the initiative to promote a grid-based e-Science framework in Germany. The goal of C3-Grid was to support the workflows of Earth system researchers by implementing a grid infrastructure for efficient distributed data processing and inter-institutional data exchange. The aim of the effort was to develop an infrastructure for uniform access to heterogeneous data and distributed data processing. The work was structured in two projects funded by the Federal Ministry of Education and Research. The first project was part of the D-Grid initiative; it explored the potential of grid technology for climate research and developed a prototype infrastructure. Details about the C3Grid architecture are described in "Earth System Modelling – Volume 6". In the second phase, "C3Grid - INAD: Towards an Infrastructure for General Access to Climate Data", this infrastructure was improved, especially with respect to interoperability with the Earth System Grid Federation (ESGF). Furthermore, the portfolio of available diagnostic workflows was expanded. These workflows can now be re-used in the adjacent MiKlip Evaluation Tool infrastructure (http://www.fona-miklip.de/en/index.php) and as web processes within the Birdhouse Framework (http://bird-house.github.io/). The Birdhouse Framework is now funded as part of the European Copernicus Climate Change Service (https://climate.copernicus.eu/) managed by ECMWF and will be extended to provide scalable processing services for ESGF-hosted data at DKRZ as well as IPSL and BADC.
DaSCH is the trusted platform and partner for open research data in the humanities. DaSCH develops and operates a FAIR long-term repository and a generic virtual research environment for open research data in the humanities in Switzerland. We provide long-term direct access to the data, enable their continuous editing, and allow for precise citation of single objects within a dataset. We ensure interoperability with tools used by the humanities and cultural sciences communities and foster the use of standards; the development of our platform happens in close cooperation with these communities. We provide training and advice in the area of research data management and promote open data and the use of standards. DaSCH is the coordinating institution and representative of Switzerland in the European Research Infrastructure Consortium 'Digital Research Infrastructure for the Arts and Humanities' (DARIAH ERIC). Within this mandate, we actively engage in community building within Switzerland and abroad. DaSCH cooperates with national and international organizations and initiatives in order to provide services that are fit for purpose within the broader Swiss open research data landscape and that are coordinated with other institutions such as FORS. We base our actions on the values of reliability, flexibility, appreciation, curiosity, and persistence. Furthermore, DARIAH's activities in Switzerland are coordinated by DaSCH, which acts as the DARIAH-CH Coordination Office.
ARCHE (A Resource Centre for the HumanitiEs) is a service aimed at offering stable and persistent hosting as well as dissemination of digital research data and resources for the Austrian humanities community. ARCHE welcomes data from all humanities fields. ARCHE is the successor of the Language Resources Portal (LRP) and acts as Austria’s connection point to the European network of CLARIN Centres for language resources.
dbEST is a division of GenBank that contains sequence data and other information on "single-pass" cDNA sequences, or "Expressed Sequence Tags", from a number of organisms. Expressed Sequence Tags (ESTs) are short (usually about 300-500 bp), single-pass sequence reads from mRNA (cDNA). Typically they are produced in large batches. They represent a snapshot of genes expressed in a given tissue and/or at a given developmental stage. They are tags (some coding, others not) of expression for a given cDNA library. Most EST projects develop large numbers of sequences. These are commonly submitted to GenBank and dbEST as batches of dozens to thousands of entries, with a great deal of redundancy in the citation, submitter and library information. To improve the efficiency of the submission process for this type of data, we have designed a special streamlined submission process and data format. dbEST also includes sequences that are longer than the traditional ESTs, or are produced as single sequences or in small batches. Among these sequences are products of differential display experiments and RACE experiments. The thing that these sequences have in common with traditional ESTs, regardless of length, quality, or quantity, is that there is little information that can be annotated in the record. If a sequence is later characterized and annotated with biological features such as a coding region, 5'UTR, or 3'UTR, it should be submitted through the regular GenBank submissions procedure (via BankIt or Sequin), even if part of the sequence is already in dbEST. dbEST is reserved for single-pass reads. Assembled sequences should not be submitted to dbEST. GenBank will accept assembled EST submissions for the forthcoming TSA (Transcriptome Shotgun Assembly) division. The individual reads which make up the assembly should be submitted to dbEST, the Trace archive or the Short Read Archive (SRA) prior to the submission of the assemblies.
Lithuanian Data Archive for Social Sciences and Humanities (LiDA) is a virtual digital infrastructure for SSH data and research resources acquisition, long-term preservation and dissemination. All the data and research resources are documented in both English and Lithuanian according to international standards. Access to the resources is provided via Dataverse repository. LiDA curates different types of resources and they are published into catalogues according to the type: Survey Data, Aggregated Data (including Historical Statistics), Encoded Data (including News Media Studies), and Textual Data. Also, LiDA holds collections of social sciences and humanities data deposited by Lithuanian science and higher education institutions and Lithuanian state institutions (Data of Other Institutions). LiDA is hosted by the Centre for Data Analysis and Archiving of Kaunas University of Technology (data.ktu.edu).
IDOC-DATA is a department of IDOC. IDOC (Integrated Data & Operation Center) has existed since 2003 as a satellite operations center and data center for the Institute of Space Astrophysics (IAS) in Orsay, France. Since then, it has operated within OSUPS (Observatoire des Sciences de l'Univers de l'Université Paris-Saclay, the first French university in the Shanghai ranking), which includes three institutes: IAS, AIM (Astrophysique, Interprétation, Modélisation - IRFU, CEA), and GEOPS (Geosciences Paris-Saclay). IDOC participates in the space missions of OSUPS and its partners, from mission design to long-term scientific data archiving. For each phase of the missions, IDOC offers three kinds of services in the scientific themes of OSUPS, and its activities are accordingly divided into three departments: IDOC-INSTR (instrument design and testing), IDOC-OPE (instrument operations), and IDOC-DATA (data management and the data value chain). IDOC-DATA produces the different levels of data constructed from observations of these instruments and makes them available to users for ergonomic and efficient scientific interpretation. Its responsibilities include: building access to these datasets; offering the corresponding services such as catalogue management, visualization tools, and software pipeline automation; and preserving the availability and reliability of this hardware and software infrastructure, its confidentiality where applicable, and its security.
The IMEx consortium is an international collaboration between a group of major public interaction data providers who have agreed to: share curation effort; develop and work to a single set of curation rules when capturing data both from directly deposited interaction data and from publications in peer-reviewed journals; capture the full details of an interaction in a "deep" curation model; perform a complete curation of all protein-protein interactions experimentally demonstrated within a publication; make these interactions available through a single search interface on a common website; provide the data in standards-compliant download formats; and make all IMEx records freely accessible under the Creative Commons Attribution License.
PANGAEA - Data Publisher for Earth & Environmental Sciences has an almost 30-year history as an open-access library for archiving, publishing, and disseminating georeferenced data from the Earth, environmental, and biodiversity sciences. Originally evolving from a database for sediment cores, it is operated as a joint facility of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) and the Center for Marine Environmental Sciences (MARUM) at the University of Bremen. PANGAEA holds a mandate from the World Meteorological Organization (WMO) and is accredited as a World Radiation Monitoring Center (WRMC). It was further accredited as a World Data Center by the International Council for Science (ICSU) in 2001 and has been certified with the CoreTrustSeal since 2019. The successful cooperation between PANGAEA and the publishing industry, along with the corresponding technical implementation, enables the cross-referencing of scientific publications and datasets archived as supplements to these publications. PANGAEA is the recommended data repository of numerous international scientific journals.
The BCDC serves the research data obtained, and the data syntheses assembled, by researchers within the Bjerknes Centre for Climate Research. Furthermore, it is open to all interested scientists, independent of institution. All data from the different disciplines (e.g. geology, oceanography, biology, the model community) will be archived in a long-term repository, interconnected, and made publicly available by the BCDC. BCDC collaborates with many international data repositories and actively archives metadata and data at those repositories, ensuring quality and FAIRness. BCDC's main focus is data management services for externally and internally funded projects in the field of climate research; it provides data management plans and ensures that data are archived according to best practices in the field. The data management services range from project work for small externally funded projects to state-of-the-art data management services for research infrastructures on the ESFRI roadmap (e.g. RI ICOS – Integrated Carbon Observation System), and BCDC provides products and services for the Copernicus Marine Environmental Monitoring Services. In addition, BCDC advises various communities on data management services, e.g. IOC UNESCO, OECD, IAEA, and various funding agencies. BCDC will become an Associated Data Unit (ADU) under IODE, the International Oceanographic Data and Information Exchange, a worldwide network that operates under the auspices of the Intergovernmental Oceanographic Commission of UNESCO, and aims at becoming part of the ICSU World Data System.
The Magnetics Information Consortium (MagIC) improves research capacity in the Earth and Ocean sciences by maintaining an open community digital data archive for rock magnetic, geomagnetic, archeomagnetic (archaeomagnetic), and paleomagnetic (palaeomagnetic) data. Different parts of the website allow users to archive, search, visualize, and download these data. MagIC supports international rock magnetism, geomagnetism, archeomagnetism (archaeomagnetism), and paleomagnetism (palaeomagnetism) research and endeavors to bring data out of private archives, making them accessible to all and (re-)usable for new, creative, collaborative scientific and educational activities. The data in MagIC are used for many types of studies, including tectonic plate reconstructions, geomagnetic field models, paleomagnetic field reversal studies, magnetohydrodynamical studies of the Earth's core, magnetostratigraphy, and archeology. MagIC is a domain-specific data repository directed by PIs who are both producers and consumers of rock, geo-, and paleomagnetic data. Funded by NSF since 2003, MagIC forms a major part of https://earthref.org, which integrates four independent cyber-initiatives rooted in various parts of the Earth, Ocean, and Life sciences and education.
The Index to Marine and Lacustrine Geological Samples (IMLGS) is a tool to help scientists locate and obtain geologic material from sea floor and lakebed cores, grabs, and dredges archived by participating institutions around the world. Data and images related to the samples are prepared and contributed by the institutions for access via the IMLGS and long-term archiving at NGDC. Before proposing research on any sample, please contact the curator for sample condition and availability. A consortium of curators has guided the IMLGS, which has been maintained on behalf of the group by NGDC, since 1977.
<<<!!!<<< The repository is no longer available. Selected TOXMAP data can be accessed from the following sites: U.S. EPA Toxics Release Inventory (TRI) Program (https://www.epa.gov/toxics-release-inventory-tri-program); U.S. EPA Superfund Program (https://www.epa.gov/superfund); U.S. EPA Facilities Registry System (FRS) (https://www.epa.gov/frs); U.S. EPA Clean Air Markets Program (https://www.epa.gov/airmarkets); U.S. EPA Geospatial Applications (https://www.epa.gov/geospatial/epa-geospatial-applications); U.S. NIH NCI Surveillance, Epidemiology, and End Results Program (SEER) (https://seer.cancer.gov/); Government of Canada National Pollutant Release Inventory (NPRI) (https://www.canada.ca/en/services/environment/pollution-waste-management/national-pollutant-release-inventory.html); U.S. Census Bureau (https://www.census.gov/); U.S. Nuclear Regulatory Commission (NRC) (https://www.nrc.gov/) >>>!!!>>>
Our knowledge of the many life-forms on Earth - of animals, plants, fungi, protists and bacteria - is scattered around the world in books, journals, databases, websites, specimen collections, and in the minds of people everywhere. Imagine what it would mean if this information could be gathered together and made available to everyone – anywhere – at a moment’s notice. This dream is becoming a reality through the Encyclopedia of Life.
<<<!!!<<< This repository is no longer available. The Social Sciences Library of the former Center for Advanced Studies in Social Sciences (CEACS) of the Juan March Institute has been integrated into the Social and Legal Sciences Library of the Carlos III University of Madrid since September 2013. What used to be its collection of monographs and journals can be consulted in the University's catalog. >>>!!!>>>
The China National GeneBank database (CNGBdb) is a unified platform for biological big data sharing and application services. CNGBdb has integrated a large amount of internal and external biological data from resources such as CNGB, NCBI, and EBI. There are several sub-databases in CNGBdb, covering literature, variation, gene, genome, protein, sequence, organism, project, sample, experiment, run, and assembly data. Based on underlying big data and cloud computing technologies, it provides various data services, including archiving, analysis, knowledge search, and management authorization of biological data. CNGBdb adopts the data structures and standards of international omics, health, and medicine initiatives, such as the International Nucleotide Sequence Database Collaboration (INSDC), the Global Alliance for Genomics and Health (GA4GH), the Global Genome Biodiversity Network (GGBN), and the American College of Medical Genetics and Genomics (ACMG), and constructs standardized data and structures with wide compatibility. All public data and services provided by CNGBdb are freely available to all users worldwide. The CNGB Sequence Archive (CNSA) is the bionomics data repository of CNGBdb: a convenient and efficient archiving system for multi-omics data in the life sciences, which provides archiving services for raw sequencing reads and further analyzed results. CNSA follows international data standards for omics data and supports online and batch submission of multiple data types such as Project, Sample, Experiment/Run, Assembly, Variation, Metabolism, Single cell, and Sequence. Moreover, CNSA has achieved the correlation of sample entities, sample information, and analyzed data on some projects. Its data submission service can be used as a supplement to the literature publishing process to support early data sharing.