  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
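Combining these operators, a few illustrative queries (the search terms themselves are hypothetical examples, not taken from this page) might look like:

```text
climate*                      wildcard: climate, climatology, climatic, ...
"research data"               exact phrase match
ecology +marine               both terms required (AND, the default)
ecology | biology             either term (OR)
genome -virus                 genome, excluding results containing virus
(ecology | biology) +data     parentheses group the OR before the AND
genom~1                       fuzzy term match within edit distance 1
"data repository"~2           phrase match allowing a slop of 2 words
```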
Found 26 result(s)
The project brings together national key players providing environmentally related biological data and services to develop the German Federation for Biological Data (GFBio). The overall goal is to provide a sustainable, service-oriented, national data infrastructure that facilitates data sharing and stimulates data-intensive science in the fields of biological and environmental research.
The Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC) is a team of researchers, data specialists and computer system developers who are supporting the development of a data management system to store scientific data generated by Gulf of Mexico researchers. The Master Research Agreement between BP and the Gulf of Mexico Alliance that established the Gulf of Mexico Research Initiative (GoMRI) included provisions that all data collected or generated through the agreement must be made available to the public. GRIIDC is the vehicle through which GoMRI fulfills this requirement. The mission of GRIIDC is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico ecosystem.
EMPIAR, the Electron Microscopy Public Image Archive, is a public resource for raw, 2D electron microscopy images. Here, you can browse, upload, download and reprocess the thousands of raw, 2D images used to build a 3D structure. The purpose of EMPIAR is to provide easy access to state-of-the-art raw data to facilitate methods development and validation, which will lead to better 3D structures. It complements the Electron Microscopy Data Bank (EMDB), where 3D images are stored, and uses the fault-tolerant Aspera platform for data transfers.
The UC San Diego Library Digital Collections website gathers two categories of content managed by the Library: library collections (including digitized versions of selected collections covering topics such as art, film, music, history and anthropology) and research data collections (including research data generated by UC San Diego researchers).
Open access to macromolecular X-ray diffraction and MicroED datasets. The repository complements the Worldwide Protein Data Bank. SBDG also hosts a reference collection of biomedical datasets contributed by members of SBGrid, Harvard, and pilot communities.
Research Data Online (RDO) provides access to research datasets held at the University of Western Australia. RDO is managed by the University Library. The information about each dataset has been provided by UWA research groups. Information about the datasets in this service is automatically harvested into Research Data Australia (RDA: https://researchdata.ands.org.au/).
The ETH Data Archive is ETH Zurich's institutional digital long-term archive. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit file-based research data from all domains. In particular, supplementary material to publications is deposited and published here. Research data includes raw data, processed data, software code and other data considered relevant to ensure reproducibility of research results or to facilitate re-use for new research questions. The ETH Data Archive contains both public research data with DOIs and data with restricted access. Beyond this, born-digital and digitized documents and other data from libraries, collections and archives are preserved in the ETH Data Archive, usually in the form of a dark archive without public access. You can find open-access data by searching the Knowledge Portal, narrowing your search either to the Resource Type "Research Data" or to the Collection "ETH Data Archive".
The Environmental Data Initiative Repository concentrates on studies of ecological processes that play out at time scales spanning decades to centuries including those of the NSF Long Term Ecological Research (LTER) program, the NSF Macrosystems Biology Program, the NSF Long Term Research in Environmental Biology (LTREB) program, the Organization of Biological Field Stations, and others. The repository hosts data that provide a context to evaluate the nature and pace of ecological change, to interpret its effects, and to forecast the range of future biological responses to change.
The KNB Data Repository is an international repository intended to facilitate ecological, environmental and earth science research in the broadest sense. For scientists, the KNB Data Repository is an efficient way to share, discover, access and interpret complex ecological, environmental, earth science, and sociological data and the software used to create and manage those data. Due to the rich contextual information provided with data in the KNB, scientists are able to integrate and analyze data with less effort. The data originate from a highly distributed set of field stations, laboratories, research sites, and individual researchers. The KNB supports rich, detailed metadata to promote data discovery as well as automated and manual integration of data into new projects. The KNB supports a rich set of modern repository services, including the ability to assign Digital Object Identifiers (DOIs) so data sets can be confidently referenced in any publication, the ability to track the versions of datasets as they evolve through time, and metadata to establish the provenance relationships between source and derived data.
The Protein Data Bank (PDB) archive is the single worldwide repository of information about the 3D structures of large biological molecules, including proteins and nucleic acids. These are the molecules of life that are found in all organisms including bacteria, yeast, plants, flies, other animals, and humans. Understanding the shape of a molecule helps to understand how it works. This knowledge can be used to help deduce a structure's role in human health and disease, and in drug development. The structures in the archive range from tiny proteins and bits of DNA to complex molecular machines like the ribosome.
Edmond is the institutional repository of the Max Planck Society for public research data. It enables Max Planck scientists to create citable scientific assets by describing, enriching, sharing, exposing, linking, publishing and archiving research data of all kinds. A unique feature of Edmond is its dedicated metadata management, which supports a non-restrictive metadata schema definition, as simple as you like or as complex as your parameters require. Furthermore, all objects within Edmond have a unique identifier and can therefore be clearly referenced in publications or reused in other contexts.
Synapse is an open source software platform that clinical and biological data scientists can use to carry out, track, and communicate their research in real time. Synapse enables co-location of scientific content (data, code, results) and narrative descriptions of that work.
The CyberCell database (CCDB) is a comprehensive collection of detailed enzymatic, biological, chemical, genetic, and molecular biological data about E. coli (strain K12, MG1655). It is intended to provide sufficient information and querying capacity for biologists and computer scientists to use computers or detailed mathematical models to simulate all or part of a bacterial cell at the nanoscopic (10⁻⁹ m), mesoscopic (10⁻⁸ m), and microscopic (10⁻⁶ m) levels. The CCDB actually consists of four browsable databases: 1) the main CyberCell database (CCDB, containing gene and protein information), 2) the 3D structure database (CC3D, containing information for structural proteomics), 3) the RNA database (CCRD, containing tRNA and rRNA information), and 4) the metabolite database (CCMD, containing metabolite information). Each of these databases is accessible through hyperlinked buttons located at the top of the CCDB homepage. All CCDB sub-databases are fully web enabled, permitting a wide variety of interactive browsing, search and display operations.
GallusReactome is a free, online, open-source, curated resource of core pathways and reactions in chicken biology. Information is authored by expert biological researchers, maintained by the GallusReactome editorial staff and cross-referenced to the NCBI Entrez Gene, Ensembl and UniProt databases, the KEGG and ChEBI small molecule databases, PubMed, and the Gene Ontology (GO).
Academic Commons is a freely accessible digital collection of research and scholarship produced at Columbia University or one of its affiliate institutions (Barnard College, Teachers College, Union Theological Seminary, and Jewish Theological Seminary). The mission of Academic Commons is to collect and preserve the digital outputs of research and scholarship produced at Columbia and its affiliate institutions and present them to a global audience. Academic Commons accepts articles, dissertations, research data, presentations, working papers, videos, and more.
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist's workflow and help increase the alignment between scientific values and scientific practices.
  • Document and archive studies: move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials to computer malfunction, changing personnel, or simple forgetfulness.
  • Share and find materials: with a click, make study materials public so that other researchers can find, use and cite them, and find materials created by other researchers to avoid reinventing something that already exists.
  • Detail individual contributions: assign citable contributor credit for any research material, including tools, analysis scripts, methods, measures, and data.
  • Increase transparency: make as much of the scientific workflow public as desired, either as it is developed or after publication of reports.
  • Registration: registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points in its lifecycle, such as manuscript submission or the onset of data collection.
  • Manage scientific workflow: a structured, flexible system can provide efficiency gains and clarity to project objectives.
The Australian Drosophila Ecology and Evolution Resource (ADEER) from the Hoffmann lab and other contributors is a nationally significant life science collection. The Drosophila Clinal Data Collection contains data on populations along the eastern coast of Australia. It remains an excellent resource for understanding past and future evolutionary responses to climate change. The Drosophila Genomic Data Collection hosts Drosophila genomes sequenced as part of the Genomic Basis for Adaptation to Climate Change Project. Twenty-three genomes have been sequenced as part of this project. Currently, assemblies and annotations are available for Drosophila birchii, D. bunnanda, D. hydei, and D. repleta. The Drosophila Species Distribution Data Collection contains distribution data on nine drosophilid species collected in Australia by the Hoffmann lab and other research groups between 1924 and 2005. More than 300 drosophilid species have been identified in the tropical and temperate forests located on the east coast of Australia. Many species are restricted to the tropics, a few are temperate specialists, and some have broad distributions across climatic regions. Their varied distribution along the tropical–temperate cline provides a powerful tool for studying climate adaptation and species distribution limits.
The Melanoma Molecular Map Project (MMMP) is an open-access, interactive, web-based multi-database dedicated to research on melanoma biology and therapy. The aim of this non-profit project is to create an organized and continuously updated databank collecting the huge and ever-growing amount of scientific knowledge on melanoma currently scattered across thousands of articles published in hundreds of journals.
The FAIRDOMHub is built upon the SEEK software suite, an open-source web platform for sharing scientific research assets, processes and outcomes. FAIRDOM will establish a support and service network for European systems biology, serving projects in standardizing, managing and disseminating data and models in a FAIR manner: Findable, Accessible, Interoperable and Reusable. FAIRDOM is an initiative to develop a community and establish an internationally sustained data and model management service for the European systems biology community. FAIRDOM is a joint action of the ERA-Net EraSysAPP and the European Research Infrastructure ISBE.
openLandscapes is an open access information portal for landscape research. Amongst other things, the platform provides information about current research projects. In addition, it offers the scientific community the possibility to maintain a Wiki on landscape-related contents and to make available future primary data from landscape research. In openLandscapes, all technical contents are stored and organised in a networked manner, enabling technical terms to be linked to experts or institutions, as well as to data in the future.
GigaDB primarily serves as a repository to host data and tools associated with articles in GigaScience (GigaScience is an online, open-access journal). GigaDB defines a dataset as a group of files (e.g., sequencing data, analyses, imaging files, software programs) that are related to and support an article or study. GigaDB allows the integration of manuscript publication with supporting data and tools.