
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used for searching phrases
  • + represents an AND search (default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms and set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
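For example, a hypothetical query combining several of these operators:

    oceanograph* +("sea level" | tide~1) -model

matches terms beginning with "oceanograph", requires either the exact phrase "sea level" or a term within edit distance 1 of "tide", and excludes results containing "model".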
Found 294 result(s)
IEEE DataPort™ is a universally accessible online data repository created, owned, and supported by IEEE, the world’s largest technical professional organization. It enables all researchers and data owners to upload their datasets at no cost. IEEE DataPort makes data available in three ways: standard datasets, open access datasets, and data competition datasets. By default, all "standard" datasets that are uploaded are accessible to paid IEEE DataPort subscribers. Data owners have the option to pay a fee to make their dataset "open access", so it is available to all IEEE DataPort users (no subscription required). The third option is to host a "data competition" and make a dataset accessible for free for a specific duration, with instructions for the data competition and how to participate. IEEE DataPort provides workflows for uploading data, searching and accessing data, and initiating or participating in data competitions. All datasets are stored on Amazon AWS S3, and each dataset uploaded by an individual can be up to 2 TB in size. Institutional subscriptions are available, making it easy for all members of a given institution to use the platform and upload datasets.
Brainlife promotes engagement and education in reproducible neuroscience. We do this by providing an online platform where users can publish code (Apps) and data, and make them "alive" by integrating various HPC and cloud computing resources to run those Apps. Brainlife also provides mechanisms to publish all research assets associated with a scientific project (data and analyses) embedded in a cloud computing environment and referenced by a single digital object identifier (DOI). The platform is unique in its focus on supporting scientific reproducibility beyond open code and open data, by providing fundamental smart mechanisms for what we refer to as "Open Services."
The German Text Archive (Deutsches Textarchiv, DTA) presents online a selection of key German-language works in various disciplines from the 17th to 19th centuries. The electronic full-texts are indexed linguistically and the search facilities tolerate a range of spelling variants. The DTA presents German-language printed works from around 1650 to 1900 as full text and as digital facsimile. The selection of texts was made on the basis of lexicographical criteria and includes scientific or scholarly texts, texts from everyday life, and literary works. The digitisation was made from the first edition of each work. Using the digital images of these editions, the text was first typed up manually twice (‘double keying’). To represent the structure of the text, the electronic full-text was encoded in conformity with the XML standard TEI P5. Subsequent stages complete the linguistic analysis: the text is tokenised, lemmatised, and annotated for parts of speech. The DTA thus presents a linguistically analysed, historical full-text corpus, available for a range of questions in corpus linguistics. Thanks to the interdisciplinary nature of the DTA corpus, it also offers valuable source texts for neighbouring disciplines in the humanities, and for scientists, legal scholars and economists.
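As a rough illustration of this kind of encoding, the sketch below parses a minimal, hypothetical TEI-P5-style fragment in Python; the <w> element with lemma and pos attributes follows general TEI linguistic-annotation conventions, not the DTA's exact schema:

    # Minimal sketch: extract (surface form, lemma, POS) triples from a
    # hypothetical TEI-P5-style fragment. "gieng" is a historical spelling
    # of "ging"; the lemma attribute normalises it to "gehen".
    import xml.etree.ElementTree as ET

    tei_fragment = """
    <s>
      <w lemma="gehen" pos="VVFIN">gieng</w>
      <w lemma="er" pos="PPER">er</w>
    </s>
    """

    for w in ET.fromstring(tei_fragment).iter("w"):
        print(w.text, w.get("lemma"), w.get("pos"))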
Jülich DATA is a registry service to index all research data created at or in the context of Forschungszentrum Jülich. As an institutional repository, it may also be used for data and software publications.
Established in 1965, the CSD is the world’s repository for small-molecule organic and metal-organic crystal structures. Containing the results of over one million X-ray and neutron diffraction analyses, this unique database of accurate 3D structures has become an essential resource to scientists around the world. The CSD records bibliographic, chemical and crystallographic information for organic molecules and metal-organic compounds whose 3D structures have been determined using X-ray or neutron diffraction. The CSD records the results of single crystal studies and of powder diffraction studies which yield 3D atomic coordinate data for at least all non-H atoms. In some cases the CCDC is unable to obtain coordinates, and incomplete entries are archived to the CSD. The CSD includes crystal structure data arising from publications in the open literature and from private communications to the CSD (via direct data deposition). The CSD contains directly deposited data that are not available anywhere else, known as CSD Communications.
Academic Torrents is a distributed data repository. The academic torrents network is built for researchers, by researchers. Its distributed peer-to-peer library system automatically replicates your datasets on many servers, so you don't have to worry about managing your own servers or file availability. Everyone who has data becomes a mirror for those data so the system is fault-tolerant.
The Infectious Diseases Data Observatory (IDDO) assembles clinical, laboratory and epidemiological data on a collaborative platform to be shared with the research and humanitarian communities. The data are analysed to generate reliable evidence and innovative resources that enable research-driven responses to the major challenges of emerging and neglected infections. Access is available to individual patient data held for malaria and Ebola virus disease. Resources for visceral leishmaniasis, schistosomiasis and soil-transmitted helminths, Chagas disease and COVID-19 are under development. IDDO contains the following repositories: COVID-19 Data Platform, Chagas Data Platform, Schistosomiasis & Soil Transmitted Helminths Data Platform, Visceral Leishmaniasis Data Platform, Ebola Data Platform, and the WorldWide Antimalarial Resistance Network (WWARN).
The Earth System Grid Federation (ESGF) is an international collaboration with a current focus on serving the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project (CMIP) and supporting climate and environmental science in general. Data are searchable and available for download via the federated ESGF nodes: https://esgf.llnl.gov/nodes.html
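As a sketch of programmatic access, the ESGF index nodes expose a search API; the snippet below assumes the publicly documented esg-search endpoint on the LLNL node (endpoint and parameters should be checked against current ESGF documentation) and lists a few CMIP6 dataset identifiers:

    # Query the ESGF search API (assumed endpoint on the LLNL index node)
    # for a handful of CMIP6 dataset records, returned as Solr-style JSON.
    import requests

    resp = requests.get(
        "https://esgf-node.llnl.gov/esg-search/search",
        params={
            "project": "CMIP6",                 # restrict to CMIP6
            "limit": 5,                         # records to return
            "format": "application/solr+json",  # JSON instead of XML
        },
        timeout=30,
    )
    resp.raise_for_status()
    for doc in resp.json()["response"]["docs"]:
        print(doc["id"])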
The JPL Tropical Cyclone Information System (TCIS) was developed to support hurricane research. There are three components to TCIS: a global archive of multi-satellite hurricane observations 1999-2010 (the Tropical Cyclone Data Archive), the North Atlantic Hurricane Watch, and the NASA Convective Processes Experiment (CPEX) aircraft campaign. Together, data and visualizations from the real-time system and the data archive can be used to study hurricane processes, validate and improve models, and assist in developing new algorithms and data assimilation techniques.
A premier source for United States cancer statistics, SEER gathers information on incidence, prevalence, and survival from specific geographic areas representing 28 percent of the population, and compiles reports on national cancer mortality rates. Its aim is to provide information on cancer statistics and to decrease the burden of cancer in the national population. SEER has been collecting data on cancer cases since 1973.
The central data management system of the USGS for water data, providing access to water-resources data collected at approximately 1.5 million sites in all 50 states, the District of Columbia, Puerto Rico, the Virgin Islands, Guam, American Samoa and the Commonwealth of the Northern Mariana Islands. It includes data on water use and quality, groundwater, and surface water.
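Much of this can be retrieved programmatically through the USGS water services REST API. A minimal sketch (site 01646500 is the Potomac River gauge commonly used in USGS examples; parameter code 00060 is discharge):

    # Fetch the latest discharge reading for one site from the USGS
    # instantaneous-values service (WaterML-style JSON).
    import requests

    resp = requests.get(
        "https://waterservices.usgs.gov/nwis/iv/",
        params={
            "format": "json",
            "sites": "01646500",     # Potomac River near Washington, D.C.
            "parameterCd": "00060",  # discharge, cubic feet per second
        },
        timeout=30,
    )
    resp.raise_for_status()
    series = resp.json()["value"]["timeSeries"][0]
    latest = series["values"][0]["value"][0]
    print(series["sourceInfo"]["siteName"], latest["value"], "cfs")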
The Australian Data Archive (ADA) provides a national service for the collection and preservation of digital research data and for making these data available for secondary analysis by academic researchers and other users. Data are stored in seven sub-archives: Social Science, Historical, Indigenous, Longitudinal, Qualitative, Crime & Justice, and International. Along with Australian data, ADA International is also a repository for studies by Australian researchers conducted in other countries, particularly throughout the Asia-Pacific region. The ADA International data catalogue includes links to studies from countries including New Zealand, Bangladesh, Cambodia, China, Indonesia, and several others. In 2017 the archive systems moved from the existing Nesstar platform to the new ADA Dataverse platform: https://dataverse.ada.edu.au/
Note: The IGETS data base at GFZ Potsdam (http://www.re3data.org/repository/r3d100010300) continues the activities of the International Center for Earth Tides (ICET), in particular in collecting, archiving and distributing Earth tide records from long series of gravimeters, tiltmeters, strainmeters and other geodynamic sensors. The ICET Data Bank contains results from 360 tidal gravity stations: hourly values, main tidal waves obtained by least-squares analyses, residual vectors, and oceanic attraction and loading vectors. The Data Bank also contains data from tiltmeters and extensometers. ICET is responsible for the Information System and Data Center of the Global Geodynamic Project (GGP). The tasks ascribed to ICET are: to collect all available measurements of Earth tides (its task as World Data Centre C); to evaluate these data by convenient methods of analysis, in order to reduce the very large amount of measurements to a limited number of parameters containing all the desired and needed geophysical information; to compare the data from different instruments and different stations distributed all over the world, evaluating their precision and accuracy in terms of internal as well as external errors; to help solve the basic problem of calibration, and to organize reference stations or build reference calibration devices; to fill gaps in information or data as far as feasible; to build a data bank allowing immediate and easy comparison of Earth tide parameters with different Earth models and other geodetic and geophysical parameters such as geographical position, Bouguer anomaly, crustal thickness and age, heat flow, etc.; and to ensure a broad diffusion of the results and information to all interested laboratories and individual scientists.
The Harvard Dataverse is open to all scientific data from all disciplines worldwide. It includes the world's largest collection of social science research data, and hosts data for projects, archives, researchers, journals, organizations, and institutions.
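Datasets can be discovered programmatically via the public Dataverse Search API; a minimal sketch (the query term is purely illustrative):

    # Search the Harvard Dataverse for datasets matching a keyword,
    # using the Dataverse Search API.
    import requests

    resp = requests.get(
        "https://dataverse.harvard.edu/api/search",
        params={"q": "social science", "type": "dataset", "per_page": 5},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["data"]["items"]:
        print(item["name"], "-", item.get("global_id", ""))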
DATA.NASA.GOV is NASA's clearinghouse site for open data provided to the public. Tens of thousands of datasets are available. The site is a continually growing catalog of publicly available NASA datasets, APIs, visualizations, and more.
OpenWorm aims to build the first comprehensive computational model of Caenorhabditis elegans (C. elegans), a microscopic roundworm. With only a thousand cells, it solves basic problems such as feeding, mate-finding and predator avoidance. Despite being extremely well studied, this organism still eludes a deep, principled understanding of its biology. We are using a bottom-up approach, aimed at observing worm behaviour emerge from a simulation of data derived from scientific experiments carried out over the past decade. To do so we are incorporating the data available in the scientific community into software models. We are engineering Geppetto and Sibernetic, open-source simulation platforms, to be able to run these different models in concert. We are also forging new collaborations with universities and research institutes to collect data that fill in the gaps. All the code we produce in the OpenWorm project is open source and available on GitHub.
The Linguistic Data Consortium (LDC) is an open consortium of universities, libraries, corporations and government research laboratories. It was formed in 1992 to address the critical data shortage then facing language technology research and development. Initially, LDC's primary role was as a repository and distribution point for language resources. Since that time, and with the help of its members, LDC has grown into an organization that creates and distributes a wide array of language resources. LDC also supports sponsored research programs and language-based technology evaluations by providing resources and contributing organizational expertise. LDC is hosted by the University of Pennsylvania and is a center within the University’s School of Arts and Sciences.
Note: This repository is no longer available. BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data, offering the possibility either to use ready-made workflows or to create one's own. BioVeL workflows are stored in the myExperiment BioVeL group: http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. The Web Services are catalogued in the BiodiversityCatalogue.
The Sol Genomics Network (SGN) is a clade-oriented database dedicated to the biology of the Solanaceae family, which includes a large number of closely related species, many of them agronomically important, such as tomato, potato, tobacco, eggplant, pepper, and the ornamental Petunia hybrida. SGN is part of the International Solanaceae Initiative (SOL), which has the long-term goal of creating a network of resources and information to address key questions in plant adaptation and diversification.
Reptiles and amphibians are collectively known as herpetofauna and are a unique part of Ontario’s biodiversity. An earlier atlas, the Ontario Herpetofaunal Summary Atlas, provided extensive information about where many of the province’s reptiles and amphibians occurred. The Atlas is transitioning into a new era, with Ontario Nature wrapping up the data collection phase of this project as of December 1, 2019. Now that we have discontinued our app and online form, we encourage you to continue submitting any future observations through the ‘Herps of Ontario’ project (https://www.inaturalist.org/projects/herps-of-ontario) on iNaturalist, or directly to the Natural Heritage Information Centre (nhicrequests@ontario.ca) for species at risk. To learn more about the transition, read our blog (https://ontarionature.org/ontario-reptile-and-amphibian-atlas-changes/).
Notice (26.08.2020): The NCI CBIIT instance of the CGAP no longer exists on this website. The Mitelman Database of Chromosome Aberrations and Gene Fusions in Cancer has a new home at the NCI-funded Institute for Systems Biology Cancer Genomics Cloud, available at: https://mitelmandatabase.isb-cgc.org
TreeGenes is a genomic, phenotypic, and environmental data resource for forest tree species. The TreeGenes database and Dendrome project provide custom informatics tools to manage the flood of information. The database contains several curated modules that support the storage of data and provide the foundation for web-based searches and visualization tools. GMOD GUI tools, such as CMAP for genetic maps and GBrowse for genome and transcriptome assemblies, are implemented here. A sample tracking system, known as the Forest Tree Genetic Stock Center, sits at the forefront of most large-scale projects. Barcode identifiers assigned to the trees during sample collection are maintained in the database to identify an individual through DNA extraction, resequencing, genotyping and phenotyping. DiversiTree, a user-friendly desktop-style interface, queries the TreeGenes database and is designed for bulk retrieval of resequencing data. CartograTree combines geo-referenced individuals with relevant ecological and trait databases in a user-friendly map-based interface. The Conifer Genome Network (CGN) is a virtual nexus for researchers working in conifer genomics. The CGN web site is maintained by the Dendrome Project at the University of California, Davis.