Found 22 result(s)
The Ozone Mapping and Profiler Suite (OMPS) measures the ozone layer in our upper atmosphere, tracking the status of global ozone distributions, including the ‘ozone hole.’ It also monitors ozone levels in the troposphere, the lowest layer of our atmosphere. OMPS extends the 40-year-long record of ozone layer measurements while also providing improved vertical resolution compared to previous operational instruments. Closer to the ground, OMPS’s measurements of harmful ozone improve air quality monitoring and, when combined with cloud predictions, help to create the Ultraviolet Index, a guide to safe levels of sunlight exposure. OMPS has two sensors, both new designs, composed of three advanced hyperspectral imaging spectrometers: a downward-looking nadir mapper, a nadir profiler and a limb profiler. The entire OMPS suite currently flies on board the Suomi NPP spacecraft and is scheduled to fly on the JPSS-2 satellite mission. NASA will provide the OMPS-Limb profiler.
The UC San Diego Library Digital Collections website gathers two categories of content managed by the Library: library collections (including digitized versions of selected collections covering topics such as art, film, music, history and anthropology) and research data collections (including research data generated by UC San Diego researchers).
The Research Collection is ETH Zurich's publication platform. It unites the functions of a university bibliography, an open access repository and a research data repository within one platform. Researchers who are affiliated with ETH Zurich, the Swiss Federal Institute of Technology, may deposit research data from all domains. They can publish data as a standalone publication, publish it as supplementary material for an article, dissertation or another text, share it with colleagues or a research group, or deposit it for archiving purposes. Research-data-specific features include flexible access rights settings, DOI registration and a DOI preview workflow, content previews for ZIP and TAR containers, as well as download statistics and altmetrics for published data. All data uploaded to the Research Collection are also transferred to the ETH Data Archive, ETH Zurich’s long-term archive.
Welcome to the largest bibliographic database dedicated to Economics and available freely on the Internet. This site is part of a large volunteer effort to enhance the free dissemination of research in Economics, RePEc, which includes bibliographic metadata from over 1,800 participating archives, including all the major publishers and research outlets. IDEAS is just one of several services that use RePEc data. Authors are invited to register with RePEc to create an online profile. Then, anyone finding some of your research here can find your latest contact details and a listing of your other research. You will also receive a monthly mailing about the popularity of your works, your ranking and newly found citations. In addition, IDEAS provides software and publicly accessible data from the Federal Reserve Bank.
ETH Data Archive is ETH Zurich's long-term preservation solution for digital information such as research data, documents or images. It serves as the backbone of data curation and, for most of its content, is a “dark archive” without public access. In this capacity, the ETH Data Archive also archives the content of ETH Zurich’s Research Collection, which is the primary repository for members of the university and the first point of contact for publication of data at ETH Zurich. All data produced in the context of research at ETH Zurich can be published and archived in the Research Collection. In the following cases, however, a direct data upload into the ETH Data Archive should be considered:
- Upload and registration of software code according to ETH transfer’s requirements for Software Disclosure.
- A substantial number of files has to be submitted regularly for long-term archiving and/or publishing and browser-based upload is not an option: the ETH Data Archive may offer automated data and metadata transfers from source applications (e.g. from a LIMS) via API.
- Files for a project have to be collected on a local computer and metadata has to be added before uploading the data to the ETH Data Archive: for this, the local file editor docuteam packer is provided. Docuteam packer allows depositors to structure, describe and organise data for an upload into the ETH Data Archive, and the depositor decides when submission is due.
>>>!!!<<< 2018-01-18: neither data nor programs can be found >>>!!!<<< These archives contain public domain programs for calculations in physics, as well as other programs that we expect to be helpful for work with computers. Physical constants and experimental or theoretical data, such as cross sections, rate constants, swarm parameters, etc., that are necessary for physical calculations are stored here, too. The programs are mainly intended for IBM PC-compatible computers. Programs that do not use graphics units can also be run on other computers; in the other cases, the graphics parts of the programs have to be reprogrammed.
The Information Marketplace for Policy and Analysis of Cyber-risk & Trust (IMPACT) program supports global cyber risk research & development by coordinating, enhancing and developing real world data, analytics and information sharing capabilities, tools, models, and methodologies. In order to accelerate solutions around cyber risk issues and infrastructure security, IMPACT makes these data sharing components broadly available as national and international resources to support the three-way partnership among cyber security researchers, technology developers and policymakers in academia, industry and the government.
CaltechDATA is an institutional data repository for Caltech. The Caltech Library runs the repository to preserve the accomplishments of Caltech researchers and share their results with the world. Caltech-associated researchers can upload data, link data with their publications, and assign a permanent DOI so that others can reference the data set. The repository also preserves software and has automatic GitHub integration. All files present in the repository are open access or embargoed, and all metadata is always available to the public.
BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL supports the development and use of workflows to process data. It offers the possibility either to use ready-made workflows or to create your own. BioVeL workflows are stored in the myExperiment BioVeL group: http://www.myexperiment.org/groups/643/content. They are underpinned by a range of analytical and data processing functions (generally provided as Web Services or R scripts) to support common biodiversity analysis tasks. You can find the Web Services catalogued in the BiodiversityCatalogue.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated into today’s most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
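To give a concrete sense of what programmatic access to the platform can look like, here is a minimal sketch using the openml Python client. This is an illustration only: the description above names R, KNIME, RapidMiner and WEKA as the current integrations, the Python client is an additional assumption, and the dataset ID used is an arbitrary example.

    # Minimal sketch, assuming the `openml` Python client is installed (pip install openml).
    # Dataset ID 61 is used purely as an illustrative example identifier.
    import openml

    # Download the dataset description and data from the OpenML platform.
    dataset = openml.datasets.get_dataset(61)

    # Split the table into features, target and column metadata as declared on OpenML.
    X, y, categorical, attribute_names = dataset.get_data(
        target=dataset.default_target_attribute
    )

    print(dataset.name, X.shape, len(attribute_names))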
RunMyCode is a novel cloud-based platform that enables scientists to openly share the code and data that underlie their research publications. The web service only requires a web browser as all calculations are done on a dedicated cloud computer. Once the results are ready, they are automatically displayed to the user.
During the cell cycle, numerous proteins are temporally and spatially localized in distinct sub-cellular regions, including the centrosome (spindle pole in budding yeast), kinetochore/centromere, cleavage furrow/midbody (the related or homologous structures in plants and budding yeast are called the phragmoplast and the bud neck, respectively), telomere and spindle. These sub-cellular regions play important roles in various biological processes. In this work, we have collected all proteins identified as localized on the kinetochore, centrosome, midbody, telomere and spindle from two fungi (S. cerevisiae and S. pombe) and five animals (C. elegans, D. melanogaster, X. laevis, M. musculus and H. sapiens), based on the rationale of "Seeing is believing" (Bloom K et al., 2005). Through ortholog searches, the proteins potentially localized at these sub-cellular regions were detected in 144 eukaryotes. The integrated and searchable database MiCroKiTS (Midbody, Centrosome, Kinetochore, Telomere and Spindle) has then been established.
Interface to Los Alamos Atomic Physics Codes is your gateway to the set of atomic physics codes developed at the Los Alamos National Laboratory. The well-known Hartree-Fock method of R.D. Cowan, developed at the Los Alamos National Laboratory, is used for the atomic structure calculations. Electron impact excitation cross sections are calculated using either the distorted wave approximation (DWA) or the first order many body theory (FOMBT). Electron impact ionization cross sections can be calculated using the scaled hydrogenic method developed by Sampson and co-workers, the binary encounter method or the distorted wave method. Photoionization cross sections and, where appropriate, autoionizations are also calculated.
The Federated Research Data Repository (FRDR), or Dépôt fédéré de données de recherche (DFDR), is a place for Canadian researchers to deposit and share research data and to facilitate discovery of research data in Canadian repositories.
ResearchGate is a network where 15+ million scientists and researchers worldwide connect to share their work. Researchers can upload data of any type and receive DOIs, detailed statistics and real-time feedback. In the Data discovery section of ResearchGate you can explore the added datasets.
OpenKIM is an online suite of open source tools for molecular simulation of materials. These tools help to make molecular simulation more accessible and more reliable. Within OpenKIM, you will find an online resource for standardized testing and long-term warehousing of interatomic models and data, and an application programming interface (API) standard for coupling atomistic simulation codes and interatomic potential subroutines.
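As a purely illustrative sketch of what coupling a simulation code to an OpenKIM interatomic model through the KIM API can look like, the snippet below uses the ASE calculator interface. The ASE/kim-api/kimpy setup is an assumption, and the model identifier shown is a hypothetical placeholder to be replaced with the full KIM model ID of an installed model.

    # Minimal sketch, assuming ASE, the kim-api and the kimpy bindings are installed.
    # "EXAMPLE_KIM_MODEL_ID" is a hypothetical placeholder; substitute the full
    # identifier of an interatomic model archived on OpenKIM.
    from ase.build import bulk
    from ase.calculators.kim import KIM

    atoms = bulk("Ar", "fcc", a=5.26)          # small test crystal
    atoms.calc = KIM("EXAMPLE_KIM_MODEL_ID")   # attach the KIM model as a calculator
    print(atoms.get_potential_energy())        # energy as predicted by that model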
For datasets big and small: store your research data online. Quickly and easily upload files of any type and we will host your research data for you. Your experimental research data will have a permanent home on the web that you can refer to.
Ag Data Commons provides access to a wide variety of open data relevant to agricultural research. We are a centralized repository for data already on the web, as well as for new data being published for the first time. While compliance with the U.S. Federal public access and open data directives is important, we aim to surpass them. Our goal is to foster innovative data re-use, integration, and visualization to support bigger, better science and policy.