Search tips:
  • * at the end of a keyword allows wildcard searches
  • " quotes can be used to search for phrases
  • + represents an AND search (the default)
  • | represents an OR search
  • - represents a NOT operation
  • ( and ) group terms to set precedence
  • ~N after a word specifies the desired edit distance (fuzziness)
  • ~N after a phrase specifies the desired slop amount
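As an illustration, the tips above combine as in the hypothetical queries below. The grammar mirrors Lucene/Elasticsearch query-string conventions; the exact parser behavior is an assumption, and all search terms are invented.

```python
# Hypothetical queries illustrating the search syntax above.
# All terms are invented; the Lucene-style grammar is an assumption.
queries = [
    'climat*',                    # wildcard: climate, climatology, ...
    '"research data"',            # exact phrase
    'marine + biology',           # AND (the default between terms)
    'geology | geophysics',       # OR
    'data - software',            # NOT: "data" but not "software"
    '(ocean | marine) + sensor',  # parentheses set precedence
    'genomcs~1',                  # fuzzy: within edit distance 1 of "genomics"
    '"star formation"~2',         # phrase with slop 2
]
for q in queries:
    print(q)
```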
Found 165 result(s)
The Flanders Marine Institute (VLIZ) is a centre for marine and coastal research. As a partner in various projects and networks, it promotes and supports the international image of Flemish marine scientific research and international marine education. In its capacity as a coordination and information platform, VLIZ supports roughly a thousand marine scientists in Flanders by disseminating their knowledge to policymakers, educators, the general public and scientists.
The COordinated Molecular Probe Line Extinction Thermal Emission Survey of Star Forming Regions (COMPLETE) provides a range of data complementary to the Spitzer Legacy Program "From Molecular Cores to Planet Forming Disks" (c2d) for the Perseus, Ophiuchus and Serpens regions. In combination with the Spitzer observations, COMPLETE will allow for detailed analysis and understanding of the physics of star formation on scales from 500 A.U. to 10 pc.
The NDEx Project provides an open-source framework where scientists and organizations can share, store, manipulate, and publish biological network knowledge. The NDEx Project maintains a free, public website; alternatively, users can also decide to run their own copies of the NDEx Server software in cases where the stored networks must be kept in a highly secure environment (such as for HIPAA compliance) or where high application load is incompatible with a shared public resource.
RiuNet is intended to preserve the University community's output, personal or institutional, in collections. These can comprise different types of documents, such as learning objects (Polimedia, virtual labs and educational articles), theses, journal articles, maps, scholarly works, creative works, institutional heritage, multimedia, teaching material, institutional production, electronic journals, conference proceedings and research data.
TUdatalib is the institutional repository of TU Darmstadt for research data. It enables structured storage of research data and descriptive metadata, long-term archiving (at least 10 years) and, if desired, publication of data including DOI assignment. In addition, it offers fine-grained rights and role management.
UNC Dataverse is an open-source repository software application for archiving, sharing, and accessing research data of all kinds. Each dataverse within the larger repository contains a multitude of datasets, and each dataset contains descriptive metadata and data files. UNC Dataverse is hosted by the Odum Institute for Research in Social Science.
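As a sketch of how such a repository can be queried programmatically, the snippet below calls the Dataverse native Search API. The hostname and search terms are assumptions for illustration, not details taken from the description above.

```python
# Minimal sketch: search a Dataverse instance via its native Search API.
# Hostname and query terms are illustrative assumptions.
import requests

BASE = "https://dataverse.unc.edu"  # assumed UNC Dataverse host

resp = requests.get(
    f"{BASE}/api/search",
    params={"q": "social science", "type": "dataset"},
)
resp.raise_for_status()

for item in resp.json()["data"]["items"]:
    print(item.get("global_id"), "-", item.get("name"))
```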
Publication of scans of photographic plates from the so-called "Potsdam zone" of the Carte du Ciel project (32 deg to 39 deg). A total of 977 plates covering 2-square-degree sky regions were observed and recorded between 1913 and 1924. Because Potsdam Observatory ended its participation in the Carte du Ciel project, these observations had so far not been analysed or published. The plates were scanned on a flatbed scanner in 2007-2008. Limitations in astrometric precision are to be expected, and specific observational constraints apply, such as multiple exposures on certain plates (see literature).
The International Network of Nuclear Reaction Data Centres (NRDC) constitutes a worldwide cooperation of nuclear data centres under the auspices of the International Atomic Energy Agency. The Network was established to coordinate the worldwide collection, compilation and dissemination of nuclear reaction data.
The eAtlas is a website, mapping system and set of data visualisation tools for presenting research data in an accessible form that promotes greater use of this information. The eAtlas will serve as the primary data and knowledge repository for all NERP Tropical Ecosystems Hub projects, which focus on the Great Barrier Reef, Wet Tropics rainforest and Torres Strait. The eAtlas will capture and record research outcomes and make them available to research users in a timely, readily accessible manner. It will host metadata records and provide an enduring repository for raw data. It will also develop and host web visualisations to view information using a simple and intuitive interface. This will assist scientists with data discovery and allow environmental managers to access and investigate research data.
The Leicester Database and Archive Service (LEDAS) is an easy-to-use online astronomical database and archive access service, dealing mainly with data from high-energy astrophysics missions but also providing full database functionality for over 200 astronomical catalogues from ground-based observations and space missions. LEDAS also allows access to images, spectra and light curves in graphics, HDS and FITS formats, as well as access to raw and processed event data. LEDAS provides the primary means of access for the UK astronomical community to the ROSAT Public Data Archive, the ASCA Public Data Archive and the Ginga Products Archive through its Archive Network Interface, ARNIE.
Knoema is a knowledge platform. The basic idea is to connect data with analytical and presentation tools, resulting in one unified platform for users to access, present and share data-driven content. Within Knoema, we capture most aspects of a typical data use cycle: accessing data from multiple sources, bringing relevant indicators into a common space, visualizing figures, applying analytical functions, creating a set of dashboards, and presenting the outcome.
SimTK is a free project-hosting platform for the biomedical computation community that enables researchers to easily share their software, data, and models and provides the infrastructure so they can support and grow a community around their projects. It has over 126,656 members, hosts 1,648 projects from researchers around the world, and has had more than 2,095,783 files downloaded from it. Individuals have created SimTK projects to meet publishers' and funding agencies' software- and data-sharing requirements, run scientific challenges, create a collection of their community's resources, and much more.
CLARIN-UK is a consortium of centres of expertise involved in research and resource creation involving digital language data and tools. The consortium includes the national library, and academic departments and university centres in linguistics, languages, literature and computer science.
The Energy Data eXchange (EDX) is an online collection of capabilities and resources that advance energy-related research and support customized needs. EDX is developed and maintained by NETL-RIC researchers and technical computing teams to support private collaboration for ongoing research efforts and tech transfer of finalized DOE NETL research products. EDX supports NETL-affiliated research by coordinating historical and current data and information from a wide variety of sources to facilitate access to research that crosscuts multiple NETL projects/programs; providing external access to technical products and data published by NETL-affiliated research teams; and enabling collaboration with a variety of organizations and institutions in a secure environment through EDX's Collaborative Workspaces.
IRIS offers free and open access to a comprehensive data store of raw geophysical time-series data collected from a large variety of sensors, including seismometers (permanent and temporary), tilt and strain meters, infrasound sensors, temperature and atmospheric pressure sensors, and gravimeters, courtesy of a vast array of US and international scientific networks, to support basic research aimed at imaging the Earth's interior.
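As an illustration of programmatic access, one common route to IRIS waveform data is an FDSN web-service client such as ObsPy's; the network, station and time window below are illustrative choices, not details from the description above.

```python
# Sketch: fetch a short waveform window from IRIS via ObsPy's FDSN client
# (pip install obspy). Station codes and times are illustrative.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t = UTCDateTime("2010-02-27T06:45:00")  # illustrative start time
st = client.get_waveforms("IU", "ANMO", "00", "BHZ", t, t + 600)
print(st)  # one Stream holding the requested Trace(s)
```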
The GES DISC is one of twelve NASA Science Mission Directorate (SMD) data centers that provide Earth science data, information, and services to research scientists, applications scientists, applications users, and students. The GES DISC is the home (archive) of NASA Precipitation and Hydrology, as well as Atmospheric Composition and Dynamics, remote sensing data and information. The DISC also houses the Modern Era Retrospective-Analysis for Research and Applications (MERRA) data assimilation datasets (generated by GSFC's Global Modeling and Assimilation Office), and the North American Land Data Assimilation System (NLDAS) and Global Land Data Assimilation System (GLDAS) data products (both generated by GSFC's Hydrological Sciences Branch).
The GRSF, the Global Record of Stocks and Fisheries, integrates data from three authoritative sources: FIRMS (Fisheries and Resources Monitoring System), RAM (RAM Legacy Stock Assessment Database) and FishSource (a program of the Sustainable Fisheries Partnership). The GRSF content publicly disseminated through this catalogue is distributed as a beta version to test the logic for generating unique identifiers for stocks and fisheries. Access to and review of collated stock and fishery data is restricted to selected users. This beta release can contain errors, and feedback is welcome on content, software performance, and overall usability. Beta users are advised that information on this site is provided on an "as is" and "as available" basis. The accuracy, completeness or authenticity of the information in the GRSF catalogue is not guaranteed. The GRSF reserves the right to alter, limit or discontinue any part of this service at its discretion. Under no circumstances shall the GRSF be liable for any loss, damage, liability or expense suffered that is claimed to result from the use of information posted on this site, including, without limitation, any fault, error, omission, interruption or delay. The GRSF is an active database; updates and additions will continue after the beta release. For further information, or to use the GRSF unique identifiers as a beta tester, please contact FIRMS-Secretariat@fao.org.
I2D (Interologous Interaction Database) is an online database of known and predicted mammalian and eukaryotic protein-protein interactions (PPIs). It has been built by mapping high-throughput (HTP) data between species; thus, until experimentally verified, these interactions should be considered "predictions". It remains one of the most comprehensive sources of known and predicted eukaryotic PPIs. I2D includes data for S. cerevisiae, C. elegans, D. melanogaster, R. norvegicus, M. musculus, and H. sapiens.
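The cross-species mapping I2D describes can be sketched in a few lines: a known interaction in a source species is projected onto a target species wherever both partners have orthologs. The identifiers and ortholog table below are invented for illustration.

```python
# Toy sketch of interolog mapping: project known protein-protein
# interactions from a source species onto a target species via orthologs.
# All identifiers below are invented for illustration.
known_interactions = {("p1", "p2"), ("p3", "p4")}  # source-species PPIs
orthologs = {"p1": "Q1", "p2": "Q2", "p3": "Q3"}   # source -> target ortholog

predicted = {
    (orthologs[a], orthologs[b])
    for a, b in known_interactions
    if a in orthologs and b in orthologs  # both partners need an ortholog
}
print(predicted)  # {('Q1', 'Q2')} -- ('p3', 'p4') is dropped: p4 has none
```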
The US Department of Energy's Atmospheric Radiation Measurement (ARM) Data Center is a long-term archive and distribution facility for various ground-based, aerial and model data products in support of atmospheric and climate research. The ARM facility currently operates over 400 instruments at various observatories (https://www.arm.gov/capabilities/observatories/). The ARM Data Center (ADC) archive currently holds over 11,000 data products, with a total holding of over 3 petabytes of data dating back to 1993; these include data from instruments, value-added products, model outputs, field campaigns and PI-contributed data. The archive also includes data collected by ARM from related programs (e.g., external data such as NASA satellite data).
GitHub is the best place to share code with friends, co-workers, classmates, and complete strangers. Over three million people use GitHub to build amazing things together. With the collaborative features of GitHub.com, our desktop and mobile apps, and GitHub Enterprise, it has never been easier for individuals and teams to write better code, faster. Originally founded by Tom Preston-Werner, Chris Wanstrath, and PJ Hyett to simplify sharing code, GitHub has grown into the largest code host in the world.
OpenML is an open ecosystem for machine learning. By organizing all resources and results online, research becomes more efficient, useful and fun. OpenML is a platform to share detailed experimental results with the community at large and organize them for future reuse. Moreover, it will be directly integrated into today's most popular data mining tools (for now: R, KNIME, RapidMiner and WEKA). Such an easy and free exchange of experiments has tremendous potential to speed up machine learning research, to engender larger, more detailed studies and to offer accurate advice to practitioners. Finally, it will also be a valuable resource for education in machine learning and data mining.
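Beyond the tools named above, OpenML also exposes a Python client; a minimal sketch of fetching a dataset with it might look like the following (the dataset ID is an arbitrary illustrative choice):

```python
# Sketch: fetch a dataset from OpenML with its Python client
# (pip install openml). Dataset ID 61 ("iris") is an illustrative choice.
import openml

dataset = openml.datasets.get_dataset(61)
X, y, categorical, names = dataset.get_data(
    target=dataset.default_target_attribute
)
print(dataset.name, X.shape)
```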
When published in 2005, the Millennium Run was the largest ever simulation of the formation of structure within the ΛCDM cosmology. It uses 10^10 particles to follow the dark matter distribution in a cubic region 500 h^-1 Mpc on a side, and has a spatial resolution of 5 h^-1 kpc. Application of simplified modelling techniques to the stored output of this calculation allows the formation and evolution of the ~10^7 galaxies more luminous than the Small Magellanic Cloud to be simulated for a variety of assumptions about the detailed physics involved. As part of the activities of the German Astrophysical Virtual Observatory we have created relational databases to store the detailed assembly histories both of all the haloes and subhaloes resolved by the simulation, and of all the galaxies that form within these structures for two independent models of the galaxy formation physics. We have implemented a Structured Query Language (SQL) server on these databases. This allows easy access to many properties of the galaxies and haloes, as well as to the spatial and temporal relations between them. Information is output in table format compatible with standard Virtual Observatory tools. With this announcement (from 1/8/2006) we are making these structures fully accessible to all users. Interested scientists can learn SQL and test queries on a small, openly accessible version of the Millennium Run (with volume 1/512 that of the full simulation). They can then request accounts to run similar queries on the databases for the full simulations. In 2008 and 2012 the simulations were repeated.
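To give a flavor of the SQL access described above, the sketch below posts a query to the openly accessible milli-Millennium service. The endpoint URL, table name and column names follow the public GAVO documentation but should be treated as assumptions here.

```python
# Sketch: query the (assumed) public milli-Millennium SQL endpoint.
# Table DeLucia2006a and its columns are assumptions from GAVO docs.
import requests

SQL = (
    "SELECT TOP 10 galaxyId, snapnum, stellarMass "
    "FROM millimil..DeLucia2006a "
    "ORDER BY stellarMass DESC"
)

resp = requests.get(
    "http://gavo.mpa-garching.mpg.de/Millennium",  # assumed open endpoint
    params={"action": "doQuery", "SQL": SQL},
)
print(resp.text[:500])  # results are returned as CSV-like text
```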