Published on Oct 6, 2017
Earlier this year, the U.S. Department of Energy tasked elite teams at six major computing companies with researching and developing an exascale supercomputer. Hewlett Packard Enterprise was one of those six.
An exascale supercomputer would represent a massive leap forward in high-performance computing and usher in a new era of possibilities for computer modeling and simulation. With the ability to run a quintillion calculations per second—that’s a one with eighteen zeros after it—the implications of an exascale computer would touch nearly every facet of our lives.
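To put the quintillion figure in perspective, here is a minimal back-of-the-envelope sketch. The petascale comparison and the example runtime are assumptions chosen to illustrate scale, not figures from the video:

```python
# Assumed figures for illustration only (not sourced from the video).

EXA = 10**18   # exascale: a quintillion (a one with eighteen zeros) operations/s
PETA = 10**15  # petascale, the prevailing class of top systems circa 2017

# An exascale machine is a thousand times faster than a petascale one.
speedup = EXA // PETA
print(speedup)  # 1000

# A hypothetical simulation needing ~10^8 seconds (about 3 years) of
# petascale compute would finish in roughly a day at exascale.
petascale_seconds = 10**8
exascale_hours = petascale_seconds / speedup / 3600
print(round(exascale_hours, 1))  # 27.8
```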
We could build a supercomputer capable of an exascale level of computation today, but because of current hardware and software limitations, it would require a dedicated power plant and fourteen football fields of space to operate. So experts are completely rethinking the supercomputing architecture we’ve built on for decades.
Why is the U.S. government throwing down this gauntlet? Many countries are engaged in what has been referred to as a race to exascale. But getting there isn’t just for national bragging rights. Getting to exascale means reaching a new frontier for humanity, and the opportunity to potentially solve humanity’s most pressing problems.
In Order of Appearance:
- Nicolas Dube, Chief Strategist for HPC, Hewlett Packard Enterprise
- Paolo Faraboschi, Fellow, Hewlett Packard Enterprise
- Bill Mannel, Vice President & General Manager, Hewlett Packard Enterprise
- Tiffany Trader, Managing Editor, HPCWire
- Mike Vildibill, Vice President of Advanced Technology Group, Hewlett Packard Enterprise
- Dona Crawford, Associate Director Computation Emeritus, Lawrence Livermore National Laboratory