Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise. Welcome back to Madrid, Spain, everybody. This is theCUBE, the leader in live tech coverage. We're here covering HPE Discover 2017. I'm Dave Vellante with my co-host for the week, Peter Burris. Randy Meyer is here, the vice president and general manager of Synergy and Mission Critical Solutions at Hewlett Packard Enterprise, and Paul Shellard is here, the director of the Centre for Theoretical Cosmology at Cambridge University. Thank you very much for coming on theCUBE. It's a pleasure to see you again. Yeah, good to be back for the second time this week. I think that's a nice day out, let's play two, right? We're talking about computing meets the cosmos. Well, it's exciting. Yesterday we talked about the Superdome Flex that we announced. We talked about it in the commercial space, where it's taking HANA and Oracle databases to the next level. But there's a whole different side to what you can do with in-memory compute, and it's all in this high-performance computing space. You think about the problems people want to solve in fluid dynamics, in forecasting, in all sorts of analytics problems. High-performance compute, one of the things it does is generate massive amounts of data that people then want to do things with. They want to compare that data to what their model said: okay, can I run that against the model? They want to take that data and visualize it: okay, how do I go do that? And the more you can do that in memory, the faster it is to deal with, because you're not writing this stuff out to disk, and you're not moving it back and forth to another cluster. So we're seeing this burgeoning demand for what the HPC guys would call fat nodes, where you want to put in lots of memory and eliminate the I/O to make those jobs easier. And Professor Shellard will talk about a lot of that in terms of what they're doing at COSMOS. But this is a trend; you don't have to be a university. We're seeing it inside oil and gas companies, aerospace engineering companies, anybody that's solving these kinds of complex computational problems with an analytical element: compare to the model, visualize, do something with the data once you've computed it.
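To make that concrete, here is a minimal sketch of the fat-node idea, assuming nothing about the real COSMOS or HPE software; every name and size below is illustrative. The point is simply that the compare-to-model stage hands an array to the next stage instead of a file:

```python
import numpy as np
import os
import tempfile
import time

# A minimal sketch of the point above: compare simulation output to a
# model and hand the result to the next stage without round-tripping
# through disk. Sizes are illustrative; this is not COSMOS or HPE code.

rng = np.random.default_rng(0)
simulation = rng.normal(size=10_000_000)   # stand-in for HPC output
model = np.zeros_like(simulation)          # stand-in for the prediction

# Disk-based pipeline: write the intermediate product, then reread it.
t0 = time.perf_counter()
with tempfile.NamedTemporaryFile(delete=False) as f:
    np.save(f, simulation - model)
residual_from_disk = np.load(f.name)
os.unlink(f.name)
disk_s = time.perf_counter() - t0

# In-memory pipeline: the next stage consumes the array directly.
t0 = time.perf_counter()
residual = simulation - model
mem_s = time.perf_counter() - t0

print(f"disk round-trip: {disk_s:.3f}s  in-memory: {mem_s:.3f}s")
```

On real workloads the gap widens with data volume, since the in-memory path avoids both the write and the re-read.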
Okay, so Paul, explain more about what it is you do. Well, in the COSMOS group, of which I'm the head, we're interested in two things: cosmology, which is trying to understand where the universe comes from, the Hot Big Bang; and black holes, particularly their collisions, which produce gravitational waves. So those are the two main areas, relativity and cosmology. It's a big topic. Okay. I mean, I don't even know where to start. I just want to know, what have you learned, and can you summarize it for a lay person? Where are you today? What can you share with us that we can understand? Well, what we do is take our mathematical models and make predictions about the real universe, and then we try to compare those to the latest observational data. And we're in a particularly exciting period at the moment because of a flood of new data about the universe and about black holes. In the last two years, gravitational waves were discovered; there was a Nobel Prize this year, so lots of things are happening. So it's a very data-driven science, and we have to try to keep up with this flood of new data, which is getting larger and larger, and also with new types of data, because suddenly gravitational waves are the latest thing to look at. And what are the sources of data, and the new sources, that you're tapping? Well, in cosmology we're mainly interested in the cosmic microwave background. Yeah, the sources of data are the cosmos. Yeah, right. This is relic radiation left over from the Big Bang fireball, okay? So it's like a photograph of the universe, a blueprint; and then also the distribution of galaxies, 3D maps of the universe. We're in a new age of exploration: we've only got a tiny fraction of the universe mapped so far, and we're trying to extract new information about the origin of the universe from that data. In relativity, we've got these gravitational waves. These are ripples in space-time traversing the universe. They're essentially earthquakes in the universe, sound waves or seismic waves that propagate to us from these very violent events. So I want to take you to the gravitational waves, because in many respects it's an example of a lot of what's here in action. Here's what I mean. The experiment, and correct me if I'm wrong, is basically two lasers perpendicular to each other, shooting a signal about two or three miles in each direction, and it is the most precise experiment ever undertaken, because what you're doing is measuring the time it takes for one laser versus the other, and that time is a function of the slight stretching that comes from the gravitational waves. So that is an unbelievable example of edge computing, because at those tolerances, that's not something you can send back to the cloud. You've got to do a lot of the compute right there, right? That's right, that's right, yeah. So a gravitational wave comes by and you shrink one way and you stretch the other; it distorts the space-time. So yeah, you become thinner. As the wave passes, right. Yeah, and these tiny, tiny changes are what's measured, and nobody expected gravitational waves to be discovered in 2015. We all thought, ah, another five years, another five years; they'd always been saying we'll discover them, we'll discover them, but it happened. And since then, it's been used two or three times to discover new types of things. And there's now a whole, and I'm sure this is very central to what you're doing, there's now a whole concept that gravitational information, in fact, becomes an entirely new branch of cosmology. Have I got that right? Yeah, you have. It's called multi-messenger astronomy now, because you don't just see the universe in electromagnetic waves, in light; you hear the universe. This is qualitatively different; it's sound waves coming across the universe. And combining these two, the latest event was one where they heard the event first, then they turned their telescopes and saw it, right? And so much information came out of that, even information about cosmology. Because these signals are traveling hundreds of millions of light years across to us, we're getting a picture of the whole universe as they propagate all that way. So we were able to measure the expansion rate of the universe.
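A back-of-envelope for "the most precise experiment ever undertaken": the interferometer measures the strain, the fractional change in arm length. With round, illustrative numbers (the arms are about 4 km, and the detectable displacement is of order 10^-18 m):

```latex
h \;\approx\; \frac{\Delta L}{L}
  \;\approx\; \frac{4\times10^{-18}\ \text{m}}{4\times10^{3}\ \text{m}}
  \;=\; 10^{-21}
```

That displacement is hundreds of times smaller than a proton's diameter, which is why a passing truck matters and why so much of the processing has to happen right at the instrument.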
And the techniques for observation, the technology for observation, what is that? How does that evolve? Well, you've got the wrong guy here. I'm from the theory group, right? We're doing the predictions, and these guys, with their incredible technology, are seeing the data. And it's magic: the whole point is you've got to get the predictions, and then you've got to look in the data for a needle in a haystack, which is this signature of black holes colliding. Think about that: I have a model, and I'm looking for the needle in the haystack. That's a different way to describe an in-memory analytic search, a pattern recognition problem. That's really what it is. This is the world's largest pattern recognition problem, you know. And the most precise, and I mean literally. And that's an observation that confirms your theory, right? Well, it confirms the theory. Maybe it was your theory. Well, at least. I think it was Einstein's theory. I'm a cosmologist; in my group, we have relativists who are actively working on these black hole collisions and making predictions about this stuff. But they're damping vibration from passing trucks in these things and collecting it? Yeah, no, no, the technology's amazing. But coming back to the technology: the technology is one of the reasons why this becomes so exciting and so practical, because for the first time it has gotten to the point where you can focus on the problem you're trying to solve, and you don't have to translate it into technology terms. Right, yeah. So talk a little bit about that, because in many respects, that's where business is. Business wants to be able to focus on the problem, to think about the problem differently, and then have the technology just respond. They don't want to have to start with the technology and then imagine what they can do with it. So I think, from our point of view, it's a very fast-moving field. Things are changing, new data's coming in. The data's getting bigger and bigger because the instruments are getting packed tighter and tighter and there's more information. So we've got a computational problem as well; we've got to get more computational power. But there are new types of data, like suddenly there are gravitational waves, and new types of analysis that we want to do. So we want to be able to look at this data in a very flexible way, to ingest it and explore new ideas more quickly, because things are happening so fast. And so we've adopted this in-memory paradigm for a number of years now, and the latest incarnation of it is the HPE Superdome Flex. That's a shared-memory system, so you can just pull in all your data and explore it without carefully programming how the memory is distributed, okay? So we find it's very easy for our users to develop pipelines, data analytic pipelines, to develop their new theoretical models, and to compare the two on a single system. It's also very easy for new users; you don't have to be an advanced programmer to get going. You can just stay with the science, in a sense. You need a PhD in physics to do great physics; you don't have to have a PhD in physics and technology. That's right, yeah. So it's a very flexible architecture to program with. You can more or less develop your pipeline on a laptop, take it to the Superdome, and then scale it up to these huge memory problems. And get it done faster; you can iterate. Yeah, yeah, yeah. And I mean, these are the most brilliant scientists in the world, bar none, right?
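A sketch of what "develop on a laptop, scale on the Superdome" means for a shared-memory system, assuming nothing about the real COSMOS pipelines: ordinary array code with no MPI-style decomposition of memory. The loader, the statistic, and the sizes are all hypothetical:

```python
import numpy as np

# On a laptop you shrink n_samples; on a large-memory node you grow it.
# Because the whole dataset sits in one address space, the code itself
# does not change between the two.

def load_survey(n_samples, rng):
    # Stand-in for ingesting a sky map or galaxy catalogue into RAM.
    return rng.normal(size=n_samples)

def misfit(observed, predicted):
    # Stand-in for a model-versus-data statistic, computed in place.
    return float(((observed - predicted) ** 2).mean())

rng = np.random.default_rng(4)
observed = load_survey(n_samples=5_000_000, rng=rng)   # grow on a fat node
predicted = np.zeros_like(observed)                    # theoretical model
print("misfit:", misfit(observed, predicted))
```

The same script, pointed at the full dataset on a shared-memory node, is what "scale it up" means here: no rewrite, just more RAM.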
I made the analogy the other day: imagine if I.M. Pei or Frank Lloyd Wright had to be their own general contractor. No, they were brilliant at designing architectures and imagining things that no one else could imagine, and then they had people to go do that. This allows people to focus on the brilliance of the science without having to become expert programmers. We see that in business too, right? Parallel programming techniques are difficult, right? Spoken like an old Tandem guy; parallelism is hard. But to the extent that you can free yourself up to focus on the problem and not have to mess around with that, it makes life easier. Some problems parallelize well, but a lot of them don't need to be parallelized, and you can allow the data to shine. You can allow the science to shine. Is it correct that the barrier in your ability to reach a conclusion or make a discovery is the ability to find that needle in a haystack, or maybe there are many, but is... Well, if you're talking about obstacles to progress, yeah? I would say computational power isn't the obstacle. It's developing the software pipelines, and it's the human personnel: the smart people writing the codes that can look for the needle in the haystack, who have the efficient algorithms to do that. And they're hobbled if they have to think very hard about the hardware and the architecture they're working with and how to parallelize the problem. Our philosophy is much more that you solve the problem and you validate it. It can be quite inefficient if you like, but as long as it's a working program that gets you where you want, then at the second stage you worry about making it efficient: putting it on accelerators, putting it on GPUs, making it go really fast. And that's why, for many years now, we've bought these very flexible shared-memory, or in-memory is the new word for it, architectures, which allow new users, graduate students, to come straight in without a master's degree in high-performance computing. They can start to tackle problems straight away.
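What the "needle in the haystack" amounts to, at heart, is matched filtering: slide a predicted waveform (a template) along the noisy detector stream and look for a correlation peak. Here is a toy sketch in exactly that solve-first, optimize-second spirit; the chirp, the sizes, and the wildly exaggerated signal amplitude (real strains are near 10^-21) are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical chirp-like template (frequency rising in time, as in an
# inspiral), windowed so it starts and ends smoothly.
m = 500
t = np.linspace(0.0, 1.0, m)
template = np.sin(2 * np.pi * (20 + 80 * t) * t) * np.hanning(m)

# Synthetic detector stream: white noise with one template buried in it.
n, true_offset = 100_000, 42_000
data = rng.normal(size=n)
data[true_offset:true_offset + m] += template

# Stage 1, "solve the problem": a slow but obviously correct filter,
# checked on a small stretch of data.
def correlate_slow(x, h, start, stop):
    return [float(sum(x[k + i] * h[i] for i in range(len(h))))
            for k in range(start, stop)]

# Stage 2, "then make it efficient": the vectorized matched filter.
snr = np.correlate(data, template, mode="valid")

# Validate the fast version against the naive one before trusting it.
assert np.allclose(correlate_slow(data, template, 41_990, 42_010),
                   snr[41_990:42_010])
print("recovered offset:", int(np.argmax(np.abs(snr))), "true:", true_offset)
```

The real searches add noise whitening, banks of thousands of templates, and far subtler statistics, but the shape of the problem, a model-driven correlation scan over a huge data stream, is the same.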
What's interesting is we hear the same thing. You talk about it at the outer reaches of the universe; I hear it at the inner reaches of the universe, from the life sciences companies. We want to map the genome, and we want to understand the interaction of various drug combinations with that genetic structure, to say, can I tune a vaccine or a drug exactly to that patient's genetic makeup to improve medical outcomes? It's the same kind of problem: all this data that I have to run against a complex genome sequence to find the one that gets me to the answer. So from the macro to the micro, we hear this problem in all different sorts of languages. So one of the things we have our clients, mainly in business, asking us all the time is, well, let me step back. As analysts, we're not the smartest people in the world, as you'll attest, I'm sure. But as analysts we like to talk about change, and we always talked about the mainframe being replaced by the minicomputer being replaced by this or that. I like to talk in terms of the problems that computing's been able to take on. We've been able to take on increasingly complex, challenging, more difficult problems as a consequence of the advance of technology. Very much like you're saying, the advance of technology allows us to focus increasingly on the problem. What kinds of problems do you think physicists are going to be able to attack in the next five years or so, as we think about the combination of increasingly powerful computing and an increasingly simple approach to using it? Yeah, well, I think the simplification you're indicating here really comes down to more memory: holding your whole workflow in memory, because one of the biggest bottlenecks we find is ingesting the data and then writing it out. If you can do everything at once, that's the key element. So one of the things we've been working on a great deal is in situ visualization, for example, so that you see the black holes coming together and you see that you've set the right parameters, that they haven't missed each other or something's gone wrong with your simulation. In the same way, you do the post-processing at the same time, so you never need the intermediate data products. So, larger and larger memory, and the computational power that balances with that large memory. It's all very well to get a fat node, but not if you don't have the computational power to use all those terabytes. And with this sort of in-memory architecture, with the Superdome Flex, there's much more balance between the two.
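A minimal sketch of that in-situ idea, assuming nothing about the real simulation codes: reduce (or render) each step while the state is still in memory, so full snapshots never hit disk. The "simulation" here is a trivial random walk, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
state = rng.normal(size=1_000_000)   # stand-in for a simulation's field data

diagnostics = []   # lightweight in-situ products kept instead of snapshots
for step in range(10):
    state += 0.01 * rng.normal(size=state.size)    # toy "time step"
    diagnostics.append(float((state ** 2).mean()))  # in-situ reduction
    # A real code would also render a frame here (say, the black-hole
    # horizons) instead of dumping the full field for later processing.

print(diagnostics)
```

The trade Paul describes is visible even in the toy: ten floats of diagnostics survive, and the ten full-field intermediates never exist outside memory.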
So, what are the problems we're looking forward to in terms of physics? Well, in cosmology, we're looking for these hints about the origin of the universe, okay? We've made a lot of progress analyzing the Planck satellite data on the cosmic microwave background. We're homing in on theories of inflation, which is where all the structure in the universe comes from, from Heisenberg's uncertainty principle: a rapid period of expansion in the very early universe, just like inflation in the financial markets. And so we're trying to identify, can we distinguish between different types, and are they going to tell us whether the universe comes from a higher-dimensional theory, 10 dimensions reduced to three plus one? Lots of clues like that. We're looking for statistical fingerprints of these different models. In gravitational waves, of course, there's this whole new area. I mean, we think of the cosmic microwave background as a photograph of the early universe. Well, in fact, gravitational waves look right back to the earliest moments, fractions of a nanosecond after the Big Bang. And so it may be that the answers, the clues we're looking for, come from gravitational waves. And, of course, there's so much in astrophysics that we'll learn about compact objects, about neutron stars, about the most energetic events there are in the whole universe. I never thought about that idea, because the cosmic microwave background goes back, what, about 300,000 years after the Big Bang? Yeah, that's right. You're very well informed. 400,000 years. 400,000, 300,000. I was going to say 400, not that well informed. 370,000. But I never thought about the idea of gravitational waves effectively being noise from the Big Bang. Yeah, yeah. Well, with the cosmic microwave background, we're actually looking for a primordial signal from the Big Bang, from inflation. So, yeah. Well, anyway, what were you going to say, Randy? Well, no, I just, I mean, it's amazing, the frontiers we're heading down. It's kind of an honor to be able to enable some of these things, right? I've spent 30 years in the technology business and heard customers tell me, you transformed my business, or you helped me save costs, or you helped me enter a new market. Never before in 30-plus years in this business have I had somebody tell me that the things you're providing are helping me understand the origins of the universe. It's just an honor to be affiliated with you guys. Oh, no, no, the honor's mine, Randy; you're producing the hardware, the tools, that allow us to do this work. Well, now the honor's ours. How do we learn more about your work and your discoveries and conclusions? For the lay reader, are there popular authors we can read other than Stephen Hawking? Well, read Stephen's books, they're very good. Okay, but even so. He's got a new one called A Briefer History of Time. A Briefer History? Yeah, yeah. It's more accessible than A Brief History of Time. So your website is? Yeah, our website is ctc.cam.ac.uk, the Centre for Theoretical Cosmology, and we've got some popular pages there. We've got some news stories about the latest things that have happened, like the HPE partnership that we're developing, and some nice videos about the work that we're doing, actually. Very nice videos. Yeah, and certainly there were several videos running here this week; if people haven't seen them, they're out there on YouTube, and they're available at your website. They're on Stephen's Facebook page also, I think. Can you share that website again? Well, actually, you can get the beautiful videos of Stephen and the rest of his group on the Discover website, is that right? Yeah, yeah. Okay, so that's the HPE Discover website. But your website is? It's ctc.cam.ac.uk, and we're just about to upload those videos ourselves. Can I make a marketing suggestion? Yeah. Ctc.cam.ac.uk. Okay, well, yeah, right, thank you. Well, we've got to get theCUBE into one of these conferences, the physics conferences. That's great. That's great. Well, we'll bone up a little bit. Yeah, you're kind of embarrassing us here. No, no, no, you were only 100,000 years off. We need to be better informed than we are. Yeah, you'll need to remind me, sir; A Brief History of Time has nothing on this. Thanks very much for coming on theCUBE. Okay, it's been a pleasure. It's been a pleasure having you. Thank you. All right, keep it right there, buddy. Mr. Universe and I will be back after this short break.