Live from San Francisco, it's theCUBE. Covering Micron Insight 2019, brought to you by Micron.

Welcome back to Pier 27 in San Francisco. Beautiful day here. You're watching theCUBE, the leader in live tech coverage. We're covering Micron Insight 2019, hashtag MicronInsight. My co-host, David Floyer, and I are pleased to welcome Michael Woodacre, a CUBE alum and a fellow at Hewlett Packard Enterprise. Michael, good to see you again. Thanks for coming on.

Thanks for having me.

You're welcome. So you're talking about HPC on a panel today, but your role inside of HPE has a wider scope. Talk about that a little bit.

Sure, I'm the lead technologist in our compute solutions business unit at Hewlett Packard Enterprise. I came from the group that worked on in-memory computing and the Superdome Flex platform, around traditional enterprise computing and SAP HANA, but I'm now responsible not only for that mission-critical solutions platform but also for our blades and Edgeline business, so a broader set of technology.

Okay. And of course today we're talking a lot about data and the growth of data, and as I said, you're sitting on a panel about high-performance computing and its impact on science. What are you seeing? What are the big trends at the intersection of data, HPC, and science?

What we're seeing is this explosion of data, and a real shift in how we do science. Traditionally, science was based around putting equations into supercomputers, running simulations, testing your theories, and looking at the results.

Come back in a couple of weeks.

Exactly, or months, or potentially years. Now we're seeing a lot of work around collecting data from instruments, whether it's genomic analysis or satellite observations of the planet or of the universe. These are all generating data in vast quantities, at very high rates.
And so we need to rethink how we do our science to gain insights from this massive increase in data.

You know, when we first started covering this, and this is the 10th year of theCUBE, so in 2010, you could look at the high-performance computing market as an indicator of some of the things that were going to happen in so-called big data. Some of those things have played out, and I think it probably still is a harbinger. I wonder, how are you seeing machine intelligence applied to all this data, and what can we learn from that, in your opinion, in terms of its commercial applications?

As we all know, with this massive data explosion the question is: how do we gain insights from all of it? As I mentioned, we used to have equations for things like computational fluid dynamics, but as things progress we need other techniques to gain understanding. So we're using artificial intelligence, and particularly today deep learning techniques, to gain insights from data where we don't have equations we can use to mine the information. We're using these AI techniques to effectively generate the algorithms that bring patterns of interest into focus, so we can then understand the scientific phenomenon driving a particular pattern we're seeing in the data. It's beyond the capacity of the HPC programmers we have; the traditional equation-based methodologies and algorithms have simply been outstripped, so we're moving into a world where we need these new capabilities to gain insight.

So how is that being made possible? What are the differences in the architecture that you've had to put in place, for example, to make this sort of thing possible?

Yeah, it's a really interesting time.
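The shift Woodacre describes, fitting a model to observations when no governing equation is known, can be sketched in miniature. Everything below is invented for illustration, and a simple polynomial fit stands in for the deep networks he mentions; the point is only that the mapping is learned from data rather than written down as an equation.

```python
import numpy as np

# Simulated "instrument" readings: the true relationship (sin(3x)) is
# unknown to the analyst, who only sees noisy samples of it.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = np.sin(3 * x) + 0.05 * rng.normal(size=2000)

# Learn the input->output mapping directly from the data,
# then use the learned model to predict at new points.
coeffs = np.polyfit(x, y, deg=7)
predict = np.poly1d(coeffs)

error = abs(predict(0.5) - np.sin(3 * 0.5))  # compare to the hidden truth
```

The same workflow (observe, fit, predict, compare against domain knowledge) is what scales up, with deep networks in place of the polynomial, when the data volume and dimensionality outgrow hand-derived equations.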
Actually, a few years ago it seemed like computing was starting to get boring, whereas now we've got an explosion of new hardware devices being built, and we're moving into a more heterogeneous world. We have this exponential growth of data, but traditional computing techniques are slowing down, so people are looking at accelerators and all sorts of heterogeneous devices to close that gap.

And we've really been thinking about how to change the whole computing infrastructure, to move from a compute-centric world to a memory-centric world, and how we can use memory-driven computing techniques to close that gap and gain insight. So we're rethinking the whole architectural direction: collapsing the traditional hierarchy from storage to memory to the CPU, and getting rid of the legacy bottlenecks of converting protocols between processor, memory, and storage, down to a simple memory-driven architecture. You have access to the entire dataset you're looking at, which could be many terabytes, petabytes, even exabytes, but with simple programming you can directly load and store against that huge dataset to gain insights. That's really the change.

That's fascinating, isn't it? So Gen-Z, the hope of Gen-Z, is actually taking place now?

Yeah, Gen-Z is an industry-led consortium around a memory fabric. Hewlett Packard Enterprise and a whole host of industry partners are part of that ecosystem, looking at building a memory fabric where people can bring different innovations to operate, whether it's processing types or memory types, on a common infrastructure. There's other work in the industry too, the Compute Express Link (CXL) consortium as well. So there's a lot of interest now in getting memory semantics out of the processor and onto a common fabric for people to innovate on.
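The programming model Woodacre describes, plain loads and stores against an entire dataset with no storage protocol in between, can be approximated on commodity hardware with memory mapping. This is an illustrative sketch, not HPE's implementation: the file name is made up, and a local file stands in for what would be fabric-attached memory on a Gen-Z or CXL system.

```python
import mmap

import numpy as np

# Hypothetical dataset file; on a memory fabric this could be
# fabric-attached memory holding terabytes rather than a local file.
path = "dataset.bin"
np.arange(1_000_000, dtype=np.float64).tofile(path)

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 0)               # map the whole dataset
    data = np.frombuffer(buf, dtype=np.float64)  # zero-copy view of it
    total = float(data.sum())                    # plain loads: no read()
                                                 # or deserialization step
```

The application never issues explicit I/O; it computes directly over the mapped bytes, which is the "collapse the hierarchy" idea in one line.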
Do you have some examples of where this is making a difference now, from your work at HPE and your commercial work?

Certainly, yeah. We're working with customers in areas like precision medicine and genomics, accelerating the ability to gain insight into which medical pathway to follow for a particular disease. We're working in cybersecurity: we're all worried about the security of our data and things like network intrusion, so we're looking at how you can gain insight not only into known attack patterns on a network, but into the unknown patterns that are just appearing. We're actually applying machine learning techniques to graph data to understand those things. So there's a very broad spectrum where you can apply these techniques to data analytics.

Are all scientists now data scientists? And what's the relationship between a classic data scientist, someone with stats and math and maybe a little coding expertise, and a scientist who has much more domain expertise? You see data scientists sort of traverse domains. How are those two worlds coming together?

It's funny you mention that; I had that exact conversation with one of the members of the COSMOS group in Cambridge, Stephen Hawking's cosmology team. He said he realized a couple of years ago that maybe he should call himself a data scientist, not a cosmologist, because it seemed like what he was doing was exactly what you said. In their case, they're taking their theoretical ideas about the early universe and the data measurements from surveys of the sky, the cosmic background radiation, and trying to pair the two together. So I think data science is tremendously important right now to accelerate insights into data.
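The intrusion-detection idea mentioned above, spotting unknown attack patterns rather than matching known signatures, can be illustrated with a toy graph statistic. The hosts and flows below are entirely made up, and real systems use far richer graph features; this only shows the flavor: model traffic as a graph and flag structure that deviates from the norm, with no signature required.

```python
from collections import Counter
from statistics import mean, stdev

# Invented network flows as (source, destination) edges of a traffic graph.
flows = [("10.0.0.1", "10.0.0.2"), ("10.0.0.1", "10.0.0.3"),
         ("10.0.0.4", "10.0.0.2"),
         # One host fanning out to many destinations, e.g. a scan:
         *[("10.0.0.9", f"10.0.0.{i}") for i in range(10, 40)]]

# Out-degree per source host: how many connections each host opens.
fanout = Counter(src for src, _dst in flows)

# Flag hosts whose fan-out is anomalously high for this traffic sample
# (a loose one-sigma threshold, purely for this toy example).
mu, sigma = mean(fanout.values()), stdev(fanout.values())
suspects = [host for host, deg in fanout.items() if deg > mu + sigma]
```

Nothing here knows what a "scan" looks like in advance; the anomaly falls out of the graph structure itself, which is the appeal for catching patterns that have never been seen before.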
But you can't really do it in isolation, because a data scientist in isolation is just pointing out peaks or troughs or trends. How do you relate those to the underlying scientific phenomenon? You need experts in whatever area you're looking at to work with the data scientists and really bridge that gap.

Well, with all this data, all this computing capacity, and all this memory, it's going to be fascinating to see what kinds of insights come out over the next 10 years. Michael, thanks so much for coming on theCUBE. It was great to have you.

Thank you very much.

All right, you're welcome, and thank you for watching, everybody. We'll be right back at Micron Insight 2019 from San Francisco. You're watching theCUBE.