From Denver, Colorado, it's theCUBE. Covering Supercomputing 17, brought to you by Intel.

Hey, welcome back everybody. Jeff Frick here with theCUBE. We're getting towards the end of the day here at Supercomputing 2017 in Denver, Colorado. 12,000 people talking really about the outer limits of what you can do with compute and compute power, looking out into the universe and black holes and all kinds of exciting stuff. But we're kind of bringing it back, right? We're all about the democratization of technology for people to solve real problems, and we're really excited to have our last guest of the day, bringing the energy: Armughan Ahmad. He's the SVP and GM of hybrid cloud and ready solutions for Dell EMC and a many-time CUBE alumni. Armughan, great to see you.

Yeah, good to see you, Jeff.

So first off, just impressions of the show. 12,000 people, we had no idea. We've never been to the show before.

Yeah, this is great. I mean, this is a show that has been around a while. If you know the history of the show, this was an IEEE engineering show that actually turned into high performance computing, around research-based analytics and other things that came out of it. But it's just grown, and yesterday the Top500 supercomputing list was released here. So it's fascinating. You have some of the brightest minds in the world that actually come to this event, and 12,000 of them.

Yeah, and Dell EMC is here in force. So a lot of announcements, a lot of excitement. What are you guys excited about, participating in this type of show?

Yeah, Jeff, so when we come to an event like this, right, we know that HPC has also evolved from your traditional HPC, which was around modeling and simulation, how it started from engineering and then moved to clusters. It's now evolving more towards machine learning, deep learning and artificial intelligence.
So what we announced here yesterday, our press release went out, was really related to our strategy of advancing HPC, but also democratizing HPC. On the advancing side, the Top500 supercomputing list came out and we're powering a number of those top 500 systems. One big one is TACC, the Texas Advanced Computing Center out of UT Austin. They now have, I believe, the number 12 spot on the top 500 supercomputers in the world, running at 8.2 petaflops of computing.

So a lot of zeros. I have no idea what a petaflop is.

It's very, very big. And it's available for machine learning, but also eventually going to be available for deep learning. But more importantly, we're also moving towards democratizing HPC, because we feel that democratizing is also very important. HPC should not only be for research and academia, it should also be focused towards manufacturing customers, financial customers, our commercial customers, so that they can actually take the complexity of HPC out. That's where we call it our HPC 2.0 strategy: learning from the advancements that we continue to drive, to then also democratize it for our customers.

It's interesting, I think back to the old days of Intel microprocessors getting better and better and better, and you had SPARC and you had Silicon Graphics and these things that were way better. This huge differentiation. But Intel x86 just kept plugging along, and it really begs the question, where's the distinction now? You have huge clusters of computers you can put together with virtualization. Where's the difference between just a really big cluster and HPC and supercomputing?

So I think if you look at HPC, HPC is also evolving. So let's look at the customer view, right? The other part of our announcement here was artificial intelligence, and what is artificial intelligence?
If you look at a retail customer, a retailer starts with data. For example, you buy beer and chips at Jay's retailer. You come in and do that, and they used to run a SQL database, an RDBMS, and that would basically tell you, okay, these are the people who purchase from me, their purchase history. Then you evolved into BI, and if that data got really very large, you then had an HPC cluster, which basically analyzed a lot of that data for you and showed you trends. And that would then tell you, okay, these are my customers, this is how frequently they come in. But now it's moving more towards machine learning and deep learning as well.

So as the data gets larger and larger, we're seeing data is becoming larger not just through social media, but through your traditional computational frameworks, your traditional applications and others. We're finding that data is also growing at the edge. By 2020, about 20 billion devices are going to wake up at the edge and start generating data. So internet data is going to look very small over the next three, four years as the edge data comes up. So you actually need to start thinking about machine learning and deep learning a lot more.

So you ask the question, how do you see that evolving? You see an RDBMS, traditional SQL, evolve into BI. BI then evolves into either HPC or Hadoop. And then from HPC and Hadoop, what do you do next? You start to feed predictive analytics into machine learning kinds of solutions. And once those predictive analytics are there, then you really truly start thinking about the full deep learning frameworks.

Right, well clearly, like the data in motion. I think it's funny, we used to make decisions on a sample of data in the past.
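To make the chips-and-beer progression concrete, here's a toy sketch of the simplest kind of predictive analytics on purchase history, a co-occurrence (association-rule) computation. All basket data and item names below are invented for illustration; this is not anything specific to Dell EMC's ready bundles.

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction log: each row is one customer's basket.
baskets = [
    {"chips", "beer", "tv"},
    {"chips", "beer"},
    {"chips", "beer", "tv"},
    {"beer"},
    {"chips", "tv"},
]

pair_counts = Counter()
item_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

def confidence(antecedent, consequent):
    """Estimate P(consequent in basket | antecedent in basket)."""
    pair = tuple(sorted((antecedent, consequent)))
    return pair_counts[pair] / item_counts[antecedent]

# How likely is a chips buyer to also buy the 4K TV?
print(round(confidence("chips", "tv"), 2))  # 0.75
```

A real retailer would of course run this kind of counting over millions of baskets on a cluster, and feed richer features (demographics, seasonality) into a proper model, but the shape of the question, "given what you bought, what will you buy next?", is the same.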
Now we have the opportunity to take all the data in real time and make those decisions with Kafka and Spark and Flink and all these crazy systems that are coming into play. It makes the old approach look ancient, tired, yesterday's news.

But it's still valid, right? A lot of customers are still using it, and that's where we feel we need to simplify the complex for our customers. That's why we announced our machine learning ready bundle and our deep learning ready bundle. We announced them with Intel and NVIDIA together, because our customers either go the GPU route, which is your accelerator route, and you were talking to Ravi from our server team earlier, where he talked about the C4140, which has quad GPU power and is perfect for deep learning. But with Intel, we've also worked on the same thing, where we worked on the AI software with Intel.

And why are we doing all of this? We're saying that if you thought an RDBMS was difficult, and if you thought that building a Hadoop cluster or HPC was a little challenging and time consuming, then as customers move to machine learning and deep learning, you now have to think about the whole stack. So let me explain the stack to you. You think of a compute, storage and network stack. Then you think of...

The whole entirety.

Yeah, that's right. The whole entirety of your data center. Then you talk about these frameworks like Theano, Caffe, TensorFlow, right? These are new frameworks. They're machine learning and deep learning frameworks, open source and others. Then you go to libraries. Then you go to accelerators, and which accelerator do you choose? Then you go to your operating systems. Now, you haven't even talked about your use case, a retail use case or a genomic sequencing use case. All you're trying to do is figure out, does TensorFlow work with this accelerator or not? Or do Caffe and Theano work with this operating system or not?
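For a feel of what frameworks like TensorFlow, Caffe, and Theano actually automate, and then hand off to whichever accelerator you chose, here is a minimal, dependency-free sketch of the core computation: a single logistic "neuron" trained by gradient descent. The data is made up (a simple AND-like rule); real frameworks do exactly this kind of loop, just over millions of parameters on GPUs.

```python
import math

# Made-up training data: learn an AND-like rule from two binary inputs.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [0, 0, 0, 1]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.5        # learning rate

for _ in range(2000):
    for (x1, x2), target in zip(X, y):
        z = w[0] * x1 + w[1] * x2 + b
        pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
        err = pred - target                # gradient of log-loss w.r.t. z
        w[0] -= lr * err * x1              # gradient-descent updates
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [
    round(1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b))))
    for x1, x2 in X
]
print(predictions)  # the learned AND gate: [0, 0, 0, 1]
```

The value of a framework, and of the compatibility matrix the stack discussion above describes, is that you never hand-write these gradients or the device-specific kernels that execute them.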
And that is the complexity, which is way more complex. So that's where we felt that we really needed to launch these new solutions, and we pre-launched them here at Supercomputing because we feel the evolution of HPC towards AI is happening. And we're going to start shipping these ready bundles for machine learning and deep learning in the first half of 2018.

Well, that's where the ready solutions come in. You're basically putting the solution together for the client, and then they can start to work on building the application to fix whatever it is they're trying to do.

That's exactly it. But not just fix it, it's an outcome. So I'm going to go back to the retailer. If you are the CEO of the biggest retailer and you were saying, hey, I don't just want to know who buys from me, I want to do predictive analytics. Not just who buys chips and beer, but who can I sell more things to? So you now start thinking about demographic data. You start thinking about payroll data and other data that's around, and you start feeding that data in. So your machine now starts to learn a lot more of those patterns and can actually give you predictive analytics. But imagine a day where the machine, the deep learning AI, actually tells you that it's not just who you want to sell chips and beer to, it's who's going to buy the 4K TV.

And the 4K TV, I'm glad you brought up the 4K TV.

So that's important, right? And that is where our customers need to understand how predictive analytics are going to move towards cognitive analytics. So this is complex, but we're trying to make that complex simple with these ready solutions for machine learning and deep learning.

So I want to just get your take on, you've talked about these three things a couple of times, how you delineate between AI, machine learning and deep learning.

Yeah. So as I said, right, there's an evolution.
I don't think a customer can achieve artificial intelligence unless they go through the whole crawl, walk, run progression. There's no shortcut there, right? So what do you do? Think about MasterCard, a great customer of ours. They do an incredible number of transactions per day, as you can imagine, in the millions. And they want to do facial recognition at kiosks, or they're looking at different policies based on your buying behavior. Hey, Jeff doesn't buy $20,000 Rolexes every week, maybe once a year, depending on how your mood is.

I was in the Emirates.

Yeah, exactly, you're in Dubai. And then you think about, well, where is his credit card being used? Based on your behaviors, that's important. Now think about it: even MasterCard had traditional RDBMS databases. They went to BI, they have high-performance computing clusters. Then they developed a Hadoop cluster. So what we did with them, we said, okay, all that is good. That data that has been generated for you through customers and through internal IT organizations, those things are all very important. But at the same time, now you need to start going through this data and analyzing it for predictive analytics. They had 1.2 million policies, for example, that they had to crunch. Now think about 1.2 million policies.

Which they had to take decisions on.

That they had to take decisions on. One of the policies could be, hey, does Jeff go to Dubai to buy a Rolex or not? Or does Jeff have these other patterns? Or is Armughan taking his card and having a field day with it? So those are policies that they feed into machine learning frameworks, and machine learning actually gives you patterns where they can now see what your behavior is. And then based on that, deep learning is eventually what they move to next.
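The policy idea, flagging a charge that breaks a card's learned spending pattern, can be sketched in its simplest statistical form. The transaction history below is invented, and a real fraud system would learn far richer behavior patterns than a z-score on amounts; this only illustrates the "does this fit Jeff's pattern?" question.

```python
import statistics

# Hypothetical per-card transaction history (dollar amounts).
history = [42, 18, 63, 25, 31, 54, 47, 29, 38, 50]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def looks_fraudulent(amount, threshold=3.0):
    """Flag a charge whose z-score deviates sharply from this card's pattern."""
    return abs(amount - mean) / stdev > threshold

print(looks_fraudulent(45))      # a typical grocery-sized charge: False
print(looks_fraudulent(20000))   # the $20,000 Rolex in Dubai: True
```

Scaling this up, replacing the single statistic with a model trained on millions of cardholders and many features per transaction, is exactly the RDBMS-to-BI-to-Hadoop-to-machine-learning progression described above.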
And with deep learning, you're not only talking about your behavior patterns on the credit card, but your entire other life data starts to come into it as well. And now, before a fraud even happens, you can actually be a lot more predictive about it, right? And cognitive about it. So that's what our ready solutions around machine learning and deep learning are really geared towards: taking HPC, democratizing and advancing it, and then helping our customers move towards machine learning and deep learning. Because these buzzwords of AI are out there. If you're a financial institution, you're trying to figure out who's the customer who's going to buy the next mortgage from you, or who you're going to lend to next. You want the machine and others to tell you this, not to take over your life, but to actually help you make these decisions so that your bottom line can go up along with your top line, right? Revenue and margins are important to every customer.

It's amazing, the credit card example, because people get so pissed if there's a false positive. The amount of effort they put in to protect you from fraudulent transactions, and yet if your credit card ever gets denied, people go bananas, right? The whole behavior thing is just amazing. But I want to ask you, we're coming to the end of 2017, which is hard to believe. Things are rolling at Dell EMC. Michael Dell, ever since he took that thing private, you could see the sparkle in his eye. We got it on a CUBE interview a few years back. A year from now, in 2018, what are we going to talk about? What are your top priorities for 2018?

So number one, Michael continues to talk about our vision of advancing human progress through technology, right? That's our vision. We want to get there. But at the same time, we know that we have to drive IT transformation. We have to drive workforce transformation. We have to drive digital transformation.
And we have to drive security transformation. All those things are important because, I mean, Jeff, do you know that 75% of the S&P 500 companies will not exist by 2027? Because they're either not going to be able to make that shift, you know, Blockbuster to Netflix, or Uber and the taxis.

Or any number of others over the last little while. You can think about any company.

That's what Michael did. Michael actually disrupted, right? Dell, with Dell Technologies and the acquisition of EMC and Pivotal and VMware. A year from now, our strategy is really about edge to core to cloud. We think the world is going to be all three, because the rise of 20 billion devices at the edge is going to require new computational frameworks. But at the same time, people are going to bring them into the core, and then cloud will still exist. But a lot of times, let me ask you, if you were driving an autonomous vehicle, do you want that data going back to the cloud?

I'm an edge guy. I know where you're going with it.

It's not going to go, right? You want it at the edge, because data gravity is important. And that's where we're going. So it's going to be huge. We feel data gravity is going to be big, we think core is going to be big, and we think cloud is going to be big. And we really want to play in all three of those areas.

And that's when the speed of light is just too damn slow. The car example. You don't want to send it to the data center. You want those decisions made at the edge.

You know, your manufacturing floor needs to make the decision at the edge as well. You don't want a lot of that data going back to the cloud.

All right, Armughan, thanks for bringing the energy to wrap up our day. It's great to see you, as always.

Always good to see you guys. Thank you.

I'm Jeff Frick. You're watching theCUBE from Supercomputing 2017. Thanks for watching. We'll see you next time.