From Denver, Colorado, it's theCUBE. Covering Supercomputing 17, brought to you by Intel.

Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Supercomputing 2017 in Denver, Colorado. 12,000 people talking about big iron, big questions, big challenges. It's really an interesting take on computing, really out on the edge. The keynote was literally light years out in space, talking about predicting the future with quarks and all kinds of things. A little over my head, for sure. But we're excited to get back down to the ground. And we have Ravi Pendekanti. He's the Senior Vice President of Product Management and Marketing, Server Platforms, Dell EMC. It's a mouthful. Ravi, great to see you.

Great to see you too, Jeff. And thanks for having me here.

Absolutely. So we were talking before we turned the cameras on, and one of your big themes, which I love, is democratizing this whole concept of high-performance computing. So it's not just the academics answering their really, really, really big questions.

You're absolutely right. I mean, think about it, Jeff. 20 years ago, even 10 years ago, when people talked about high-performance computing, it was what I call being in the back alleys of research and development. There were a few research scientists working on it. But we're at a point in our journey toward helping humanity in a bigger way where HPC has found its way into almost every single mainstream industry you can think of. Whether it is fraud detection, where Mastercard is using it to make sure they can see and detect any fraud being committed before, you know, the perpetrators come in and actually hack the system. Or if you go into life sciences and talk about genomics, which might be good for our next generations, where they can probably go ahead and tweak some things in a genome sequence so that we don't have the same issues we've had in the past, right?
So likewise, you can pick any favorite industry. I mean, we're coming up on the holiday season soon. I know a lot of our customers are looking at how they come up with the right scheme to ensure they can stock the right product and have it available for everyone at the right time, because timing is important. I don't think any kid wants to go with no toy and have the product ship later. So bottom line is, yes, we are looking at ensuring that HPC reaches every single industry you can think of.

So how do you guys parse HPC versus a really big virtualized cluster? I mean, there are so many ways that compute and storage have evolved, right? So now with cloud and virtual cloud and private cloud and virtualization, I can pull quite a bit of horsepower together to attack a problem. So how do you cut the line between big compute and true HPC?

It's interesting you ask. I'm actually glad you asked, because people think that just feeding CPU, or additional CPU, will do the trick. It doesn't. The simple fact is, look at the amount of data that is being created. I'll give you a simple example. We are talking to one of the airlines right now, and they're interested in capturing all the data that comes through their flights. One of the things they're doing is capturing all the data from their engines, because at the end of the day, you want to make sure that your engines are pristine as they fly. And every hour an engine is in flight, it creates about 20 terabytes of data. So if you have a dual engine, which is what most flights are, in one hour they create about 40 terabytes of data. And there are supposedly about 38,000 flights in the air at any given time around the world. It's one huge data collection problem, right? I'm told it's like a real Godzilla number. So I'll let you do the computation. My point is, if you really look at it, data by itself has no value, right?
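Since Ravi invites us to do the computation, here is a quick back-of-the-envelope sketch using only the figures he quotes (20 TB per engine per flight hour, two engines, roughly 38,000 airborne flights); the decimal terabyte-to-exabyte conversion is our own assumption for illustration:

```python
# Back-of-the-envelope check of the in-flight data volumes quoted above.
# All inputs are the figures from the conversation, not measured values.
TB_PER_ENGINE_HOUR = 20      # ~20 TB of data per engine per flight hour
ENGINES_PER_PLANE = 2        # typical twin-engine airliner
FLIGHTS_AIRBORNE = 38_000    # flights in the air at any given time (quoted)

per_plane_hour = TB_PER_ENGINE_HOUR * ENGINES_PER_PLANE   # TB per aircraft-hour
fleet_hour_tb = per_plane_hour * FLIGHTS_AIRBORNE         # TB across the fleet, per hour
fleet_hour_eb = fleet_hour_tb / 1_000_000                 # decimal exabytes per hour

print(f"Per aircraft: {per_plane_hour} TB/hour")
print(f"Fleet-wide:   {fleet_hour_tb:,} TB/hour (~{fleet_hour_eb:.2f} EB/hour)")
```

That works out to roughly 1.5 exabytes per hour, which is indeed "a real Godzilla number."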
What really is important is getting information out of it. The CPU, on the other side, has gotten to a point where it is hitting what I call the threshold of Moore's law. Moore's law was all about performance doubling every two years. But today, that performance is not sufficient, which is where accelerator technologies need to be brought in. This is where the GPUs and the FPGAs come in, right? So when you think about it, that's where the HPC world takes off: you're augmenting your CPUs, your processors, with additional accelerator technologies such as GPUs and FPGAs, to ensure that you have more juice to go do this kind of analytics on the massive amounts of data that you and I and the rest of humanity are creating.

It's funny that you talk about that. We were just at a Western Digital event a little while ago, talking about the next generation of drives. And it was the same thing, where now it's this energy-assist method to change, really, the molecular way that it saves information, to get more out of it. So that's how you kind of parse it. If you've got to juice the CPU, to juice the traditional standard architecture, then you're moving into the realm of HPC.

Absolutely. I mean, this is why, Jeff, yesterday we launched the new PowerEdge C4140, right? The first of its kind in that it's got two Intel Xeon processors, but beyond that, it can also support four NVIDIA GPUs. So now you're looking at a server that's got both the CPUs, to your earlier comment on processors, augmented by four GPUs, which gives you immense capacity to do this kind of high-performance computing.

But as you said, it's not just compute.

Yes.

It's storage.

Yes.

It's networking.

It is.

It's services. And then hopefully you package something together in a solution, so I don't have to build the whole thing from scratch.

Oh, you guys are making videos, right? Perfect lead-in. Perfect lead-in.
I know my colleague Armughan will be talking to you guys shortly. What his team does is take all the building blocks we provide, such as the servers, and obviously the networking and storage elements, and put them together to create what we call solutions. So we've got solutions which enable our customers to go in and easily deploy a machine learning or deep learning solution, where now our customers don't have to do what I call the heavy lift of trying to make sure they understand how the different pieces integrate together. So the goal behind what we are doing at Dell EMC is to take the guesswork out, so that our customers and partners can spend their time deploying the solution. Whether it is for machine learning, deep learning, or pick your favorite industry, we can also verticalize it. So that's the beauty of what we are doing at Dell EMC.

So the other thing we were talking about before we turned the cameras on is what I called the "ilities" from my old Intel days. Reliability, sustainability, serviceability. And you had a different phrase for it.

Oh yes, I know, you're talking about RAS, which is reliability, availability, serviceability.

But you've got a new twist on it.

Oh, we do. We're adding something very important.

We were just at a security show this week, CyberConnect. And security now cuts through everything, because it's no longer a walled garden. There are no walls.

There are no walls. It's got to be baked into every layer of the solution.

Absolutely right. The reason is, if you really look at security, until a few years ago people used to think it was all about protecting yourself from external forces. But today, we know that 40% of the hacks happen because of internal system processes we don't have in place, or because there's a person with intent to break in, for whatever reason. So integrated security becomes part and parcel of what we do.
This is where, as part of the 14G family, one of the things we said is, we need to have integrated security built in. And along with that, we want to have scalability, because no two workloads are the same. And we all know that the amount of data being created today is twice what it was last year, for each of us, forget about everything else we're collecting. So when you think about it, we need integrated security, we need the scalability feature set, and we also want to make sure there's automation built in. These three main tenets we talked about feed into what we call internally, the moniker we use, PARIS, right? And that's where, Jeff, to our earlier conversation, PARIS is all about: P is for best price/performance. Anybody can choose to get the right performance, or the best performance, but you don't want to shell out a ton of dollars. Likewise, you don't want to pay minimal dollars and try to get the best performance; that's not going to happen. I think it's a healthy balance between price and performance that's important. Availability is important. Interoperability: as much as everybody thinks they can act on their own, it's nearly impossible. Or it is impossible to do it on your own. You need an ecosystem of partners and technologies that come together. And at the end of the day, you have to have reliability and serviceability, and security. To your point, security is important. So PARIS is about price/performance, availability, interoperability, reliability, and security. That's the way we designed it.

It's much sexier than that. We can drop in an Eiffel Tower picture right now.

There you go. You should.

So Ravi, hard to believe we're at the end of 2017. If we get together a year from now, it's Supercomputing 2018. What are some of your goals? What are some of your objectives for 2018? What are we going to be talking about a year from today?
Oh, well, looking into a crystal ball, as much as I can look into that, I think...

As much as you can disclose.

And as much as we can disclose, a few things that I think are going to happen.

Okay.

Number one, I think you will see people talk about where we started this conversation: HPC has become mainstream. We've talked about it. But the adoption of high-performance computing, in my personal belief, is still not at the level it needs to be. So if we go out the next 12 to 18 months, let's say, I do think adoption rates will be much higher than where we are. And we talk about security now because it's a very topical subject. But as much as we are trying to emphasize to our partners and customers that you've got to think about security from ground zero, we still see a number of customers who are not nearly ready. Some of the analyses show that nearly 40% of CIOs are not ready, or don't truly understand, I should say, what it takes to have a secure system and a secure infrastructure. It's my humble belief that people will pay attention to it and move the needle on it. And we talked about four GPUs in our C4140; I do anticipate there will be a lot more accelerator technology packed in. So that's essentially what I could say without spilling the beans too much.

All right, super. All right, Ravi, thanks for taking a couple of minutes out of your day. Appreciate it.

All right, he's Ravi, I'm Jeff Frick. You're watching theCUBE from Supercomputing 2017 in Denver, Colorado. Thanks for watching.