Live from San Diego, California, it's theCUBE. Covering Cisco Live US 2019. Brought to you by Cisco and its ecosystem partners.

Welcome back, it's theCUBE here at Cisco Live, San Diego 2019, I'm Stu Miniman, my co-host is Dave Vellante. First I want to welcome back Kaustubh Das, KD, who is the Vice President of Product Management with Cisco Compute; we talked with him a lot about HyperFlex Anywhere in Barcelona. And I want to welcome to the program a first-time guest, Laura Crone, who's a Vice President in the Sales and Marketing Group, NSG Sales and Marketing, at Intel. Laura, thanks so much for joining us. All right, so since KD's been on our program before, let's start with you. We've watched Cisco UCS and its compute since it rolled out, gosh, about a decade ago now, and Intel has always been up on stage with Cisco talking about the latest enhancements. Everywhere I go this year, people are talking about Optane, how technologies like NVMe are baking into the environment, and how storage-class memory is coming. So let's start with Intel: what's happening in your world and in your activities here at Cisco Live?

Great, so I'm glad to hear you've heard a lot about Optane, because I have some marketing in my organization. So Optane is the first new memory architecture in over 25 years, and it is different than NAND, right? You can write data in place to the silicon, it programs faster, and it has greater endurance. So when you think of Optane, it's fast like DRAM but it's persistent like NAND, 3D NAND. And it has an industry-leading combination of capabilities: high throughput, high endurance, high quality of service, and low latency. And for a storage device, what could be better than having fast performance and high consistency?

Laura, as you said, it's been 25 years since a move like this. I remember when I started working with Dave, the question was how do we get out of the horrible SCSI stack we had lived on for decades.
And finally now it feels like we're coming through the clearing, and there's just going to be wave after wave of new technologies that are free to get us high performance, low latency and the like.

Yeah, and I think the other big part of that, which is part of Cisco's HyperFlex All NVMe, is the NVMe standard. We've lived in a world of legacy SATA controllers, which created a lot of bottlenecks in performance. Now that the industry is moving to NVMe, that opens it up even more. And as we were developing Optane, we knew we had to move the industry to a new protocol; otherwise that pairing was not going to be very successful.

All right, so KD, all NVMe, tell us more.

So we come here and we talk about all the cool innovations we do within the company, and then sometimes we come here and we talk about all the cool innovation we do with our partners, our technology partners. And Intel's been a fantastic technology partner; obviously, being in the server business, you've got to partner with Intel, and we've really co-innovated across the walls of the two organizations to bring this to life, right? So Cisco HCI, HyperFlex, is one of the products we've talked about in the past. HyperFlex All NVMe, which uses Intel's Optane technology as well as Intel's 3D NAND all-NVMe devices, powers really the fastest workloads that customers want to put on this device. So you talked about free. NVMe pricing is getting to a point where it's become that much more accessible to use these devices for powering databases, for powering those workloads that require those latency characteristics, that require those IOPS. And that's what we've enabled with Cisco HyperFlex, collaborating with Intel's NVMe portfolio.

I remember when I started in the business, somebody was educating me and shared a pyramid. And they said, think of the pyramid as a storage hierarchy. And at the top of it was actually an Intel solid state device, which back then was not persistent, it was volatile, right?
So you had to put backup power supplies on it. But at any rate, with all this new memory architecture coming in, flash storage, people have been saying, well, it's going to flatten that pyramid. But now with Optane, you're seeing the reemergence of that pyramid. So help us understand where it fits, from a supplier standpoint, an OEM, and ultimately the customer. Because if I understand it, Optane is faster than NAND but more expensive, and it's slower than DRAM but cheaper, right? So where does it fit? What are the use cases? Where does it fit in that hierarchy? Maybe you could help us understand.

So if you think about the hierarchy, at the very top is DRAM, which is going to be your fastest, lowest-latency product. But right below that is Optane persistent memory, the DIMMs, and you get greater density, because that's one of the challenges with DRAM: the DIMMs are not dense enough, nor are they affordable enough, right? And so that creates a new tier in the storage hierarchy. Go below that and you have Optane SSDs, which bring even more density; we go up to 1.5 terabytes in an Optane SSD, and you now get performance for your storage and memory expansion. Then you have 3D NAND, and even below that you have 3D NAND QLC, which gives you cost-effective, high-density capacity. And then below that is the old-fashioned hard disk drive, and then magnetic tape, yeah. So you start inserting all these tiers that give architects, both hardware and software, an opportunity to rethink how they want to do storage.

So the demand for this granularity is obviously coming from your buyers, your direct buyers and your customers. So what does it do for you and, specifically, your customers?

Yeah. So the name of the game is performance and the ability, in a land where things are not very predictable, to support anything that your end customers may throw at you.
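The tiering Laura walks through can be sketched as a simple data structure: each tier trades latency against cost per gigabyte, and an architect picks the cheapest tier that still meets a latency budget. This is a minimal illustration; the latency and relative-cost figures below are rough, assumed orders of magnitude, not Intel specifications.

```python
# Illustrative sketch of the memory/storage hierarchy described above.
# Latency and cost numbers are assumed round figures for illustration
# only -- not vendor data.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ns: float    # approximate access latency
    cost_per_gb: float   # relative cost (DRAM = 1.0)
    persistent: bool

HIERARCHY = [
    Tier("DRAM",                     100,       1.00, False),
    Tier("Optane persistent memory", 350,       0.50, True),
    Tier("Optane SSD",               10_000,    0.25, True),
    Tier("3D NAND SSD",              100_000,   0.10, True),
    Tier("3D NAND QLC SSD",          150_000,   0.06, True),
    Tier("Hard disk drive",          5_000_000, 0.01, True),
]

def pick_tier(max_latency_ns: float) -> Tier:
    """Return the cheapest tier that still meets a latency budget."""
    candidates = [t for t in HIERARCHY if t.latency_ns <= max_latency_ns]
    return min(candidates, key=lambda t: t.cost_per_gb)

print(pick_tier(50_000).name)  # cheapest tier within a 50-microsecond budget
```

The point of the extra tiers is exactly this kind of choice: with only DRAM and hard disk, the budget forces everything hot into DRAM; with Optane in between, a cheaper persistent tier can absorb much of it.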
If you're an IT department, that may mean a BU, an internal data-science team, or an architect of a traditional application. Now, what Intel and Cisco can do together is truly unique, because we control all parts of the stack, everything from the server itself, to the storage devices, to the distributed file system that sits on top of it. So for example, in HCI, in HyperFlex, we're using Optane as the caching tier, and because we write the distributed file system, we can balance between what we put in the caching tier and how we move that data to the non-caching 3D NAND tier. As Intel came out with their latest processors that support storage-class memory, we supported that. We can engineer this whole system end to end, so that we can deliver to customers the innovation Intel's bringing to the table in a way that's consumable by them. One more thing I'll throw out there: technology is great, but it needs to be resilient, because IT departments will occasionally yank out the wrong wire, they'll occasionally yank out the wrong drive. One of the things we worked on together with Intel was how do we put RAS into this: reliability, availability, serviceability. How do we prevent accidental removal or accidental insertion? And some of those co-innovations have led to us getting out in the market a HyperFlex system that uses these technologies in a way that's really usable by IT teams in our customers' houses.

I'd love to double-click on that in the context of NVMe, what you guys were talking about. Stu mentioned the horrible storage stack, or I think you called it the horrible SCSI stack. And Laura, you were talking about the cheap-and-deep now as spinning disk. So my understanding is that you've got a lot of overhead in the traditional SCSI protocol, but nobody ever noticed because you had this mechanical device. Now, with flash storage, it all becomes exposed. NVMe is just like a bat phone, a direct line right to the media.
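KD describes writes landing in a fast Optane caching tier and later moving to the larger 3D NAND capacity tier. A minimal sketch of that two-tier pattern is below; the class name, sizes, and LRU destaging policy are illustrative assumptions, not the actual HyperFlex file-system design.

```python
# Minimal two-tier write-cache sketch, assuming an LRU destage policy:
# hot blocks live in a small fast (Optane-like) cache tier; the coldest
# blocks are destaged to a larger (3D-NAND-like) capacity tier.
from collections import OrderedDict

class TieredStore:
    def __init__(self, cache_capacity: int):
        self.cache = OrderedDict()   # fast caching tier, kept in LRU order
        self.capacity_tier = {}      # slower, larger capacity tier
        self.cache_capacity = cache_capacity

    def write(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)          # mark as most recently used
        while len(self.cache) > self.cache_capacity:
            old_key, old_val = self.cache.popitem(last=False)
            self.capacity_tier[old_key] = old_val   # destage coldest block

    def read(self, key):
        if key in self.cache:                # cache hit: fast path
            self.cache.move_to_end(key)
            return self.cache[key]
        return self.capacity_tier[key]       # cache miss: capacity tier

store = TieredStore(cache_capacity=2)
for i in range(4):
    store.write(f"block{i}", i)
print(sorted(store.capacity_tier))  # ['block0', 'block1'] were destaged
```

Because the vendor controls both tiers and the file system, as KD says, the destaging policy can be tuned to the actual endurance and latency characteristics of each device rather than treating them as opaque disks.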
Okay, so correct me where I got that wrong, but maybe you could give us the perspective of why NVMe is important from your standpoint, and how are you guys using it?

Yeah, well, I think NVMe is just a much faster protocol, and you're absolutely right. We have a graph that we show of the old world and how much overhead there is, all the way down to when you have Optane in a DIMM solution with no overhead. An Optane SSD still has a tiny bit, but there's a graph that shows all of that latency is removed when you deploy Optane. So NVMe gives you much greater bandwidth, right? The CPU is not bottlenecked, and you get greater CPU efficiency when you have a faster interface like NVMe.

And HyperFlex is taking advantage of this how?

Yeah, let me give you a couple of examples. So when you think performance, the first thing that comes to mind is databases. For those kinds of workloads, this system gets you about 35% better performance. The next thing that comes to mind is that people really don't know what they're going to put on the system. Sometimes they put databases on it, sometimes they put mixed workloads. So when we look at mixed workloads, we get about 65% or so better IOPS, and we get 37% better latencies. So even in a mixed I/O environment, where you have databases, you may have a web tier, you may have other things, this thing is definitely resilient enough to handle those workloads. So it just opens up the spectrum of use cases.

So one of the other questions I had was specific to Optane. DRAM has consumer applications, as does flash NAND. Does Optane have similar consumer applications? Can it achieve that volume so that the prices can come down? Not free, but continue to sort of drive the curves?

So when we look at the overall TAM, we see the TAM growing out over time. I don't know exactly when it crosses over the volume or the bits of DRAM, but we absolutely see it growing over time. And as the technology ramps, it'll have a cost curve as well.
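Dave's point about overhead becoming exposed is easy to see with back-of-the-envelope arithmetic: software-stack latency that was a rounding error against a mechanical seek becomes a large fraction of each access once the media itself is fast. All the numbers below are assumed round figures for illustration, not measured Intel or Cisco data.

```python
# Illustrative arithmetic: share of total access latency spent in the
# software/protocol stack, for assumed round-number latencies (in
# microseconds). Not measured data.

def overhead_share(media_us: float, stack_us: float) -> float:
    """Fraction of total access latency spent in the software stack."""
    return stack_us / (media_us + stack_us)

cases = {
    "HDD + legacy SCSI/SATA stack": (5000.0, 25.0),
    "NAND SSD + legacy SATA stack": (90.0, 25.0),
    "NAND SSD + NVMe stack":        (90.0, 5.0),
    "Optane SSD + NVMe stack":      (10.0, 5.0),
}

for name, (media, stack) in cases.items():
    print(f"{name:30s} stack overhead = {overhead_share(media, stack):5.1%}")
```

With these assumed figures the stack is well under 1% of an HDD access but a double-digit share on fast media, which is why a leaner protocol like NVMe (and ultimately Optane on the memory bus) matters once the device stops being the bottleneck.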
It'll follow that curve, yeah, great. Okay, go ahead, Stu.

Yeah, just, KD, give us a little bit of a broader view of HyperFlex here at the show. Can people play in any labs with the brand-new Optane pieces, or what other highlights do you and the team have this week?

Yeah, absolutely. So in Barcelona, we talked about HyperFlex 4.0. That is live today, so on the show floor people can look at HyperFlex at the edge combined with SD-WAN: how do you control, how do you deploy thousands of edge locations from a centralized location through Intersight, which is our cloud-based management tool. So that whole experience is enabled now. At the other end of the spectrum is how do we drive even more performance. We were always a performance leader; now we are comparing ourselves to ourselves, to say, hey, we're 35% better than our previous All Flash, with the innovation Intel is bringing to the table. Some of the other pieces are actual use cases. So there's a big hospital chain where my kids go to get treated and, let's say, see the doctor. There are lots of medical use cases which require Epic, the medical software company, to power them, whether it's the end terminals or the backend database. So there's Epic Hyperspace and Epic Caché, and those have now been validated on HyperFlex using the technology we just talked about, Optane and All NVMe. That means there's that much more power. That means that when my doctor or the nurse pulls up the records, those show up fast, and all the medical records, all of those other high-performance-seeking applications, also run that much more streamlined. So I would encourage people to go out to the World of Solutions, where we've got a tremendous set of demos, and check us out.

And there's a great white paper out on this, right? From ESG.

ESG has been one of the companies that has been benchmarking HCI, HyperFlex.

So do they do a lab report, or...?
What they do is they benchmark different hyperconverged infrastructure vendors. So they did this the first time around, and they said, well, you can pack that many more VMs on a HyperFlex with rotating drives. And then they did it again, and they said, well, now that you've got All Flash, you've got the performance and the latency leadership. And then they did it again, and they said, well, hang on, you've kind of left the competition in the dust; that's not going to make a pretty chart, so we'll compare your All NVMe against your own HyperFlex. So when you get that good, you compare against yourselves. So we've been the performance leader, and ESG's been doing that. They've now taken the next generation and added Optane.

And this is what, a database workload?

A database workload, a mix of workloads. Now you're bringing Optane, so the latest report measures Optane. It measures Optane against our All Flash report, and then our All Flash report measures across vendors.

And where can I get this? Is it on some part of your website?

All of this is off of the Cisco HyperFlex website on Cisco.com, but ESG is the company, so you can go directly to their website and get it as well.

Okay, go ahead. Laura, I guess a final question for you. I think back to the early years of UCS: it was the memory enhancements they had that allowed the densest virtualization in the industry back when it started. It sounds like we're just taking that to the next level with this next generation of solutions. What else would you add about the relationship between Cisco and Intel?

Yeah, so Intel and Cisco have worked together for years, right? The innovation around the CPU and the platform. And it's super exciting to be expanding our relationship to storage. And I'm even more excited that the Cisco HyperFlex solution is endorsing Intel Optane and 3D NAND. And we're seeing great examples of real-world workloads where our end customers can benefit from this technology.
Well, KD and Laura, thanks so much for the update. Congratulations on the progress that you've made so far. For Dave Vellante, I'm Stu Miniman. We'll be back with more coverage here at Cisco Live 2019 in San Diego. Thanks for watching theCUBE.