From around the globe, it's theCUBE with coverage of the Global .NEXT Digital Experience, brought to you by Nutanix. Hi, I'm Stu Miniman, and this is theCUBE's coverage of the Nutanix .NEXT conference. This year it is the Global .NEXT Digital Experience, pulling together the events that they had had dispersed across the globe and bringing them to you online. Happy to welcome to the program a first-time guest but a longtime Nutanix engineering leader, Manosiz Bhattacharyya. He's the senior vice president of engineering at Nutanix, and Mano, as everyone calls him. Mano, thanks so much for joining us. Welcome to theCUBE. Thank you, Stu. Thank you, yeah. All right, so Mano, we've been doing theCUBE for over 10 years now. I remember the early days of talking to Dheeraj and the team when we first brought him on theCUBE. It was about taking some of the things that the hyperscalers did and bringing that to the enterprise. It was actually one of the interesting components there. Dial back a bunch of years, flash was new to the enterprise, and we looked at one of the suppliers that was supplying some of the very largest companies in the world and also bringing flash to the enterprise, Fusion-io. It was a new flash package, and that was something that, in the early days, Nutanix used before it kind of went to a more, I guess, commodity flash world. But, you know, many of the engineers that I talked to came from, you know, Facebook and Oracle and others, because understanding that database and that underlying substrate was what it took to create what people now know as hyperconverged infrastructure. So maybe if we could start, just give the audience a little bit, you know, you've been with Nutanix a long time, your background and what it is that you and your team work on inside the company. Yeah, thank you, Stu. So I think I've come from distributed systems for a long time.
I worked at Oracle for seven years building parts of the Exadata system, some of the convergence that, you know, databases have done with compute and storage. You could see the same hyperconvergence in other platforms like Hadoop, where compute and storage were brought together. I think the Nutanix story was all about, can we get this hyperconvergence to work for all types of applications? And that was the vision of the company: that whatever platform these hyperscalers have built, these big database companies have built, can this be provided for everybody, for all types of applications? I think that was the main goal. And I think we are inching our way there, slowly but surely. I think we will get to where pretty much every application will run on Nutanix HCI. All right, well, if you look at kind of the underlying code that enables your capability, one of the challenges always out there is, you know, I build a code base with the technology and the skill sets I have, but things change. I was talking about flash adoption before; a lot of changes have happened in the storage world. Compute has gone through a lot of architectural changes, and software has changed location with clouds and the like. So just talk about that code base, you talk about building distributed systems. How does Nutanix make sure that that underlying code doesn't, you know, that the window doesn't close on how long it's going to be able to take advantage of new features and functionality? Yeah, I think from the beginning, one thing that Nutanix has made sure of is that, you know, we could always deliver continuous innovation. The choices that we made reflect that. For example, we actually separated the, you know, the concerns between storage and compute. We always had a controller VM running the storage.
We actually made sure we could run all of the storage in user space, and over time what has happened is every time we upgraded our software, people got, you know, faster performance, they got more secure, they got more scalable. And that, I think, is the secret sauce. It's all software, it's all software-defined infrastructure on commodity hardware, and the commodity hardware can be anywhere. I mean, you could pretty much build it on-prem, and now that we see, you know, the hyperscalers coming on with bare metal as a service, we see hyperconvergence as the platform or the infrastructure on which enterprises are willing to run their applications in the public cloud. I mean, look at VMware with VMC. Nutanix Clusters is getting a lot of traction even before, I mean, we've just gone out, but there's a lot of customer excitement there. And that is what I think is the true nature of Nutanix: being a pure software player and treating all hardware, you know, uniformly. Whether this is available in the public cloud or it's available in your own data center, the storage or the hypervisor or the entire infrastructure software that we have, that doesn't change. So I think in some ways we're talking about this new HCI, the hybrid cloud infrastructure. So HCI, the hyperconverged infrastructure, becomes the substrate for the new hybrid cloud infrastructure. Yeah, definitely it was a misconception for a number of years as people looked at the Nutanix solution and they thought appliance. So if I get a new generation of hardware, if I need to choose a different hardware vendor, Nutanix is a software company, as you describe. You've got some news announced here at the .NEXT show when it comes to some of those underlying storage pieces. Bring us through, you know, we always see, as we go around to the events, you know, companies like Intel and Nvidia always standing up with the next generation.
I teased up a little bit that, you know, we talked about flash, what's happening with NVMe, storage class memory. So what is it that's new for the Nutanix platform? Yeah, let me start a little bit, you know, on what we have done for the last maybe a year or so before, you know, I go into the details of why we did it and, you know, what are the advantages that the customers might have. So one thing that was happening, particularly for the last decade or so, is, you know, flash was moving on to faster and faster devices. I mean, 3D XPoint came in, storage class memory was coming in. So one thing that was very apparent was, you know, this is something that we need to get ready for. Now, at this point, what has happened is the price point at which, you know, these high-end devices can be obtained has come to where, you know, mass consumption can happen. I mean, anybody can actually, you know, get a bunch of these Optane drives at a pretty good price point, put them in their servers, and expect the performance. And I think the important thing is we built some of the architectural pieces that enable us to leverage the performance that these devices give. And for that, I think let's start with one of the first things that we did, which was make sure that we have things like fine-grained metadata so that, you know, you could get things like data locality. So the data that the compute would need would stay in the server. That was a very important part, one of the key tenets of our platform. And now, as these devices come on, we want to actually access them without going over the network. You know, just in the last year, we released a construct called the Autonomous Extent Store, which is not only making data local, but also making sure metadata is local.
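The data-locality point Mano makes here can be illustrated with a simple back-of-the-envelope model. This is a hypothetical sketch with made-up latency numbers, not Nutanix measurements: the fixed cost of a network hop matters little for a slow device, but dominates once the device itself is fast.

```python
# Illustrative latency model (all numbers are assumptions, not Nutanix data)
# showing why data locality matters more as storage devices get faster.

def read_latency_us(device_us: float, network_hop_us: float, local: bool) -> float:
    """Total read latency = device service time, plus a network hop if remote."""
    return device_us + (0.0 if local else network_hop_us)

NETWORK_HOP_US = 100.0  # assumed round-trip cost of reading from another node

for name, device_us in [("HDD", 5000.0), ("SATA SSD", 100.0), ("NVMe/Optane", 10.0)]:
    local = read_latency_us(device_us, NETWORK_HOP_US, local=True)
    remote = read_latency_us(device_us, NETWORK_HOP_US, local=False)
    print(f"{name}: local {local:.0f}us, remote {remote:.0f}us, "
          f"remote penalty {remote / local:.1f}x")
```

Under these assumed numbers, a remote hop barely registers against a disk but makes an NVMe read an order of magnitude slower, which is the motivation for keeping both data and metadata on the same server.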
So having the ability to actually get data and metadata from the same server with hyperconvergence benefits all of these newer classes of storage devices, because the faster the device is, the closer you want it to be to the compute; the cost of getting to the device adds up in the latency and adds to the application's wait on the storage IO. Now, at this .NEXT, what we are announcing is two technologies. One is called Blockstore, which is our own user-space file system. It's a completely user-space file system. We are replacing ext4 for all our, you know, drives that are NVMe and beyond. And we're also announcing SPDK support, which is basically a way of accessing these devices from user space. So now with both of these combined, what we can do is we can actually make an IO go from start to finish all in user space, without crossing the kernel, without doing a bunch of memory copies. And that gives us the performance that we need to really get the value out of these, you know, high-end devices. And that performance is what, you know, our high-end applications are looking for. And that is, I think, the true value that we can add for our customers. Yeah, so Mano, if I understand that right, it's really that deconstruction, if you will, of how storage interacts with the application. It used to be the SCSI stack when I thought about the interface and how far I had to go. And you mentioned that performance and latency are so important here. So as we're moving from, you know, what traditionally was disk, either externally or internally, moving up to flash, moving up to things like NVMe, I really need to rearchitect things internally. And therefore, this is how you're solving it, delivering higher IO.
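The kernel-bypass argument above can be sketched as a rough per-IO cost model. All figures below are illustrative assumptions for the sake of the comparison, not Nutanix or SPDK benchmarks: the point is that once the device itself costs only a few microseconds, the syscall, kernel filesystem, copy, and interrupt overheads become the dominant share, and a user-space path removes most of them.

```python
# Hedged sketch: rough per-IO cost model contrasting a kernel-mediated path
# (syscall + kernel filesystem + copies + interrupts) with a user-space path
# (Blockstore-style metadata in user space plus SPDK-style polled access).
# All numbers are illustrative assumptions, not measurements.

DEVICE_US = 10.0  # assumed raw NVMe device service time

KERNEL_PATH = {
    "syscall + context switch": 2.0,  # microseconds, assumed
    "kernel filesystem (ext4)": 3.0,
    "memory copies": 2.0,
    "interrupt handling": 2.0,
    "device (NVMe)": DEVICE_US,
}

USERSPACE_PATH = {
    "user-space filesystem (Blockstore-like)": 1.0,  # assumed
    "polled submission (SPDK-like)": 0.5,
    "device (NVMe)": DEVICE_US,
}

kernel_total = sum(KERNEL_PATH.values())
user_total = sum(USERSPACE_PATH.values())
print(f"kernel path:     {kernel_total:.1f} us per IO")
print(f"user-space path: {user_total:.1f} us per IO")
print(f"software overhead shrinks from {kernel_total - DEVICE_US:.1f} us "
      f"to {user_total - DEVICE_US:.1f} us per IO")
```

With a 5-millisecond disk, nine microseconds of software overhead was noise; with a ten-microsecond device, it nearly doubles the IO time, which is why the whole path is being moved into user space.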
Maybe if you could bring us inside, you know, I think high-performance IO and low latency, SAP HANA was one of the early use cases that everyone talked about that we had to rearchitect for. What does this mean for those solutions, and any other kind of key applications that this is especially useful for? Yeah, I think all the high-end demanding applications. Talk about SAP HANA, all the healthcare applications, look at Epic, MEDITECH, look at the high-end databases. Because we already run a bunch of databases, but the highest-end databases still are not running on HCI. I think this technology will enable, you know, the most demanding Oracle or SQL Server, Postgres, you know, all the analytics applications, they will now be running on HCI. So the dream that we had, that every application, whatever it is, can run on the HCI platform, that can become a reality, and that is what we are really looking forward to. So our customers don't have to go to three-tier or anything. If there is an application that you want to run, HCI is the best platform for your application. That is what we really want to be. All right, so let me make sure I understand this though, because while this is a software update, this is leveraging underlying new hardware components that are there. I'm not taking a three-year-old server to do this. Can you help us understand, you know, what do they need to buy to be able to enable this type of solution? So I think the best thing is we already came out with the all-NVMe platform, and everything beyond that is a software change. Everything that we add is just available on an upgrade. So of course, you need a base platform which actually has the high-end devices themselves, which we have had for a year or so. But the good thing about Nutanix is, once you upgrade, it's like Tesla, you know, you have the hardware, but once you get that software upgrade, you get the boost in performance. So you don't need to go and buy new hardware again.
As long as you have the required devices, you get the performance just by upgrading to the new version of the AOS software. I think that is one of the things that we have done forever. I mean, every time we have upgraded, you will see over the years our performance has increased, and very seldom has a customer been required to change, you know, their internal hardware to get the performance. Now, another thing that we have is we support heterogeneous clusters. So on your existing cluster, let us say that you're running all-flash and you want to get, you know, all-NVMe, you can add nodes, you know, which are all-NVMe and get the performance on those nodes, while the flash nodes can take the non-critical pieces that don't require the highest-end performance but still give you the density that a VDI or maybe a general server virtualization workload would want, while the NVMe nodes take on the highest-end databases or highest-end analytics applications. So the same cluster can slowly expand to actually take on this heterogeneity of applications. Yeah, that's such an important point. We had identified very early on that when you move to HCI, hopefully that should be the last time that you need to do a migration. Anybody that has dealt with storage, moving from one generation to the next or even moving frames, knows it can be so challenging. Once you're in that pool, you can upgrade code, you can add new nodes, you can balance things out. So such an important point there. Mano, you had stated earlier that the underlying AOS is now built very much for that hybrid cloud world. You talk about things like Clusters, where you now have the announcement with AWS now that they have their bare metal service. So do we feel, are we getting a balancing out of what's available for customers, whether it's in their own data center, in a hosted environment, or in the public cloud, to take capabilities like you were talking about with the new storage class? Yeah, yeah.
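The heterogeneous-cluster idea Mano describes, steering the most demanding workloads to the newer nodes while density workloads stay on flash, can be sketched as a toy placement policy. This is a hypothetical illustration of the concept only; the node names, workload classes, and `place` function are invented here and are not Nutanix's actual scheduler.

```python
# Illustrative (hypothetical) placement policy for a mixed cluster:
# prefer all-NVMe nodes for latency-sensitive workloads, leaving all-flash
# nodes for density-oriented workloads like VDI. Not Nutanix's real logic.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    media: str  # "nvme" or "flash"

def place(workload_class: str, nodes: list) -> Node:
    """Return the first node matching the preferred media, else any node."""
    preferred = "nvme" if workload_class == "high_perf" else "flash"
    for node in nodes:
        if node.media == preferred:
            return node
    return nodes[0]  # fall back if no node has the preferred media

cluster = [Node("node-a", "flash"), Node("node-b", "flash"), Node("node-c", "nvme")]
print(place("high_perf", cluster).name)  # a high-end database lands on the NVMe node
print(place("density", cluster).name)    # VDI lands on a flash node
```

The design point is the fallback: an old all-flash cluster keeps running everything, and adding NVMe nodes simply gives the high-performance class somewhere better to land, so the cluster absorbs heterogeneous applications without a migration.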
I think, see, most of these public clouds are already providing you hardware which has NVMe built in, which I'm sure in the future will have storage class memory built in. So all the enterprise applications that were running on-prem with the latency guarantees, with the performance and throughput guarantees, can be available in the public cloud too. And I think that is a very critical thing, because today when you lift and shift, one of the biggest problems that all our customers face is, when you are in the cloud, you find that enterprise applications are not built for it. So they have to either re-architect them or they have to make use of new cloud-native constructs. And in this model, you can use the bare metal service and run the enterprise application in exactly the same way as you were running it in your private data center. I think that is a key tenet, because now with this, and with our data mobility framework where we can actually take both storage and applications and move them across the public and the private cloud, we now have the ability to actually control an application end to end. A customer can choose now where they want to run it. They don't have to think, oh, if I have to move to the cloud, I have to re-architect it. You can choose the cloud and run it on the bare metal service exactly as you were running it in your private data center, utilizing things like Nutanix Clusters. Great, well, Mano, last question I have for you. We really dug down into some of the architectural underpinnings and some of the pieces inside the box. Bring it back up to a high level, if you would, from a customer standpoint: key things that they should be understanding that Nutanix is giving them with all of these new capabilities, you mentioned the Blockstore and the SPDK. Yeah, I think for the customer, the biggest advantage is that now the platform that they chose for EUC or server virtualization can be used for the most demanding workloads.
They are free to use Nutanix for SAP HANA, for high-end Oracle databases, big data, analytics. They can actually use it for all the healthcare apps that I mentioned, Epic and MEDITECH. And at the same time, keep the investment in hardware that they already have. So I think the Tesla analogy that we always use is so apt with Nutanix. With the same hardware investment that they have made, with this new architecture, they can actually start leveraging that and utilize it for more and more demanding workloads. I think that is the key advantage: without changing the appliances or your SAN or your servers, you get the benefit of running the most demanding applications. Well, Mano, congratulations to you and the team. Thanks so much for sharing all the updates here from the .NEXT show. Thank you. All right, and stay tuned for more coverage from the Nutanix Global .NEXT Digital Experience. I'm Stu Miniman, and as always, thank you for watching theCUBE.