Live from Las Vegas, it's theCUBE, covering VMworld 2017, brought to you by VMware and its ecosystem partners.

I'm Stu Miniman, here with Justin Warren, and you're watching theCUBE, the worldwide leader in tech coverage. Happy to welcome back to the program Satyam Vaghani, who's now the Vice President of Technology at Nutanix. Satyam, great to see you. You've got a long history with the VMware community, always good to catch up with you. How have you been?

Great to see you guys again. I've been good.

All right, so I like it, always the progression as to what gets discussed at the show. I was poking at VMware a little bit that in 2015 it was any application, any device, one cloud, and in the keynote this morning it was any device, any application, any cloud. One of the things VMware is highlighting is NSX; they want to be that interconnect fabric between the cloud and the edge. And of course I bring that up because, besides the virtualization stuff you've done for years, you've got a bit of a different role now. You joined Nutanix through the PernixData acquisition, so why don't you tell us what's been keeping you busy and excited these days?

In a word, or two words, edge computing for sure. And this has been striking to me too, because I walked in to pick up my badge this morning and the first thing I see at VMworld is IoT. There's this big area right next to where you pick up the badge, so it's quite telling as to how computing is evolving. As you just said, NSX is one way to quantify that evolution, how do we stretch the cloud all the way into the edge, and there are some other ways. And so, yeah, I've been working on what defines the operating system that can converge the edge and the cloud into one seamless piece.

Yeah, there's definitely a lot of excitement around edge, but as with any early technology, if you're talking to a telco, it means one thing.
If you're talking to somebody involved in sensors and IoT, it means a different thing. And we're not talking about the ROBO use cases that some people looked at HCI for. So what is edge computing in your parlance, and how does that OS fit into it?

I know, great question. In fact, I'll come back to talking to different verticals and how we can make some sense out of all those disparate conversations, maybe as the second part of my answer. But the edge, in my definition, is a way to collect and digitize the real world, that's one thing, but then to take real-time decisions on all this data that is coming in from the real world. And so in my sense, the edge is all about taking real-time decisions on data, and the cloud, long term, is going to be about taking long-term decisions about the same data or some subset of it. Tesla is a great poster-child example, right? In a self-driving car, you want to figure out whether you're going to get into an accident right at the car, which is the edge. But the longer-term work of making Teslas drive better can all be done in the cloud, deep learning, et cetera. And so there's this very fluid movement of data and code between the two systems that we as operating system vendors need to make possible. Otherwise, every edge application, which is really a hybrid application, some part of the logic runs here on the edge, some part runs on the cloud, is going to be tremendously difficult to create. And you mentioned, very rightfully so, that the interpretation of what edge computing or IoT means is very different by vertical. And so in fact, right now my number one mission is to look at all these different use cases and figure out what's common.
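The edge/cloud split described here, a real-time decision taken at the device while the raw data is retained for long-term learning in the cloud, can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the class names, the obstacle-distance rule, and the two-second stopping threshold are invented for this sketch, not anything Tesla or Nutanix ships.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sensor reading from a self-driving car; fields are illustrative.
@dataclass
class Reading:
    obstacle_distance_m: float
    speed_mps: float

@dataclass
class EdgeNode:
    """Takes real-time decisions locally; batches raw data for the cloud."""
    cloud_batch: List[Reading] = field(default_factory=list)

    def ingest(self, r: Reading) -> str:
        self.cloud_batch.append(r)  # retained for long-term learning in the cloud
        # The real-time decision happens here, at the edge (assumed 2 s rule).
        if r.obstacle_distance_m < r.speed_mps * 2.0:
            return "BRAKE"
        return "CRUISE"

edge = EdgeNode()
decision = edge.ingest(Reading(obstacle_distance_m=10.0, speed_mps=20.0))
print(decision)               # real-time decision taken at the edge -> BRAKE
print(len(edge.cloud_batch))  # same data queued for long-term cloud analysis
```

The point of the split is visible in the two outputs: the latency-critical answer never leaves the device, while the full stream still flows upward for the slower, global learning loop.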
I think back 30 or 40 years ago, when the people who sat down to write POSIX figured out what is it that an operating system can provide that makes sense to 200,000 different applications. I think it's time to answer that exact same question for the world of machines. What is it that an operating system can provide which makes sense in the world of machines that serves transportation, defense, healthcare, and so on?

So what are some of the things you've discovered in the research you've done so far? Have you got some early hints of what those pieces of commonality are?

For sure, great question. One common thread that unifies all these different use cases is that most of them are about getting insights from the data being collected, whether it's image data, raw numbers, whatever it is. And so machine learning and analytics seem to form the core of any interesting application subset you look at. So then the question is, from an operating system point of view, can you make it very easy to build applications around machine learning? Can you make it very easy to build applications around analytics? And the third thing is, this is not about running virtual machines anymore. Sure, at the very guts of the system you might be running virtual machines, but it's about a developer-centric world, right? So the question is, can you make it such that developers can just deploy code without worrying about whether the code runs in a virtual machine, a container, or a combination thereof? Can you make it very easy for data to move across without the developer having to explicitly code that up? Those are the common parts.

Yeah, I wonder, what are your thoughts architecturally when you go in there? At Wikibon, we have a weekly research call.
We were talking about serverless last week, and we actually saw that for edge computing, serverless is a really good fit. Amazon has their Greengrass, which leverages Lambda, because, sure, VMs seem a little heavy for some of these applications. What should we look at from that standpoint?

I agree, I agree. It seems to be more serverless and maybe containers, because sometimes when you're trying to do serverless applications, you need a bunch of infrastructure that then has to be packaged into containers. And so somewhere between the two is probably where the compute part of edge computing or IoT is going to land.

Okay, can you talk to us about Nutanix? There's obviously a strong partnership between Nutanix and VMware for some things, about three quarters of Nutanix customers are running VMware, but differences of opinion on some other things: vSAN versus Nutanix, and Nutanix has AHV. I've heard you say operating system a bunch of times. Is that an extension of AHV? Is it something else? VMware has plans for how they're going to go after IoT. Can you compare and contrast your vision, and how it matches with the skillsets inside Nutanix, versus VMware, who you know real well?

Oh yeah, for sure. AHV and the distributed storage fabric that Nutanix has, I think that is a great substrate on top of which to now build a much more application-oriented, edge computing kind of stack, right? And the edge computing stack is more about what data services you can provide, which are not data stores anymore. It's things like structured data as a service, unstructured data as a service, streaming data as a service, and function as a service, to your point about serverless computing, or containers.
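The function-as-a-service model being discussed, where a developer deploys code without ever deciding whether it lands in a VM or a container, can be illustrated with a toy event dispatcher. This is a minimal sketch of the idea, not Greengrass, Lambda, or any Nutanix API; every name in it is made up for illustration.

```python
from typing import Callable, Dict

class Faas:
    """Toy function-as-a-service dispatcher: the platform owns placement,
    the developer only registers code against event types."""
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[dict], dict]] = {}

    def register(self, event_type: str):
        # Decorator: binding a handler to an event type is the 'deploy' step.
        def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
            self.handlers[event_type] = fn
            return fn
        return wrap

    def invoke(self, event_type: str, payload: dict) -> dict:
        # The platform routes the event; whether the handler actually runs in
        # a VM, a container, or in-process is invisible to the developer.
        return self.handlers[event_type](payload)

faas = Faas()

@faas.register("temperature.reading")
def to_fahrenheit(payload: dict) -> dict:
    return {"fahrenheit": payload["celsius"] * 9 / 5 + 32}

result = faas.invoke("temperature.reading", {"celsius": 25.0})
print(result)  # {'fahrenheit': 77.0}
```

The design choice this sketch highlights is the one made in the conversation: the unit of deployment is a function bound to an event, and everything below that line is the platform's problem.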
And so I think the Nutanix asset becomes a great substrate on top of which to build this much more application-oriented architecture. Now, the VMware story I would imagine is kind of sort of the same, although VMware is, I think, remarkably leading with NSX, because this extended network between the cloud and the edge is obviously a great problem to solve. I have my own biases, so I tend to lead with the application side of things, which is, as I said, what does it take to make the next POSIX layer for machine learning or for analytics? And then the infrastructure piece is obviously important, but it is what it is; I'm sure we are going to solve it. So in some sense, the two stories are evolving in a very congruent manner. We'll see. I think the market is so big and the use cases are so diverse that, for a change, if we do this in a cooperative manner, we can probably evolve an interesting standard, an interesting stack, that is reasonably stable. And so I think this could potentially be a story of cooperation as opposed to throwing stones at each other, but time will tell.

One of the things that VMware talked about in the keynote was security, and the nature of security being baked into things. You mentioned that your focus is on applications, and certainly developers of applications don't really care about infrastructure at all, so this service consumption is something they find quite interesting. But you also mentioned how fast things are changing. Putting those three things together at the edge, how do you see security services developing so that they can be easily consumed by people, instead of being this really difficult infrastructure problem?

I wish I had an easy one-line answer, but I'll explain in the abstract how I see this evolving, through an example, right?
If you think about iOS and Android, they have a very structured way of doing notifications. No app can do notifications by itself; it has to plug into that framework. So I think it's probably time to solve security that way, right? It's time to put some structure around how you can program the movement of data, the collection of data, and the deployment of compute, a program, onto a layer, onto a substrate. And if you tightly control that movement of data and that deployment of compute, then there is a higher chance that, as an operating system provider, you are going to be able to deliver a much more secure system, because you've taken the bulk of the security programming away from the application developer and made it an operating system core service, right? And of course that requires you to impose a framework on how you develop applications, which is the notification example: I've imposed a framework on how you deliver notifications, and by doing so I've made a very seamless, very good experience.

So it's taking a very opinionated approach to doing things.

I think so.

Which does restrain choice a little bit. And one thing that various vendors say is, oh, we're all about customer choice, but sometimes there is too much choice. On the flip side, the choices IT has been making about how you should do security so far haven't really worked out so well. So what do you think needs to change for this opinionated design to feel like the correct one, one that people will want to use without feeling constrained? Because I want to be able to choose to write my own encryption algorithms.

I think we have to democratize the availability of data. These computing stacks are all about collecting as much data as possible and then writing great applications on it.
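The opinionated model being argued for here, where applications never touch data directly and every access is mediated by a platform service, the way iOS mediates notifications, can be sketched as a small data broker. This is purely an illustrative sketch of the idea; the broker, its grant model, and every name below are hypothetical, not a Nutanix or VMware interface.

```python
from typing import Dict, List, Set

class DataBroker:
    """Toy OS-level broker: apps read data only through grants it enforces,
    so security lives in the platform rather than in every application."""
    def __init__(self) -> None:
        self._streams: Dict[str, List[float]] = {}
        self._grants: Dict[str, Set[str]] = {}  # app -> streams it may read

    def publish(self, stream: str, value: float) -> None:
        self._streams.setdefault(stream, []).append(value)

    def grant(self, app: str, stream: str) -> None:
        self._grants.setdefault(app, set()).add(stream)

    def read(self, app: str, stream: str) -> List[float]:
        # The policy check is the framework's job, not the developer's.
        if stream not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for {stream}")
        return list(self._streams.get(stream, []))

broker = DataBroker()
broker.publish("oil-rig/pressure", 101.3)
broker.grant("analytics-app", "oil-rig/pressure")

print(broker.read("analytics-app", "oil-rig/pressure"))  # [101.3]
# An ungranted app is refused by the platform, with no app-level security code:
try:
    broker.read("rogue-app", "oil-rig/pressure")
except PermissionError:
    print("denied")
```

The trade-off Warren raises is visible in the sketch: the application gives up the freedom to open the data store directly, and in exchange the platform can enforce one consistent policy everywhere.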
So if there is a security framework which doesn't impede the ability of different tenants, whether it's analysts, application developers, or regular operators like the person running the airport, to interface with data, and doesn't impede their ability to deploy interesting compute on top of that data, then we probably have a way to go. So the goal is a great security framework, but one that still democratizes the availability of data, the movement of data, and the deployment of compute. If we can achieve that, and I know I'm asking for a bit too much, it's probably the source of my next five patents, I think it's a doable thing. It's a doable thing.

Satyam, why does Nutanix have a right to be a player in IoT? These are going to be some pretty gnarly ecosystems. We did some early work with GE when they were coming up with the ITF for the industrial internet, and there are some really big challenges. You're not going to create the sensors and all those devices, and some of the really big companies in the world are working on the messaging protocols and things like that. Where, realistically, does Nutanix play, and how do you fit into that really big and very mature ecosystem today?

Great question. There are various different takes on it. One is, I think it's a natural evolution of the company strategy. We started by saying, look, here's Hyperconvergence 1.0, the ability to converge compute, storage, and networking into this box. Hyperconvergence 2.0, which is what we are executing right now as we speak, is this orchestration layer between public and private clouds; we are converging public and private into one through things like Calm, et cetera. And so we naturally think the next step of our strategy, Hyperconvergence 3.0, if you will, is the convergence of the edge and the cloud.
And so just from the point of view of how we evolve the enterprise cloud operating system, we think this is the natural place for it to grow as a piece of technology. The other way to look at it is, it's time to build the next hypervisor, and the next hypervisor is the hypervisor for data as it moves around all these clouds, between ridiculously disparate places, from an oil rig all the way into some mainland data center, or from an airport into the cloud, and so on. And the process of creating a hypervisor for data is a distributed data systems problem, which we have historically been really good at. So we think we deserve a good cut at it. And the last thing I'll say is, if you think about how IoT evolves, you used the example of GE, and I have great respect for companies like that. But think about it: those companies had to evolve all of that by themselves because they happened to be the sensor vendor. So the GEs of the world, the Honeywells of the world, had to not only make the sensors but also build some compute capability to make sense out of all that data. But now the sensors are here, they are here to stay, and they are all open, to your point; the protocols to get data out of them, MQTT, CoAP, and so on, are all open. So now it's genuinely the time to focus on the data and code aspect of what's coming in; 40 times more data is going to hit the edge than the cloud will ever see in three years. And that deserves a swing from traditional data operating system folks like Nutanix and some of the other companies you see on the show floor.

You mentioned it's a distributed computing problem, which is obviously very hard to do, but there's a networking aspect, and that's one thing, particularly in cloud, that tends to get overlooked quite a bit. I mean, the poor network people feel left out.
What implications do you think this level of data from all of these sensors will have, with the amount of computing and decision-making happening at the edge, as distinct from in the cloud, devices in the edge talking to each other, but then also providing the data back to the cloud? What are some of the network impacts that's going to have?

Great question, there are so many of them, but something that is first and foremost in my mind, because I literally just got out of a customer conversation on this: there's a customer who does smart oil rigs, and they now need to move that data, some of it at least, to the cloud, but the network is really flaky. So there's obviously the problem of securing that network and so on, but now there's also the problem of what that means to the application itself. Can you express the flakiness of the network as a policy that a programmer can program towards? When flakiness of the network happens, literally from a programming API point of view, the programmer can make a choice: for all the streaming data coming in, either drop all of it while the network is flaking, or have a data-moving system which can catch up when the network becomes stable again. So all these infrastructure problems now manifest themselves in how you program the system, and we are having some conversations around that. There are so many interesting possibilities here, I don't even know where to start.

Whole new programming abstractions that we may not even be using today.

Exactly, yeah. And there's obviously the hardware part of the problem, right? People are creating LoRa networks, they are creating NB-IoT, this new type of LTE network just for IoT data. So there's a whole bunch happening in hardware as well.
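The drop-versus-catch-up choice Vaghani describes for a flaky uplink can be expressed directly as a policy a programmer picks per stream. The sketch below is a hypothetical illustration of that API idea; the class and policy names are invented, not an actual Nutanix interface.

```python
from enum import Enum
from typing import List

class FlakyPolicy(Enum):
    DROP = "drop"          # discard readings while the link is down
    CATCH_UP = "catch_up"  # buffer readings and replay when the link returns

class Uplink:
    """Toy edge-to-cloud uplink whose outage behavior is a declared policy."""
    def __init__(self, policy: FlakyPolicy) -> None:
        self.policy = policy
        self.up = True
        self.sent: List[float] = []
        self._backlog: List[float] = []

    def send(self, reading: float) -> None:
        if self.up:
            self.sent.append(reading)
        elif self.policy is FlakyPolicy.CATCH_UP:
            self._backlog.append(reading)  # hold until the network is stable
        # Under DROP, the reading is silently discarded while the link is down.

    def restore(self) -> None:
        self.up = True
        self.sent.extend(self._backlog)    # catch up on the buffered data
        self._backlog.clear()

link = Uplink(FlakyPolicy.CATCH_UP)
link.send(1.0)
link.up = False      # the oil rig's network starts flaking
link.send(2.0)
link.send(3.0)
link.restore()
print(link.sent)     # [1.0, 2.0, 3.0] - nothing lost under CATCH_UP
```

With `FlakyPolicy.DROP` the same sequence would deliver only `[1.0]`, which is exactly the choice being surfaced to the programmer instead of being buried in the infrastructure.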
Okay, Satyam, pull this all together for us. You said you're meeting with customers. What can they do today? And what are some of the major milestones we should be looking for, to say, oh, I'm hearing from Nutanix and customers that they're doing this, we've reached that point? Give us a bit of a picture of today, and maybe a year or two out if you can.

Great question. So I think the here-and-now thing customers can do, apart from the obvious work of figuring out the interesting ways their business wants to monetize data coming out of their own operational infrastructure, assuming they've figured that out, is to start creating that dispersed cloud, right? A cloud of assets out in the field, in factories and airports and oil rigs and aircraft carriers, that they can manage centrally and deploy applications into centrally. And so they don't need a specialist on an oil rig to figure out how to operate infrastructure. That's a here-and-now problem; they can set it up right now: centrally managed clouds with central application orchestration on those clouds. The second part is to create a new generation of infrastructure which is one level higher than virtual machines and data stores. It's infrastructure that gives their developers the ability to use data as a service, both on the edge and in the cloud, and to deploy compute as literally code, function as a service, either on the edge or in the cloud. And so that gives them a homogeneous, much more developer-friendly system to deploy applications that can consume data at a volume and a pace we have never seen before. And the only way to make that consumption possible is if we can make the movement of data, the availability of data across this dispersed cloud, very, very easy. So that's step two.
It is a data and code hypervisor, something we haven't seen yet in the industry.

Satyam Vaghani, a pleasure to catch up with you as always. We look forward to watching as all of this progresses.

Thanks, guys.

For Justin Warren and myself, we'll be back with lots more coverage here from VMworld 2017. You're watching theCUBE.