Ladies and gentlemen, please welcome the President of Nutanix, Sudheesh Nair.

Good morning, and if you are watching us on the live stream, thank you so much. Good afternoon, good evening, good morning, wherever you are. I'm happy to see that the room is almost full. The way I see it, there are at least two distinct advantages to doing a conference like this in DC. One, as Dheeraj said yesterday, almost all of you are here, and I think most of you are not drunk, right? Most of you. I say most, because if you went to Jason's party last night, then probably you are not here, and if you are here, yeah, I'm sure you are hungover. And the second one: in any other city, a nerd herd like this would not be the coolest one. But in DC, compared to those politicians, we are the coolest bunch. So if you are single, want to mingle, go to a party or a bar, you can actually say what you do for a living, and people will talk to you.

Look, how was yesterday's keynote? Yeah, I didn't ask just for applause, but I know that some of you are still a bit confused with the velocity with which things came down within a compressed time frame. Some of you are probably thinking, I don't know what just happened. What is Xi? What is Calm? I am calm already. The mysteries of Calm and Xi, I know we need to solve them a bit more, and that's what we are going to do today. Today is the day we'll try to solve some mysteries of life. As we know, life is full of mysteries, right? Some mysteries are hard, some mysteries are easy. For example, here's a hard one: what exactly does a DJ do in the club? Look, he looks busy, he's got a lot of stuff, he's got earphones on, he's acting very serious. The music is already playing. What the hell is he doing there? No one knows. In fact, some DJs also don't want you to know what they're doing, because they themselves don't know what they're doing. I'll give you another one. This is a really hard one. This guy.
You switch on the TV, change to any channel, you can see his face, but people have no clue what his real talent is. Zero clue. That's another hard mystery, very hard to solve. Look, I am not trying to solve any of these mysteries, but there's another mystery, and that's about NASDAQ. Everyone knows the name. I think some of you probably trade there, but do you really know what goes on inside that building? Like I said, I don't think even Sherlock Holmes can solve the mysteries of Ryan Seacrest, but this one I think we can solve. We have the right person for this. Let's welcome the CEO of NASDAQ, Ms. Adena Friedman. Thank you for joining us. Absolutely. Wow, Ryan Seacrest. He is a big mystery, isn't he? No one knows. People try, but they give up. Well, we have a much simpler task this morning, then. Let's hope so.

Look, to prepare for this one, I did some research, because I didn't know what the history of stock markets was, and to be honest with you, I was surprised that the idea of stock trading and commodity trading all started almost 700, 800 years ago. I found a photo of one of the world's first stock exchanges, somewhere in Europe, and the business of stock trading through exchanges didn't change significantly until 1971, when NASDAQ came out with the world's first all-electronic stock exchange. Talk about disrupting an industry, right? So my question to you is: how do you lead a company like this, with such an awesome, awe-inspiring history, through a time when everything around us in technology is changing? Sure, well, it's really interesting. In addition to operating the U.S. markets, we also own and operate the markets in the Nordics, including the Danish exchange.
If you go to Copenhagen, one of the biggest buildings in Denmark is the original exchange building, and it has a ramp for the horse and buggy to bring all of the goods up into the exchange, because back then it was, here's a good and I'm going to exchange it for another good. But today, obviously, stock trading, commodities, equities, options, futures, they all happen by computer, and NASDAQ really was the first exchange company to introduce the concept that all of those people running around the floor, throwing tickets on the floor, could actually be automated and organized into a modern network. I never understood that thing about throwing stuff on the floor. Yes, I know. Neither did I. But I think if you look at what we do today, most stock trading decisions are made automatically, and what's interesting is to see how that value chain is moving up in terms of automation, which I'll talk about in a minute; that is something that is in the DNA of NASDAQ. But it's interesting: when a company is disruptive and then becomes very successful, it's hard to continue that disruptive culture, because you start to have a lot of success, you start to make a lot of money, you start to get comfortable, and then you start to get afraid. You're afraid of disrupting yourself again. You're afraid of disrupting the success that you're having and the money that you're making, and you start moving into the mode of being incremental instead of continuing on that disruptive path. And I think NASDAQ has gone through that in its life. Frankly, 1971 was 46 years ago, so we've been around a while now, and it's been interesting to see the phases of our existence over time. What you just mentioned is probably applicable to the entire audience here. I always say that if you want to play, play like you've got nothing to lose, but the problem is you do have things to lose.
When you are the leader, how do you keep that momentum, how do you continue to play like you've got nothing to lose? It's really interesting, because as I took on the role in January, one of the first messages I gave my team was: okay, just let go of your fear. We are in a very disruptive period of time in our industry. We've got the cloud, we've got the blockchain, we've got machine intelligence. There are a lot of disruptive trends, and I want us to be the one who's disrupting the industry, on the forefront of that. I think NASDAQ, frankly, is on the forefront of those trends, but the key is that you can't be afraid to embrace them, in the context of realizing that they might disrupt part of what we do. So the first thing was: let go of your fear. Think about it in terms of what's the right thing to do. Think about it in terms of what's the best opportunity for our clients. We'll find a solution. We'll find a way to make it work within the context of our business, but let go of the fear. The second thing is, we have an R&D budget that is separate and distinct from the operating budget. We do that so that our leaders can come forward with new and exciting and disruptive ideas without disrupting their own budgets. In a way, you do it outside the budgeting process, and you have it available to you to come forward with something innovative. What we also do is we have a venture investing arm, a group that goes and makes venture bets, and that obviously gets us in front of a lot of disruptive companies. I think it's kind of good to see Wall Street asking not to be afraid, because you are part of Wall Street, and most public companies are afraid of Wall Street, to a certain extent. Let me ask you about the automation part you talked about. Every time the stock market takes a small crash, and you open up the Financial Times or the Wall Street Journal, what's the first photo you see?
A bunch of stock traders with their hands on their heads, paper everywhere. Most of them look like they are from New Jersey or Brooklyn, yelling and screaming. When we walk into NASDAQ, you don't see that. You don't have a pit like that with paper everywhere. You automated the hell out of all of that. It's very interesting to us, because just as NASDAQ removed people from trading floors, at Nutanix we are sort of trying to figure out how to remove people who are doing repeatable tasks from the data center. What's the future of automation? What exactly will the kids do when they grow up? Tesla is talking about automating trucks. Amazon is going to automate the grocery checkout. It looks like all service jobs are going to be automated. What are people going to do, and what's the future of this? I think that if someone knew a perfect answer to that question, they'd actually do really well in the stock market. But I would say this: I think it is an uncertain future, but my view of the future is that there's always going to be man and machine together. When we look at how stock trading occurs today versus how I see the future of the industry, a lot of trading decisions are made through algorithms and automated. I would argue that index investing is an automated form of investing, because you're basically choosing a basket of securities by rule, not by individual judgment. However, there are still a lot of people in the industry. So what are those people doing? They're creating the algorithms. They're managing the algorithms. They're backtesting the algorithms. They're surveilling the markets. They're making sure that we're compliant. So there is still an enormous industry with an enormous amount of talent; the talent is shifting to different skills. Now let's look at what we see going forward. In the investment industry, and that's an industry I know pretty well, the history of it has always been people and judgment.
They take a lot of inputs and use their human brains to make a human judgment, to make a decision to invest. That is also starting to get disrupted. So you've got passive investing with indexes, and then you have certain companies that don't have any investors; they have data scientists. 700 data scientists managing $45 billion. And they're taking massive amounts of data, using Nutanix's amazing capabilities, scaling it up. Thank you for the plug. And essentially driving to an investment decision. I talked to one of the guys who runs one of these firms, and I said, well, what about human judgment? Don't I matter as a CEO? Doesn't the human who's running the company matter? To make sure you make a good investment decision, don't you want to meet the management team? And his answer was, well, it'll all come out in the data at some point, as to whether or not you're a good CEO. I was like, okay, well, I think that... Is it still working out with you? And I actually think, at the end of the day, that automation will augment the human brain and human judgment, not necessarily replace it. So I think there will always be some investors who make that choice. But if every investor made that choice, it would be a huge herd mentality. And then there would be these humans over here who would arbitrage that herd mentality, and they'd make a lot of money. So I actually think there's always going to be a balance. There are going to be some that automate every investing decision, and there are going to be some that always make human judgments. But I think the vast majority of the industry will end up being a combination, and they call it quantamental investing. What is it? Quantamental. You take quantitative information and inputs to make a fundamental investment decision. That's a bit of a hybrid of man and machine, just like the hybrid cloud, right? A hybrid. And I do actually think that that's the future of the industry.
So you heard what she said. If your kids are not data scientists or algorithm writers, I don't know what to tell you. So Nutanix and NASDAQ, as you mentioned, have a really good history, because one of the best moments in our company's past happened there. It's a beautiful story. I think we broke the record for the number of people. Yes, you did. Hopefully, no one will break it. Yeah, that's fantastic. You are there in the middle. And before that, Nutanix and NASDAQ also had a vendor-customer relationship, more of a technology partnership. How is it working out? Please don't tell me it is not working out well in front of everyone. In front of all of you. No, it's working out incredibly well. So I would say there are a few things. One is that we originally worked with you to make sure that we were able to do a lot of real-time analytics on our own system performance, making sure we are tracking and monitoring our systems appropriately using the Nutanix capabilities. The HCI stack has really enabled us to do that, because we have so much data that we're processing every millisecond, and it allows us to make sure that our systems are performing. But that's just the beginning. Where we are taking the company next, and one of the things we're going to talk about, is that we don't just operate our own markets; we provide the technology that powers over 90 other markets around the world. That's a piece that I think most people probably don't know about NASDAQ, right? We define ourselves as the technology leader in the industry, and therefore all these disruptive technologies are very important, because one of the things we just built was a market that's fully deployed in the cloud and blockchain-enabled end-to-end, and that will basically be a totally new architecture for what markets are going to be. Across the world they take your technology as well, right?
And so it's really interesting to see, but most of our clients, as well as ourselves, are likely to move into the hybrid cloud model. There are going to be a lot of data sovereignty issues. There's going to be a little bit of fear, frankly, of moving everything up into the cloud. There are also proximity issues. But that hybrid of being able to have some things on-prem and some things in the cloud, with a scalable architecture, and Nutanix frankly gives us all of that capability with the HCI capabilities, lets us create that hybrid cloud model for ourselves, which we're doing with our DR, our disaster recovery, as well as for our clients.

Fantastic. I mean, NASDAQ is all about performance and security, and the fact that we can even associate our brand with that is a phenomenal honor. Anything else, any last words for the audience? Well, I would just say that it is an amazing moment in the history, certainly of our industry, but I think of every industry, in terms of the pace of change, the pace of disruption, but also the opportunity that we have with the technologies that are here today. If you think about not just Nutanix but the whole infrastructure we are moving towards, in terms of highly scalable microservices with a platform underpinning it, it opens up the possibility to do things that we never thought were possible. I think every industry is facing that moment, but it really is an exciting time. You know, I said there's no better time or place to be in technology, right? I mean, this is the revenge of the nerds. We rule the world again, so... I like the concept of the nerd herd. I think that was awesome. Nerd herd, yeah. I'm glad to be a part of it. Thank you so much. Thank you. Thank you so much, and thanks. Yes, thanks so much. Absolutely. Adena Friedman, CEO of NASDAQ. The next person who's going to present is one of my favorite guys in the world.
He is a very smart guy; he's full of brains, but he's also got a full head of hair, and because of that we call him Hair Brain, H-A-I-R Brain. Let us invite Mr. Sunil Potti.

Today's IT landscape is changing. The era of multi-clouds is here. What if you had a single fabric that could power all of your workloads in just one click and manage them with a single pane of glass? What if you could expand your data center on demand and make all of your clouds, core, distributed, and edge, invisible? What if you could simplify operations without losing visibility or control, and elevate IT to focus on the business? Well, now you can. Welcome to the world of One OS, with the Nutanix Enterprise Cloud. Ladies and gentlemen, please welcome Chief Product and Development Officer of Nutanix, Sunil Potti.

So, at least this guy got it right, but Sudheesh, I've known him for three years, and he still doesn't know how to pronounce my last name. It's like my son. He's like twelve now; when he was nine, and I know I've told this joke a couple of times to smaller audiences, he was sitting in the back seat of my car, and he goes, Dad, I love you. We do lots of good stuff, and I love Indian food and all that, but why did you have to name me after a toilet? So, Sudheesh, you know, thank you for reminding me about that joke. So, on to a mystery. The hybrid cloud is definitely a mystery, as much as it sounds like everybody has an answer. So, if you ever want to learn about the Nutanix product roadmap, read Dilbert, okay? One thing, and this is what I joke about with my team and our partners, is that people talk about the hybrid cloud, but the term hybrid cloud itself is a little bit of an oxymoron, right? Because the only cloud today that truly exists is the public cloud. Everybody knows that's real; it's there, it's mainstream. And how can you actually talk about a hybrid cloud before first, you know, actually building another cloud?
We're trying to call it the private cloud, or whatever you want to call it, and once you actually do a genuine job of the private cloud, only then can you actually embark on building a hybrid cloud. And for most of the, if I can call it, you know, vendors out there, including us, I think what we have to keep ourselves honest about is: before we jump fast into the hybrid cloud, let's make sure we first do a good job of the private cloud, and then expand beyond that to actually make sure that hybridity works. So, you know, as a company, Nutanix has this thesis: use the right cloud for the right workload. If it's a predictable workload, focus on running it inside, because that's what will work out over the long term. You know, when we come to Washington, D.C. and stay for three days, we stay in a hotel; if we stayed for a year, we'd lease an apartment; if we stayed here for five, six years, we'd buy a house. Maybe in Pennsylvania, but we'd buy a house. And if it's elastic, obviously a public cloud makes sense, right? The key thing, though, the thing that has actually led us to where we are today and, I think, differentiates our offering from others, is the fact that when you build a private cloud, it has to be an exact replica; if not, you're not doing an honest job of providing that same consumer-grade experience and choice, right? And a lot of our achievements so far, I would say, and we've come a long way from software-defined storage to hyperconvergence to enterprise cloud, a lot of it is based on your own testimonials about where we have come. In fact, that particular quote I actually got myself while I was driving down one of the highways in North Carolina, between Charlotte and Raleigh, and one of our customers actually mentioned it.
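The hotel-versus-house analogy above boils down to a simple break-even calculation: rent (public cloud) wins for short, elastic needs, while owning (private cloud) wins once a workload runs predictably for long enough. Here is a minimal sketch of that reasoning; all of the cost figures are made up for illustration and do not come from the talk.

```python
def cheaper_option(months, cloud_per_month=1000.0, hw_upfront=18000.0, ops_per_month=200.0):
    """Return which option costs less for a workload running `months` months.

    cloud_per_month: hypothetical public-cloud rental cost
    hw_upfront:      hypothetical one-time on-prem hardware cost
    ops_per_month:   hypothetical ongoing on-prem operating cost
    """
    cloud_total = cloud_per_month * months
    onprem_total = hw_upfront + ops_per_month * months
    return "public cloud" if cloud_total < onprem_total else "on-prem"

# A three-day-trip-style burst stays in the "hotel"; a five-year workload buys the "house".
print(cheaper_option(3))   # -> public cloud  (3,000 vs. 18,600)
print(cheaper_option(60))  # -> on-prem       (60,000 vs. 30,000)
```

With these numbers the crossover sits around 22 months, which is the "lease an apartment" gray zone in the middle of the analogy.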
And so, independent of the terminology, this is what keeps us going inside our engineering group: this delight that we sort of embark on through customer and partner relationships, so that we know we're making a difference with our technology. So, today, obviously, we're going to be talking about the fact that while we've found a way to re-platform the enterprise data center to look just like Google or Amazon or Azure, it's only the starting point. It's only the starting point because we think that the era of cloud is increasingly getting dispersed, dispersed into multi-clouds; that is the term we use, not just hybrid clouds. But the starting point of that is what we call a simple core hybrid cloud. And that obviously requires a way to bind them together, fuse them together, so that, as an operator, you don't know the difference between what's on-premises and what's off-premises. And so, before we get into the multi-cloud architecture, something to note is that this is something we've been doing successfully across a variety of customers. And I just wanted to share with you guys, for the first time, that our largest deployment in the world is actually more than 1,700 nodes. It's growing. It's got a great network. In fact, I can't share all the details, but I can count the number of operators for this environment on a single hand, right, on my fingers. But the most interesting thing here is the fact that it's 100% AHV. It's 100% AHV. So it's come a long way. And, you know, we're trying to take this to the next level. That's what this conference is about. That's what engineering is all about: to constantly keep raising the bar.
And so, in this world of clouds that are getting dispersed, we think the architecture powering the clouds means we're going to have a lot of distributed cloud infrastructure: hospitals, DR centers, secondary data centers, and so forth, so that they all look like smaller versions of your private and public clouds, right? A simple example: we have this global pharmaceutical company. They started small with us across a couple of data centers, but they're now essentially re-platforming their global infrastructure across all their sites. And we think this distribution isn't going to stop there; it's going to move closer and closer to the edge. And this edge computing cloud is not just limited to, in our opinion, things like drones; it's actually more practical than that. It's practical in the sense that it's about moving some intelligence onto, you know, a barge or an oil rig, or even a Humvee or a forward operating base. In fact, we're going to be talking about some of those examples today, where we think that if you can truly homogenize operations while accommodating this distribution of clouds, that is where the next era of cloud is going to be, from our perspective. But before we do that, one of the key things to note is that Nutanix as a company was always about software, even though we packaged it as turnkey appliances on day one. We work with a variety of partners, like Dell and Lenovo, and we integrated with Amazon and Azure and so forth. And yesterday, Dheeraj talked about how we are actually taking public cloud integration to the next level, where it's not just about homogenization of operations, but the fact that we are actually going to fuse the way we deliver enterprise apps and cloud-native apps in the Google cloud. I'll spend a little more time on that later in the keynote.
But a couple of other things on the flip side, on mainstream infrastructure, is the fact that that same software, which we're calling the Enterprise Cloud OS fabric, has been shipping on Cisco for a while now. Any customer that uses Nutanix through Supermicro or Dell essentially gets that same packaged software and is able to consume it across the C-Series, the B-Series, and the storage-heavy line as a uniform fabric. And today I'm actually quite pleased to announce that we have now extended that to HP. And on the software side, I created those graphics myself. Thank you for that applause, by the way. So, you know, it's the DL380s; no surprises there. The most popular platform out there. It's the same exact stack. Something else we've done, beyond just making it available on HP the way we've done it on Cisco, is that as we move into this software-first strategy, the software-first form factor, the software-first consumption model, the thing we're also doing is disrupting ourselves so that we can offer you maximum flexibility. So, the software that you buy on HP is transferable to Cisco, okay, or vice versa. And similarly, that flexibility will apply as we decouple our software from our hardware form factors and support new form factors going forward. So that tells you that we're trying to stay ahead of the disruption of moving beyond appliances, while preserving the simplicity and the turnkey capabilities of appliances, and making our fabric more ubiquitous by ensuring that the software form factor is first and foremost a consumption model. So that's the key takeaway there: this concept of having a single OS across a variety of delivery vehicles. So when we come back and look at this OS fabric, we say, okay, so what's new?
And the analogy that I have, and every product slide, every product roadmap inside Nutanix, has to have a brain, by the way; it's an inside joke. We use the left brain and the right brain, and, you know, many of you, like me, were always confused about which is which, but I've verified this: the left brain is obviously about, you know, logic and stuff like that, and the right brain is about creativity. The way we think about our architecture for this multi-cloud is similar: you actually need a single operating system that powers a variety of architectures, but it cannot come at the expense of a poor experience. And this is the foundational difference between us and many of our competitors: we refuse to accept that you need 15 different products for 15 different use cases, right? Amazon AWS has shown us that you can actually use a single fabric across a variety of workloads. Okay, it may not be everything, but it keeps growing every year. So fundamentally, that's the core thesis of our product architecture: you can deliver the same experience, using the same operating system, irrespective of the workload. So let's delve into both these dimensions; that's how we'll talk through the product roadmap. When we talk about One OS, it is primarily centered around the fact that it has to work across any application and any deployment. Those two things are requirements: you have to support any workload and any deployment form factor, but you have to do it in an open way, and that last one is super hard. We will talk more about what we are trying to do to continuously open up the environment.

So when we talk about any application, let's start there for a second. We talked about the fact that it's no longer about VDI or Oracle databases; it's about the advent of developer-friendly applications. Between mode-one and mode-two applications, whether you call them that, or you call them enterprise-grade and developer-friendly, when you look at our capabilities, we boil it down to three sets. The first, at least in the traditional world, is: what do we do to enable quick, seamless migration? Second, once it's migrated, what do we do to optimize it? And third, how do we keep it on all the time without a high degree of maintenance? So we've been investing in this product in the underbelly of the company. It's not a sexy feature like AHV or anything like that, but it's a pretty significant investment on our side, and in fact our own IT organization has embraced it in a big way. You can now essentially get a product called Xtract, which allows you to take ESX on three-tier environments, your traditional environments, and in a very simple, turnkey fashion migrate that to AHV. That's first. And it's not just a V2V kind of tool; it's also packaged with an application construct, where we can take a very complex app such as SQL Server, scan it, design it, transform it, and push it into the Nutanix fabric in an optimized way. And once it's there, we can go beyond that to optimize the way we secure the data: in this particular case, instead of using self-encrypting drives, expensive external KMS solutions, and so forth, in the upcoming release we will actually support native encryption across our core data path. And yeah, every one of these slides is going to have three or four features on it, by the way, and we do have 172 slides, so that tells you how many features are going in. So that's a preview of some of that stuff on the migration side. There's a detailed session on it; I would recommend some quality time on Xtract for VMs and Xtract for Databases and some of these core data path capabilities. Now, on the performance bottleneck side, another interesting takeaway. I won't jump into the fact that, oh, we're doing great stuff on the core
platform, which obviously we're spending a lot of time on. What we actually did this year, because we became so mainstream, in terms of ensuring that we can benchmark the old versus the new, is we took a lot of customer workload data and burned it into a new tool that we are calling X-Ray. X-Ray is essentially a living, breathing benchmarking environment that allows you to quickly assess the Nutanix fabric and compare it to your alternative environments, and over time we are constantly burning best practices into it, so that you can test not just performance but overall system availability and system health. Because it's not just about how fast we can run in a good environment: what happens when there's a failure? What happens when you have to add nodes? How long does it take to remove nodes? Those kinds of things, right? So that's X-Ray. And once we had invested in building tools to benchmark, we had to spend a lot of time on, you know, the networking bottlenecks versus the storage bottlenecks. This is pretty straightforward; everybody is doing this. We've got NVMe now being burned into the core platform, and on top of that, from a networking perspective, we are going to fuse in RDMA natively. Both of these improvements, NVMe and RDMA, are actually landing in a new platform that we are going to be launching pretty soon, the 9030 platform, which is going to be our flagship high-performance platform, offering 40-gig connectivity, sub-millisecond latencies, all that good stuff you guys know about. But this is not what I would call the aha of performance from our perspective, because in the new set of workloads that we constantly see, the bar is being raised: the bottleneck is shifting beyond storage, compute, and networking to the application, where the virtualization layers are where the bottlenecks happen to be, right? And so, from our
perspective, if you think about the traditional hypervisors, they were built knowing that some of these latencies existed in the storage stack, the network stack, and so forth. What we've had to do, and this is probably the most significant release of AHV since it came out in terms of features and functions, is something called AHV Turbo mode. It is essentially a re-architecting of the core AHV fabric to ensure that it completely understands that storage and network latencies have dramatically changed: the faster it can create bypass routes, the faster it can multiplex across multiple queues, the sooner the app is actually touching the storage and network queues. There's a lot of detail here, and we've only covered a few of the capabilities, but this is essentially going to give you a tremendous boost in performance with a simple software upgrade. So that's the performance optimization side. And then, finally, of course, we have keeping things on, which is probably the most important thing at the end of the day. Here, as many of you know, we do sync, async, a whole bunch of stuff, and we're going to demonstrate a few of these things, but in the upcoming release we will actually introduce something new, which essentially brings you the benefits of near-synchronous RPOs over a high-latency WAN. And we're not just doing that on the data protection side; we're also extending that to backup, where, especially with AHV, many of you have come to us and said, look, I think it's great for a lot of things, but we really need to get integrated backup going. I'm sure you've seen this on the show floor: we now have a ton of partners signed up, including Comtrade and Rubrik, that have built native capabilities inside Prism for backup and so forth. And to give you guys a quick view of all these capabilities, let me introduce Raja on stage. Come on up, Raja. Let me quickly
just log into my virtual desktop and get things going. Just like my last name: if you ever want to pronounce Raja's last name, just call him M-10-Y, like I-18-N. For someone like you, last name jokes are not the way to go. You still work for us. So what are we going to show right now? Look, the Nutanix Enterprise Cloud OS is a platform for running applications, and there are three things that any platform needs to do: you need to be able to move your applications onto the platform easily; once they are on it, you've got to be able to run them real fast; and finally, you need to be able to provide them the highest levels of availability. So let's start with the migration bit first. The Xtract product for databases essentially allows you to move your SQL Servers from anywhere onto Nutanix quickly and easily. The tool's workflow consists of four distinct phases. In the first phase, we discover your SQL Servers, no matter whether they are running virtualized, bare metal, or in the public cloud. In the next phase, the design phase, we look at all of the config and the performance profiles of the SQL Servers, and we apply a whole slew of design best practices to really optimize the SQL Servers for a Nutanix deployment. We move on to the actual deployment itself, and then finally, in the last stage, we use SQL replication to migrate the data from the source environment onto the target. Awesome, man, let's see it. So let's bring up the demo on the screen, please. This is the console for the Xtract product, so let me just create a project here. We are moving some SQL Servers for an HR application, so we are going to create a new project, and the first thing we have to do is input a set of parameters that describe the source environment. We have made it super simple for you; you can see this is pretty much all you have to enter to identify the source environment: some IP addresses, ports, and your user credentials. So let's just go ahead
and upload that through the spreadsheet, and let's see what we have here. So essentially it's contacting the databases. That's right, you can see the tool is now going and searching for the SQL environment, and you can see it's discovered two SQL Servers. So let's go have a look. The first one, you can see, is SQL 2008 running on Windows 2008, and this one has about three gigabytes of data; the second one has about 62 gigabytes of data. If you flip to the Databases tab, the tool shows you a view of the different databases within the SQL VM. Now that we have discovered the VMs, let's go and generate the design. You can see the tool came back and recommended two SQL Server VMs, and we retained the guest OS as well as the SQL versions. The tool automatically recommends a whole bunch of AOS storage properties, as well as the VM compute and networking, but most importantly, all of the database disk layouts within the SQL VM. So a lot of stuff that you would otherwise have to do manually is all taken care of by the tool itself. We then proceed to deploy the VMs, and when we do this, it asks us for some parameters describing the target. When you do it in your data center, it will take you about 10 minutes or so to deploy the VMs, and depending on the data you have within the SQL environment, minutes to hours to migrate all of the data from the source to the target. So in the interest of time, let me just cut over to what this all looks like when it's done. Let me just go and find the project, and you can see, if I pull up the deployment status, the deployment is all done. If I go into the detail view, this is where you can see a lot of steps, all the way from cloning the VMs, to laying down the OS disk, to setting SQL Server up, to configuring the right roles, to renumbering the disks. So basically we have taken what would essentially have been thousands of lines of Chef code or something like that and made it simple, turnkey, and prepackaged it
from a migration perspective. That's exactly the net of all of this: moving SQL Servers from one infrastructure environment to another has always been a painstakingly complex process, often taking days or weeks, and in true Nutanix fashion we have really focused on the mundane and made that complex task simple, turnkey, one click. Got it. So what's next? Well, with these applications it's also about performance. So what do you think, how many IOPS do you think our cluster is driving? 50 million? Well, let's go. That was not part of the script, by the way; it was supposed to be 50k IOPS. Yeah, so if you look, we have an 8-node cluster, and let's go have a look here: we are doing about 450,000 IOPS. Let me jump into the storage tab. I think that was supposed to be the surprise. 450,000 IOPS, that's a lot. That is a lot: from an 8-node cluster we have been able to drive 450,000 IOPS. So let me go and take a look at what's going on, let's go and see what's driving all of those IOPS, and let's go into the table view here. We typically end up running some Oracle in our .NEXT demos, so let me take a wild guess, and yeah, it's my lucky day: you can see that we are actually running some Oracle VMs, and these are big VMs, 44 cores and 240 gigs. Let me jump into Oracle Enterprise Manager and look at the view from over there. Is this the GUI that Dheeraj has developed? Yeah, you can see the difference between Prism and, sort of, a modern GUI. Let's wait for Oracle Enterprise Manager to load up. Thankfully he hasn't been writing code on Prism anymore. That's right. So if you go into the rack view here, you can see that we have these four Oracle VMs that form the cluster, and if I jump into the summary performance tab, you can see from an IOPS standpoint we are driving about 450,000 IOPS, and if I go to the latency view, you can see it's all with excellent latency, about a millisecond. So this is a big difference from last year. Yeah, absolutely. For those of
you who were with us last year at .NEXT in Vegas: we showed you how we run Oracle bare metal using Acropolis Block Services, and in that environment we were driving about 80,000 IOPS. This year we are running Oracle virtualized, in a hyperconverged manner, on the cluster, and we have been able to gain a 400% performance improvement with a software update. Oh, that's great. So obviously we've done quite a few things here, right, so maybe you can spell them out. Yeah, it really is a testament to the power of our software-defined architecture. Because of our vantage point in the stack, we get to leverage innovations on the storage side: we are using NVMe drives. If you look at the compute, you talked about AHV Turbo; that's what we are using here. And then finally, on the networking, we are using RDMA to cut down latency as well as increase throughput. Awesome, awesome. So to net it all out, when you look at running Oracle, SQL, or other databases on Nutanix, you have complete flexibility: you can run them bare metal, you can run them virtualized, all with best-in-class performance. So what about the last piece then? Yeah, these applications are not just about performance; it's also about protecting them. So let's jump into the data protection tab and see what we have going here. If you go into the table view, we can see that we have one protection domain that we have configured to protect the Oracle VMs. We used to have a one-hour RPO with our asynchronous replication, and we had said that with our near-synchronous replication we'd bring it down to 15 minutes. I'm very happy to announce that with our NearSync technology we have actually been able to bring the number down a lot lower: when we ship the product, it's going to be at one minute. Wow. That's how we incent our engineers: I told them we could be on stage for 15 minutes. So let's go into the schedule here; you can see it's set to take snapshots every 15 minutes, and if you look,
you can see the system has been taking snapshots every 15 minutes. So let's go ahead and see if we can change the schedule and lower it. Let me go in here and do the update. So obviously we've done better than one minute; that's the surprise. For some applications and use cases already using this technology, we can bring it down even lower, to 15 seconds. So let me go ahead and show you that here: I set it to 15 seconds, we save it, and now the cluster will start taking snapshots every 15 seconds. So maybe you can summarize the impact of something like this. Yeah, the real reason this technology is so powerful is that, no matter where your data centers are, your primary could be in New York and your secondary in San Francisco, thousands of miles away, you can still use this near-sync technology to give your application the highest levels of availability. Even if your data centers are close by, so that you could do synchronous replication, in many cases for these mission-critical applications you just can't do that, because it would introduce a lot of latency in the IO path. So this technology, no matter where your data centers are, no matter what your application is, provides you the highest levels of protection. Got it. So if you look at what's going on here, you can see we already started, and the system is taking snapshots every 15 seconds. In fact, the 15 seconds is lower than the Prism refresh timeout, so if we do a refresh we'll see some more snapshots in here. Yeah, there you go, you can see, at 15, at 30, at 45, there you have it. All right, man, thank you so much, Raja.
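To put the replication numbers from the demo in perspective: with snapshot-shipping replication, the worst-case data loss is roughly one snapshot interval plus whatever was still in flight when the site failed. Here is a hedged back-of-the-envelope sketch of that arithmetic; the function name and the lag parameter are illustrative assumptions, not Nutanix's actual RPO model:

```python
def worst_case_data_loss(snapshot_interval_s, replication_lag_s=0):
    # Illustrative arithmetic only (not the NearSync implementation):
    # if the primary fails just before the next snapshot, you lose up to
    # one full interval of writes, plus anything not yet shipped.
    return snapshot_interval_s + replication_lag_s

# Hourly async vs. the 1-minute and 15-second schedules from the demo:
print(worst_case_data_loss(3600))  # 3600 -> up to an hour of data at risk
print(worst_case_data_loss(60))    # 60   -> about a minute at risk
print(worst_case_data_loss(15))    # 15   -> seconds at risk
```

The point of dropping the interval from an hour to 15 seconds is exactly this: the exposure window shrinks proportionally, without paying the write-path latency cost of true synchronous replication over a long-distance WAN.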
You all heard it, right, Raja committed to it, so make sure you hold him accountable. All right, so quickly: we've done a lot of good things on the enterprise apps part. When you look at where Nutanix is being used across the data center, across public and private clouds, there's an increasing footprint of Nutanix for mode-two applications, or developer-friendly applications, and fundamentally the reason is pretty simple: we need to provide homogenization across mode-one and mode-two applications on the same operational fabric, whether those applications come from the cloud-native ecosystem, Pivotal Cloud Foundry, or any of these PaaS stacks. That's the reason why we've had a lot of success with enterprises now starting on Nutanix with containerization, whether it be Docker-based or Pivotal Cloud Foundry, which we integrated into Nutanix, and then taking this form factor that we currently have, whether it be a PaaS fabric from Pivotal, or Red Hat OpenShift, or third-party open source, and expanding it to even next-generation AI workloads, in this particular case TensorFlow, so that we can actually run it with native AHV GPU passthrough. Then there's the one we launched that is, in my opinion, one of my favorite, fastest-growing capabilities: Acropolis File Services, for consolidating file capabilities natively into the fabric. Since its launch it has actually taken off dramatically, in terms of not just production usage but the amount of optimization we have put into the fabric, and so today I'm very happy to announce that it's going mainstream as well: so that it covers both user data and machine data, it's actually going to come out with native NFS support across the core fabric as well.
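Circling back to the AHV Turbo discussion earlier: the multi-queue idea, where each vCPU gets its own I/O submission queue so requests stop serializing behind a single shared queue, can be sketched with a toy model. This is purely illustrative; the class and its methods are invented for this example and are not Nutanix code:

```python
from collections import deque

class MultiQueueDispatcher:
    """Toy model of the multi-queue idea described for AHV Turbo
    (hypothetical, for illustration only): route each vCPU's I/O to
    its own queue instead of funneling everything through one queue,
    so queues can be drained independently, and in a real system,
    in parallel and without lock contention."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]

    def submit(self, vcpu_id, request):
        # Each vCPU owns one queue; no shared lock on the hot path.
        self.queues[vcpu_id % len(self.queues)].append(request)

    def drain(self):
        # Drain every queue; a real backend would service them concurrently.
        completed = []
        for q in self.queues:
            while q:
                completed.append(q.popleft())
        return completed

d = MultiQueueDispatcher(num_queues=4)
for i in range(8):
    d.submit(vcpu_id=i % 4, request=f"io-{i}")
print(len(d.drain()))  # 8 requests completed across 4 independent queues
```

The "bypass routes" mentioned in the talk are the complementary half of the same goal: shortening the path between the guest's queue and the storage or network backend so the fast device is not hidden behind hypervisor-era indirection.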
So essentially, in a nutshell, what are we saying? We're saying that, look, this platform has evolved in a few years: it's gone from managing virtual machines to now files, containers, and so forth. And when you really look at the architecture, increasingly you will see the constructs: we started with EC2 for the enterprise, or EBS for the enterprise, and we extended it to EFS- and ECS-like services, all sharing the same consumption paradigm. So when we look at this operating system across any application, and we look at this multi-cloud world between core, distributed, and edge, we then have to click into the question: do we have the form factor, do we have the consumption model, to support the requirements? We talked a little bit about the fact that we have a choice of a whole bunch of these options, but it's my great pleasure to introduce a strategic partner on stage today, for the first time. It's a relationship that has been in the making for a long time, and we've done a lot of exciting things already in terms of truly providing some innovation with IBM. So Brad, where are you? Come on up. So Brad, every time I've seen you, you're always in a Hawaiian shirt; thank you for dressing up, by the way, you got a little classy for us. So tell us a little bit about the relationship first of all, even before we talk about the products. I think I've been really excited watching our development teams working together: we took our Power platform with your software stack, and I think it was three weeks, three weeks, the teams were able to knock it out and get that up and running. We have that on display out in the demo area, that platform up and running, which I thought was a really great thing. So I think that bodes for great things to come as we look into the future with these two great development teams. It must be all that free beer that you have in Austin. No, we don't. You
know, this is IBM; we come to have beer with you guys. So tell us a little bit about the joint offering. Yeah, so what we're looking at here is we're moving the entire Nutanix stack over to our Power platform. So we've got our Power platform, with the same enterprise processors that we've developed and had in our enterprise servers for quite some time, now running the full automation stack of Nutanix. I think the early results are great. Of course we'll have the great reliability of Power Systems on there, but we're seeing performance advantages already too. We haven't even gotten a lot of tuning done, and we're looking at an 18-20% performance advantage on transactional workloads. Yeah, especially data workloads and OLTP workloads and stuff like that. Yeah, so I think that is exciting for us. And then you look a little bit forward into the future: we've got the POWER9 platform coming out later this year. You start looking at that, and the IO innovation we've been doing on that platform: we've got PCIe Gen 4 available later this year, and you look at our OpenCAPI interface, a really high-bandwidth accelerator interface. I think that's going to be able to bring a whole new set of workloads to the hyperconverged platform, because that IO is going to enable all the enterprise workloads on a hyperconverged platform. I think that'll be very exciting. Got it. And tell us a little bit, though: one of the reasons we were able to get this product out so quickly was also the fact that you guys had already done some work on KVM, and then we did all this work so that the full stack now runs natively on AHV. Yeah, absolutely. That's what we've done with our OpenPOWER platform: bring all of those industry-standard Linux, industry-standard KVM platforms
on the Power platform, and that certainly has enabled many clients, especially the Nutanix workload, to port quickly to the platform. Got it. But you know, there's one thing I just wanted to say: you had that IOPS chart up there, 320k IOPS. We can go ahead and make next year's chart now; it'll be well over a million. Well over a million? All right, Brad, well over a million on one node? Okay, a million IOPS on one node. We do need three nodes for a cluster, by the way, so we'll take three million IOPS. We'll give you three. So remember that: Brad from IBM, three million IOPS on a cluster; Raja from Nutanix, 15-second RPOs. Those are two names. So when we talk about the distributed enterprise, and let me just keep going because we have some more demos, the point is that distribution matters from a scale-of-operations perspective, and one of the biggest things we have done, and this has taken a lot of work from us, is that you no longer need three nodes: one node and two nodes work in the new release. This can be packaged in a variety of form factors, it'll still be centrally managed using Prism, and Prism will have one-click upgrades, both from a schedule perspective as well as a rolling-upgrade perspective, and all that good stuff. This particular area is not just for remote offices and branch offices but also for the next architecture, which is around the edge cloud. And the edge cloud, as I said, is not just about drones or nodes and so forth; we've already done that. It's about actual, practical deployments: shrinking the form factor while retaining the same operational experience across a variety of these deployments, whether it be on cruise lines or in airline terminals, practical things we've actually been working on with customers, where you can package Nutanix software in purpose-built hardware all the way from the edge to the distribution layer, connecting to the core cloud. And there were some interesting stories in the
military that we've run into in the last 12 to 18 months, and I thought I'd invite a couple of our friends to come up on stage to talk about them. Chris and John, come on up, man. So Chris, before we get into it, I see Hercules there dragging something. What is that? I just told him to ad lib, by the way; he may be going over the top. So, you didn't get a room? I forgot the line, by the way. You didn't get a room at the hotel or something? I'm sorry, did you get a room at the hotel, is that why you're carrying that luggage? The Gaylord is booked, so I've just brought my bag with me. Okay, got it. So what is that? This is the Klas Telecom tactical data center. All right, so while he's setting up and showing us how Nutanix with Klas can actually give you that edge node, maybe, Chris, you can tell us a couple of stories. So for the first one: I recently returned from a military exercise I was invited to go along on with a government customer. Can you tell us where? In Europe, that part of the world. So the premise was: currently, down in Florida, they have a data center that is replicating and federating services straight into a tactical data center. The premise of the exercise is: I need enterprise services at the forefront, at the forward operating location that we're going to deploy to. So what they did was they used the Nutanix software, they started replicating data down in Florida, and within a couple of hours they were notified that they needed to deploy to a forward location. At that point, all they have to do is turn the system off, power it down, put the lids back on it, and it's 45 linear inches, so they were able to take it on any commercial aircraft. So they flew commercial aircraft? That's correct, and it meets the size requirements such that they could actually take it as a carry-on item on that same aircraft, meaning I don't have to wait a day to transport equipment. Wow, wow. And so what happened? So then, when they
finally got to their location, they were replicating data to a virtual data center in Germany. Once they were able to actually get on the ground, and everybody knows Eastern Europe, you've got the small, compact cars: they had two personnel carry the entire system, they were able to get in that car, go to the exercise location, and start setting up. The beauty of the system is that it's inherently battery-backed, so they were able to power on even without grid power and start getting services back up and running. Within two hours they were able to provide local, enterprise-grade services at the deployed location, and then the transmission path came in and they were able to start replicating again. Wow, wow, that's a good story. Do you have it up? What does it look like, where's Prism? I thought you were going to show us some fancy screens; you've got it on your phone or something. All right, man. So essentially it's a four-node cluster, multi-processor, 128 gigs of RAM per node, and an integrated 10-gig switch that Klas Telecom brings to the market, and all of these go through military ruggedization, testing, and standards, which gets into the second story, actually. Before this, how much can I, can I touch it? So this probably won't pass regulations, right? That's pretty heavy, actually, right, guys? All right, Big John, you can get off the stage now. So what's the second story? The second story is: Klas Telecom designs and develops ruggedized equipment for the DoD, the rough market. With that, there's a story of an airborne operation we did recently where, for some reason, the parachute did not deploy when they pushed the equipment out of the aircraft. So what happened was the chassis itself hit the ground from probably 1,200 feet, at about 120-plus miles an hour, and they actually call it a burn-in, they call it a burn-in because it impacts and makes a divot in the ground. It hit the ground; it's a node. So the bundle that they use
wraps it in a little bit of protective material, and then they put it into a bag; that bag has a chute, and that chute will deploy and make a graceful landing at probably about five feet per second. This time, though, it didn't deploy, meaning the actual equipment hit the ground at full force, at 125 miles an hour. What that stands for on the crash side is that we develop products that aren't meant to withstand that type of treatment, but what it did do was allow the equipment to be recovered from that site, and we took it back to the safety area and were able to power on the battery and the router and switch at the same time, without any damage, because the chassis itself contained all that equipment. Got it, got it. And since it's a true edge deployment, we asked for some videos, but I guess they were blacked out. So anyway, those are multi-cloud architectures across the core, distributed, and the edge, and when we look at this across any application, any deployment form factor, the key thing, as I mentioned before, is that it comes down to doing this with optionality, because in this day and era every customer we talk to doesn't want to replicate the lock-in they went through for the last couple of decades. If anything: I want to choose the right cloud, I want to choose the right hardware platform, I want to choose the right hypervisor, and so forth. So it's any platform, any hypervisor, any consumption model, whether it be appliance, software, pay-as-you-go, and finally, as Brad mentioned, any computing architecture as well, as a first-class citizen. This is foundational to the fact that Nutanix is evolving beyond where we started, as a form factor that was a pure-play appliance, to embracing the full optionality that we need to give our customers long term as they embrace this journey toward a multi-cloud architecture. So that was the one OS portion: any application, any deployment, with an open approach. Let's talk a little bit about one-click. It's the experience, which is what
differentiates us, frankly, even more so than our data plane. The fact that we invested so heavily in the control plane is a sense of ownership and pride for us. When we talk about what we have done in terms of building this core IP of a control plane that's single, common, and scalable across all three clouds, the first thing we have done is that everything is now in Prism Central. Prism Central is deployable in one click in the current release; you'll literally see that when Rajiv shows up on stage. But it's also scale-out, like our data plane, because with the kinds of volumes of nodes and the number of VMs we have to support, it has to embrace the same capability of scaling out across a variety of requirements. So whatever we have built in Prism Element is now being shaped into Prism Central, into that single pane of glass. But it doesn't stop there. Where it's extending is this: so far we've always focused on virtualization, compute, and storage in the definition of our single pane of glass, and people used to come and tell us, what about the network? When people would ask me this from time to time and we would talk about it, some customers or partners would say, well, when are you going to put a switch inside your fabric, because that's all I need to do inside my box. And when you really pressed them on the reason, they'd say, well, I've got 8 cables going out and I need to bring it down to 2 cables, and we would offer them some money back for their extra 6 cables. But the real problem we've found when we talk to our customers in real deployments is not in the data plane. The real problem in convergence and networking, in our opinion, is in the control plane, because when I deploy something I'd like to know if something's slow, and whether it is slow not because of my storage latencies but because of packet loss to the top-of-rack switch. And so in the current product itself, that's shipping right now, we've
started that journey over the last 6 to 12 months, where we've built in network visibility as a first-class citizen, so that in Prism you can now have a clean, visible, transparent view into what's causing latencies and misconfigurations; a whole bunch of troubleshooting capabilities around networking is built into the product. But we haven't stopped there, because that's only the first step, visibility. The second step is about provisioning and automation. When I provision a hundred VMs today on Nutanix, we take care of the compute side and the storage side; now, with native integration, it will provision the VLANs on the top-of-rack switch, it will update the load-balancing rules, and it will update the firewall capabilities as well, all within that one-click automation. And we haven't stopped there either, because once you have full network automation built into Prism, the next step is: how do you secure my application environment? There, with the upcoming Acropolis hypervisor release, you'll have native, built-in micro-segmentation, and we'll demonstrate that. It's a really powerful capability that doesn't require you to install network overlays; you can do this without having to buy expensive overlay fabrics and so forth, yet another tool, yet another management console. This is very similar to how Amazon AWS or Google or Azure have concepts such as security groups: you provision a hundred VMs, I take ten, put them in a security group, I don't pay anything extra, plus it's simple. That's what native Acropolis micro-segmentation is, and it's supported across a variety of what I would call L3-to-L7 partners, across firewalls, load balancers, switches, and so forth. And to talk a little bit about the full power of this platform, this control plane, and how they're using it from a small deployment to a very large deployment, please come up, Jamie from CenturyLink. So, is your mic on?
I think so, I hope so. He won't get off the stage, by the way; Jamie's one of those guys. So tell us a little bit about CenturyLink. I know you guys have grown a lot across the years with acquisitions, data centers worldwide, and you started with Nutanix in a small way and then expanded dramatically. So basically, one of the key views we are looking at is our IT operations execution strategy, and we use Nutanix to help us pave three simple steps toward an accurate and operationally efficient execution strategy. The first step we're looking at is our people and the resources out there: how can we free up people's cycles? So what we do is we actually set Nutanix as our standard offering for our Windows and Linux workloads; that's the first step. Once it's in place, the new workloads come on and the existing workloads can migrate over. And because we freed up people's cycles, the next step is that we have time to tap into innovative projects; that's why we do the Pivotal Hadoop work, and we are also able to run IBM Watson on Nutanix. He said Watson, Brad, are you paying attention? He said Watson. Anyway, so that's mode-one and mode-two applications on the same platform. And then I know you have this interesting new project called Get the Red Out. Looking forward, Get the Red Out actually has a core purpose that everybody cares about: the objective is production stability and operations. If you are a CTO or CIO, those are your top items. So what we do, with our innovative projects, is set up a big data execution strategy to help us identify issues, because, keep in mind, we are able to run with speed already, and if the speed is high but we don't have accuracy, the disaster is going to be big. That's why we use a big data execution strategy to help us identify and narrow down what the reasons are and how we're going to remediate. So you've got essentially a one-two-three plan, like you laid out, between mode-two applications and now analytics and data and so forth. Sounds pretty
comprehensive, right? So what's next? So the next future is actually interesting, because I always tell people any problem that can be solved by money is a small problem. Nutanix, however, helps us enable and tap into areas where we can take our speed, and that is something, you know, the very unique position that Nutanix has, that nobody else has been able to tap into much yet. So what I tell people is: the future is not going to be there waiting for us; the future is for the people who get there first, and Nutanix enables us to get there quickly. Thank you, Jamie, thanks a lot. So, I know we spent a lot of time on automation, especially on the control plane side. A couple of interesting things, though, that actually keep us going, frankly, go beyond automation. I think, more and more, we are becoming a big data engine ourselves; Dheeraj touched on that a little bit. Essentially, over the last two to three years we've made a significant investment in machine learning engineering inside the company. You saw some aspects of that with X-Fit and planning, but essentially what we are now doing, and a whole lot of functionality is coming out in the release shipping shortly, is that it can auto-suggest, if not auto-correct, a lot of these capabilities around troubleshooting: whether it be cleaning up dead VMs for capacity, whether it is identifying a bully VM and rebalancing VMs in the cluster, or even, as needed, adding nodes into the cluster on demand based on your requirements at that point. But that's only the, if I can call it that, bottoms-up view. The other part, and we talked about this, and this is probably a very significant evolution within the company, is to elevate operations from VMs and containers to applications, and that's where Calm comes in. Calm, probably our first strategic product in the company beyond our core offering of Acropolis and Prism, essentially elevates us to focus operations from an app-centric perspective, and
you'll see this in every demo, every usage: all of our workflows now come top-down from an app-centric perspective. Obviously we have a very rich marketplace to begin with; we showed a little bit of that yesterday, you'll see it in the conference, and on an ongoing basis this is a living, breathing organism that keeps rapidly changing, with third-party apps, partner apps, and so forth. But the core essence of Calm is around creating this transition of what I call the CIO to the CAO: the chief information officer to the chief Amazon officer, or the Azure officer, or the Alphabet officer, by the way, it kind of works either way, basically chief cloud officer, right? But think about it: imagine your CIO going to your business and saying, look, you can go to Amazon or Azure or Google, or you can come inside, whatever, but tell us your application workload requirements, what is the SLA, what's the cost, and let the system recommend, based on the right SLA and the right cost for the right workload, whether it is better served on Nutanix on-premise, or AWS or Google off-premise. And not only is that a one-time decision, where I can one-click deploy; I should be able to change my mind, or over time make things mobile, so I should be able to move workloads over a period of time depending on how the workload's working set changes. So imagine the power of what we mean by Calm: in the evolution of Calm, you'll be able to log into Calm, log into AWS with that account, it will scrape the AWS usage, it will model the workloads, just like it models on-premise heuristics around usage, cost, and so forth, and it will say, look, here's a workload, this part is predictable, this part is elastic; would it make sense to run the predictable piece on Nutanix, this is what it costs, do you want to migrate? And you want to do that in the reverse direction too: I'm running something on-premise, it's running for two hours a day, say it's analytics, and I'm spending
eight nodes to do it is it really worth my time am I better served running it as a service on Google so that is the real power of calm and to show you sort of I would say an end to end view of how prism calm this whole operational control plane is coming together let me bring Rajiv on stage how are you you doing okay you're out of breath I thought you were the one out of breath I'm getting there so what are we going to talk about today so let's take a look at some upcoming prism central features prism central has been evolving into control plane for all of your data center management starting from initial provisioning moving on to day to day operations and even scaling to public cloud but before we can do any of that we first have to deploy prism central itself and we've made that process really simple let me just show you that so over here I have prism element it's not been registered to any prism central yet let's go ahead and do that I want to deploy a new prism central instance we pick a build over here and now I get a choice I can either deploy to a single VM prism central that's what we've been supporting so far but starting with our upcoming releases we will support scale out prism central so now I have the option of deploying a prism central cluster so let's go ahead and try that I pick the number of prism central VMs 3 is good select network with an IP address this one's good and that's pretty much it I can at this point go ahead and deploy prism central you can see that a task has been created if I look at the task list over here so essentially in a few minutes you've essentially got a scale out control plane it'll take about 4 minutes to complete but that's how simple it is now to deploy prism central got it so that's deploying prism central but what about applications, how do we deploy applications on top of prism central and other there showed a little bit of this yesterday with the new com interface the marketplace is where we do all of this now I 
have a few applications over here. I have, among other things, a Microsoft VDI desktop. I'm going to go ahead and launch that. This is a built-in blueprint for VDI. It's got a few parameters, but one interesting thing you'll notice is that I now have a section for vGPUs. So, who wants AHV vGPU? I know there's a bunch of folks asking for it, so it's coming. With AHV supporting vGPU, we can bake this into the blueprint, and Windows desktops can take advantage of vGPU. So let's go ahead and launch that. Again, the whole workflow, as I mentioned, is now top-down, with Calm fused into Prism as one operational experience. Yep. I'll give this a name, go ahead, and create the application over here. And now, notice one thing I did not do: I did not pick a host for this particular application. Prism Central has a view into your entire data center; it knows what your hardware looks like, and we have a new cloud scheduler that will pick the best hardware for a particular workload. In this case, since it requires vGPU, it will pick the right node with the GPU built in. And if I look at the VM over here, you can see that it does have an Nvidia Tesla M60 vGPU assigned to it. It's starting on the Darth Vader 14 node; if I look at that, that's a node with two Tesla GPUs. So that's AHV vGPU support and the new cloud scheduler built into the core product. But it also looks like our interface has changed. This is a new look we are experimenting with for Prism Central. The tabs across the top are gone; we now have a hamburger menu, which gives us a more scalable visual design and lets us put more entities over here, and it's also fully integrated with search. We can navigate using the search bar, and we'll see some of that as we go along. So we now have this VM up and running, and of course, GPU support is very useful in the modern day and age for Windows 10 and most modern applications; most of them use 3D acceleration extensively. I have a little demo over here, a virtual tour of the Smithsonian Museum of Natural History. I hope some of you get a chance to see it today; if not, we'll get a quick look over here. This is actually vGPU rendering on the server side, delivered over the network, so some of the slowness here is due more to the network than the vGPU. Okay, so that's provisioning. What about day-to-day operations and some of the most complex tasks that IT admins have? What about decoupling Prism Central, by the way? I have that screen up. Yeah, so one of the things we are doing going forward: a lot of new capabilities are coming in Prism Central, and we want to get them to you as quickly as possible, so we're going to release Prism Central on its own release train. You'll be able to get new functionality very quickly, but we will still align with the normal AOS releases, so that if you want to keep the one-stack upgrade experience going, you can still have that as well; you simply have the option of consuming at a faster pace. Sounds good. Great. So let's look at some of the more complex tasks that IT admins have to do, and one of them is troubleshooting performance issues. Let me search for slow VMs in the search bar. I can see an alert for an Oracle VM that's been running slow; there's an alert over there. I can click on that, and here you'll see something new again. I have a graph here showing how the system's been monitoring I/O latency for a while, but in addition, I have this light-blue band, which is a graph of expected performance, what the system thinks normal I/O latency should look like. We get an alert when latencies exceed the system's predicted value. This means that you no longer have to set static thresholds on latencies yourself; you don't have to go to each VM, decide what normal behavior looks like, and set a threshold. The system figures that out automatically for you. We also have a couple of possible causes over here: we have a bully VM that's
been running on the same node, and we also have high CPU on the particular host we are running on. I can analyze this further. I get a nice heat map showing that Apache-02, that particular VM, has actually been using a lot of IOPS on the node. These are all completely new views. But then the question is: you said that we automatically rebalance bully VMs, so why have we not done that in this case? The system has actually been able to give us a reason for that too: it's not able to migrate the VM because there's a host-affinity policy, a policy that we've configured, that's keeping these VMs on the same node. I see. So let's explain this, because this is very important. We've essentially built auto-triaging and root-cause analysis right into the system. We went from an effect, figured out what the cause is, and from that went all the way down to the policy causing that particular effect. So there's a lot of intelligence built into the system, and you'll see a lot more of this going forward as well. Let's take a look at one more new workflow, on the planning screens. Let's do planning for this cluster. This is a view many of you are familiar with: it's our resource runway chart, and it shows what the runways look like for various components, based on our system predictions. But I also have a new workflow here for optimizing resources. What this does is use the same machine-intelligence algorithms we built in to look for waste in your system. So I have 11 over-provisioned VMs that have been provisioned with more CPU, memory, or storage than they've been using. I also have 19 inactive VMs, VMs that have not been used for a very long time. So here's my opportunity to get some resources back. I might want to send a report to the owners of these VMs saying: hey, look, you guys haven't been using your resources; can you give some of this back? And I get a nice report summary with the resource runways for CPU, memory, and storage, and also a table of my inactive VMs and my over-provisioned VMs. I can just email this off to the owners, and I'll be all set. So even after we have optimized all our resources, there will be times when we just want more, when we need more capacity, and we'll extend to the public cloud, right? One of the things that Aditya showed yesterday is how we can use the public cloud as an extension of our data center by provisioning applications over there. But he skipped over one step: we never talked about how the networks will be connected. How do you connect your private data center to the public cloud? If you do this in the traditional way, it's fairly complex: you set up a VPN, you punch a hole in the border firewall, you go through testing, or you even use Direct Connect, which is expensive. All of these take weeks, if not months, to set up. What we did was build some technology with Aviatrix, one of our partners here, and make this really simple. Let me show you. In the marketplace, I have the Aviatrix application. I'm going to go ahead and launch that. I give it a name, just call it AviaDemo. You'll see I've given it a few parameters: I've given it my credentials for Amazon, so this is going to set up a connection between my private data center and AWS, and the number of VPCs, the private clouds, I want to create on Amazon. Let's go ahead and create that. Now, this takes a few minutes, about four, so I'm going to do the cooking-show thing and go to an already deployed application, the Aviatrix final app here, which has already been deployed. Go ahead and manage that. Launch this over here. And over here, I give it the region I want the gateway to connect to, US West in this case.
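Stepping back for a moment to the troubleshooting demo: the dynamic latency baseline can be sketched in a few lines. Instead of a hand-set static threshold, the system learns a band of expected values from recent history and alerts only when an observation falls outside it. This is a minimal illustrative sketch, not the product's actual model, which uses far richer machine learning:

```python
from statistics import mean, stdev

def expected_band(history, k=3.0):
    """Band of 'normal' latency learned from history:
    baseline mean +/- k standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return (mu - k * sigma, mu + k * sigma)

def is_anomalous(history, observed, k=3.0):
    """Flag an observation outside the learned band,
    replacing a manually configured static threshold."""
    low, high = expected_band(history, k)
    return observed < low or observed > high

# Steady I/O latency samples (ms) for a VM, then a spike.
history = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4, 2.1, 2.2, 2.0, 2.3]
print(is_anomalous(history, 2.2))   # within the band, no alert
print(is_anomalous(history, 9.5))   # far outside the band, alert
```

The band widens automatically for noisy workloads and tightens for steady ones, which is why no per-VM threshold tuning is needed.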
And I also give it the actual network that I want to extend into the cloud. So I have 10.4.124.0/26. I run that. Again, this will take a few minutes, but at the end of it, everything needed to connect to the cloud will be done. Right, so without touching a firewall, any of those rules, or a border router, you're creating a just-in-time virtual overlay network to your particular VPC that you can bring up on demand, bring down, or scale out. Exactly. So in minutes, you have full connectivity between private and public cloud. When you do that, you can go to the AWS console and see the Aviatrix gateway over there. And I also have an application I've been playing with on the side, my SeeFood beta application. As you can see, this particular application has a private IP; this is an IP that exists only in Amazon. And I should now be able to connect to it from my data center by just using that IP address. And you can see the application comes up. I still don't get this joke, but it's OK. You've got to see Silicon Valley. Yeah, I don't watch Silicon Valley. You should watch Silicon Valley; it's a good show. So go ahead. Yeah, I think let's talk about security. Obviously, we've talked about segmentation; why don't we take a look at that? Yeah, security is pretty topical right now, especially this week, with GoldenEye going around the world. One of the major issues with malware these days is lateral spread: malware infects one VM in a data center, and from there, it starts spreading to the rest of the network. Microsegmentation helps us protect against that kind of lateral spread. Let's take a look. I go to my security policies over here; I've set up a few security policies already. Again, this is all part of the new microsegmentation feature, exactly. I'm going to pick one of these policies to show how this works. A few things to note.
This particular policy is in monitoring mode, so nothing is being enforced right now; the system is just monitoring what the flows look like. I have some lines here in blue; these show policies I configured earlier for monitoring. So I have my application, a classic three-tier architecture of web server, middleware, and database. There are some flows between these tiers, which are allowed, and there's also a NetScaler load balancer that can connect to the web tier. I've configured all of this. What I also have is these lines in yellow. These are flows that the system has detected coming into the application that I neglected to set up policies for. Essentially, there's a corporate health-check application doing periodic probes to all of my components. Now let's simulate an actual hack, let's simulate an attack. I have a tool over here that runs a brute-force password attack against the database; it's going to take the 1,000 most common passwords and apply them against my database. Go back over here, and you should see that we now detect a new flow. So we're detecting these. So essentially, there was obviously an attack, but it could be anything; anything new connecting automatically gets monitored and mapped. Exactly. One of the biggest problems with microsegmentation is understanding the flows in your system. By detecting them and bringing them up in this very nice graphical view, we can show you what your network looks like, and then you can make informed decisions about which flows to allow and which ones not to. So in this particular case, I probably want to allow the health check and deny this hack over here. And now I want to switch from monitoring mode to enforcement mode, so let me apply the policy. In real time, those policies take effect. Yeah, so we've set up the policy now, and you should see that this password hack should stop.
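The monitor-versus-enforce behavior just demoed can be summarized in a small sketch. The policy model and names here are illustrative, not the actual product API: flows between tiers are checked against an allow list; in monitoring mode, unknown flows are only recorded for review, while in enforcement mode they are dropped.

```python
# Hypothetical flow policy for the three-tier app from the demo.
ALLOWED = {("netscaler", "web"), ("web", "middleware"), ("middleware", "db")}

def evaluate(policy_mode, flow, detected):
    """Return the action for a flow, given as (src_tier, dst_tier).
    'monitor' mode records unknown flows; 'enforce' mode drops them."""
    if flow in ALLOWED:
        return "allow"
    if policy_mode == "monitor":
        detected.add(flow)        # surfaced in the UI as a yellow line
        return "allow"
    return "drop"                 # enforcement blocks lateral spread

detected = set()
print(evaluate("monitor", ("attacker", "db"), detected))  # allowed but recorded
print(evaluate("enforce", ("attacker", "db"), detected))  # dropped
print(detected)                                           # the suspicious flow
```

The value of running in monitor mode first is exactly what the demo shows: you discover legitimate flows (like the health check) before flipping to enforcement, instead of breaking them.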
Wow, it actually worked. All right, thank you. Thanks a lot. So microsegmentation has obviously been baked into our product for a while now, and it's coming out in the upcoming release. We're pretty psyched about it, because now you can literally converge networking, compute, storage, and virtualization in one simple stack. So as we look beyond this: one more thing. We talked about elevating ourselves into application-centric automation, bringing clouds together, and so forth. The part we started on a little bit yesterday is how we actually finish the journey in terms of solving for all of your requirements across public and private, plus a couple of new requirements we've seen over the last few years. And that's the introduction of Xi. So what does that mean? Just so that everybody understands the why: what we're seeing increasingly is that as you run private data centers and public AZs, you're obviously going to make choices. You have enterprise apps, let's call them mode-one apps, and cloud-native apps, let's call them mode-two apps, and you're going to choose which cloud they go to, using Calm or otherwise. Traditionally, you deploy many mode-one apps and some mode-two apps on-premises, and it's mostly cloud-native on the public side. And as you go through that journey, the number one source of public cloud consumption, unless you're an Uber or somebody like that, comes from lift and shift. The cloud-native apps that you're refactoring or moving to the cloud, maybe that's a percentage or two, but most of the real adoption happens when I'm lifting and shifting my enterprise apps. And that, as most folks find out over the first year or two, is becoming a big pain.
Because it is different tooling and different economics: you call public cloud support versus your own support, and it's a different operating model. And therein lies the opportunity presented to us: what if we could help? What if we could help enterprises move to the public cloud but preserve the tooling, preserve the economics, preserve the SLAs? That's the foundational reason for Xi. While you can use AWS, Google, and Azure for your next-generation applications, could I actually accelerate the move to the public cloud as a service while retaining the same tooling that I've come to learn and love with, say, Prism on-premises? So that's the spirit behind Xi: you can now extend your data center, the full stack you're building in your data centers, moving your VMs or containers, which get replicated into a cloud service from Nutanix that provides you the same operational constructs, so that lift and shift, per se, is no different than moving from one cluster to another. It should be as simple as that. And that's the goal for Xi: just as you would migrate a workload from one cluster to another, or expand a cluster on demand, we need to make Xi look like a seamless extension of your data center across the network, the data path, and the control path. Okay, that's the first thing. The second thing with Xi is that it needs to have exactly the same kind of operations. We can't have disjointed operations; that's the point of having convergence across hybrid clouds, right? And there again, the operational fabric for Xi is an extension of your Prism infrastructure, as we'll demonstrate. And then finally, when we look at the types of services that you'll consume, these can't be services that are just random; they have to feel like a natural one-click experience.
And the biggest use case we are starting with, the biggest need folks have come to us with, is DR. Increasingly, folks have come to us and said: look, there are two types of customers. One is the mid-market customer: I have no DR, I only do backup. And you'd be surprised how many people do that. And then there are the enterprise customers who say: look, I'm investing in a lot of secondary data centers, I've got all this stuff, it takes me three months to set up DR, and I'm still not sure, when a DR event actually happens, whether it's going to work. It takes me three or four months to do a test. I'm not really feeling good about my DR. Who actually looks forward to a DR event, by the way? Nobody, right? So that's the premise of Xi's first service: you shouldn't have to worry about your secondary data centers going forward. It may not happen next month, but over the next few years, frankly, you should be out of the secondary-data-center business. The more you replatform your primary on the Nutanix single fabric, the more we should burn in services that extend your data center into the cloud, such as DR. And the way to internalize this is as follows; this is probably the clearest way to think about Xi. We all compare ourselves, aspirationally, to Apple. It is a full-stack product, like Nutanix. Like Nutanix, it takes delight seriously. And in the Apple form factor, iOS is really the core behind all the capabilities it provides. Along the way, it provided an App Store where you could deploy apps; you could use Google, you could use Dropbox, and so forth. But over time, it said: I don't need an app for that. I just go to Settings, I check once, and I have iCloud burned in.
That is how we envision Xi evolving: essentially, for all purposes, taking the full stack that you have, and the fact that you've retrofitted your services onto this full Nutanix stack, allows us to provide a genuine cloud offering that's an exact replica, without you having to worry about lift and shift or complicated operational capabilities. And to show how simple we have made this, because that is the real power, and to really see whether we can pull off one-click DR, let's bring on Binny for the last demo. All right, Binny, so let's get into it. Yep, let me bring up the demo here. All right, so today I would like to show how we're going to bring Xi Cloud Services to all our customers in the beautiful and delightful manner you were talking about. Let me act as the IT admin of a company called WalDot. Here you can see I have my Calm marketplace; using this, I've created a blueprint for my HR app. Let me look at what VMs are there for my HR app. Here you can see I have five VMs. And there's a notion of categories that we've introduced; these are tags that you can apply to your VMs. Here you can see I have app type OracleDB, which is the DB tier of my app, and then I have EmployeePayroll, which is the web tier. So here are my five VMs. I've tested everything out, and now I'm ready to put it in production. Now, one of the first things I think about is disaster: what happens if things go south? When you think about DR, the first thing is, okay, I need another data center, as you were saying. So let me look at some of the new constructs we've introduced. With Xi, there's a concept of an availability zone: a self-managing infrastructure domain, a fault domain. Traditionally, you would treat your data center as an availability zone, and you can see I have only one data center. So now let's show the beauty of Xi, and how it helps you get another data center.
So I can go here and add my Xi account, and essentially it asks me to log in to my existing Nutanix account at my.nutanix.com. I'll use my test account, vinigilvol.com. So here I'm logging in to Xi using my existing account, and what that does is give my on-premises Prism Central the authority to use my account in Xi. And that's all. So it's been paired as an availability zone, almost, right? Yeah, and in fact not one but two availability zones are showing up. These are two real availability zones, one on the East Coast and one on the West Coast, that are available for me to consume. So this is a hybrid view, where public and private now come together in one management plane. But the hybrid goes deeper than just the management plane. We have a new concept called virtual networks. This allows portability of my subnets and IP addresses from on-premises to the cloud, so my applications can move over and do DR without IP changes. There are also a few new constructs for data recovery and for how the data plane is fused together between private and public. The first thing you see here is a protection rule. Whenever I have an app that I need to protect, this is what you do: you create a protection rule. And by the way, while Binny is typing this up, what you'll notice is that, as part of delivering a native cloud service, it keeps us honest about these same capabilities: we've got to get log-file parsing right, we've got to do multi-tenancy from the ground up, we've got to keep things simple to operate, since we are the operator ourselves. But from a code-line perspective, something all of you should know is that it's the exact same code line for Xi and for NX. So if you don't want to use Xi, but you want these capabilities, like one-click DR across your own data centers, all the capabilities that Binny is going to talk about will flow into the current product. Yep.
So here I've defined a platinum protection rule. I have put in the source and destination availability zones that I want to use, the RPO, and how many snapshots I want to retain on each side. And here's the interesting part: what does it apply to? I can pick VMs by name, that's the traditional way, or I can filter the VMs I want to apply the platinum policy to by category: all the apps that are OracleDB and EmployeePayroll, the two tiers I have. I click add, and I save. That's all I need to do to protect my application. What happens now is that it knows the replication schedule; it's actually seeding to Xi right now, which takes some time, and it will make sure that the recovery points are available on the other side. So instead of waiting for the seeding to happen, let me cut over to my production environment here. As you can see, this also has the Xi availability zones included in the cloud, and I have my recovery points here. Now, this has been running for a while; we started it yesterday, so it's been replicating to the cloud. So now let's go to Xi, and I'll show you how these recovery points appear there. I'm going to log in to Xi Cloud Services, this time with my production account. And the first thing you'll see in Xi is the dashboard. This dashboard gives you a clean view of your cost so far. As you can see, the current bill is $206; we were doing a lot of DR testing on Monday and Tuesday and a little bit on Wednesday. There are no applications running there right now, because I'm only using it as a DR target. Let me go to the Explore view. This is very similar to what you have in Prism Central on-premises, except you don't see the hypervisor and you don't have to touch the hardware; all of that is hidden. What you look at is your virtual resources. Again, the same availability zones are there, and wall.dc, this is my data center.
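The category-driven selection in that platinum rule is the key idea: VMs are matched by tags rather than by name, so anything created later with the same category inherits protection automatically. A rough sketch, with illustrative field names rather than the actual Prism API:

```python
# Hypothetical VM inventory tagged with categories, as in the demo.
vms = [
    {"name": "hr-db-1",  "categories": {"AppType": "OracleDB"}},
    {"name": "hr-web-1", "categories": {"AppType": "EmployeePayroll"}},
    {"name": "test-vm",  "categories": {"AppType": "Sandbox"}},
]

# Sketch of the platinum protection rule: source/destination AZs,
# an RPO, retention on each side, and a category match instead of names.
platinum = {
    "source": "wall.dc", "destination": "xi-us-east",
    "rpo_minutes": 60, "retain_local": 24, "retain_remote": 24,
    "match": {"AppType": {"OracleDB", "EmployeePayroll"}},
}

def protected(vm, rule):
    """A VM matches if any of its category values is in the rule's match set."""
    return any(vm["categories"].get(key) in allowed
               for key, allowed in rule["match"].items())

print([vm["name"] for vm in vms if protected(vm, platinum)])
# -> ['hr-db-1', 'hr-web-1']
```

A new web-tier VM tagged EmployeePayroll would match the rule with no policy change, which is exactly what the later part of the demo relies on.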
Here you can see the recovery points; they are there. So now, suppose there is an unfortunate incident: either your data center is down, or my app is down, compromised, or somebody made a mistake. Let us quickly do a failover. Traditionally, this is a hard problem. Once we've set up the recovery plans and everything, what we're trying to do is simulate a failover event so that we can see what happens. All I did was click one button. It asked for confirmation: this is the availability zone you want to fail over to, and you just click. And there's the runbook that we created for this. Let me show you how the runbook was created; I can click on update here, and this is the runbook we created. Essentially, it says you can define the layers of your application and the order they need to be booted in. There are many capabilities here: you can add a script, you can add a delay, and so on. This gives you capability just like VMware SRM, but it's very simple. Right now, it's actually running. Let's go here. You'll see that the failover runbook has been issued, and there are some tests that we had already run in the past. Let's look at the tasks and how they are doing. While we were talking, it's actually already done. Essentially, it verified the runbook, it created a subnet so your IP addresses could be ported to the cloud, and the VMs, it says, are all recovered. So let's go and look at the VM screen here. Let it refresh. And there. There you go. So that's one-click failover, literally, into the Xi availability zone. Yeah. Let's do one more thing before you guys take a bathroom break. Right. And don't worry about your sessions, because we're still here. Yeah, that's right. Now, I'm running on Xi, and I want to add another VM; I'm comfortable running on Xi, so let me expand my HR app itself. So I'm picking the disk image that I have.
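The runbook concept behind that failover, tiers booted in order, with optional scripts and delays between stages, can be sketched roughly like this (the step schema is illustrative, not the product's actual format):

```python
import time

# Hypothetical recovery runbook for the three-tier HR app:
# database first, then middleware (with a warm-up script), then web.
runbook = [
    {"tier": "db",         "action": "power_on"},
    {"delay_seconds": 0},   # zero for the demo; real plans would wait
    {"tier": "middleware", "action": "power_on", "script": "warm_cache.sh"},
    {"tier": "web",        "action": "power_on"},
]

def execute(runbook, log):
    """Walk the runbook in order, recording each action taken."""
    for step in runbook:
        if "delay_seconds" in step:
            time.sleep(step["delay_seconds"])
            continue
        log.append(f"power on tier {step['tier']}")
        if "script" in step:
            log.append(f"run {step['script']} on tier {step['tier']}")

log = []
execute(runbook, log)
print(log[0], "->", log[-1])   # db boots first, web boots last
```

Encoding the boot order declaratively is what makes the failover a one-click, repeatable operation rather than a manual checklist.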
And let me give it a name: HR app web VM4, if I can type. There you go. Could you make it a little longer? No, I'm an IT admin; that's what I like. And the production subnet. And categories, that's the powerful concept: EmployeePayroll, and that's all I need to do. Hit Save. Got it. Now I'm provisioning an application VM in the cloud. So essentially, and this is the other point: with DR, remember, we've done all the heavy lifting. This isn't backup; it's a full-blown primary data center environment that you can continue to use as your IaaS infrastructure in the cloud. Yep. And as you see, I can watch the console as it boots up in the cloud; very few cloud providers actually give you this seamless experience, like you have on-premises, in the cloud as well. Got it. So one more thing, though: obviously, it's a true cloud service. You want to clap? You can clap. Yeah. Thank you. So we had to build failover natively in such a way that it's bi-directional, and to really show some failback, let's see if we can solve that problem in real time. So in fact, it's not about cloud-ready apps; it's really about app-ready clouds. What Xi is doing is welcoming all applications, traditional or modern, into the cloud without asking the apps to change. That's what we have shown here. And your tooling, the way you debug your VMs, all of that remains the same. So now let's look at this VM. As you can see, the platinum protection rule is already applied. If you go to the recovery points, and I refresh the view, here you can see: the system already knows that it has to reverse replication. Basically, it notices that the primary data center is back up, and it's setting up replication the other way. So now let's log in to the primary data center, and while we wait for it to boot up, we'll quickly see if the recovery point is already there. Let's go there. You see, it's already there.
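The bi-directional behavior described here boils down to the protection relationship carrying a direction that failover and failback simply swap. A toy sketch of that idea (class and zone names are illustrative):

```python
# Sketch of a bi-directional protection relationship between two
# availability zones; the demo's one-click failover/failback is,
# conceptually, a swap of source and target.
class ProtectionPair:
    def __init__(self, primary, cloud):
        self.source, self.target = primary, cloud   # normal replication

    def failover(self):
        """After failover the cloud copy is live, so replication
        reverses once the original primary comes back."""
        self.source, self.target = self.target, self.source

    failback = failover   # failback is the same swap, the other way

pair = ProtectionPair("wall.dc", "xi-us-east")
pair.failover()
print(pair.source, "->", pair.target)   # cloud now replicates to on-prem
pair.failback()
print(pair.source, "->", pair.target)   # back to the original direction
```

Because both sides already hold seeded images and recovery points, reversing direction only has to ship the deltas, which is why the recovery point shows up on the primary so quickly.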
And that's because it was built from an image that was already seeded on both sides, so it gets there very quickly. And look at this in the hybrid view and so forth. Yeah. So right now, let me do the other action. The Xi cloud is not Hotel California, right? You can check out, and you can leave whenever you want. So here, for that, there's one button; it's called failback. Same thing: it asks for confirmation of the direction, you click failback, and that's all you need to do. You can go to the VMs view here. And as we wait for the failback to happen, let me talk about this hybrid view. In this hybrid view, I'm showing my availability zone on-premises, and this is the Xi view. As you can see, all six VMs are there, and you can see billing and everything else. Yeah, so there's integrated billing here as well. You can click on billing, and here you can see the cost of using Xi. And there are accounts: we have one-click ADFS integration, and as an admin, I can invite users, and so on. Let's go back to the view I have here. As you can see, the VMs are already coming back and failing back. Got it, got it. So this is one-click DR, failover, and failback. Yep. Thanks a lot, Binny. Thank you. Great stuff. So as you can imagine, guys, Xi is obviously a pretty company-making initiative. As we just mentioned, it's about reinventing ourselves, disrupting ourselves ahead of our customers' needs, and so forth. And when you look at the full picture, this is where our strategic partner Google comes in. As we were building Xi and looking at it as our native cloud service, it became obvious that one of the biggest things enterprises need is the ability to take this infrastructure and make it available at scale, globally, with all the regional capabilities as well: dark fiber, low latency, and so forth.
But it also needed to be a single solution that enterprises could consume for both enterprise apps and cloud-native apps, fused together into a single platform. So that's what we're going to be doing there: when you're running your databases and your warehouse-management systems on Xi, replicated over, remember, the full stack is there, so you can actually take BigQuery or any of the GCP services on the same network, the same data path, and connect them as if you were running one single environment. And that, in our opinion, is a true game-changer for the way the public cloud will be consumed: not just net-new cloud-native apps, but existing enterprise apps moved to the public cloud without any lift and shift. So, to wrap things up at a high level: we talked about this Enterprise Cloud OS. It's clearly a big evolution for us to deliver this with a software-centric approach across a variety of partnerships, all the way from public clouds to mainstream enterprises, and, at the same time, to keep disrupting ourselves in terms of the form factor and the capabilities across appliances, software, and services. And we talked about a lot of things today: we talked about cloud, we talked about Google Cloud, we talked about microsegmentation, we talked about our partnership with IBM. We talked about many things. And if you think about it, for a company that's still growing, and growing pretty rapidly, velocity of product innovation is a top-of-mind thing. You've seen this over the last few years, and it's continued this year, from 5.0, with all the hardware, platform, and software pieces coming out, to Calm, with 5.5 shipping soon. But the biggest thing that keeps us up, and that keeps us going, is the fact that we've been able to balance velocity with quality. And just to give you a data point, one data point, actually two: one is the fact that we keep this on our dashboards every day.
Every engineering department and support department has this. As the number of nodes has grown rapidly, we take the number of customer-found defects as a percentage of nodes shipped, and we measure that line. In mainstream enterprise products, being less than 3% or 4% is world-class, and we've consistently tried to keep it below 2%. That's one big example of quality. The other big example, and there are a few QA engineers here, so they'll appreciate it, is the fact that, look, we have the world's best support, period. Who agrees with me on that? Right? So it's the combination of support and quality that we think drives this machine. It drives us to continuously impress you, bring that delight to you, let you spend more time at the bar versus in the data center; a combination of all the things that make this conference all about you guys. But I'd like to take a small piece of time to also thank a bunch of folks who are not here. These are the invisible guys: your favorite SRE, your favorite SE. Who wants to give a round of applause for your SEs and SREs? So thank you again. I know it's been a long session; I hope it was informative. There are a lot of deeper sessions on Xi, Acropolis File Services, Binuma, AHV, and so forth. We'll find you guys later at the bar; drinks are on me again. Ladies and gentlemen, download the mobile app to build your schedule for today. Be sure to be back here at 5:30 PM for our evening keynote. We have a great lineup of influential speakers, such as SAP CEO Bill McDermott, Dell EMC President Chad Sakac, and Lenovo EVP Kirk Skaugen. And you don't want to miss .NEXT Fest this evening; we are taking over National Harbor with an exclusive music and food festival right in front of the Gaylord. Enjoy your day.