Okay, so thanks everyone for coming today, I really appreciate it. It's a very exciting time and a very exciting conference, so I'm glad to be here. I have a couple of questions so I can make sure I address the information you're looking for in this session. How many people have VMware in their infrastructure? Okay. How many people are thinking of putting OpenStack on top of VMware on that infrastructure? Oh, awesome. So would anybody like to volunteer what you're looking for from this session? Just so I know what to emphasize. Okay, that's a great point: how much capability do you want to take out of VMware and put into OpenStack? Yes, that's a great point. Again, to repeat for the camera: what are the gaps in OpenStack that vCenter can cover? There was another point here. Yeah? Okay, so integration. What are you considering in terms of VMware integration? Okay, vRealize. So the question is: how do vRealize Automation and vCenter Orchestrator integrate with OpenStack? Unfortunately I don't have that in my slides, but I'd love to talk to you afterwards and give you a bit of detail. Anything else anybody is looking for? Okay, I think we have a couple of good points here, so let's get started. Again, my name is Amar Abderazek, and I work at VMware. I'm in product marketing, but in a previous life I was a data center administrator and an enterprise architect for eight years. So I've been on the customer side, and I think one of the reasons I was brought into product marketing was to make sure that whatever we do actually makes sense and isn't just marketing speak. So I'll try to be as practical as possible about what we do and what it means to you.
So this talk started off with the idea of an ecosystem. In biology, an ecosystem is everything around us, components we interact with whether we notice them or not. That's the biological definition, and it's actually a bit more descriptive than the computer science one, but it captures the idea: you put things to run on your computer, on your server, on your phone, and it just works. For that to work, a lot has to happen behind the scenes, and that means a lot of integration with a lot of partners. We'll talk about that, but the key thing is that all of these integrations make life a lot easier precisely because they're seamless, like the fish in the water: you don't see it, but you're in it all the way. So if you don't mind, let's take a step back: what is VMware doing now, and how does that fit into the OpenStack story? You know vSphere; VMware pioneered server virtualization. But what happened is we wanted to bring virtualization to the entire data center. That's why we have NSX for network virtualization, and Virtual SAN and a lot of technology on the storage side for storage virtualization. The key question is: how can we bring the capabilities and benefits you've seen at the compute layer across the entire infrastructure? Then you put management on top of that, which is the vRealize Suite, and that becomes the software-defined data center. But beyond that, the key point is: if you're going to get the benefit, you shouldn't be locked into a single way of consuming your infrastructure.
Whether you want to run traditional workloads, or use OpenStack, or use Cloud Foundry or PaaS or containers, it should all be seamless and independent: you have one infrastructure capable of serving all of these workloads. That's the theory and the background behind everything we're doing here. The objective for us is to cover most workloads, both scale-up and scale-out. If you think about it, there's always this separation, and sometimes people even want to put a firewall between the two: over here we've got Platform 3, cloud native, and over there we've got Platform 2, traditional apps. The reality is more nuanced. A lot of the time, when you start approaching cloud native or some of these new workloads, people do it piece by piece. You take the web tier of your application and put that in a container, maybe in a stateless mode, then move on to the application tier, and then start approaching the database. It's usually a journey, not a case of "I'm going to build this infrastructure, it will serve these new workloads, and it will just work." It doesn't happen that way. So that's what we want to do: ensure you have one infrastructure to serve both. But let's start from vSphere. These are very exciting times for us, because we introduced vSphere 6 very recently, and we're very, very proud of this release. Just to give you an idea, we've again raised the bar in terms of performance: VMs and hosts per cluster, RAM per host. For example, we support 4 terabytes of RAM, or around 2,000 VMs per host.
That's because that's the biggest system we could get our hands on to test. So that's one thing. The key point is that with vSphere as the foundation, you shouldn't have to worry about limitations, whichever approach you take with your infrastructure. A couple of other numbers: we test vSphere very, very heavily inside VMware; sometimes 30,000 VMs are created per day as part of our testing, what we call dogfooding the infrastructure, with much, much faster performance, and you can have up to 10,000 VMs per vCenter. There are a lot of usability enhancements too. Again, we see vSphere as the foundation of everything we're working on. To dig a little deeper into the vSphere ecosystem: one of the strongest benefits of vSphere is guest operating system support. With other hypervisors, some operating systems are first-class citizens and others are second-class citizens; not everything works or is supported. That's one of the benefits of vSphere. You can have Windows, Oracle Linux, Red Hat, and all of these are supported and have been supported for a long time. For example, CoreOS was just released, and it's now supported on vSphere: if you want to run CoreOS, you can just do it. The objective, again, is that as a customer you shouldn't be limited in the workloads or guest operating systems you run. This is very beneficial, for example, during an acquisition. Maybe you standardized on one hypervisor and one guest operating system, but now you're part of a company that uses a totally different operating system. How can you merge both and bridge the gap until you have some standardization? So that's the first thing.
The other thing we've worked very, very hard on is ecosystem enablement for hardware providers. There's something called the native driver program: how can we abstract all the technicalities of the drivers at the vSphere layer so that at the VM layer everything is seamless? If you've been working with vSphere for some time, it's probably been a long while since you uploaded a driver into a VM, unless you're working with very special cases. That's part of the enablement work we do to make sure that whatever component you plug in is supported from day one on vSphere. That brings me to some of the emerging technologies, at least in the I/O stack. Take a concrete example: 100-gigabit NICs are now in early development and starting to come out. What a lot of people don't know is that we're working with these vendors right now to make sure they're supported on vSphere, so that when they become commonly available in the market, you just plug them in and they work. That's some of the benefit you get from working with vSphere and the ecosystem, along with a lot of technological innovation around disks and storage. I could put a lot of vendors on this slide, but the objective is to make sure that when ESXi sits in your infrastructure, whatever runs on top of it as a guest and whatever sits underneath it as hardware is seamless, a very easy experience for the customer. So that's the I/O stack perspective. To highlight some of the work we did with Intel: I was on campus one day, in a kind of secret lab we have, and I was talking to one of our engineers: what are you working on?
He said, I'm working with a next-generation Intel CPU. I asked, is that one of the new ones coming out now? He said, no, that's coming in two years. That's part of the partnership we have with Intel to make sure these technologies make it to market. Once we have these innovations, some are core CPU innovations and some are virtualization-specific innovations. You get those, and then it loops back to the community, because Intel of course is doing a lot of work in the OpenStack world, incorporating these features so you can take full advantage of them through OpenStack and then use them through VMware. At least at the foundation layer, they're supported from day one. On the storage side, we have a very, very exciting story as well. In terms of vSphere-certified storage: if you work with vSphere, you know you can plug almost any type of storage into it, local-attached, networked, scale-out, vSAN, whatever, with all the major protocols: Fibre Channel, FCoE, iSCSI, et cetera. We have more than 3,269 storage devices certified with vSphere. That's one of the things we do with all of our hardware and storage vendors. But it doesn't stop there; we have a lot of very exciting work on how we integrate with the vendors. Looking at some of the very common operations, say cloning virtual machines or using templates, we thought: why can't we optimize those with the storage vendors and storage providers we work with? That's where the vStorage APIs came in. There's a whole family of these APIs, like VAAI and VASA, designed to do a lot of things, but you can group them into two families.
There are execution APIs, APIs that optimize execution at the storage layer; we'll talk a bit about those. And there are awareness APIs: the array shares information with vSphere, the array talks to vSphere and vSphere talks to the array, so you have a communication protocol between them. To give you an example of what that means, take VAAI. One of its primitives is full copy, or block cloning. Say you want to copy one of your VMs, or deploy from a template, which is how you use images in OpenStack. If that image lives on the same array as the copy destination, the array actually does the operation itself. Instead of the entire VM's data moving up into vSphere and then back down to the array, all the work happens in the background on the array, and that can give you 10 to 20x faster VM deployment time. Another primitive is thin provisioning stun. If you're familiar with it, thin provisioning lets you provision storage capacity that doesn't physically exist yet. Say you have 10 groups, each requesting one terabyte; they're unlikely to use the full terabyte at once, so you thin-provision each terabyte while the back end might only have five or six terabytes total, and as consumption approaches capacity, you add storage in a background operation. But sometimes you run out of physical storage, and then VMs crash. With thin provisioning stun, the array responds with an out-of-space error and vSphere puts the affected VM into a paused, freeze-like state, giving you time to provision more capacity or free something up on the array. The result is fewer crashes.
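To make the stun behavior concrete, here is a minimal sketch, in plain Python with made-up names, of how the failure mode changes when a thin-provisioned pool runs out of physical space. This is an illustration of the idea, not VMware's implementation or API.

```python
# Illustrative sketch (not a VMware API): thin provisioning stun changes
# "VM crashes on out-of-space" into "VM is paused until capacity is added".
class ThinPool:
    def __init__(self, physical_tb, stun_enabled):
        self.physical_tb = physical_tb      # real capacity on the array
        self.used_tb = 0.0
        self.stun_enabled = stun_enabled
        self.stunned_vms = []
        self.crashed_vms = []

    def write(self, vm, tb):
        """A VM tries to consume more of its thin-provisioned allocation."""
        if self.used_tb + tb > self.physical_tb:
            if self.stun_enabled:
                # Array raises an out-of-space error; the hypervisor pauses
                # the VM instead of letting its writes fail.
                self.stunned_vms.append(vm)
            else:
                self.crashed_vms.append(vm)
            return False
        self.used_tb += tb
        return True

    def add_capacity(self, tb):
        """Admin grows the pool; paused VMs can then be resumed."""
        self.physical_tb += tb
        resumed, self.stunned_vms = self.stunned_vms, []
        return resumed

pool = ThinPool(physical_tb=5.0, stun_enabled=True)
for vm in ["vm-a", "vm-b", "vm-c"]:
    pool.write(vm, 2.0)          # 3 x 2 TB of writes against 5 TB physical
print(pool.stunned_vms)          # ['vm-c']  (paused, not crashed)
print(pool.add_capacity(2.0))    # grow the pool and resume the paused VM
```

With stun disabled, the same overcommit would land `vm-c` in `crashed_vms` instead, which is exactly the difference the speaker describes.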
That's the execution API side, work that happens in the background; the objective is to make your operations more seamless and more stable. Another example is VASA, the vStorage APIs for Storage Awareness. This is very exciting because it's a communication protocol that lets vCenter know, to give you an example: this datastore is protected by RAID 5, replicated with a 10-minute RPO, snapshotted every 15 minutes, compressed and deduplicated. There's no way to know this otherwise, because these operations happen on the back end. What that enables, if you think about how we approach software-defined storage, is the storage policy: we want some intelligence in how storage is placed, driven by policy. You define a policy that describes capacity, performance, availability, and data protection, and then it's up to vSphere and the attached storage to negotiate that; vSphere then helps you place the workload onto whatever storage satisfies the policy. That placement intelligence makes life easy for administrators. The array publishes its capabilities, so you know where to put things: you can have gold-tier storage, or storage with specific data protection. It's a very, very rich capability, because if you're thinking about a large-scale OpenStack cloud, it's likely that not all workloads are the same and not all workloads require the same level of capability. Otherwise you either provision the highest tier of storage for everyone, or the lowest tier and risk failures; instead you can have this policy intelligence working in the background. That capability splits into two pieces.
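The policy negotiation above can be sketched as simple capability matching. This is a toy model: the capability names, tiers, and policy fields are invented for illustration and are not the real VASA schema.

```python
# Illustrative sketch: policy-based placement as capability matching.
# Each datastore advertises capabilities (as a VASA provider would);
# a policy states requirements; placement returns compliant datastores.
datastores = {
    "gold-ds":   {"raid": 5, "rpo_minutes": 10,   "dedup": True},
    "silver-ds": {"raid": 5, "rpo_minutes": 60,   "dedup": True},
    "bronze-ds": {"raid": 0, "rpo_minutes": None, "dedup": False},
}

def compliant(caps, policy):
    """A datastore is compliant if it meets every requirement in the policy."""
    if policy.get("raid") is not None and caps["raid"] != policy["raid"]:
        return False
    if policy.get("max_rpo_minutes") is not None:
        # Replication must exist and its RPO must be tight enough.
        if caps["rpo_minutes"] is None or caps["rpo_minutes"] > policy["max_rpo_minutes"]:
            return False
    if policy.get("dedup") is not None and caps["dedup"] != policy["dedup"]:
        return False
    return True

def place(policy):
    """Names of datastores where the workload may be placed."""
    return [name for name, caps in datastores.items() if compliant(caps, policy)]

tier1 = {"raid": 5, "max_rpo_minutes": 15, "dedup": True}
print(place(tier1))   # only gold-ds meets the 15-minute RPO requirement
```

The point of the sketch is the speaker's "negotiation": the administrator writes the policy once, and placement follows whatever the arrays currently advertise.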
There's the policy-based piece in vSphere, and there's Virtual Volumes, the protocol that leverages some of the APIs we talked about and builds on them to create this abstraction layer for vSphere. Going back: that's one of the capabilities we have for applying storage policy and storage intelligence in the background. Virtual Volumes has now been released with vSphere 6, which is why we're so excited about it, and we've been working with these vendors for a long time beforehand to make sure it's included in most arrays, either by the vSphere release date or very soon after. So most of these vendors are either rolling out VVol-aware storage arrays now or have already done so. Another part of the capability, if you look at the OpenStack layer, comes from VMware's concept of the datastore: from an OpenStack perspective, you don't need to surface individual storage drivers to OpenStack, and that gives you a lot of operational benefit. For example, say you have an EMC VNX, a VMAX, and an XtremIO, which is an all-flash array. Traditionally, working with other technologies or hypervisors, you'd have to put the Cinder driver for each array into the Cinder layer of OpenStack. On VMware you don't need that, because it's all abstracted through VASA, VAAI, VVols, and the datastores, and then everything sits behind one driver, the VMDK driver we provide. That simplifies life very considerably: you don't have to worry about upgrading multiple drivers, changing arrays, and so on. On the network side, we also have a very, very exciting story.
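To make the single-driver point concrete, this is roughly what the Cinder backend configuration looks like. The driver path is the actual VMware VMDK driver shipped with Cinder; the section name, host address, and credentials are placeholders, and a real deployment has additional options.

```ini
[DEFAULT]
enabled_backends = vmdk

[vmdk]
# One Cinder driver fronts every vSphere datastore; the array-specific
# details (VNX, VMAX, XtremIO, ...) stay behind vCenter, VAAI, and VASA.
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com   ; placeholder vCenter address
vmware_host_username = cinder-svc      ; placeholder service account
vmware_host_password = <secret>
```

Swapping or adding a back-end array changes nothing in this file, which is the operational benefit the speaker is describing.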
For those not familiar with NSX: NSX is our network virtualization platform, and it gives you a lot of capability in terms of security and network abstraction. One of the things we focus on is making sure that networking functionality is abstracted across the ecosystem, and we have three levels of integration with NSX. There's the basic level of coexistence: NSX simply plugs into all the networking gear that already exists. Then there's a second level, NSX-aware components, which can leverage the NSX APIs and metadata to understand the environment or build a solution on top of it. Take Tufin, for example: Tufin brings capabilities for managing policies and configuration across firewalls, and with NSX it can manage NSX firewalling as well, physical and virtual together. The third level is deep integration, which is service insertion: NSX actually forwards traffic to these technologies, and they can take action on it. Take F5. F5 has a virtual appliance for load balancing, and most of the capabilities of the hardware are available in the virtual edition, not all of them yet, but most. You can use NSX together with F5 to leverage those advanced load balancing capabilities, and if you have a single console for your F5 infrastructure, you can manage your F5 virtual and physical appliances from the same place. The same goes for security vendors like McAfee and Symantec. One of the interesting things we do, for example, is integration with vulnerability scanners.
Going back to the OpenStack cloud example: say I have 1,000 VMs and one of them gets compromised. These VMs are dynamic; you may not even know which hypervisor they're running on or what's going on. If you're running a vulnerability scanner and it detects something, you can call the NSX API and apply a lockdown, excuse me, a security policy, to that VM right away. You don't have to go to the security administrator and say, hey, this is compromised, we need to shut it down. Everything happens in the background, automatically, through APIs, and you get a very, very responsive security infrastructure. That's one of the exciting benefits of NSX. Now let me talk a little about the partner programs we have, just to give you an idea. VMware has 45,000 partners: technology partners, channel partners, system integrators, et cetera. We have a dedicated partner organization that is, I would say, almost a third of the company, something like that. It's a big organization, because VMware is also a channel company: most of our sales happen through, or are fulfilled by, the channel. Most likely, if you're running vSphere in your infrastructure, you somehow procured it through one of our resellers or channel partners. That's why the partner ecosystem is very, very important to us. To give you an idea of how we think about it, we think of partners from three angles. There are pure technology partners like Google, Palo Alto Networks, HP, and EMC. There are system integrator partners like Atos, Accenture, Computacenter, et cetera.
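The scanner-to-network-lockdown workflow just described can be sketched as a small event handler. This is pure in-memory logic, not the NSX REST API; the tag and policy names are hypothetical, though tag-driven security group membership is exactly the mechanism NSX-style quarantine builds on.

```python
# Illustrative sketch of scanner-driven quarantine: the scanner flags a VM,
# the handler tags it and swaps its security policy, no human in the loop.
# All names (tags, policies, VM ids) are made up for illustration.
inventory = {
    "web-042": {"tags": set(), "policy": "default"},
    "web-043": {"tags": set(), "policy": "default"},
}

def on_scanner_finding(vm_id, severity):
    """Callback the vulnerability scanner invokes when it flags a VM."""
    vm = inventory[vm_id]
    if severity == "critical":
        # Tagging the VM is what would flip its security-group membership
        # in a real SDN integration; here we just record the new policy.
        vm["tags"].add("quarantine")
        vm["policy"] = "lockdown"   # e.g. deny all traffic except admin access

on_scanner_finding("web-042", "critical")
print(inventory["web-042"]["policy"])   # lockdown
print(inventory["web-043"]["policy"])   # default (untouched)
```

The design point is that the scanner never needs to know which host the VM lives on; it addresses the VM by identity, and the network layer enforces the policy wherever the VM happens to be.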
And there are reseller partners: GreenPages, Dimension Data, SoftwareONE, and others. So those are the three dimensions, but each has its own specific relationships and teams, and in some of these relationships we have dedicated partner managers on both the VMware side and the partner side; some of them are actually sitting in the audience today. We also have something very exciting called the VMware Solution Exchange. It's available online and has a lot of material about what components are available. Some of our partners are virtual appliance partners, so it doesn't have to be a back-end technology; it might be an application that's optimized and easily consumable via the vApp construct VMware provides. I think, on the next slide, we have around, sorry, let me get the number right: about 1,600 virtual appliances from 950 developers. And that's just on the application side. It's a very lively part of VMware, visited very heavily by our customers. We also have the VMware Compatibility Guide, which I skipped earlier. It's available online, and it lets you make sure that whatever components you're procuring are supported and that everything works together. On top of that, there's the Partner Verified and Supported program, which provides assurance of support: with some of these partners you get first-level support from VMware and second-level support coordinated between VMware and the partner. Another part of the ecosystem is access to people who can actually build infrastructure and run VMware technologies: VMware has more than 100,000 trained professionals.
And I think the real number is a bit higher than that, but that's the latest official figure we can share. So that gives you a lot of skills and a lot of leverage: if you want to build and scale infrastructure, the people who can actually do it are out there. Going back to why we're doing this: our objective with OpenStack is to give you all the flexibility and power of OpenStack while building on reliable VMware infrastructure. That's what we're trying to do. We've talked about the partner side; on the OpenStack side, a lot of people don't know that VMware has been a strong contributor to OpenStack. Think about Nicira: in the core of OpenStack, compute, network, and storage, the Neutron project was originally created by the Nicira team. When VMware acquired Nicira, we continued all that work, and then we added our technologies up the stack. We even champion some projects, such as Congress and Open vSwitch, which we have full-time people working on. The key thing is that all of these technology integrations are available for free, upstream. If you want to use any VMware technology with any other vendor, you can do that right away, very easily. So why would you want to put OpenStack on VMware? We talked about the foundation, vSphere, and its reliability and resiliency features. A lot of the time there's this assumption that you should put all the intelligence in the app, which is a very powerful notion, but at the same time a lot of operations have to happen on the infrastructure itself. Take, for example, the recent vulnerability.
I mean VENOM; it didn't affect VMware, but the point stands: at some point the VMware administrator or data center administrator needs to do maintenance, and sometimes you don't have the scale for that. Think of one of the large service providers: if ten servers in their infrastructure break, it's okay. But you need a certain scale to operate that way, and sometimes you only have two or three highly available nodes. Can I take that question? No. No, no, that's not abstracted away, because we want to stay compatible with the OpenStack APIs, and to the best of my knowledge the engineering team decided this would be an operator-level configuration, not a user-accessible feature. So, Neutron. Neutron has been one of the leading network virtualization technologies in the market, and some of the largest clouds in the world actually run NSX. At VMware we run one of those clouds ourselves: it's somewhere from 3,000 to 8,000 VMs depending on usage, and it's been running for around six years, I think. So we have a lot of experience operationalizing it and making sure it's reliable, and all of that feedback from our operations team flows back into NSX and into our products. And finally, storage: if it works with vSphere, it works with OpenStack on vSphere; it's easily abstracted. Or you can use vSAN and have a hyperconverged solution if you want to use commodity hardware. That's also an option. Another area, and someone asked about the gaps: one thing OpenStack doesn't really have is operations and management, at least down to the infrastructure. So what we did is integrate our products there.
vRealize Operations, formerly vCenter Operations, is our operations management tool. vRealize Operations doesn't just give you visibility into the vSphere infrastructure; through plugins and content packs it can give you end-to-end visibility into your storage arrays, for example, or your servers. What we did is add OpenStack on top of that: we have an OpenStack management pack, and once you drop it in, it monitors the OpenStack services, understands the tenants, and gives you that end-to-end visibility. For example, if a Cinder node is failing: is that a Cinder issue? The VM hosting that component? The server? The back-end storage? Having that end-to-end troubleshooting and visibility into what's going on really helps. Then there are the logs. If you run OpenStack in production, you know how critical the logs are. One of the things we did is take vRealize Log Insight, our log management tool, which is designed to be very easy to use, a virtual appliance, very easy to install, and build a content pack designed for OpenStack. You just direct your OpenStack logs to it, and you get dashboards, for example, for API response times and errors per component. A lot of them are time series, and if you want to create and tune your own dashboard for something specific, it's very easy to do. That helps with the deep troubleshooting; there's a lot you simply cannot see unless you have visibility into the logs. And finally there's vRealize Business, which comes back to why we're doing OpenStack and why you're doing cloud at all.
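The "errors per component, API response time" dashboards boil down to log aggregation. Here's a minimal sketch of that aggregation over a few invented log lines; real OpenStack log formats are richer and differ per service, so the format and field names here are assumptions.

```python
# Illustrative sketch: the aggregation behind "errors per component" and
# "API response time" dashboards. The log lines and format are made up.
import re
from collections import Counter

logs = [
    "nova-api    INFO  GET  /servers 200 time=0.12",
    "nova-api    ERROR POST /servers 500 time=2.31",
    "cinder-api  INFO  GET  /volumes 200 time=0.08",
    "cinder-api  ERROR GET  /volumes 503 time=1.90",
    "cinder-api  ERROR GET  /volumes 503 time=2.05",
]

# component, level, method, path, status, duration
pattern = re.compile(r"^(\S+)\s+(\w+)\s+\S+\s+\S+\s+\d+\s+time=([\d.]+)$")

errors = Counter()
times = {}
for line in logs:
    m = pattern.match(line)
    component, level, secs = m.group(1), m.group(2), float(m.group(3))
    times.setdefault(component, []).append(secs)
    if level == "ERROR":
        errors[component] += 1

mean_time = {c: round(sum(ts) / len(ts), 2) for c, ts in times.items()}
print(errors)      # cinder-api has the most errors
print(mean_time)   # per-component mean response time
```

The troubleshooting value the speaker describes comes from slicing the same stream many ways (per component, per tenant, over time), which is what the content pack's prebuilt dashboards do for you.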
One of the objectives is to bring the agility and capability that exist in the public cloud to the underlying infrastructure of our private clouds. Then the question comes: how much does it cost, and how does a given workload compare if I put it on one of the public cloud providers? This is where vRealize Business comes in. It plugs into the infrastructure, understands the workloads, gives you a per-tenant view, and lets you compare what a workload would cost on, say, AWS, or Azure, or even vCloud Air. You can do real-life comparisons, or hypothetical ones: what if I want a workload with this level of capacity or this level of compute? You can do both, so that when the business unit comes to you and says, hey, I can do this on AWS ten times cheaper, you can say: no, let's look at it. Here's your workload, and here's what it would actually cost on AWS or any of the other vendors. So it gives you that transparency and visibility. And finally, coming back to the ecosystem: most of you already have a solid investment, so you don't have to change your infrastructure. You can use what you have and put OpenStack on top of VMware. You don't have to learn a lot of new things to do it; you already know how to operate it. And if you already have tools and plugins integrated with your infrastructure, they don't have to be from VMware; they can be any of the other tools, HP OpenView or any of those. All of these tools can monitor VMware very easily, and you can leverage them to monitor the OpenStack components as well.
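The comparison vRealize Business automates is, at its core, the same workload costed against several rate cards. Here's a toy version; the rates below are invented placeholders, not real pricing from any provider.

```python
# Illustrative sketch of a per-workload cost comparison. The venue names
# and rates are invented; a real tool pulls live rate cards per provider.
RATES = {  # (per vCPU-hour, per GB-RAM-hour, per GB-storage-month)
    "private-cloud": (0.020, 0.004, 0.05),
    "provider-a":    (0.035, 0.006, 0.10),
    "provider-b":    (0.030, 0.005, 0.08),
}

def monthly_cost(venue, vcpus, ram_gb, storage_gb, hours=730):
    """Cost the same workload against one venue's rate card."""
    cpu_rate, ram_rate, storage_rate = RATES[venue]
    return round(vcpus * cpu_rate * hours
                 + ram_gb * ram_rate * hours
                 + storage_gb * storage_rate, 2)

workload = dict(vcpus=8, ram_gb=32, storage_gb=500)
quotes = {venue: monthly_cost(venue, **workload) for venue in RATES}
cheapest = min(quotes, key=quotes.get)
print(quotes)     # one monthly figure per venue
print(cheapest)   # the "no, let's look at it" answer
```

The same function also handles the hypothetical case the speaker mentions: just cost a workload spec that doesn't exist yet.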
And again, all of these capabilities can be leveraged not just with VMware Integrated OpenStack, but with any of the OpenStack distributions available. The content packs, the vRealize Operations management pack, and all the management pieces are available open source, for free; the drivers are available open source. A lot of this is simply there if you choose OpenStack on VMware. Going back to the ecosystem, one pattern we see is the loosely integrated framework approach: you plug in whatever components you want. The alternative is a tightly integrated product that comes pre-integrated. Both have pros and cons. Either way, you get the same vendor-neutral API, because it's OpenStack, and that's one of the benefits of using OpenStack. But you do face an operator-led decision. On one side, you have ultimate flexibility: multiple hypervisors, possibly disjoint management tools, and either you pay somebody to duct-tape the components together, or you do it yourself if you have the skills; good for you. On the other side, there's the approach of using a tightly integrated, end-to-end product, something like VMware Integrated OpenStack. It's a single hypervisor, but you get a standard architecture tested by VMware. Which raises the question: if VMware decided that's the stack, where is my flexibility and where is my choice? That brings us to, sorry, this slide is VMware Integrated OpenStack; I'll come to the other point in a minute. This is our distribution. It's designed to work on existing vSphere environments, and it comes with the management tools, a validated architecture, and a single support contract.
And it's free if you have vSphere Enterprise Plus, vCloud Suite, or vSOM (vSphere with Operations Management). Support is optional if you want it, and it's at a very nominal charge, because we think you've already paid for the infrastructure, and the way you consume your infrastructure should be your choice. But going back to the integration layer: you can take different architectural approaches to the multi-hypervisor story. The first one is you can have one OpenStack control plane, and this control plane can talk to a VMware stack or a non-VMware stack. And again, you have the API interactions on top, which are the same. The other approach, if you want to start with one option and then maybe later add another vendor, add KVM, or the other way around, starting off with KVM and then adding vSphere, is to have two controllers and link them through the Keystone identity service. There is actually a great white paper by the OpenStack Foundation that covers that, and we have a couple of copies in print if you want to grab one on your way out. But the key thing here is that because you're working with OpenStack, what you have in terms of APIs is standard. And this is where DefCore comes in very, very handy, because you have this assurance: I don't have to promise you that it works; somebody has taken the time to test it and make sure that it works. And again, you're leveraging all of this ecosystem, whether you're coming at it from the technology side, the abstraction side, or from the integration or consulting side. All of these are tools available at your disposal, and this is what OpenStack brings in.
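The two-controller, multi-region layout described above boils down to one shared identity service whose catalog maps each region to its own service endpoints. Here is a toy model of that idea in plain Python; the region names and URLs are made up, and a real client would fetch this catalog from Keystone rather than hard-code it:

```python
# Toy model of a Keystone-style service catalog spanning two regions:
# one backed by vSphere, one by KVM. Names and URLs are invented for
# illustration; a real deployment would query the Keystone v3 API.

CATALOG = [
    {"type": "compute",  "region": "RegionVMware", "url": "https://vio.example.com:8774/v2.1"},
    {"type": "compute",  "region": "RegionKVM",    "url": "https://kvm.example.com:8774/v2.1"},
    # Identity is shared: both regions point at the same Keystone.
    {"type": "identity", "region": "RegionVMware", "url": "https://keystone.example.com:5000/v3"},
    {"type": "identity", "region": "RegionKVM",    "url": "https://keystone.example.com:5000/v3"},
]

def endpoint_for(catalog, service_type, region):
    """Resolve a service endpoint the way a region-scoped client would."""
    for entry in catalog:
        if entry["type"] == service_type and entry["region"] == region:
            return entry["url"]
    raise LookupError(f"no {service_type} endpoint in {region}")

# Same user, same token -- only the region name selects the hypervisor stack.
print(endpoint_for(CATALOG, "compute", "RegionVMware"))
print(endpoint_for(CATALOG, "compute", "RegionKVM"))
```

The design point the sketch illustrates: authentication is federated through the one identity service, while each region keeps its own compute control plane, which is exactly why a user can switch regions in Horizon without re-authenticating.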
One thing we also want to highlight is that sometimes OpenStack brings capabilities that don't exist in VMware. For example, we don't have a Swift object storage story. So what happens is you can partner, and this is where OpenStack comes in, with, let's say, one of our partners like SwiftStack or Nexenta. Because they work against the same APIs and have done the integration at the component layer, you get the same benefit across all the components. And, for example, as Manila or any of these components become more popular, you don't have to be locked into what VMware is offering. This is where the beauty of OpenStack comes in: a lot of the other components can be plugged in automatically. And I think I have a couple of minutes to show you a very quick demo of the multi-hypervisor setup and how it looks, with the multiple distros we talked about. So you log in, this is Horizon, and what you see here is the same user, and you can provision instances. But if you look here, this is where it uses the concept of regions. You can have a region in your data center that is KVM based, and you can have a region that is VMware based. Each region has its own endpoints, and they can share a lot of the authentication and, coming soon hopefully, a lot of the networking capabilities. But from a user perspective, now you have this VMware region; you move to KVM, and it's the same Horizon, the same GUI. It doesn't change the experience that much. And again, coming back to the beauty of OpenStack, this is where the abstraction layer happens, and you get some of these benefits from leveraging OpenStack. So again, this is...
It's a very simple demo, but one of the things... We'll be working with some of our partners to make sure that some of these solutions are actually available in the market soon, so that you can do that kind of setup if you want to. But most of the time, we see organizations approaching one hypervisor at a time, starting off with one and then moving to the next. So that is it in terms of time. And yeah, that brings me to... I think I have a couple of minutes of Q&A, but I hope that answers some of your questions. Again, we can be available afterwards to talk. So, we are testing it now. The question, if I understand it, is: have you tested the multi-hypervisor setup with other distributions? We've done it. I cannot name the partner now, but we are doing it as we speak. Yes, it's a very simple approach; we just make sure that we align on the latest releases, and we have done it in our labs multiple times. So it should be possible, and it should be publicized very soon. Any other questions? All right. Thank you very much.