Can you hear me through the speakers OK? Yeah, I think the volume's up now. OK. So thank you for joining us on the last day of OpenStack Summit. For those of you who aren't attending the design summits, I know the design content continues tomorrow, but today is the last day of actual proper breakout sessions. So you've joined us for VMware's sponsored set of sessions, and we're going to start it off with a technical deep dive. The sessions that follow me will talk about actual customer use cases and go a little bit deeper into some of the technologies that help make our distribution of OpenStack a success for VMware customers. My name is Trevor Roberts, Jr. I'm the Senior Technical Marketing Manager for OpenStack at VMware, and what that means is it's my job to go out and speak to customers, partners, and the community about the good things that VMware is doing with OpenStack, how we're contributing back to the Foundation as well as the community. And hopefully, users are getting a benefit out of it. So I'm not going to have too many slides. I just want to go over some high points, and then I really want to dive deep into actual usage of the product so you can see how it works. Have any of you heard of VMware Integrated OpenStack before coming to the session? OK, for those of you who have heard of it, have you actually tried it in your environment? OK, a couple hands. So hopefully, after seeing what we're presenting to customers, maybe you'll give it a shot. So why is VMware getting involved in OpenStack in the first place? I think this tweet shows why we've come up with a solution. It reads: if there was a video game called SysAdmin, the final boss would be OpenStack. And for those of you who chuckled, I think we can all relate to the trials and tribulations we sometimes go through when we're deploying OpenStack.
OpenStack is a great cloud management framework, and it has a lot of extensibility, a lot of customization options, a lot of flexibility with how it can be deployed. That flexibility is accompanied by some complexity when it comes to deploying it. What's the best way to deploy it? Well, at VMware, we spoke to our customers. They want to be successful with OpenStack. We want them to be successful with OpenStack. So we came up with a distribution that we hope aligns with those goals. And what is VMware Integrated OpenStack? Simply, it's taking your existing VMware vSphere infrastructure, then we take OpenStack from the open source. We're not making any changes to the OpenStack code itself, but what we do is include the community drivers that we have contributed for compute, network, and storage. And we have some intelligence that we've developed around how to deploy OpenStack, how to manage it, and optionally, how to report on it using OpenStack-aware cloud management tools. So VMware Integrated OpenStack itself is a simplified installer, as well as simplifying common administrative tasks. In addition, we have some reporting tools that VMware also provides that have the capability to report on your OpenStack usage and your OpenStack infrastructure. And we'll see a little bit of that later in the presentation. So we're on our 2.0 release, which is based on Kilo. Our first release was Icehouse, and you'll notice that we skipped Juno. Our goal is to be as current as possible while providing the features with the most maturity and stability. Some of the features that we wanted to include from Juno weren't quite where we wanted them at that release, so that's why, with the next release, we went to Kilo. And I hope the wait was worth it, because with the 2.0 release, we have the ability to upgrade your OpenStack distribution in an automated fashion. And I'll show you that at the end as well.
So in the 2.0 release, we include all of the great projects that you know and love, Nova, Cinder, Glance, Keystone, and so on, as well as introducing the capability to do auto-scaling with Heat. We've included support for Ceilometer. And we've also included more features from our networking side of the house, including Load Balancing as a Service. Now, getting into the bits and bytes of how our management infrastructure looks, when you're setting up VMware Integrated OpenStack, you're going to have a total of four networks. The first network is the API access network, which allows our users to write all of their automation scripts to access the control plane. We're not going to actually let users touch the management network directly. Next is the management network, where all the control plane components communicate with each other. And you'll have a pair of load balancers connecting that management network to the API access network. All communications between the users and the OpenStack cloud happen over SSL by default. There's also the transport network that is used by our VMware NSX network virtualization platform, as well as the external network. And this is where you'll assign your floating IPs for your instances. As far as the compute architecture, we have a management cluster where the control plane resides. We have the edge cluster where all of our router VMs will be placed. And finally, we have one or more compute clusters. So one thing that differentiates OpenStack on KVM versus OpenStack on VMware: we provision to vSphere clusters instead of individual hypervisors. And the reason we want to do that is that we allow our users to benefit from DRS and HA and all of the availability features that are built into the vSphere platform. And it's completely seamless to the users. So if there is a problem with one of the hypervisors, or if it's time to update your hypervisors, you can do things like put an individual hypervisor into maintenance mode.
We'll automatically evacuate all of the instances that are running on that particular hypervisor to the remaining hosts within that cluster. So it simplifies administrative tasks and also helps with operations. And we think, or we know, based on our customer feedback, that they like that approach. Our management plane is made up of a management server, which is analogous to a build server from other distributions. We have a virtual machine template that we clone to make all of the components of that control plane. Then we have a pair of load balancers, a pair of controllers, a pair of RabbitMQ VMs, and a pair of Memcached VMs. And that's because you want high availability built into the architecture from the ground up. So VMware Integrated OpenStack isn't just a science project. It's designed so that when you deploy VMware Integrated OpenStack, you'll have a highly available, production-ready infrastructure for your OpenStack cloud. You'll see here that we have three MariaDB VMs because we're using Galera clustering, and the quorum for that cluster is three. So that's why we have three VMs there. And then we have a Compute Driver VM. The sole purpose of this VM is to run Nova Compute on behalf of the vSphere cluster that it is managing. So for each cluster that you assign to VMware Integrated OpenStack, you will have a Compute Driver VM. If you have five clusters being managed by OpenStack, you'll have five Compute Driver VMs. And this Compute Driver VM, even though there's one per cluster, will be protected with DRS and HA within the management cluster. Are there any questions so far on anything I've covered? Yes. Yes. OK, so I'll repeat the question. The question is: is VMware NSX a requirement of VMware Integrated OpenStack? It's not required for VMware Integrated OpenStack. It depends on the networking use case that your customer has, or that you have.
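The scaling rule described above, fixed HA pairs plus one Compute Driver VM per managed vSphere cluster, is easy to express in code. This is a minimal illustrative sketch, not VMware's actual tooling, and the component names are made up for the example:

```python
# Sketch of how the VIO control plane scales with managed clusters.
# The fixed component counts mirror the architecture described in the talk;
# names like "compute-driver" are illustrative, not VMware's actual VM names.

FIXED_PAIRS = ["load-balancer", "controller", "rabbitmq", "memcached"]
MARIADB_NODES = 3  # Galera cluster of three, so a majority survives one node failure

def control_plane_inventory(compute_clusters):
    """Return a mapping of component name -> VM count."""
    inventory = {name: 2 for name in FIXED_PAIRS}   # everything HA, in pairs
    inventory["mariadb"] = MARIADB_NODES
    # One Nova compute-driver VM per vSphere cluster handed to OpenStack.
    inventory["compute-driver"] = len(compute_clusters)
    return inventory

counts = control_plane_inventory(["cluster-%d" % i for i in range(5)])
print(counts["compute-driver"])  # five clusters -> five driver VMs
```

The point of the sketch is that only the driver-VM count grows with the cloud; the rest of the control plane stays fixed.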
If you have simple networking needs, if you just need flat networking, you don't care about tenant networks, overlapping IPs, and things like that, then you can just go with the virtual distributed switch option. But if you want a fully featured Neutron set of services, that's when you go with VMware NSX. Yes, we have a question up front. As far as CPUs, RAM, and storage? OK, so the question was, what kind of sizing is required for the control plane? And I don't have the configuration file on me, but I know that some of the components require four virtual CPUs and up to 16 GB of RAM, depending on the function they're serving. Like the controller VMs: this pair of controller VMs is running all of the API services, so we give them a fairly liberal chunk of resources so that they stay up and perform as expected. But all of the requirements are in the actual documentation for the product. So if you go to www.vmware.com/go/openstack, you can get all of our documentation right there. Yes? Does that include security groups? That would include security groups, yes. The reason why we wanted to have, oh, sorry, let me repeat the question. The question was: VMware NSX is not required, so would VMware NSX be required for security groups? And that's correct. VMware NSX is required for security groups. That's because we want to make sure that for the simplest networking cases, we're just going to give customers the ability to do that without VMware NSX. And then if they need real, scale-out Neutron services, then they'll use VMware NSX. We want to give them the best possible experience with the cloud. Yes, sir? Is your management server a physical server? And why is one enough, with no HA? Right, the management server is a virtual machine.
And just like the Compute Driver VM over here, the management server will be protected with the clustering technology that we have, including VMware HA. It's not necessary for this to stay up 100% of the time in order for your cloud to run, so that's why we felt no need to actually have two copies of it. OK, any other questions? And I'm going to have to space out the questions so that our microphone gentleman has a chance to get to you. Any other questions? OK, going once, going twice. All right, I'll move on. OK, so that's enough talking through slides. I would like to actually show you how the solution works. So for those of you who are actually running OpenStack in production, how many of you are deploying OpenStack using yum or apt-get? OK, we have a couple of brave souls. All right, who else is using a distribution like Mirantis or RDO or Canonical? OK, a few more hands there. And so for those of you who haven't raised hands, or for anyone in the room, how many folks are actually using VMware in production and are interested in using OpenStack on VMware? OK, all right. So we believe that our automation utility will be very attractive to VMware administrators, and we'll show you in a bit why that's the case. And then we'll also transition into showing you some of the operations benefits you get with using VMware Integrated OpenStack. So I'm just going to switch over to my demo screen. OK, I'm sorry, I can't stretch that out any further without cutting off some of the screen. So for the folks in the back, if you're not able to see it as well as you'd like to, please come forward so you can get a better view. So how many folks know what an OVA is, a virtual appliance in VMware language? OK, our management server, our build server, is distributed as an OVA, which is a virtual appliance that you install on the VMware platform.
Once that OVA is deployed, you have a plug-in in the vSphere Web Client, which is the main administrative interface for any of your VMware administrators. So I'm going to go ahead and click on the plug-in, and I'm going to click Deploy OpenStack. Now, I can enter the values fresh, or I can use a saved settings file that has all the entries that I created for a previous cloud. So I'll go ahead and open that file, and what this will allow me to do is have all of my prompts pre-populated with information about the cloud that I want to run. So I enter my vCenter credentials. vCenter is the brains of the VMware vSphere installation. And I'm going to choose the cluster that will run the control plane, which I've named Management Cluster. So I go ahead and select that and click Next. Now, two of those networks that we talked about before need to be created in advance, so I select the port groups for the relevant networks. Again, that management network is the control plane communication network, and the API access network is the network that users will communicate with the OpenStack cloud on, using those IP address values. And again, they're sitting behind a pair of load balancers, HAProxy, so they have a virtual IP as well as a DNS name. Now, the next thing we have to do is specify which vSphere cluster will be the first cluster assigned to OpenStack, and that's Compute Cluster right here. So we've selected that and click Next. We also have a storage abstraction technology within VMware vSphere. So for those of you who are familiar with configuring storage on a KVM-based OpenStack platform, you may be familiar with going into the cinder.conf file and specifying which driver for the EMC array or the NetApp array or the Pure Storage array, whatever, in that configuration file, as well as which LUNs you're going to be communicating with.
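For comparison, that driver-based setup on a KVM platform lives in cinder.conf. A rough fragment using the VMware VMDK driver that ships with Cinder might look like this (host, username, and password values are placeholders, and VIO generates this configuration for you rather than having you write it by hand):

```ini
[DEFAULT]
# VMware VMDK driver included in upstream Cinder; provisions volumes as
# VMDK files on vSphere datastores instead of raw LUNs on an array.
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com
vmware_host_username = administrator@vsphere.local
vmware_host_password = <password>
```

The contrast with the wizard flow described next is the point: the same information is collected through a few clicks instead of hand-edited configuration files.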
Well, because we have included a driver that understands how we do storage, we can tell OpenStack how to use VMware datastores. So the storage configuration is greatly simplified. I use my VMware datastores, and I just select which ones I want to use. So I've selected these two datastores. I can see the capacity as well as free space, and click Next. The same thing for my image storage in Glance. So I've selected these two datastores, and I'll go ahead and click Next. This is where you choose your networking options. So again, if you have very simple networking needs, you just want a flat network, and you don't care about all the bells and whistles that come with Neutron, you would go ahead and use the virtual distributed switch. However, if you want overlapping IPs, metadata services, security groups, and the full Neutron feature set, that's when you select VMware NSX networking. So I've entered in my values and click Next. And I choose my authentication mechanism, either the local Keystone database or an LDAP server. Finally, I'm going to select my syslog server. This is where all of my OpenStack logs will be sent, so I can have some kind of meaningful analysis when I have an error. We have a solution called vRealize Log Insight, but you can use any syslog aggregator that you would like. So whether it's Splunk or the ELK stack or any of those tools out there that you're already comfortable with, just go ahead and indicate that IP address. So now I see a summary of my entire OpenStack installation, all the settings that I have configured. And I can scroll down and see the control plane that will be deployed. So again, a display of all of those control plane VMs that we talked about before. And it's important to note that we make sure that we have anti-affinity settings in our vSphere clusters. So you'll never have controller 0 and controller 1 on the same physical hypervisor, because if that one hypervisor goes away, then your controller services go down.
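The anti-affinity guarantee just described, never two VMs of the same role on one hypervisor, can be sketched as a simple placement check. This is an illustration of the constraint only, not vSphere DRS's actual placement algorithm, and the host and VM names are invented:

```python
# Greedy placement honoring per-role anti-affinity: two VMs with the same
# role (e.g. controller-0 and controller-1) never land on the same host.

def place_vms(vms, hosts):
    """vms: list of (name, role) tuples. Returns {vm_name: host}."""
    placement = {}
    roles_on_host = {h: set() for h in hosts}
    for name, role in vms:
        for host in hosts:
            if role not in roles_on_host[host]:   # host free of this role?
                placement[name] = host
                roles_on_host[host].add(role)
                break
        else:
            raise RuntimeError("need more hosts to satisfy anti-affinity for " + role)
    return placement

vms = [("controller-0", "controller"), ("controller-1", "controller"),
       ("rabbitmq-0", "rabbitmq"), ("rabbitmq-1", "rabbitmq")]
placement = place_vms(vms, ["esxi-1", "esxi-2"])
```

With two hosts, each HA pair ends up split across them, which is exactly the property that keeps one hypervisor failure from taking out both members of a pair.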
So we make sure that when we're deploying the control plane, we never have two of the same type of component on the same physical server. So I'll go ahead and click Finish. At this point, the management or build server will clone a bunch of virtual machine templates and start running a bunch of Ansible playbooks based on the settings that we entered. So after time passes and all the Ansible playbooks run, we see that our OpenStack cloud is running. I can go in here and see my control plane. Now, one of the questions that I get when I show this demonstration is, well, what are the users going to do? Are they going to be using the VMware interface for their access to the cloud as well? And that is certainly not the case. We want to make sure that users have the same experience on our distribution of OpenStack as they would have on others. So as you can see here, your users continue to access the cloud using Horizon or using the APIs and the CLIs. With each release of VIO, we make sure that we pass the community DefCore standards, meaning that we are at API parity with any distribution of OpenStack that is ratified or accepted by the Foundation. OK, before I go on to showing the day-two operations aspect, are there any questions about the deployment? OK, so the question is whether, by default, the networking is going to be VLAN networking or VXLAN networking. That depends on which networking option you use. If you're using the virtual distributed switch option, then it will be VLAN networking. If you're using VMware NSX, it will be VXLAN networking for your tenant networks. OK, yeah? And it still shows Access & Security there. The security group is useless, right? Yeah, the security group will not be used. Oh, sorry, the question was that on our screen, we still have the Access & Security view even if you are using the virtual distributed switch, because that's standard Horizon.
And in that case, if you're using the VDS option, then yes, security groups will not be used. OK, any other questions on the deployment? Does this look pretty easy compared to some of the things you've seen in the past? Very fast, yeah. I mean, if you have really fast storage, like I was blessed to go test on an all-flash array once, all of my virtual machines finished cloning in under a minute, and then it took about 25 to 30 minutes just for the Ansible playbooks to run. So depending on the type of storage you have, you can have a production-ready control plane ready to go in under an hour. Yeah, and we think that's pretty cool, and so far, our customers do as well. So just on that alone, please go ahead and kick the tires on VIO. We'd like to hear more feedback. So before I continue along with the presentation, I just want to go into some of the day-two operations, because that's where we also see some value add for our customers. And that's pretty small. Let me see. OK, so I'm just going to cut off the screen a little bit. I'll do the best I can to not fall over the edge. OK, so vRealize Operations Manager is a tool that VMware develops to allow health scoring of your infrastructure. And by default, it will report on all of the virtual machines that are seen in the VMware vSphere environment. What we have done at VMware is add some OpenStack intelligence to vRealize Operations Manager. And you can see that here with the OpenStack tenants view. So if I had multiple tenants, and not just admin, I would see information for all of those tenants. So if I had an engineering tenant, a finance tenant, or whatever else have you. And as I scroll over the various resources, I can see information about them. So for example, this instance has the m1.small flavor. It's an active instance. And we can see that it has a green health score, meaning that it's running well.
Then you can move over to the Storage tab and see the datastores that are assigned to OpenStack. You can see their free capacity as well as total capacity. We have our network infrastructure view that's going to look at all the network virtualization components, including the controller, the manager, and so on. Then on the Compute tab, we're able to see our compute cluster, as well as the individual hypervisors that make up that cluster. So we can get overall health scores on the physical infrastructure as well. And then we move over to the OpenStack Controllers view. We divide up our views according to compute, network, storage, and then the rest of the management services on the bottom. Now, we can see here that we have a red health score for our storage services. Something is going on there. Some of our users are reporting that they're having issues being able to allocate storage. So even though we have the highly available setup, there may be some kind of impact if one of your controllers is down, for example. So what I'm going to do is look at our Log Insight product that aggregates all of our syslog messages. And I can see here that we have some errors that we can check out. If I scroll down, and I'm going to have to decrease the size for a little bit, I can go into the detailed syslog messages. And let me expand that for a little bit. I see that there is an error happening with the cinder user and its connection to the database. So if that's happening, there may be something wrong with the cinder-volume service, for example, and it needs to be restarted. So I can go to the web client. And I will go a little bit further and check out the command line to see what's going on. All right, so I'm going to check the status of my cinder-volume service. I see that it's in a stopped or waiting state, which we do not want if we want to have a fully functioning cloud.
So I go ahead and start that service again, and now it's in a running status. If I go back to vRealize Operations and wait for another polling interval, once I refresh the screen, I should go back to green. All of my health services should be reporting that Cinder storage is properly accessible. And just to go over some more management tasks, I'll go to this view over here. So ongoing administration tasks happen within the vSphere Web Client. Again, we want to emphasize for your VMware administrators that they do not have to learn a new interface just to manage the OpenStack cloud. They can do things such as expand compute capacity, apply patches, and expand storage capacity from within the vSphere client that they're accustomed to for managing their standard virtualization infrastructure. So if I click on Nova Compute, for example, I'll see here that I have my existing compute driver that's managing my vSphere cluster. Now, if I want to add another cluster, I go ahead and click that green plus button, and I select my second compute cluster. I go ahead and click Next. I get an indication that I'll get an additional Compute Driver VM to manage that new cluster that I'm adding to the system. And then I choose the datastores that will be associated with that vSphere cluster. So I go ahead and select datastores six and five and click Next. And now I have an overall view of what's going to be changing in my environment so that I can have expanded compute capacity. So I go ahead and click Finish. I get an indication that my Nova services need to be restarted to accept these changes. I click OK. And there you have it. Once I refresh the screen, my second compute cluster is available for use in the OpenStack cloud. So again, the administrators didn't have to go to any configuration files. They didn't have to go to some other interface. They're using the same interface that they're familiar with to get their work done.
OK, so the question is, would that additional cluster be presented as a new availability zone? No, it would be within the same existing availability zone. However, you can create host aggregates. So for example, if you have a production cluster versus a Dev and Test cluster, you can put them into different aggregates and then link up your flavors with the metadata according to the aggregate that you want your workload to go into. Any other questions on the day-two operations? Either it's brain-dead simple, or I'm blowing minds, or you guys are just ready for OpenStack Summit to be over. OK, we have another question. Can the customer SSH directly to the OpenStack control plane, and if so, where does VMware support stand in that case? OK, let me make sure I understand correctly: can the customer SSH to the management plane for management purposes? Is that what you're talking about? OK, maybe a user SSHes into the OpenStack control plane and makes some changes to the cinder.conf file or the nova.conf file, some customization. After that, where does support from VMware stand? OK, so the question is, if the administrator goes in and makes some changes manually to the configuration files, is that respected by us, and are you still going to be under support from VMware? Now, our stance is, we do not want you to manually change the configuration files, because of the Ansible automation that we have in place. That being said, there are some settings that are possible to change and persist across patches and updates and things like that. However, that's something that's more of an advanced use case. I know we're all OpenStack wizards, but make sure you consult with your sales or systems engineer, as well as our support team, before you make a lasting change, just to make sure that when we run an update, we're not going to overwrite your changes. All right, any other questions? Oh, we have another question at the front.
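The aggregate routing described in that answer relies on Nova matching a flavor's extra specs against host-aggregate metadata (the behavior of Nova's AggregateInstanceExtraSpecsFilter). Here is a simplified sketch of that matching rule; the aggregate names, metadata keys, and cluster names are invented for the example:

```python
# Simplified version of Nova's aggregate / flavor extra-specs matching:
# a host passes only if its aggregate's metadata satisfies every extra
# spec on the requested flavor. Names and keys here are illustrative.

aggregates = {
    "production": {"env": "prod"},
    "dev-test":   {"env": "dev"},
}
host_aggregate = {"compute-cluster-1": "production", "compute-cluster-2": "dev-test"}

def hosts_for_flavor(extra_specs):
    """Return hosts whose aggregate metadata matches all flavor extra specs."""
    matches = []
    for host, agg in host_aggregate.items():
        meta = aggregates[agg]
        if all(meta.get(k) == v for k, v in extra_specs.items()):
            matches.append(host)
    return matches

print(hosts_for_flavor({"env": "prod"}))  # -> ['compute-cluster-1']
```

So a flavor tagged with `env=prod` in its extra specs only lands on clusters in the production aggregate, which is how you steer workloads without creating separate availability zones.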
OK, so the question is, can we federate instances of VMware Integrated OpenStack? We looked at federation for the Kilo release. Some of the functionality that we wanted wasn't exactly there. So one of the things that we considered is having federation based on a common identity store, like using the same LDAP server across multiple VIO installations. That's going to be our first attempt at it. But actually using real Keystone federation, that is something on our roadmap that we are working towards. You have a question? No, it's just that Keystone federation was the thing that I thought would make this so cool. Right. Well, I mean, that was one of my use cases. Right. There are many other cool things about the solution, but again, we want to make sure that Keystone federation is airtight before we put a solution around it. If it's going to require some care and feeding, and it seems that way right now, we didn't want to put it in too early. When's your next release due? We're still in the planning phases for our next release. We have been looking at the Liberty release already, as well as considering some of the blueprint work from the Mitaka release. But we try not to lag too far behind the Foundation. That being said, I can't give a concrete date right now. But we try to stay as current as we can, as long as the features that we want are stable enough and ready to go. Yes, another question. Right. So those solutions don't really have a VMware integration story. So it's not something that we're blocking. It's a matter of them needing to work within our framework in order for us to support it. But at this point, with our distribution of OpenStack, at least, we're working only with VMware NSX for network virtualization. OK. Any questions? All right, so I'll move on to the rest of the presentation. And I actually have good stuff, so don't fall asleep just yet. All right, let's go back and talk a little bit about backup and recovery.
So how are folks backing up your control plane? Are you just doing a tarball? Do you actually have a backup appliance that you're using to make sure your control plane is protected? So how many people are using a tarball? I know you don't want to admit it, but I've seen it in production. So I think some of you are not telling the truth. OK. How many of you are using a backup appliance? OK, cool. How many of you are just hoping for the best, that your cloud will never go down? That may be some of you in the room, too. I know you don't want to put your hands up for that, just in case your employers are around. But we have backup and recovery that's new to our distribution. And we attack backup and recovery on a few fronts. First, we back up the OpenStack database. So all of your metadata will be backed up to a remote NFS share that you can use a backup appliance to take care of. Also, we'll back up our build server. When you make those changes to the Ansible playbooks with that wizard that we showed before for deployment, you don't want to lose that data. So we'll actually back up the entire build server VM and also put that on a remote NFS share that can be backed up with your backup appliance. Now, recovery. Recovery actually works. We've actually tested it. And another thing that you can do is, if for any reason one of the control plane components gets corrupted, like controller 1 goes down, or your memcached 2 VM goes down, from the command line you can actually tell VMware Integrated OpenStack to recover that specific component of the control plane. It will clone a VM and run the relevant Ansible playbook commands on that new VM to get your component back up and running. And it's as if it never went away. Again, the benefit of having a pair of your various controller components running behind load balancers is that if you lose one piece, your users aren't really going to notice it, and it gives you time to recover.
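The per-component recovery described above is essentially a reconciliation step: compare the desired control-plane inventory against what is actually running, and rebuild whatever is missing. A minimal sketch of that idea, with invented component names and with the clone-and-playbook work reduced to a returned action list:

```python
# Reconciliation sketch: find control-plane VMs that should exist but
# don't. In the real product each missing component would be recovered
# by cloning a template VM and rerunning the relevant Ansible playbooks;
# here we just compute which components need that treatment.

DESIRED = {"controller-0", "controller-1", "memcached-0", "memcached-1",
           "mariadb-0", "mariadb-1", "mariadb-2"}

def plan_recovery(running):
    """Return the sorted list of components that need to be recovered."""
    return sorted(DESIRED - set(running))

# memcached-1 has been corrupted and powered off:
actions = plan_recovery(DESIRED - {"memcached-1"})
```

Because every component runs as an HA pair behind the load balancers, the cloud keeps serving users while the missing member is rebuilt.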
So we're pretty happy with the backup and recovery story that we have with our installation. And hopefully you give it a try and let us know what you think. So overall, all of the new features that we've included in this new release, as of about a month ago when we released it: it's based on Kilo. We have the automated upgrade support. So if you started with VIO 1.0 and wanted to go to 2.0, we have an automated upgrade process that I'll show you. I'm not just pulling your leg. We have actually successfully upgraded. Now, when I first joined VMware and my product managers, like Santos, told me that we were going to upgrade OpenStack, I told them, sir, you are lying to me. There's no way that's going to happen. The only upgrade I know is to stand up a completely separate cloud and then retire the other one into the sunset. And they actually proved me wrong, and I'm glad for that. Also, we've included support for auto-scaling. And this is not new to OpenStack, per se. But we wanted to actually work through some of the concerns folks had about Ceilometer. And we think that the approach we've taken to deploying Ceilometer with the auto-scaling feature is going to be something that you'll enjoy in our release. In the vSphere and NSX components, we've exposed more features, including Load Balancing as a Service, through the relevant APIs in OpenStack; the control plane backup and recovery that we just discussed; as well as some miscellaneous features, such as more localized languages for the cloud, and the capability to do some customization of branding within Horizon. So we've had customers change the logos, in case you didn't like the VMware logo. And sometimes they would deploy a patch, and then all that customization would go away. Well, in this release, the customization persists. And we're fully supporting our customers with that. So, one second. We're going to actually focus in on how we do upgrades.
And how many of you have heard of the term blue-green updates or upgrades, from Jez Humble and Martin Fowler and those guys? Anybody heard of blue-green? OK. Well, for those of you who are not familiar with this concept, we have the capability to deploy a new control plane, since it's all virtual machines. And since these are usually pretty capable physical servers, we're able to temporarily have two cloud stacks ready to go in the production environment. We start off with our existing control plane for VIO 1.0, or whatever your original version is. And we update that build server, that management server, that deploys all of the OpenStack code. Next, we deploy a green control plane. This will be the next version that you're going to. Once that's deployed, we migrate the database. Then it's up to you to go ahead and verify that the new cloud actually works as expected. And at the end of the process, we're actually going to move over that virtual IP that we defined, with the DNS name, so it's associated with the new cloud. Now, this seems like an awful lot of work. But the reason that we're doing it this way is because we want you to have a rollback capability. If you're doing an in-place upgrade, there are some challenges in making sure that you're not going to lose your original settings. So once you have the 2.0 cloud up and running, or 3.0, or whatever version we go to next, you can choose to keep your original cloud framework around for rollback purposes, or you can get rid of it. It's up to you. So we created a short video just to show how this works. And then we can discuss a little bit more in the Q&A section. OK, so I'll speed through this a little bit, since we are running a little short on time. So for those of you who are familiar with Icehouse, this is how Icehouse looks with the Horizon interface. And I show you that I have some existing workloads here that I want to make sure continue to run.
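The blue-green flow just described can be sketched as a small state machine: deploy the green stack, migrate the database (quiescing blue), verify, and only then flip the virtual IP. This is an illustrative model of the sequencing, not VIO's actual upgrade code, and the deployment names are invented:

```python
# Blue-green upgrade sketch: the VIP only moves to the green stack after
# it has been deployed, the database migrated, and the new cloud verified.
# The blue stack is kept around (stopped) as the rollback target.

class BlueGreenUpgrade:
    def __init__(self, vip, blue):
        self.vip = vip                   # DNS name the users keep using
        self.blue, self.green = blue, None
        self.vip_target = blue           # VIP starts out pointing at blue
        self.state = "running-blue"

    def deploy_green(self, green):
        self.green = green
        self.state = "green-prepared"    # new binaries up, reachable via temp IP

    def migrate_database(self):
        assert self.state == "green-prepared"
        self.state = "migrated"          # blue control plane quiesced here

    def switch_to_new_deployment(self, verified):
        assert self.state == "migrated"
        if not verified:
            return                       # keep blue as-is for rollback
        self.vip_target = self.green
        self.state = "running-green"

up = BlueGreenUpgrade(vip="openstack.example.com", blue="vio-1.0")
up.deploy_green("vio-2.0")
up.migrate_database()
up.switch_to_new_deployment(verified=True)
```

The ordering is the whole point: because verification happens before the VIP moves, a failed check leaves users on the untouched blue stack.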
The upgrade process does not affect the data plane at all; only the control plane is touched as we go to the new version. So let me hold on for a second. I go through and verify that, yes, my VMware Integrated OpenStack is still version 1.0, and it's running the Icehouse release, which is 2014.1.4. So on the build server, and I apologize for the small text, I go ahead and stage my upgrade patch. Once it's staged and I see it listed when I do a patch list, I go ahead and install my patch. Very simple. The build server is now up to the 2.0 version of VIO, which runs the 2015.1 Kilo release. So I go back into my plugin, I see my updated version numbers, and now I know I'm ready to deploy my second control plane. And I'll go ahead and do that right now. I see here that I have a new Upgrades tab available, and then I have this blue Upgrade button. So I click on that blue Upgrade button, and a new dialog is shown. I give a name to my deployment; I call it VIO 2.0. And I give it a temporary virtual IP for users to access the cloud. So it goes ahead and provisions the new control plane with the version 2.0 binaries. And once it's in a prepared state, I can go to the next phase, which is migrating my database. So I right-click on my original installation and select Migrate Data. At this point, the control plane services for my original cloud are quiesced; they're stopped, because we don't want new data coming into the database while we're trying to migrate your data. So now I'm in a migrated status, and I can use that temporary virtual IP to check on my new cloud. I'll do that in a new browser tab. I log in, and I can see that there's a change to the Horizon dashboard, and I'm still able to see my existing workloads, including my instances.
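The "check on my new cloud" step above can be automated as a small smoke check before cutover. The sketch below is a hypothetical helper, not a VIO tool: it assumes each control plane service on the temporary VIP can report a JSON document with a `release` field (that shape is my assumption for illustration), and flags any service that does not report the expected Kilo-era release.

```python
import json

# Illustrative pre-cutover smoke check (hypothetical, not a real VIO
# command). Given the JSON each service returned when queried on the
# temporary VIP, confirm every control plane service reports the
# expected release (e.g. 2015.1.x for Kilo) before switching DNS over.

def smoke_check(service_reports, expected_prefix="2015.1"):
    """service_reports: {service_name: raw JSON string with a 'release' field}.
    Returns a sorted list of services that do NOT report the expected release."""
    failing = []
    for name, raw in service_reports.items():
        try:
            release = json.loads(raw).get("release", "")
        except ValueError:
            release = ""  # unparseable response counts as a failure
        if not str(release).startswith(expected_prefix):
            failing.append(name)
    return sorted(failing)
```

An empty result means every service answered with the new release, so the VIP/DNS switch is safe to run; a non-empty result is the signal to stay on the blue side.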
I can see that my network connections are still where I expect them to be. For example, my virtual router is there, and my instances are still connected to their tenant network. So now that we've verified that that's all good, we go back to finish the upgrade process, which is to switch the virtual IP over to the new cloud. So I do that with the Switch to New Deployment option, and now I have my new cloud up and running, and my old cloud is completely stopped. If I go back to that browser tab, I can access my cloud using the actual DNS name that we had for the original cloud instead of the temporary IP address. So I can log in and continue to use the infrastructure as I normally would. So at this point, we've reached the end of our talk. Are there any other questions based on anything we've covered? [Audience:] If you start off with the virtual distributed switch, can you add VMware NSX later? So we advocate the virtual distributed switch for very simple use cases, maybe dev/test or a POC. In that case, you probably wouldn't want to upgrade that installation, so we really haven't worked on a migration strategy from the virtual distributed switch to VMware NSX, because we consider the VDS to be just a testing ground, not something that you're going to put in production. OK, any other questions? OK, in that case, oh, yes, one other follow-up question right here. Yes? [Audience:] Do we use it for OVS? No, not in this installation, no. You have a question? [Audience:] I don't see any object storage support in VIO. Will you use Swift or integrate with Ceph storage? So we have a reference implementation of Swift that's included in one of the control plane VMs. It's only a reference implementation, just because we do not have an object storage solution in-house. So we partner with third parties like SwiftStack or EMC ViPR or Nexenta to provide production-grade object storage. OK, and that will wrap it up for now.
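The verification done by eye in the demo (instances still present, router still attached, tenant networks intact) can also be expressed as a before/after inventory comparison. This is a hypothetical helper for illustration, not a VIO feature: it assumes you captured resource name sets before the migration and again from the green cloud afterwards.

```python
# Illustrative post-migration check (hypothetical helper, not a VIO
# command): compare the instances/routers/networks recorded before the
# upgrade with what the green cloud reports, and flag anything missing.

def missing_resources(before, after):
    """before/after: {resource_type: set of resource names}.
    Returns {resource_type: sorted list of names that disappeared}."""
    missing = {}
    for rtype, names in before.items():
        lost = names - after.get(rtype, set())
        if lost:
            missing[rtype] = sorted(lost)
    return missing
```

An empty dict means the data plane inventory survived the control plane swap, which is exactly what the demo verifies by clicking through Horizon.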
Please stick around for the next session. We'll have more VIO goodness coming towards you. All right, thank you.