OpenStack team, like last year sometime. You can read the agenda of what we are going to talk about. It was very, very complicated for me to learn OpenStack. As a product manager I didn't want to go too much into the weeds, didn't want to run the Python client and all that. It was just taking too much of my time. And I used to work with a lot of distributors and some DIY customers, and it used to take us several weeks to get OpenStack deployed and then even more weeks to troubleshoot it because it was spewing so many logs. So one of my personal satisfactions out of making this product is that we have made it very simple. It does not solve the world hunger problem, but it makes it easier for you to deploy a production-ready OpenStack, starting from your vSphere Web Client. That was one of my original goals. When I joined as a product manager I thought, if enterprises are going to use OpenStack, it has to become simple. It has to become enterprise-ready. So this is our first step: deploying OpenStack. On top of the integrations we have done, we have added an installer and configuration service in VMware Integrated OpenStack that makes it easier to deploy, configure, and manage OpenStack. So what is VMware Integrated OpenStack? The integration work itself is independent of VMware Integrated OpenStack: we have done all the integration with the core compute, network, images, and storage. All the integrations are in place, right? So Nova can talk to vCenter; it can spin up VMs; those drivers are all upstream. Cinder can talk to vCenter, spin up disks, and attach them to VMs. Glance can store images in vSphere datastores, so you don't have to stand up a different solution for your images. And Neutron can use the NSX driver to create overlay networks, and it can also use the DVS. We recommend using NSX because it gives the most flexibility for cloud applications. 
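The integrations above are wired up through Nova's configuration: the compute driver is pointed at vCenter rather than at an individual hypervisor. As a rough sketch, the fragment below checks a minimal Icehouse-era `[vmware]` section; the option names follow the upstream VMwareVCDriver, and all the values are placeholders, not a real environment.

```python
# Sketch: the upstream vCenter driver is configured through nova.conf.
# Option names follow the Icehouse-era VMwareVCDriver; values are
# illustrative placeholders only.
from configparser import ConfigParser

NOVA_CONF = """
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = vcenter.example.com
host_username = administrator@vsphere.local
host_password = secret
cluster_name = compute-cluster-1
"""

def vmware_driver_settings(conf_text):
    """Parse the [vmware] section that Nova's vCenter driver reads."""
    cp = ConfigParser()
    cp.read_string(conf_text)
    # The driver choice is what makes Nova talk to vCenter at all.
    assert cp["DEFAULT"]["compute_driver"] == "vmwareapi.VMwareVCDriver"
    return dict(cp["vmware"])

settings = vmware_driver_settings(NOVA_CONF)
print(settings["cluster_name"])  # the cluster Nova will expose as one compute resource
```

The `cluster_name` option is the key to the cluster-level view discussed later: Nova sees the cluster, not the individual ESX hosts.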
Now, those integrations are all upstream, right? You can go and download them from the OpenStack community; you can get them from any other distributor. So what is different in VMware Integrated OpenStack? As I mentioned, the additional component here is an installer and a configuration piece, which is based on the vSphere Web Client. That is the net-new component in VMware Integrated OpenStack. And of course a lot of testing and hardening before we go to market and sell it to customers: we have to do a lot more testing in QA and make sure all of these things work, right? That's where a lot of our time is spent at this point. All of this is bundled as an appliance, and you get it as an OVA file. You go to your vSphere Web Client, you download the OVA, and you start the installation. What do you get when you install? You get a well-tested, production-ready architecture: a load balancer, behind which we put all of these services. You will see some things which are not necessarily core OpenStack, things like Memcache, RabbitMQ, the DHCP agent (which is somehow missing from this slide), and database services. These things are all needed to run OpenStack in production. This is nothing new, right? Anybody who has run OpenStack at scale knows that Keystone will start throttling if you don't have a Memcache in there, because Keystone gets bombarded with all the session authentication requests. So all these things are needed. And how do we know? We run a very large OpenStack deployment internally at VMware for our own developer use cases. In fact, all the development work that we do runs a CI job using OpenStack APIs in that OpenStack cloud. So it's a very democratic way of development, right? We consume what we develop and ship to customers. This architecture is well tested from our side. Out of the box, we have tested it for 2,000 VMs: one vCenter, 2,000 VMs. 
That's a pretty sizable OpenStack environment, deployed in about 30 to 45 minutes, right? So what happens when you're deploying it? It's a very simple step-by-step process. You get the appliance, as I mentioned; you go to the vSphere Web Client; you get a plugin registered. I'm going to show all this; this is what we will show in the demo, essentially. This is how we create OpenStack. So you start with an appliance, and you get an installer and configuration service. You can think of it as a wrapper around Ansible or Puppet or Chef, any of the configuration tools; we are using Ansible inside it. And that goes ahead and creates all those VMs, right? So you don't have to manually create all those 15 VMs that I showed. It will take an Ubuntu template, clone all of the VMs, and give them roles. Some of them will become DHCP, some of them will become Memcache or RabbitMQ: all just standard vCenter operations of cloning from an existing template, right? In terms of higher-level architecture, this is how we see the OpenStack world once you deploy it. There's a management cluster; it's just a cluster that you have dedicated to run all your OpenStack components. Everything gets deployed in the management cluster. You can give it more resources, and you can enable vSphere HA on those VMs to protect against hardware failure. The architecture itself has HA, so there is application failure protection too. So out of the box it's very resilient: you will be protected from host and application failure, which is important because this is the control plane, right? It should not go down; otherwise you will have downtime in your environment. And even though you might not care about the resiliency of tenant VMs, you do care about the resiliency of the control plane. That control plane needs to be on good, solid footing. And then you can allocate multiple clusters. The key thing to note in our integration is that we expose resources to Nova per cluster. 
Nova does not see each individual host; it sees the whole cluster as a compute resource. The advantage is that we can later let DRS do optimal packing of the VMs, so you get the benefit of both worlds. We'll explain that in more detail later on. So I'm going to focus on how all of this is created, right? All of these VMs are created for OpenStack, the whole architecture that I just talked about. This is the deployment where we will create OpenStack in eight minutes. It's a recorded video, but the fastest we have done an OpenStack deployment is when a customer had SSDs in their environment: it took 15 minutes for a complete OpenStack deployment. I was surprised. It takes us around 30 to 45 minutes to deploy OpenStack nowadays. So you start, as I said, with the OVF. You get the VIO appliance; it's a simple, standard OVF. If you have VMware administrators in the audience, they are familiar with this: you just point to a regular OVF file or the URL of the file. Once you download the OVF, it will ask for some properties like networking. Where do you want to deploy this OVF? How do you want to hook it up to your network? The standard EULA and all those things. Let's go to the next step. Where do you want to deploy this? You choose the cluster. These are the standard VMware techniques: you put the VM in a particular cluster, you choose it, you power it on. Where do you want to put the backing storage? In this case we are using the vSAN datastore; you can use any other datastore as well. The key aspect in most of these deployments is networking; essentially, networking and certificates are the most challenging parts of most deployments. The one network that it needs is the one on which it will communicate with vCenter, because it's going to trigger all the cloning and all those operations through vCenter. That network should be connected to vCenter, so that both this server and vCenter are able to talk to each other. 
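The two-level placement just described (Nova schedules to a whole cluster, DRS then picks the host inside it) can be sketched in a few lines. This is a toy model with made-up cluster names, VM counts, and "least loaded" policies, not the real Nova scheduler or DRS algorithms.

```python
# Toy model of two-level placement: Nova sees clusters, DRS sees hosts.
# Cluster names, host names, and load numbers are illustrative only.

clusters = {
    "compute-cluster-1": {"esx-01": 6, "esx-02": 2},  # host -> running VMs
    "compute-cluster-2": {"esx-03": 5, "esx-04": 5},
}

def nova_pick_cluster(clusters):
    """Nova-level view: each cluster is one compute resource;
    pick the one with the fewest total VMs."""
    return min(clusters, key=lambda c: sum(clusters[c].values()))

def drs_pick_host(hosts):
    """DRS-level view: place on the least-loaded host in that cluster."""
    return min(hosts, key=hosts.get)

cluster = nova_pick_cluster(clusters)   # cluster-1: 8 VMs vs cluster-2: 10
host = drs_pick_host(clusters[cluster]) # esx-02 carries the fewest VMs
print(cluster, host)
```

The point of the split is that OpenStack's accounting stays coarse (per cluster) while vSphere keeps the freedom to rebalance VMs across hosts underneath it.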
That's how it is going to orchestrate all that cloning of VMs. This part is standard: you will need the gateway, the DNS servers, the netmask, the standard network configuration. Once you have this, it will register for SSO. You don't have to worry about authentication: it will leverage vCenter Single Sign-On, which reduces the traffic between this server and vCenter. So you get single sign-on, the service is registered, and you will see a plugin over here in your vSphere Web Client. Once you click on that, you will get a whole bunch of options, including deploying OpenStack; and once you deploy it, you can configure it and add datastores and clusters. We'll also add things like removing a cluster or removing a datastore without introducing downtime, because we can vMotion things around. Then you can start deploying OpenStack. So far so good: it's a standard OVF deploy, and you've got the plugin registered. Now you start deploying OpenStack. The first thing it needs is the vCenter where it will deploy OpenStack; OpenStack communicates with vCenter, so it needs the credentials for vCenter. Then it will ask where you want to create the OpenStack deployment. As we said, we pick a cluster; we call it the management cluster. It requires a minimum of three hosts, mainly for HA reasons, and specifically for the databases, because we put a database VM on each of the hosts so that you get a nice HA configuration. This particular deployment will require a minimum of two networks: one network where your APIs are coming in, your Horizon is coming in, all your users are talking and consuming OpenStack; and one network for all the management traffic, the communication between OpenStack, vCenter, RabbitMQ, and the other components of OpenStack. Just two simple networks. As I mentioned, there's a load balancer, so it requires an outbound, external IP, and the inbound to relay the traffic to the controllers and RabbitMQ. 
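The prerequisites just listed (a management cluster of at least three hosts, plus an API network and a management network) amount to a small preflight check. Below is a minimal sketch of such a validator; the function name and messages are my own, not part of the product.

```python
def check_management_prereqs(hosts, networks):
    """Sketch of the preflight checks described above: at least three
    hosts (for HA, with a database VM per host) and two networks
    (API/user access and internal management traffic)."""
    problems = []
    if len(hosts) < 3:
        problems.append("management cluster needs a minimum of three hosts")
    for needed in ("api", "management"):
        if needed not in networks:
            problems.append(f"missing {needed} network")
    return problems

# A valid layout passes cleanly...
ok = check_management_prereqs(["esx-01", "esx-02", "esx-03"],
                              {"api", "management"})
# ...while an undersized one reports both gaps.
bad = check_management_prereqs(["esx-01"], {"api"})
print(ok, bad)
```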
It's going to go ahead and deploy all these components. I'm going to pause at this moment. When I joined this group, this is where we used to spend weeks, months, and many man-hours: me just trying to learn, and there were people trying to build this in production. It was very tough. Building this tool makes it really easy. I'm not saying this is the only tool; there are dozens of such tools, but this makes it really easy, and it is the starting point for an enterprise-grade OpenStack, because you should be able to do this part easily. This makes it really easy for VMware administrators: they know all the tricks of clustering and datastore creation and all that stuff. Then you can select which cluster you want to use for tenant nodes. Here we are using compute cluster 1; I will come back after the deployment and show you how it's exposed in Horizon and available for consumption. Which datastores do you want to use for Nova? That just gives it more storage. Where do you want to put your images? Select any of the datastores: no need for Swift or additional datastores specifically for Glance; you can just use the same datastores. Networking options: this is of interest to you. There are two options for networking. If you are just doing IP dial tone, your VMs don't need anything fancy; you just want to hook them onto a VLAN, so use the virtual distributed switch. But most cloud applications need networking overlays, or at least scalable networking, so you end up using NSX. You will enter a bunch of NSX parameters; I am going to skip that part. The only thing of interest here is the extra network that NSX needs for the data overlay. You don't want to use the management or external networks; this is an additional network that you will need for NSX. That's the data overlay network, and all the DHCP allocation happens on that network as well. Then there is your Horizon password: you give it the Horizon admin password, and we will configure it. 
Once Horizon is up, you log in using this password. This one is my favorite. It is not required, but I would almost never deploy OpenStack, now or going forward, without a syslog server. I have spent several hours of my life looking through Nova and Neutron and Cinder logs, and I don't want to do it again. So please use your favorite syslog server and point the syslog at it from day one. You will need it; it's highly recommended. That's my greatest learning from all my OpenStack work: syslog is absolutely important. Splunk, Log Insight, whatever is your favorite, just pick it, start using it, and start scripting on top of it so that it makes your life much easier. Because a simple thing like "my VMs are not getting IPs" is spread between the DHCP agent, Nova, and Neutron. It's really frustrating when you have to switch between VMs and hunt for the right logs, and a syslog server takes a lot of that debugging hassle away. So once you have fired that off, it's going to go ahead and create all those VMs for you, a standard cloning operation; this is what we have been doing in VMware for ages. We take the Ubuntu template VM and clone it. All 15 VMs will be created, the networking hookups will be done, software bundles will be uploaded, and they will all be configured for their specific roles. Here the IPs are assigned already. If I move ahead a little bit, you will see that they are all installed on that management cluster. You can go to Hosts and Clusters and see all these components installed on the management cluster; there are two of each of them, and the database has three VMs. There you go: OpenStack deployment ready. I mean, it's a recorded video, that's why it's done in 8 minutes, but I can guarantee that nowadays you can do it yourself in 40 minutes. You can deploy OpenStack fully, production-ready, allocate as many clusters as you want, and start using it. 
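The "point everything at syslog from day one" advice above is easy to follow with Python's standard logging module, which is what the OpenStack services themselves are built on. A minimal sketch, assuming a placeholder syslog target (`syslog.example.com`); for the runnable demo the network handler is swapped for an in-memory one so no server is needed.

```python
# Sketch: centralize service logs via syslog from day one.
# "syslog.example.com" is a placeholder; the demo below substitutes an
# in-memory handler so it runs anywhere.
import logging
import logging.handlers

def make_service_logger(name, handler=None):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.propagate = False
    if handler is None:
        # Production path: forward records over UDP to the syslog server.
        handler = logging.handlers.SysLogHandler(
            address=("syslog.example.com", 514))
    handler.setFormatter(
        logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

# Demo: capture in memory instead of sending over the network.
buffer = logging.handlers.BufferingHandler(100)
nova_log = make_service_logger("nova-scheduler", handler=buffer)
nova_log.error("No valid host was found")
print(buffer.buffer[0].getMessage())
```

With every service logging to one place, a cross-service symptom like "my VMs are not getting IPs" becomes one search instead of a hunt across VMs.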
So this is the cluster that we had allocated, compute cluster 1. As you can see: 96 vCPUs, all the storage that you allocated. Everything is ready, and you can start creating VMs, start consuming the infrastructure, start creating disks. If you have NSX, you can start creating private tenant networks for complete cloud applications, start firing off the OpenStack APIs from a command line, and start consuming OpenStack. Now, the next thing is: once you have deployed this OpenStack, how are you going to operate it? Because this thing is not going to operate on its own. You need some tools to operate it, and that's where Mike is going to help. All yours. Okay, so once we've got the deployed environment, let's look at a sample where I have created a three-tiered topology. Here I've just run some scripts and created a web, an application, and a database tier, and I've got logical networks for each one of them: a VM on each of those logical networks, connected to a logical router, and then with a connection to an external network. So I've got that available. Let's take a look in vCenter and see, as a vSphere administrator, some of the things that we've done to help you manage an environment like this. For those of you who are not familiar with VMware at all, vCenter is really the management platform that administrators use to do everything from creating hosts and networks and datastores to actually creating VMs. We're going to look at the Hosts and Clusters view here. I know this is a little bit small, but you can see a set of hosts here, and you also can see a few VMs. You'll notice that three of these VMs have UUIDs: they get created with the OpenStack UUID, which is useful to an OpenStack person but maybe not quite so interesting to the virtualization administrator, who doesn't really know what that is. 
Below that you can see three VMs with a label that starts with "volume", and those have to do with Cinder volumes. vSphere does not have a concept of a disk that is independent of a VM, so we actually do something a little tricky here: we create a stub VM and attach those persistent Cinder volumes to it, and then we can move them to other VMs as you attach and detach them, but they persist connected to that stub VM. As an admin within vCenter, we've given you a little bit of additional capability. You'll notice here there are a couple of portlets that are OpenStack-specific: we're capturing things like the VM name, the tenant name, the flavor, even the logical network, and we're associating tags with those. So as an administrator I could go look for all the VMs that were created by a specific tenant, or, in this case, I'm going to search for all the VMs that were created with the m1.tiny flavor. That's as simple as issuing the search at the top: I enter "m1 tiny", I get that list of VMs, and then I can manage them in the same way I manage my non-OpenStack-created VMs. So we talked about the idea of creating the environment and presenting a cluster to OpenStack. What does that do for you? Well, the Nova scheduler looks at the vSphere clusters that are available and places at the cluster level, and then the Distributed Resource Scheduler, a vSphere construct for managing resources and moving VMs around within your cluster, will actually handle the placement of that VM. 
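The tag-based search in the demo boils down to filtering the inventory by OpenStack metadata attached to each VM. A minimal sketch with made-up VM names and tags; the real lookup happens inside vCenter's search, not in Python.

```python
# Sketch of the metadata/tag search shown in the demo: VMs created
# through OpenStack carry tags like tenant and flavor. All data here
# is made up for illustration.

vms = [
    {"name": "9a1f-web", "tags": {"tenant": "demo", "flavor": "m1.tiny"}},
    {"name": "95e0-db",  "tags": {"tenant": "demo", "flavor": "m1.large"}},
    {"name": "legacy-fileserver", "tags": {}},  # non-OpenStack VM, untagged
]

def find_by_tag(vms, key, value):
    """Return the names of VMs whose tag `key` equals `value`."""
    return [vm["name"] for vm in vms if vm["tags"].get(key) == value]

print(find_by_tag(vms, "flavor", "m1.tiny"))   # the "m1 tiny" search
print(find_by_tag(vms, "tenant", "demo"))      # all of one tenant's VMs
```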
The other thing that is interesting is that since DRS is handling that, the other things you expect to have in a vSphere environment are available: HA, vMotion. So you can move a running virtual machine within the cluster, and this doesn't impact OpenStack in any way. Here you can see that DRS is turned on in this environment. Yes, we're actually vMotioning within the cluster; we're not moving it outside the cluster. Moving it across clusters is not supported, not yet. So you're placing the VM in a particular cluster, yes. One of the other capabilities that vSphere administrators use a lot is the idea of putting a host in maintenance mode and having zero-downtime maintenance of hardware. That's obviously enabled, because when we put a host in maintenance mode, DRS will evacuate that host: it will move all of those VMs off of that host onto another host in the cluster, and then you're able to do your hardware maintenance without taking down any of those VMs. It just takes a couple of seconds for us to evacuate that host in this demo environment; you can see there are zero VMs left there, and then I could do my hardware maintenance and migrate the VMs back once that maintenance is done. You saw Arvind, during the install, place the management VMs on vSAN. vSAN is our converged storage infrastructure: it lets you combine SSDs and spinning disks together into a converged storage layer, and that's available not only for the management components but for the tenants as well. So we enable vSAN, and you can see it's enabled here. But beyond just using vSAN, we also support storage policy based management. Within vSphere I can drill in and create a couple of policies, and I've done that here: I've created a platinum policy and a gold policy. The platinum policy is associated with my vSAN datastore, and the gold policy could be a local data
store, and the way this works is that you actually expose some rules and a set of capabilities. For vSAN here, you see I have the number of stripes and the number of failures to tolerate for that environment, and every storage infrastructure that we use exposes some set of capabilities through storage policy management. What we want to do here is map this to Cinder volume types. So we go in, use Python scripts here, and create a couple of volume types for gold and platinum, and then we use the volume type-key command to add an extra spec that maps the Cinder volume type to the storage profile we created in vCenter. Then we actually create volumes and attach them to a VM. So now we're able to use Cinder volume types for storage policy based management, and you can see here, as I list the volumes, that I've got one attached to a VM whose name starts with a UUID of 95E; it has a volume type of platinum, and we're displaying it as a vSAN volume. What we want to do is go back into vCenter and look at this volume that we've created and attached to a VM: we look at that specific VM and just verify that the volume was actually put on the vSAN datastore. We go down to hard disk 2, expand it, and you can see that it is on that datastore. So we've combined storage policy based management with Cinder volume types. Yes, it's a storage profile, but the general capability is storage policy based management; we're tying a storage profile to a Cinder volume type. I'm sorry, I couldn't hear that. What we support are supported vSphere datastores: as long as you can create a datastore on that storage infrastructure, we can support it. Yes, we're using VMFS. Not yet; that's probably coming next year. So, if you remember, in our environment we had three clusters, and we made
the compute cluster 1 our tenant cluster. We're going to expand the capacity of our environment here: we're going to add compute cluster 2 to the resources available to our tenants. So we go back to that VMware Integrated OpenStack plugin, and if you remember, the only thing you could do before was deploy an OpenStack cluster, but now we've got more capability: once we've added the cluster, we can add more compute resources, and we can add both Nova and Glance datastores to the environment. What's interesting is that not only are we adding the cluster, but because the compute service is actually resource-intensive (it's responsible for doing all the launching and terminating of those VMs), we're going to scale out the management plane as part of this. Here I select compute cluster 2, and we're going to add another compute management VM into that management plane that Arvind created before; here you see the name of it, vio-compute-1. We're going to deploy that onto the vSAN datastore, and then we also choose the Nova datastore: for the instances we deploy on this cluster, which datastore do they land on? Once we select that, we go through the same process we went through before: we clone that OpenStack template and run Ansible playbooks to provision the compute services in that management VM, and now you've expanded the capacity for your OpenStack tenants. This is all vSphere. So we go and check, just to see that that VM is actually running; we look at the hierarchy of VMs in our management plane, and we have two of them now, because we've scaled that out. We also talked about integration with other VMware products: vCenter, vCenter Operations Manager (vRealize Operations; I'm still getting some of the names down), and Log Insight. vCenter Operations Manager is a way to proactively monitor your infrastructure, but what we've done
is create management packs, so that there is actual monitoring of the services associated with OpenStack. What you're looking at here is a set of dashboards, and this one is monitoring the controller VMs. We're running the controllers in an active-active configuration, so you see two badges for each of the services: we've got Keystone, we've got Nova, we've got Neutron, and then under storage we have both Glance and Cinder. Not only are we able to see at a glance the health of that infrastructure, but we're actually capturing metrics, so we can look at individual services and see their CPU and memory consumption. As I drill into Nova here, I can see nova-api or nova-scheduler CPU and memory consumption, and the same is true if I click on the badges for the storage section: I can look at Glance and Cinder, see the health of the infrastructure, and also how they're using resources. I can also look visually at the entire infrastructure stack: I can see the services, I can see the operating system that's running and whether it's healthy, I can see the VM, and I can see the host. So it's very easy for me to tell whether I have an outage in one place or an outage in the entire stack. We have additional dashboards around the compute infrastructure, network, and storage. We're also capturing information by tenant: here, in my environment, I've created the demo tenant (I was very creative when I was coming up with a name for this; I used "demo"), and I can see immediately the infrastructure associated with the demo tenant and that it's healthy. I can drill into each of the individual components of it: I can see the VMs and each of the logical networks, and just by hovering over any of these I see its health, its risk, its efficiency, really at a glance. This is using vC Ops, so there are Hyperic agents running in each of those VMs, monitoring the services, and when a service goes down, vC Ops is polling and finds
out that that particular service has died. So what happens if we actually have a service outage? We go back to the command line and stop the nova-scheduler service: I just say "service nova-scheduler stop". The Hyperic agent, depending on your configured polling interval, will notice within a minute or so that it has stopped, and you'll see that badge change. You'll notice it only changed on one, because I stopped it on one of the controllers; it's still running on the other controller. But beyond just knowing that I have an outage, I can actually click on the badge, drill in, and find out what the outage is, because, remember, we're monitoring everything: you could have lost the host, you could have lost just the VM, or you could have lost just the service. I drill into that and I see that the outage is the nova-scheduler being down. If I continue down, I go even further and see what the remediation is: restart the nova-scheduler service. So I go ahead and do that; after it starts again, the Hyperic agent picks it up, notices it started, vC Ops picks it up after that, and the badge changes back to green. The last thing to show you (and Arvind helped me out with all the setup on this) is really about managing your log files. You can choose any syslog server; in this case, for the demonstration, we're using Log Insight, and I'm using a demonstration from our internal Network Systems Business Unit cloud. Again, we've created management packs so that we have specific metrics available in these dashboards, and at a glance I can see things like a heat map of all of the stack trace errors that have occurred in Nova over a period of time, or the instance growth over a period of time in this environment. If I'm running NSX, I can drill down into NSX and see the logical networks, and what you see here is that we've got over 3,500 logical networks and over 1,300 routers. It's interesting: if
you give people the capability to create their own networks, they actually do it; so there are a lot of networks in this environment, and we can monitor that growth easily through this dashboard. We can also look at API calls, the number of API requests over time, and see how response times have changed based on those changes in API requests. But maybe what you might use most often in here is the interactive drill-down into log files for troubleshooting. I can choose a window of time, maybe a 6-hour window, searching through all of these log files; maybe I want to narrow it, so I drag across the timeline, choose an hour-and-a-half window, and reduce the number of log entries. I can also filter by keys: here I'm looking for "Traceback", and very quickly I've drilled in to see that I have an issue on a particular network where there are no IPs left available, and that is the reason I got a particular error. I could have searched by the UUID if I wanted to, or some other criteria. Log Insight also gives you some ability to aggregate messages that have similar errors: if you're getting 1,500 DHCP errors, they'll actually be aggregated together to help you understand what they are, so you don't have to search through them over and over again. What we've shown you here in this 40-minute session is how easy it is to deploy a production-grade OpenStack leveraging your existing expertise with vSphere and your existing capability to manage that sort of environment, and how we've integrated with some of the other tools around it to really help you manage that production environment. So with that, that's all we had for the presentation. Any other questions? 
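The aggregation of similar errors described above can be approximated by normalizing the variable parts of each log line (UUIDs, IP addresses) before grouping. A minimal sketch with made-up log lines; real Log Insight grouping is more sophisticated than this.

```python
# Sketch: group similar log messages by collapsing variable fields.
# The log lines and the network IDs in them are made up.
import re
from collections import Counter

lines = [
    "No more IP addresses available on network 9f2c1ab4-7e3d",
    "No more IP addresses available on network 7e55d0c2-0a11",
    "Traceback (most recent call last):",
]

def normalize(line):
    """Replace UUID-like tokens and IPv4 addresses with placeholders
    so that messages differing only in those fields group together."""
    line = re.sub(r"[0-9a-f]{8}-\S+", "<uuid>", line)
    line = re.sub(r"\d{1,3}(\.\d{1,3}){3}", "<ip>", line)
    return line

groups = Counter(normalize(l) for l in lines)
print(groups.most_common(1))  # the two DHCP errors collapse into one group
```

Instead of scrolling past 1,500 near-identical DHCP errors, you see one group with a count, which is the behavior the talk describes.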
Okay, so I'm going to split that question in two. On the additional part you mentioned: even if you want to add other hypervisor regions, the main challenge, as you know, is not how to hook that up with OpenStack; the main challenge is actually testing and support of that architecture. So we are actively investigating with our partners how to deliver that in a meaningful way to enterprises; that's the next frontier for us. But if you are not worried about that part, as you mentioned, there are a few other benefits. First of all, all these products, vSphere, NSX, are well tested; we test them internally and across hundreds of customers, right? So we know where the bugs are much earlier than any of our partners or distributors, which means we can guarantee more quality for these products underneath OpenStack. So it's reliable: six months down the line, your hypervisor is not going to have a Linux bridge failure, a kernel panic somewhere, or an iSCSI failure, right? We would probably know about it before you, and we'll probably give you a patch. Second, there are more and more operational things we can build in, because vCenter has capabilities such as snapshots: you can snapshot a VM, apply a patch, and if it does not work, roll back and power it back on. So we are working on those things, which will help with automated patching, even automated upgrades, right? If you ask people why they are still stuck on Grizzly, everybody knows it's hard to upgrade; we are trying to make it easier. The whole idea is: if it has to become enterprise-grade, it has to come with a lot of operational benefits, tools, scripts, and out-of-box support, because not every enterprise has hundreds of developers sitting there to build a cloud platform; they have other things to do, right? So that is our biggest focus. And also support: a single point of support. Anything from a Nova Python process to the hypervisor to NSX, we'll support it; we'll take the support call. I'm not saying it's all done by VMware; we probably have some third-party
engagements in the back, for Memcache for example, but we will take the support call, we'll fix it, and we'll release the patch to you, right? So it's support, operations, and deployment, the complete life cycle; that's how it needs to be, and that's where we are trying to go. Icehouse? Yes, this one is Icehouse. Any other questions? I actually missed the first part, so I wanted to know: all those OpenStack services, can they be deployed on VMs? Yes, these are all deployed as VMs. This one is not for home use; this is for production use. There is a toy version available for home use which deploys everything in one VM, so for home use there is a separate toy version of it. But for production, you need a minimum of three hosts. Minimum three hosts? Yes, three hosts; once you deploy it, you will see how it has laid out all the VMs. Three hosts for the management cluster and one host for compute is the minimum. Yes. And the orchestration of the services within the VMs, starting all the OpenStack services? Let me take a question over here too. Yeah, so we're still using port groups: whether you're using the distributed switch or NSX, we're still creating port groups within vSphere. It can cross the cluster: distributed port groups should be able to span, yes. In the back: KVM? I didn't understand; I don't know what you mean by Nova KVM. Nova just runs as Python processes here; it's not running on the ESX hosts, and we don't need to run it on the ESX hosts. We're not running KVM in this environment; the hypervisor is ESX, it's all vSphere. In fact, Nova just talks to vCenter; Nova does not even know which ESX hypervisors are there. You can take one of the hosts, put it in maintenance mode, and we can reshuffle; it helps a lot with operations. They're all ESX-based. What Nova sees
is a full cluster it sees a cluster as one giant hypervisor so if you had a cluster with 10 ESX hosts in it and you went into Horizon and you looked at the hypervisor it would just show the cluster name it wouldn't show you each of those hypervisors it would just show you the cluster yes so there's a couple things in that that yes if you entered maintenance mode we would evacuate all of the VMs but generally you would isolate your OpenStack cluster and the reason for that is all the accounting that OpenStack does presumes that all the resources you gave it are available minus whatever VMs you've already created so if you had a bunch of other VMs in there it makes sense to isolate to isolate them do you mean you're overlying we don't we would not recommend touching existing VMs because there is no metadata in OpenStack and it's hard to import metadata from but let's be clear about what you're asking you could have non-OpenStack VMs in your vCenter environment along with OpenStack but you don't have to isolate that way but you're not managing existing VMs through OpenStack it's just that OpenStack does not have information of what VMs you have created and it's pretty hard to take that and migrate and insert it in Cinder, Nova, Neutron and all the five places and give it a user and a keystone and just forget about it it's not worth it probably yeah I agree with the use case it's just the ROI to the effort ratio is pretty bad right now okay, any other questions? 
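The "cluster as one giant hypervisor" view described above can be sketched in a few lines. This is a hypothetical illustration, not VIO or Nova code (the real integration is the VMwareVCDriver talking to vCenter); the class and field names are invented. The point it demonstrates is the accounting the speaker mentions: the scheduler tracks only aggregate cluster resources minus the VMs it created itself, which is exactly why VMs created out of band make the numbers wrong.

```python
from dataclasses import dataclass, field

@dataclass
class ClusterHypervisor:
    """Toy model: a whole vSphere cluster exposed as one hypervisor.

    Hypothetical sketch only -- names are invented for illustration.
    """
    name: str
    total_vcpus: int    # aggregate across all ESX hosts in the cluster
    total_ram_mb: int
    vms: dict = field(default_factory=dict)  # only VMs created via this model

    def free_vcpus(self):
        return self.total_vcpus - sum(v[0] for v in self.vms.values())

    def free_ram_mb(self):
        return self.total_ram_mb - sum(v[1] for v in self.vms.values())

    def boot(self, vm_name, vcpus, ram_mb):
        # Accounting presumes everything not tracked here is free:
        # a VM created outside this model is invisible to it.
        if vcpus > self.free_vcpus() or ram_mb > self.free_ram_mb():
            raise RuntimeError("insufficient capacity in cluster")
        self.vms[vm_name] = (vcpus, ram_mb)

# A 10-host cluster (16 vCPUs / 64 GB each) shows up as ONE hypervisor entry,
# just as Horizon would show only the cluster name.
cluster = ClusterHypervisor("Cluster-01", total_vcpus=10 * 16,
                            total_ram_mb=10 * 65536)
cluster.boot("web-01", vcpus=4, ram_mb=8192)
print(cluster.free_vcpus())   # 156
```

Placement of the VM onto a specific ESX host inside the cluster is deliberately absent from this sketch: in the real product that decision is left to vSphere (DRS), which is also what makes maintenance-mode evacuation transparent to Nova.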
On resource pools: we are thinking about it, but at this point, no. If you have a specific use case, we can talk a little more. We see some scenarios where it could be useful, mainly where limited licenses are available per host and you want to slice a cluster further. It is relatively easy for us to do; we just need to understand the use case better. Also, resource pools are quite dynamic; people change them often, so keeping Nova in sync is a bit tricky. So let's talk more offline; I would like to understand your use case for that.

On this question: you can use security groups with NSX. If you are using the Distributed Switch, you can't; there are some limitations in functionality there. Everything is available with NSX, with overlays and all the rest. I would love to think about that a little more, but out of the box we do need NSX; we do not work with other overlay products at this point. There is an NSX controller, and it works with the distributed vSwitch that it has on the various hypervisors. Any other questions?

Glance is backed by the vSphere datastore, so it is the same image management; it is just a datastore, and you do not have to stand up separate storage to be able to deploy Glance.

On Horizon: Horizon is not a data center operations tool. It is an administrative tool for users and project quotas; it is not a tool for monitoring all those ESX hosts or the networking. So you still need data center operations tools. The example given was vC Ops, but you can use other tools as well, or syslog; it has nothing to do with Horizon. There is a lot of need for other tools for your day-to-day data center operations, and our candidates there are vCenter, vC Ops, and Log Insight. The main idea here is that to run OpenStack you need some of these tools, or probably all of them. I am not saying specifically the VMware tools, but you need to think about how you will monitor it and how you will get an alert when your Nova goes down or Cinder is not responding. So you need those tools.
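The alerting point above — knowing when Nova goes down or Cinder stops responding — can be sketched as a trivial control-plane health check. This is a hypothetical illustration, not part of VIO or any VMware tool: the service URLs are invented (the port numbers are the usual OpenStack API defaults), and the probe function is stubbed out so the sketch runs without a cloud. A real check would issue HTTP requests against the API endpoints behind the load balancer, or you would simply rely on vC Ops / Log Insight as suggested.

```python
# Minimal sketch of an OpenStack control-plane health check.
# Hypothetical: URLs are illustrative, and `fetch` is injected so the
# example runs without a real deployment; a real probe would use e.g.
# urllib.request against each endpoint behind the load balancer.

SERVICES = {
    "nova":     "http://vio-lb:8774/",  # compute API (default port)
    "cinder":   "http://vio-lb:8776/",  # block storage API
    "glance":   "http://vio-lb:9292/",  # image API
    "keystone": "http://vio-lb:5000/",  # identity API
}

def check_services(fetch):
    """Return the list of services that failed their health probe.

    `fetch(url)` returns True if the endpoint answered, False (or
    raises) otherwise.
    """
    down = []
    for name, url in SERVICES.items():
        try:
            ok = fetch(url)
        except Exception:
            ok = False
        if not ok:
            down.append(name)
    return down

# Simulated probe: pretend Cinder is not responding.
fake_fetch = lambda url: ":8776/" not in url
print(check_services(fake_fetch))   # ['cinder']
```

In production you would wire the resulting list into whatever pager or syslog pipeline you already run; the point is only that something outside Horizon has to watch the control plane.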
We have some of our own offerings there, if that is relevant, and most of them will probably extend to other hypervisors over time; they are very generic in their approach.

All right. Briefly, before you all go away: the next session is about how to consume OpenStack. We showed you how to deploy it and how to operate it, but do you get the same OpenStack? Yes, you get the same OpenStack. How do you create VMs? How do you consume it? How do you create three-tier networks? All of that will be in the next session. It is actually a hands-on lab, and it is available online without any infrastructure; you can go online and take the lab at your own pace. In fact, this whole installation is also part of that hands-on lab. So if you want to stick around, we will show you how to consume OpenStack in the next phase. Sounds good? Thank you.