I'm Devakar, and this is Bharath here from HP. We don't have Kiran here, who has done some parts, or rather most, of this work, so we'll take the time to talk on his behalf and walk you through it.

If you really look at this, we are trying to compare the value that OpenStack brings with the value that a vSphere environment brings. For the sake of simplicity we have highlighted only a few of these values on each side, and the list is not exhaustive. On the OpenStack side you get a seamless API interface, portability across different kinds of clouds, and easy, uniform access for any cloud user. If you look at VMware, it is the most proven and most widely adopted platform inside the data center, and it gives you a lot of functionality: DRS, HA enablement, FT-enabled VMs and instances, and so on. Our intent in this demo is to bring these two sets of values together and see how one can leverage the other. We have a set of blueprints which are implemented and waiting for submission to Havana, and based on those constructs we'll show you some sample use cases built on this model; that is our main aim in looking at the use cases. So, in short: OpenStack is good today, vSphere is good in its own form, and by bringing them together we can build a better cloud on top of vSphere with OpenStack.

Given that intent, these are the blueprints we are trying to implement as part of Havana. They have been proposed and discussed in the design sessions, and I'll go through each of them quickly without spending too much time on any one. The main intent of the first one is to have multiple vCenters managed through one nova-compute; multiple clusters and resource pools are covered in the next one. The third one is about vCenter compute driver and scheduler enhancements to publish the cloud capacity, which is needed once one compute manages multiple clusters and resource pools. In the existing vCenter driver that ships with Grizzly, one compute manages one cluster, and the capacity it reports is essentially just the default resource pool's capacity; this blueprint enhances that. The fourth one: today you need a Glance image, an uploaded VMDK, to deploy an instance onto vCenter or ESX. What we are proposing instead is to use a VMware template as the image and use Glance only as a metadata repository: you maintain just the metadata of the template that is available in the vCenter, and at provisioning time the vCenter driver we write takes care of using that template to deploy the new instance onto the ESX server.
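To make that last idea concrete, here is a minimal sketch of registering such a metadata-only Glance entry with python-glanceclient. No VMDK bits are uploaded; the image record only carries properties pointing at the template in vCenter. The property keys ("vmware_template", "vmware_ostype"), the endpoint, and the token are illustrative assumptions, not necessarily what the blueprint implementation uses.

```python
# Sketch: a Glance image record that is metadata only, pointing at an existing
# vCenter template. Property keys, endpoint, and token are illustrative.
from glanceclient import Client

AUTH_TOKEN = 'ADMIN_TOKEN'  # placeholder keystone token
glance = Client('1', endpoint='http://controller:9292', token=AUTH_TOKEN)

image = glance.images.create(
    name='ubuntu-1204-template',
    disk_format='vmdk',
    container_format='bare',
    is_public=True,
    # No 'data' argument: nothing is uploaded, Glance only keeps the metadata.
    properties={
        'vmware_template': 'ubuntu-1204-template',  # hypothetical: template name in vCenter
        'vmware_ostype': 'ubuntu64Guest',            # guest OS hint for the driver
    },
)
print(image.id)
```

At boot time the driver would look up that property and clone the named template in vCenter instead of streaming a VMDK out of Glance.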
The next one is about compute driver enhancements to support the HP 3PAR Cinder driver. Here we are talking about a Cinder volume carved out of an HP 3PAR array and support for attaching that 3PAR volume to an instance over Fibre Channel. The second aspect is that in this whole model an OpenStack compute is modeled as a vCenter cluster. A vCenter cluster has multiple hosts, so visibility to the volume needs to be set not just for a single host, which is the usual KVM model, but for all the hosts in the cluster, so that a VM that is live-migratable within the cluster can move and still see its data disk or boot disk. The target presentation needs to be cluster-aware; that is the other part that changes.

[Audience question.] No, in this model we are not trying to do it through vCenter. It is provisioned through OpenStack: every API call goes through Nova, and the vCenter driver, which is the piece actually talking to vCenter, proxies the call and does whatever work is needed. This is about getting the best of both worlds. One world is OpenStack: you create a VM through Nova using the vCenter driver, and then the question is how you stitch a data disk to that VM through Cinder; marrying those two is the point. Cinder normally assumes a single-host model; multi-initiator is possible, but in this case it is all about setting visibility for every host in the cluster so that the volume is shared across all of them. The Cinder driver work is also done for FC; it is a data structure change, and that blueprint is submitted as well. The one listed here is the Nova blueprint; there is also a Cinder blueprint, which has been submitted and approved. If you can hold the questions until the end that would be great, because if I run out of time you will not see the demo; we'll go fast, finish, and then take about 10 to 15 minutes for questions.

The last one: we have a health and monitoring module, Healthnmon, which has been introduced into OpenStack and is currently in StackForge. We are getting data out of the ESX servers into Healthnmon, and we will show what is available as part of it.

Now consider the current Grizzly model of managing an ESX server or a cluster. If you take a KVM host, you run nova-compute on that host, it shows up as an entry in the compute node table, and a queue is created for that compute; whenever a request lands on that queue it is served by that compute. Similarly, with n computes you end up with n entries and n queues. Compare that with the vSphere vCenter driver that exists in Grizzly: there you model the cluster as a compute, and for each such compute you see an entry in the compute table.
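Before getting into how scheduling changes, a quick detour to make the Fibre Channel part more concrete. The sketch below shows the kind of connector information the compute side would hand to Cinder: instead of one host's initiator ports, it gathers the WWPNs of every ESX host in the cluster so the 3PAR backend can export the volume to all of them. The dict layout follows the usual Cinder FC connector convention; the helper functions are hypothetical stand-ins for vSphere SDK lookups, not the actual driver code.

```python
# Sketch only: build a Cinder "connector" covering every ESX host in a vCenter
# cluster, so the exported 3PAR volume is visible cluster-wide and the VM keeps
# seeing its disk after a DRS/HA migration. get_cluster_hosts() and
# get_fc_wwpns() are hypothetical stand-ins for vSphere SDK calls.

def build_cluster_connector(cluster_name, get_cluster_hosts, get_fc_wwpns):
    wwpns = []
    for host in get_cluster_hosts(cluster_name):
        # Collect the FC initiator ports (WWPNs) of each host in the cluster.
        wwpns.extend(get_fc_wwpns(host))
    return {
        'host': cluster_name,  # the "host" here is really the whole cluster
        'wwpns': wwpns,        # every host's initiators, not just the first one
    }

# Cinder's initialize_connection() would come back with something like:
#   {'driver_volume_type': 'fibre_channel',
#    'data': {'target_wwn': ['20210002ac000123'], 'target_lun': 1}}
# and the vCenter driver then rescans the HBAs and attaches the LUN to the VM
# as an RDM disk.
```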
So that is what is available today: the Nova scheduler uses the queue created for a particular compute to pass requests to it, and that is how an instance gets created on that compute. Now compare that with the Nova proxy compute driver we are proposing. There is no change at all to the KVM driver. What we are talking about is a proxy compute, which can be deployed in a VM or on a physical server, wherever you like, and which can manage a number of clusters and resource pools grouped together. So we have one compute managing n clusters and resource pools, while giving the impression that you have n computes, so that all the features already available in Nova continue to work as they do today; we will show what we mean by that. All the requests go to the corresponding queue, and the Nova scheduler uses that queue whenever provisioning needs to be done on that particular cluster.

Next, we are talking about attaching an HP 3PAR Cinder volume to an instance that already exists in a cluster. There are two parts to the story. One is the change we need in Cinder: a request to attach a volume goes to the compute, which forwards it to Cinder, and the storage array creates the volume. Today Cinder returns only a name-value pair with the WWN of the volume that was created and presents it to one particular host. In our case, since we are managing a cluster, we need that volume presented to all the hosts in the cluster. So the change is that we pass a list of WWNs and WWPNs for all the hosts; accordingly, every host gets visibility to that volume, and then it gets attached to the particular instance. Step two also passes all the initiators from the hosts: Cinder needs to know all the initiators, you tell Cinder which hosts the target must be presented to, and after presenting the target it returns the details of how it was presented; then you do a rescan. Today the code just takes the first path; there are ways to improve that, picking and choosing which path to use. But fundamentally this is about cooperating at the cluster level and making the target volumes visible on all the ESX hosts within the cluster.

With that, let's talk about the demo, where we will show some of the use cases that are possible with this implementation of managing multiple clusters and resource pools. Our demo setup looks like this: you have an OpenStack controller and two vCenters, vCenter A and vCenter B, with a cluster called "cluster high availability" and a cluster called "cluster FC attached"; we are going to use these clusters to show the use cases we are talking about. You also have another cluster, tenant1, with resource pool one and resource pool two, and you have a 3PAR array here.
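Before walking through the use cases, here is a rough code illustration of the proxy-compute idea just described. It is only a sketch of the concept, not the blueprint code: it assumes the multi-node virt driver hooks Nova exposes (get_available_nodes / get_available_resource, whose exact signatures vary a little by release), and the node names and capacity helper are made up.

```python
# Illustrative sketch: one nova-compute service whose driver models each
# vCenter cluster / resource pool as its own compute node. Node names and
# _query_capacity() are hypothetical; this is not the actual blueprint code.
from nova.virt import driver


class VCenterProxyDriver(driver.ComputeDriver):

    def __init__(self, virtapi):
        super(VCenterProxyDriver, self).__init__(virtapi)
        # e.g. one entry per managed cluster or resource pool
        self.managed_nodes = ['cluster-ha', 'cluster-fc-attached']

    def get_available_nodes(self, refresh=False):
        # One service, many nodenames: each becomes a row in the compute_nodes
        # table and a schedulable target, so Nova behaves as if there were
        # n separate computes.
        return self.managed_nodes

    def get_available_resource(self, nodename):
        # Report the capacity of the whole cluster / resource pool instead of
        # only the default resource pool, as the Grizzly driver does.
        vcpus, mem_mb, disk_gb = self._query_capacity(nodename)
        return {'vcpus': vcpus, 'memory_mb': mem_mb, 'local_gb': disk_gb,
                'vcpus_used': 0, 'memory_mb_used': 0, 'local_gb_used': 0,
                'hypervisor_type': 'VMware vCenter Server',
                'hypervisor_version': 5001,
                'hypervisor_hostname': nodename,
                'cpu_info': '{}'}

    def _query_capacity(self, nodename):
        # Placeholder: in reality this would query the vSphere SDK for the
        # cluster's aggregate CPU, memory, and datastore capacity.
        return 16, 65536, 2048
```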
You have these two proxies: vCenter proxy A, which manages the clusters available in vCenter A, and vCenter proxy B, which manages vCenter B. I mentioned the health and monitoring component earlier; its proxy is deployed onto this node here and it manages vCenter A and vCenter B.

Now let's drill down into use case one, where we provision a highly available VM into a cluster that has DRS and HA enabled. Focus on this particular cluster, which is managed by vCenter proxy A: you have an HA compute here, a flavor made available for HA, and an HA image. The request goes from the controller to the compute, which is the vCenter proxy; that request is forwarded to the vCenter, and we use the vSphere SDKs to make the remote calls. That in turn creates the VM, which is a highly available VM. You can see there are two clusters here, and because the flavor we selected is the HA flavor and the image is the HA image, the affinity takes it into the "cluster high availability" cluster.

[Audience question.] It's just an image; HA is only the name of the image. But there is also metadata saying that this particular compute is in a host aggregate and is an HA-enabled setup. You are taking the VMware values, whether that is an FT-enabled VM, an HA-enabled VM, or anything like that. These are things OpenStack doesn't understand by itself, but you have tagged this particular cluster as an HA compute, the names are set that way so they read meaningfully, it sits in an HA host aggregate, and the flavor says, in effect, "if HA is true, provision onto this." That is how the scheduler understands all of this and provisions onto that cluster. In all of this you have only meta-tagged the flavors and aggregates, and native OpenStack provisions onto the HA-enabled setup. That is the basis of taking the VMware values and using them in the context of OpenStack, and all you have done is add a few tags. Likewise, any number of VMs can be provisioned into this cluster.

The next use case is provisioning a highly available instance with an FC volume attached. We talked about attaching a Cinder volume to an instance; this is how it happens with the vCenter proxy. You have an image here, a host aggregate called 3PAR FC created on the controller, a flavor called "3PAR FC tiny", and an FC compute which represents this particular cluster managed by the vCenter proxy driver. At the same time we are showing another piece of functionality: there is a VM template out on this vCenter, which is the image, and it will be used for deploying the new instance. The request first goes to the array to create a volume, as I described earlier: the volume gets created and presented to the hosts through Cinder, and the provisioning request goes from the controller to the proxy driver.
From the proxy it gets to the cluster through the vCenter, uses the VM template to create the VM, and attaches the volume; we will show this as part of the demo. The interesting thing is that if one host goes down and the VM migrates within the cluster, the VM still has visibility to the Cinder volume. Since this is a highly available cluster, if host C goes down, the DRS and HA configuration moves the VM to host B, and even then the data disk you attached as a Cinder volume is still available, because of the enhancement we are doing where the volume is visible across all the hosts.

[Audience questions.] Yes, that's correct. It's still within the compute called FC compute; to OpenStack's knowledge the instance is on FC compute, and the live migration, or whichever migration, happened inside that compute. The interesting thing is that if the Cinder volume were not presented this way, the migration couldn't have happened, because the data disk wouldn't be visible; vCenter would not even allow the migration, it would get stopped there. It's not a destructive model, but for it to really perform, this enablement is what lets the VM live-migrate while still seeing its data disk. There was one more question: the network part today is pre-configured, that's how it goes. That's a part we still need to answer, and probably Sean, who is sitting here, will have some answers in the time coming soon. For the vCenter driver, networking is an unsolved mystery for now; it's pre-carved, and I hope you will not have to stay with that model for too long. But for now the answer is that it's pre-carved; at least for the vCenter driver, that's the truth, let's be clear on that.

The other use case possible with this is tenant-based provisioning, where you set affinity to a particular cluster or resource pool: you want to allocate one resource pool to a tenant and always have that tenant's new instances created in that resource pool. This is the setup we have; let's concentrate on this cluster here, where you have RP1 and RP2, the two resource pools in the tenant cluster, and you also have the highly available cluster. What we have done is create a host aggregate called tenant1, and through the metadata set on that host aggregate we set the affinity so that new instances go into resource pool one or resource pool two. Again the request goes to the proxy and from the proxy to the vCenter. You have a tenant1 template, and that is how you link it to the host aggregate. When the scheduler picks the hosts, it goes to the host aggregate, sees the metadata saying, in effect, "look for a highly available cluster with resource pool one," and that is where the VM ends up, in resource pool one. Similarly it will deploy onto resource pool two. [Audience question.] Yes, a single VM template, a single vCenter-based template.
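All of these placement use cases ride on stock OpenStack constructs: host aggregates with metadata, flavor extra specs, and the aggregate scheduler filters. A rough sketch of the admin-side setup with python-novaclient follows; the aggregate names, node names, and metadata keys are illustrative, and the tenant-affinity part shows one standard mechanism (the AggregateMultiTenancyIsolation filter) which may or may not be exactly what the demo used.

```python
# Sketch of the admin-side tagging behind the placement use cases. Aggregate
# names, node names, and metadata keys are illustrative.
from novaclient.v1_1 import client

nova = client.Client('admin', 'ADMIN_PASSWORD', 'admin',
                     'http://controller:5000/v2.0')

# HA use case: put the compute node that fronts the DRS/HA cluster into an
# aggregate tagged ha=true.
ha_agg = nova.aggregates.create('ha-aggregate', None)
nova.aggregates.add_host(ha_agg, 'ha-compute')              # hypothetical node name
nova.aggregates.set_metadata(ha_agg, {'ha': 'true'})

# A flavor whose extra spec matches the aggregate metadata. With the
# AggregateInstanceExtraSpecsFilter enabled, instances booted with this flavor
# only land on HA-tagged computes.
ha_flavor = nova.flavors.create('ha.tiny', ram=512, vcpus=1, disk=10)
ha_flavor.set_keys({'ha': 'true'})

# Tenant affinity: one standard way is the AggregateMultiTenancyIsolation
# filter, which keys off 'filter_tenant_id' aggregate metadata. The compute
# node below is assumed to be backed by the tenant's resource pool.
TENANT1_ID = 'TENANT1_PROJECT_ID'                           # placeholder
t1_agg = nova.aggregates.create('tenant1-aggregate', None)
nova.aggregates.add_host(t1_agg, 'tenant1-rp1-compute')     # hypothetical node name
nova.aggregates.set_metadata(t1_agg, {'filter_tenant_id': TENANT1_ID})
```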
Finally, once we configure and deploy instances we need monitoring, and this is the monitoring solution we have: a component called Healthnmon, which manages the clusters and the resource pools available there. Today, from the OpenStack constructs, you would only see the OpenStack compute and the instances; here we also get the underlying details about the clusters and the resource pools, all the way down to the physical layer, and it tracks the VMs as well.

With that, I will move on to the demo part of the session. Here we are deploying a new instance into a highly available cluster. You launch an instance and select the image; we showed earlier that we have the HA image and the HA flavor type, and this is the pre-carved network that we have. [Audience questions.] No, we created it directly in Horizon. Guest customization? No, not yet. Okay, we have just launched this highly available VM, and here we are powering it on. This cluster is a fully automated, DRS- and HA-enabled cluster. No, we are showing the vCenter properties here just to demonstrate that it really is a highly available cluster; otherwise you wouldn't look at it. Yes, it's for the demo's sake, so you can see it is really happening; I could always claim it all happened, but where's the proof? That's the proof. You can see that this IP address was assigned when we started the VM and it is still pinging; you log into the VM and it shows the IP address assigned to it.

Sorry, I could not get your question. Mainly, by bringing this functionality into OpenStack we are cloud-enabling the same features you would have seen in vCenter and making them available as part of OpenStack, where you get the benefits of the cloud. No, we are not; this is one way of working, let's put it that way. If in a brownfield environment somebody already has their VMs and wants to try out a cloud, that's how we should talk about it. No, you should be able to.

Okay, I'll move on to the next demo, where we show an FC volume attached to an instance. Here you select the volume and attach it to the instance; the job has started, and you can see it attaching that volume to the instance we created. It creates that volume on the fly on the array, it is already presented to the hosts and available there, and when you pick the right host and rescan, it is there. Today it picks the first path, so we can do better; you could go beyond that. The demo works, but it picks the first path that is returned. The way it is done in vCenter is to reconfigure that virtual machine to attach another disk: it attaches the volume as an RDM volume. As for the bigger question, at least the current thinking is that when you give a cluster to OpenStack, you don't mess with it; that's the model. Sean, do you want to say something about it? It's about which chunk you are giving to OpenStack; that's always the difference. You don't do an import; at least for today, there is no import.
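To tie that back to the attach demo: the same operation can be scripted against the standard APIs instead of going through Horizon. Here is a rough sketch with python-novaclient and python-cinderclient; the credentials, instance and volume names, and the "3par-fc" volume type are placeholders, not the exact ones from the demo setup.

```python
# Sketch: create a 3PAR-backed Cinder volume and attach it to the instance
# booted earlier, the scripted equivalent of the Horizon demo. All names,
# credentials, and the volume type are placeholders.
from cinderclient.v1 import client as cinder_client
from novaclient.v1_1 import client as nova_client

AUTH_URL = 'http://controller:5000/v2.0'
nova = nova_client.Client('demo', 'DEMO_PASSWORD', 'demo', AUTH_URL)
cinder = cinder_client.Client('demo', 'DEMO_PASSWORD', 'demo', AUTH_URL)

# Carve a 1 GB volume out of the 3PAR backend.
vol = cinder.volumes.create(1, display_name='fc-data-disk',
                            volume_type='3par-fc')
# (In a real script you would wait here until the volume status is 'available'.)

server = nova.servers.find(name='fc-ha-instance')   # the instance from the demo

# Nova forwards this to the vCenter proxy driver, which presents the LUN to all
# hosts in the cluster and attaches it to the VM as an RDM disk.
nova.volumes.create_server_volume(server.id, vol.id, '/dev/sdb')
```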
Everything is driven through OpenStack; it is initiated from OpenStack and provisioned from there. The next part of the demo is to see how we can use a template to deploy a new instance through OpenStack. Here there is an image: this is the image metadata available in Glance, and it comes from the VM template that lives in the vCenter here. The image metadata is in Glance, and when you do a Nova image show you see that metadata added as an image property. When you launch an instance, you just choose that image the same way you would choose any default image. [Audience question.] It's possible, but not done today; it is not reconfigured. We can always do it, so it's not that it's impossible, but today it is straightforward template provisioning. You can see that it has started deploying the new instance from the template, and that's the deployment of the new VM. HA-enabled VMs? Not that I'm aware of; here we are taking the capabilities that vCenter supports, and it is an HA-enabled VM.

We will quickly move on to the next demo, which should be the last one. Here we are talking about the Healthnmon module, which collects the inventory data as well as the usage and alerting data from the ESX driver. You can see the clusters, resource pools, VM hosts, instances, network, storage, and all the inventory we collected from the ESX driver. This is the VM host data we have for KVM, and similarly we have the information for the ESX VM hosts, clusters, and resource pools. [Audience question.] No, currently this was created only for the demo; it is not production-quality code. Yes, it's an API, and this is just the best way of showing it in a demo. Drilling down into a cluster, this is the inventory data we show for a cluster, and similarly we have data for resource pools, VM hosts, instances, network, and storage, and we have the alerting data as well.

Okay, we have pretty much run out of time, so we'll surely post the material; give us a couple of days and it should be there, once we get back to India, or once the flight takes us there. If there is a flight delay or an earthquake, don't blame it on us; it's all about the airline industry and nothing to do with us. Okay, that's great, thank you. Thank you.