Thanks for being here. Appreciate your taking the time to be at this session. I think by now you must have looked at tons of slides describing what VMware Integrated OpenStack is and read blogs about it, so in this session we want to keep the slideware to a minimum and show you the real deal: what the product looks like, and how you go about installing and operating it. So, a quick overview of what we'll be looking at in this session. We'll start with a brief overview of what's inside the product to level set before we jump into the demos. We'll look at how to install the product, and once we have it installed, how we go about doing day-two operations and maintenance. Finally, we'll look at what all this means to a developer consuming OpenStack. Most importantly, at the end of this session we'll be giving out sweatshirts, so make sure you stay around for that one. Thanks, Trevor. So what is VMware Integrated OpenStack? When you deploy the product, what comes with it? You have your VMware software-defined data center infrastructure: your vCenter, your ESXi hypervisors, NSX, and your monitoring tools such as vCenter Operations and Log Insight. Then on top of this you have the standard OpenStack components: Nova, Cinder, Neutron and the other OpenStack services, plus all the consumption points: the Horizon web portal, the OpenStack APIs, the SDKs, and the Heat orchestration layer on top of your OpenStack services. So we've taken that standard OpenStack code from upstream. These are the exact same OpenStack services and APIs that you would get if you got OpenStack from upstream. We've taken that, tested it, hardened it, and done bug fixes. When we do bug fixes, we always push them upstream, so we don't hold any code back. We have drivers for each of these components that talk to the underlying infrastructure to do the actual provisioning. 
For example, for the compute component Nova, we have the vCenter driver that enables Nova to talk to the underlying vCenter and provision virtual machines, and we have similar drivers for Cinder, Glance, and Neutron. And we've added a management piece to OpenStack that enables administrators to take the OpenStack code, deploy it really simply on top of an existing vSphere infrastructure, and do day-two operations going forward. So VMware Integrated OpenStack is nothing but all the vanilla OpenStack components you get from upstream, bundled with the management piece that makes it easy to deploy and operate OpenStack on your vSphere infrastructure. So what does it look like when you deploy it on an existing vSphere infrastructure? You need a management cluster inside your vSphere environment where all your management components are going to live. For example, your OpenStack control plane and its different OpenStack services (Nova, Neutron, Cinder, the scheduler, the compute services) live in this cluster, your vCenter lives in this cluster, and if you have NSX, you deploy NSX in the management cluster too. So typically all your management components are deployed in this cluster. You deploy a virtual appliance that comes with the VMware Integrated OpenStack manager into your management cluster, and then you use this virtual appliance to deploy your OpenStack services in a highly available manner on the management cluster. And once your OpenStack control plane is up and running on the management cluster, you can use it to deploy your tenant workload VMs on your compute clusters. So if you have OpenStack deployed on your vSphere infrastructure, this is how it looks at a very high level. With that, I'll stop and hand it over to Trevor to show you the real product. Sure, thank you Santosh. 
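To make the driver piece concrete: the vCenter compute driver described above is wired into Nova through plain upstream configuration. This is an illustrative fragment, not VIO's generated config; the host name, credentials, cluster name, and datastore pattern are placeholders:

```ini
[DEFAULT]
# Use the upstream VMware vCenter driver instead of libvirt
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# Placeholder values; VIO generates and manages the real ones for you
host_ip = vcenter.example.com
host_username = administrator@vsphere.local
host_password = secret
cluster_name = ComputeCluster01
datastore_regex = nfs-ds.*
```

With settings like these, Nova schedules against the vSphere cluster rather than individual hosts, which is what makes DRS and vMotion transparent to OpenStack.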
So the actual deployment process is fairly simple and should look fairly familiar to a VMware administrator. We're trying to leverage the same kind of experience and tools that they have for deploying virtual appliances and VMs. To start off with, we go to our management cluster and deploy an OVF template (it'll actually be in the OVA format), and we paste a URL here that has our OVA location. This is a familiar screen for VMware administrators: what kind of resource consumption we can expect from the virtual appliance. Accept the end user license agreement, sign your children away. Then you select what folder you're going to place your virtual appliance in and what you're going to call it. We'll just stick with the defaults and call it VMware Integrated OpenStack. Then we choose the datastore where it will live, and the network that it's going to communicate on. And finally, some particulars such as the IP address, gateway, DNS, hostname, all of the good stuff that we love when naming our VMs. And last but not least, a notification that we're going to add a new plugin to the vSphere Web Client, so that you can manage your VIO environment from the same interface you're normally accustomed to. So the deployment continues, and then we have to log out and log back in to see this new plugin. When we log back in, we'll see that there's a new plugin on the vSphere Web Client dashboard called VMware Integrated OpenStack. I click on that, and then I click on Deploy OpenStack. For those of you who are familiar with deploying OpenStack in production, the first step deploys our build server and the VM template that will be used to deploy the entire control plane. So what we're going to do is set all of the configuration options that will give us a production environment of OpenStack. 
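The same appliance deployment can also be scripted with VMware's ovftool CLI, for those who prefer the command line over the wizard. This is only a sketch: the URL, inventory path, datastore, network, and property names below are placeholders, and the command is printed rather than executed so the example works without a live vCenter:

```shell
# Placeholder OVA location and vCenter inventory path
OVA_URL="http://repo.example.com/VMware-Integrated-OpenStack.ova"
VI_TARGET="vi://administrator@vcenter.example.com/Datacenter/host/MgmtCluster"

# Compose the deployment command; flags mirror the wizard's questions
# (EULA, appliance name, datastore, network, and IP particulars)
DEPLOY_CMD="ovftool --acceptAllEulas --name=VIO-Manager \
--datastore=mgmt-ds01 --network=Mgmt-Network \
--prop:ip0=10.10.0.10 --prop:gateway=10.10.0.1 \
$OVA_URL $VI_TARGET"

# Print rather than run, so the sketch is safe to execute anywhere
echo "$DEPLOY_CMD"
```

Against a real environment you would run the composed command directly instead of echoing it.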
And I can either do a new deployment or use an existing JSON template that I have from before, which I'll do right now. I provide my vCenter credentials for all of my management activities, and I choose the cluster that it'll run on. I'll also provide two network definitions here, right off the bat. The first one up top is for the virtual machines that make up our control plane; this is the network where they will get their IP addresses, and it's a network that you will not expose to the users. The network on the bottom is your API access: those are the IP addresses that the users, or developers, will use to access the environment, because they are the public IP addresses for your load balancers. Then we specify the hostname for our virtual IP and the actual virtual IP address, which is on the same network as your API access. Then we select which compute cluster will be used for Nova instance deployments, and the datastores in vSphere; that could be vSAN, anything that's regular VMFS, or NFS. I have a small environment, so I had to stick with NFS storage. And on the Glance side of things, I also use a VMware datastore. Again, this leverages the open source drivers available to anyone in the community; we're using the same things that we have open sourced, so anyone can use our storage for both their instances as well as their Glance image files. And here you select which kind of networking technology you want to work with. The vSphere Distributed Switch is an option, but it's probably going to be more for smaller-scale deployments, very dev and test. If you want something that's production ready and can leverage the full gamut of Neutron services, this is when we opt for NSX networking. And for authentication, you can either use the Keystone database or LDAP. 
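The JSON template mentioned above captures exactly these wizard answers so a deployment can be replayed. The fragment below is only a sketch to show the shape of what gets saved; the key names and values are illustrative placeholders, not VIO's actual schema:

```json
{
  "vcenter": { "host": "vcenter.example.com", "user": "administrator@vsphere.local" },
  "management_cluster": "MgmtCluster",
  "management_network": { "portgroup": "Mgmt-Net", "ip_range": "10.10.0.20-10.10.0.40" },
  "api_network": { "portgroup": "API-Net", "ip_range": "192.168.10.20-192.168.10.30" },
  "public_vip": { "hostname": "openstack.example.com", "ip": "192.168.10.10" },
  "compute_clusters": ["ComputeCluster01"],
  "nova_datastores": ["nfs-ds01"],
  "glance_datastores": ["nfs-ds01"],
  "networking": "nsx",
  "authentication": "keystone-sql",
  "syslog_server": "loginsight.example.com:514"
}
```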
Last but not least, you configure a syslog server; for us that would be a VMware vRealize Log Insight server, or you can use whatever you're currently using for syslog. Then we get our summary screen, and we can scroll down and see the virtual machines that will be deployed. You can see from this description that, right off the bat, we have high availability built into the infrastructure from the beginning, without any special configuration file changes from you. We actually take all of the capabilities and configuration options and store them on our build server, which does the translation into the appropriate OpenStack settings, so that when the environment comes up it's already in an HA configuration. So we have two of each component, with the exception of the database: it's using a Galera cluster, which at a minimum needs a three-node deployment. So your management cluster of ESXi hosts will need at least three servers to service the Galera cluster, because these VMs have anti-affinity rules created automatically. Okay, so then it goes ahead and deploys the virtual machines for you automatically, and then we use some Ansible playbooks to do the configuration afterwards. At the end of it, we can see our instances running properly, our control plane is up and running, and we can continue using OpenStack just like normal. All right, I'll hand it back to Santosh. So we have a production-grade, highly available OpenStack deployed and ready to go that admins can now hand off to their developers, and developers can start consuming it. But for an admin, this is where it all begins. It's just deployed, and there is more to come. Typically, what you would expect is that developers start using the OpenStack cloud, and then the next thing the admin has to worry about is day-two operations. 
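The three-host requirement falls out of the anti-affinity constraint: each Galera node must land on a different ESXi host. A toy placement check (not VIO code, names invented) makes the arithmetic concrete:

```python
def place_with_anti_affinity(vms, hosts):
    """Assign each VM to a distinct host; fail if there aren't enough hosts."""
    if len(hosts) < len(vms):
        raise ValueError(f"need at least {len(vms)} hosts, have {len(hosts)}")
    # One VM per host, so no two anti-affine VMs share a failure domain
    return dict(zip(vms, hosts))

galera_vms = ["db01", "db02", "db03"]
placement = place_with_anti_affinity(galera_vms, ["esxi-a", "esxi-b", "esxi-c"])
print(placement)  # each DB node lands on its own ESXi host
```

With only two hosts the same call raises, which is exactly why a two-host management cluster cannot satisfy the Galera deployment.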
As developers start consuming it, the admin is going to run out of capacity, and he has to start worrying about: how do I add more capacity? How do I add more compute capacity so more developers can start spinning up VMs? Likewise, the admin should also be able to add and remove storage capacity as developers start creating new volumes or bigger VMs. So how does the admin go about doing all this? The next thing the admin has to worry about is maintenance. Once you deploy OpenStack and start consuming it, there are going to be bugs that need to be patched. How does the admin roll out a patch without bringing down the entire cloud or disrupting the existing workloads? And what about host maintenance? What happens if the admin has to patch a bug on the hypervisor? How does the admin do that without disrupting the workloads? These are some of the day-two operations the admin has to worry about once OpenStack is deployed and up and running. And once different kinds of workloads are being deployed on OpenStack and the underlying infrastructure, the admin would also want to make sure that certain types of workload get certain SLAs from the underlying storage. The admin may want high-IOPS storage for database kinds of workloads, and not so many IOPS for a web server or a different kind of workload. So how does an admin go about doing that? In the next few demos, we'll dig a little deeper into each of these workflows and see how VMware Integrated OpenStack provides really simple workflows for each of these operations. Over to Trevor. Okay, so back to our demonstration environment, and I'll jump down to our actual day-two operations type activities. Once you have your control plane up and running, it's a simple matter of adding additional compute capacity, adding more storage capacity and so on. 
And the way we do that is through a menu-driven interface, similar to what virtual machine administrators are used to. So let's say we want to add more compute capacity. We click on this Nova Compute section over here and say we want to add more compute capacity. I have my compute cluster two that I'm going to use. With the way that we leverage the vSphere drivers, we don't expose individual hypervisors; we expose clusters of compute capacity. That way you can leverage VMware technologies like DRS, HA, and vMotion on the back end, completely transparent to your cloud users. What this compute driver VM is doing is accepting all of the compute requests and then translating them to vCenter for it to go ahead and provision the workloads. Then I choose which storage I'm going to work with; I can choose any number of additional datastores that I want. Then I go ahead and click Finish, and there's a notification that my compute services will be temporarily disrupted as I restart my Nova services. As you can see here, my additional compute driver VM is launched, it has the service-ready status, and it's able to serve up compute requests. The same thing happens when we're adding more Nova storage or Glance image storage, so I'll step through that real quick. I say I want to add an additional datastore, I select which cluster I'm going to work with, which is the new one that we just deployed, and then I select my datastores and click Finish. Same warning about my Nova services having that temporary disruption as it's doing the configuration. Again, as we're making these changes, they're not being done in one-off fashion; they're actually being written back to the Ansible playbooks that we have on our build server. And the same thing can be done for Glance image storage expansion. But let's head on over to updates, which is something that Santosh talked about before. One second. 
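The "cluster as a single hypervisor" model described above can be sketched as a simple capacity roll-up: the driver reports the whole cluster's resources to Nova as one host, so per-ESXi placement stays invisible to cloud users. This is an illustrative toy, with made-up numbers, not the actual driver code:

```python
def cluster_capacity(hosts):
    """Roll up per-ESXi-host resources into one Nova hypervisor record."""
    return {
        "vcpus": sum(h["vcpus"] for h in hosts),
        "memory_mb": sum(h["memory_mb"] for h in hosts),
    }

# Two hypothetical hosts in the cluster Trevor is adding
cluster = [
    {"name": "esxi-97", "vcpus": 32, "memory_mb": 262144},
    {"name": "esxi-98", "vcpus": 32, "memory_mb": 262144},
]
print(cluster_capacity(cluster))  # Nova sees one 64-vCPU hypervisor
```

Because Nova only ever sees the aggregate, DRS is free to rebalance instances between esxi-97 and esxi-98 without OpenStack noticing.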
So I head back over to Updates, and I can say I have this update package that VMware provided for me to patch whatever bug came up in Nova or Neutron, for example. So I click on this green plus sign. Okay, the demo doesn't appear to be working properly, but we do have a video for it, so what I'm gonna do is actually jump over to that video right now. I'll expand that. So we can see here that we're gonna do some updates, and we're gonna click that green plus sign that I showed you before. I select the package that I want to present as an update, and once the file is uploaded, I can click that Apply button, and I'll get a confirmation that, hey, this might be disruptive, do you really wanna do this? This is actually going to load the package to the build server, and then the build server will go ahead and apply the patch to the relevant nodes. This is all being done automatically for you on the back end. And once it's done, you can actually revert the patch if you need to; if there's some unexpected behavior based on what you're doing, you have the Revert link right there. Okay, and I can actually confirm on the command line that the patch is applied. I just wanted to show one more maintenance function before I hand it back over to Santosh. One of the things that may be common for administrators is that you sometimes have to evacuate your cloud instances from your hypervisors, and on other platforms that may be a little bit tedious. One of the things that we do pretty well at VMware is that we have the capability to put our host into maintenance mode, which is pretty much an evacuation using the DRS vMotion capabilities. So we can see here we have four cloud instances: two of them are on hypervisor 97, and two of them are on hypervisor 98. It's kind of hard to see maybe from the back of the room, but trust me, the numbers really are true. So I'll go ahead and verify where my instances are actually running and on which hosts. 
Then I'll actually go to my compute cluster and say, hey, I need to put server 97 into maintenance mode. And again, transparent to the users, that vMotion happens; it can happen in bulk, it doesn't have to happen one by one. As you can see here, once the action is done, all my instances are now on server 98. I forget how many vMotions we can do simultaneously, but whatever VMware supports as the number of simultaneous vMotions, that's fully available to VIO users and administrators. Okay, so I'll hand it back over to Santosh. Thanks Trevor. Yeah. So now the admin has the capability to perform some basic day-two operations: capacity management, adding and removing capacity, patching OpenStack, and patching or upgrading the underlying infrastructure using host maintenance mode. Another powerful thing that admins can do with host maintenance mode is upgrade the underlying hypervisors from one version to another. Say, for example, the admin is running vSphere 5.5 and wants to upgrade to 6.0. They can use the... Sorry to interrupt you, but also if you have your hypervisors and you want to patch Heartbleed, for example. Exactly. That's the kind of usage for the maintenance mode capabilities. So what the admin can do is use a tool called vSphere Update Manager, which puts the hypervisors into maintenance mode on a rolling basis, so that the workloads deployed on top of those hypervisors are not disrupted at all. It makes sure that the workloads are moved to a hypervisor that's up and running, so that it can bring some of the other hypervisors down, upgrade them, and move the workloads back, and the workloads see absolutely no disruption. So it's a really powerful tool for admins to do the upgrade and patching process on the underlying infrastructure as well. 
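The rolling maintenance-mode flow just described can be modeled in a few lines: evacuate one host at a time, patch it while it's empty, then move on. This is a toy simulation (not VMware code; host and VM names are invented) to show why workloads are never disrupted:

```python
def rolling_maintenance(placement, patch):
    """placement: {host: [vms]}. Patch each host after migrating its VMs away."""
    hosts = list(placement)
    for host in hosts:
        others = [h for h in hosts if h != host]
        # "vMotion" every VM off the host entering maintenance mode
        for vm in list(placement[host]):
            placement[host].remove(vm)
            placement[others[0]].append(vm)
        patch(host)  # host is now empty: safe to upgrade or patch it
    return placement

placement = {"esxi-97": ["vm1", "vm2"], "esxi-98": ["vm3", "vm4"]}
patched = []
rolling_maintenance(placement, patched.append)
print(patched)  # every host gets patched, and no VM ever sits on a host being patched
```

At every step the set of running VMs is unchanged; only their hosts move, which is the property the speakers are relying on.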
So the admin has these workflows to do the day-two operations, and as developers start consuming the cloud, the admin may want to tie a certain type of workload to a certain type of storage underneath. For example, the admin may have SSD-based storage and want to make sure that all database or high-IOPS workloads are placed on it. There can be other types of storage, such as traditional SAN, NAS or NFS storage, which the admin may want to tie to workloads that do not demand such high IOPS. On a vSphere platform, this is made really simple using storage policies. When the admin has different types of underlying storage, the admin goes ahead and tags each datastore with a unique tag that identifies that type of storage. For example, a Virtual SAN datastore backed by SSDs can be tagged as gold, the regular SAN as silver, and NFS as bronze. Next, the admin creates storage policies in vCenter. When the admin creates a policy called gold, any datastore tagged with the name gold will be pulled in by that policy, and when a virtual machine is created with that policy, the policy automatically makes sure the virtual machine is placed on the right datastore, the one tagged gold. How all this ties into OpenStack is that when developers start deploying workloads through OpenStack, they can use these policies through Cinder volume types to create volumes on specific underlying datastores with different IOPS capabilities or different characteristics. For example, in Cinder the admin would create equivalent volume types and tie them to the underlying storage policies using extra specs. 
So the admin will create a volume type, call it the gold volume type, and using an extra spec tie this gold volume type to the underlying gold storage policy. After that, every time a volume is created, the developer specifies the volume type, and the underlying platform makes sure that the volume is placed on the right underlying storage. What this gives the admin is a really nice way to make sure that different workloads with different IOPS or storage requirements are placed on the right type of storage underneath. And once the admin has the right tools for day-two operations, the next thing to worry about is troubleshooting and monitoring. What happens if, say, Nova is not able to spin up a VM? What happens if Nova spins up a VM but it's not able to get an IP address? The admin needs the right kind of tools to troubleshoot and monitor what's going on inside that OpenStack deployment. One of the tools we have is vRealize Operations, which is a monitoring tool for the vSphere infrastructure. We've added a management pack that's specific to OpenStack, which gives the admin complete visibility all the way from the OpenStack layer at the top down to the infrastructure layer, and in some cases down to the hardware underneath. For example, there is a tenant-specific dashboard in vRealize Operations where the admin can see which virtual machines the tenant has deployed, which hypervisor the tenant's virtual machines are sitting on, and which datastore the tenant's virtual machine disks are placed on. So he can track the workloads and track the usage all the way from OpenStack down to the infrastructure layer. 
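The volume-type-to-storage-policy mapping can be sketched as the Cinder CLI calls an admin would run. The `vmware:storage_profile` key is the VMDK driver's real extra-spec for naming a vCenter storage policy; the policy and type names are examples. The `run()` helper here just prints the commands so the sketch works without a live cloud; against a real deployment you would execute them directly:

```shell
# Print-only stand-in for running the commands against a real cloud
run() { printf '+ %s\n' "$*"; }

# Create a volume type and bind it to the vCenter "Gold" storage policy
run cinder type-create gold
run cinder type-key gold set vmware:storage_profile=Gold

# Developers then pick the type, and placement follows the policy
run cinder create --volume-type gold --display-name db-volume 100
```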
The management pack also gives the admin the ability to set alarms and triggers. For example, if one of the OpenStack services goes down, the management pack sends an alert to the admin saying, hey, your Nova service went down, you need to go and bring it back up. And it also gives some remediation procedures to bring services back up. So that's the vRealize Operations management pack for OpenStack, which lets admins monitor their OpenStack cloud all the way from the OpenStack services at the top down to the infrastructure layer at the bottom. Another thing with OpenStack is that it's really generous in terms of generating logs. If something goes wrong, if Nova is not able to spin up a VM, there are about 10 or 15 log files that one has to go look at to troubleshoot what's going on. At a minimum 10 or 15. Or more, depending on how many instances you've deployed for HA. So we have a product called vRealize Log Insight, which is basically a syslog aggregator, and we've added a content pack for Log Insight that's specific to OpenStack. What this gives you is custom dashboards in Log Insight that can pull out specific log messages from the OpenStack logs. You direct all the OpenStack logs to a syslog server, in this case Log Insight, and the content pack picks log messages out of the logs coming in from OpenStack and populates the dashboards with specific events. For example, if something went wrong and there is a traceback in your Nova service, it's going to pick that up and show it in a nice dashboard, so you know what's going on inside your Nova service. There are also dashboards that show API response times over a period of time for your different services: how your API response time has been changing for each service, for Nova, Neutron, Cinder, and so on. And besides that, it also provides a powerful search tool. 
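A minimal version of what such a content pack does with OpenStack's many log files can be sketched in a few lines: pull out errors and bucket them by service. The sample log lines are made up, but follow the usual oslo.log format:

```python
import re
from collections import Counter

# date time pid LEVEL module ...
LINE = re.compile(r"\S+ \S+ \d+ (?P<level>\w+) (?P<module>[\w.]+)")

def errors_by_service(lines):
    """Count ERROR/CRITICAL log lines per top-level service."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m.group("level") in ("ERROR", "CRITICAL"):
            # e.g. nova.compute.manager -> nova
            counts[m.group("module").split(".")[0]] += 1
    return counts

logs = [
    "2015-09-01 12:00:01.123 9001 INFO nova.compute.manager [-] instance spawned",
    "2015-09-01 12:00:02.456 9002 ERROR nova.compute.manager [-] Traceback ...",
    "2015-09-01 12:00:03.789 9101 ERROR cinder.volume.manager [-] DB connection lost",
]
print(errors_by_service(logs))
```

A dashboard is essentially this kind of extraction and aggregation applied continuously to the incoming syslog stream.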
You can go in and search your logs, and a lot of times you can use that to tie log messages from your OpenStack layer to log messages from your underlying vSphere layer, to correlate between the different events and find out what exactly is going on and what triggered the problem you encountered. Another good feature is that you can also create your own dashboards, in addition to the dashboards that come with the content pack, and you can do that without writing a single line of code. You use your mouse to select what properties or search terms you want to be monitoring, and you can create a dashboard out of that without writing a single line of code. So it provides a lot of different tools to search and sift through your logs and identify what's going wrong in your OpenStack control plane. And lastly, we have integration with a product called vRealize Business. What this does is help you do things like chargeback and showback. Say, for example, you have tenants consuming your OpenStack cloud and you want to find out what cost can be associated with each tenant: how many VMs each tenant is consuming, and what that's going to cost you in your cloud. It also gives you tools that help you predict, depending on different usage models, how your cost is going to change over a period of time; it gives you the ability to do predictive analysis on your cost going forward. And another thing you can do with this tool is cloud comparison: you have your tenant virtual machines running on OpenStack, and you know what they're costing on your OpenStack cloud. You can take that and compare it with how much it's going to cost if the same kinds of workloads are deployed on different clouds. Say, for example, if I deployed this workload on AWS, how much is it going to cost me? 
If I deploy it on GCE or Azure, how much is it going to cost me? So it gives you a lot of nice tools to do cost analysis, chargeback and showback on your tenant workloads. With that, back to Trevor to show some of these operational tools and how they can be used. Okay, so we're just going to step through those three tools that Santosh mentioned real quick. Stepping over to vRealize Operations, we can see that, based on the OpenStack management pack for the solution, we have some dashboards that are automatically populated. First and foremost, tenants: what kind of workloads are they running? How many instances do they have? You have a heat map there just to show if there are any danger zones because of quota usage and so on. Then you can take a look at the underlying storage. For a VMware installation, that would be the datastores that the instances reside on: how much capacity they're at, what's their health, whether there are any statuses we need to be aware of. The same thing can be seen for the network infrastructure: all of your NSX components, your routers, your transport zones and the other things that make up the underlying network topology for OpenStack. And also your compute clusters: how healthy are they? How healthy is the compute driver that is providing those instructions to the vSphere compute clusters? Then we scroll over to the OpenStack controller view, and we see that we have a little bit of an issue. Our storage services appear to have some kind of red status, and we can actually hover over the various components and see what is going on and why it's having such a low health score. Then we can go over to our supplemental tool, which is Log Insight, and look at our errors. We scroll down and then click on our Cinder errors, because that's associated with the storage service. 
And then we expand on that to see that there's something wrong with the database connection between the Cinder service and the database, the MariaDB Galera cluster. So then we can go back and bring up the console for our build server, and we can use that to access our storage nodes. We can see that the Cinder volume service for some reason had stopped. Then we go ahead and restart it, go back to our views, and after the next polling interval we'll see that the health has been remediated and it's back to green, just like we expect it to be. And then moving on to the vRealize Business solution, this is where we can do our cost comparisons, as Santosh mentioned. We can see our overall cost per month and a breakdown based on whether it's licensing, maintenance, storage, network or compute, and we get these price values based on best practices collected from our customers as well as analysts as to what the cost of running a cloud is. We can break that down into a per-VM cost. Then we can go down a little bit further into consumption analysis, and this allows us to see, for example, what's happening on a per-tenant basis. In my small development environment I just have the admin tenant, but if I had multiple tenants, one for each department (engineering, finance, HR and so on), I could leverage that and have this on a per-tenant basis. Then I can expand and see which are my instances; I can actually see the cost per component that is being utilized by an instance. And if I do the cloud comparison, I can see what costs are associated with running my workload on-prem versus running on a public cloud provider. Now, this is all tuned by what your resource usage requirements are, what your SLAs with your internal users are, as well as things like uptime. 
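The per-VM showback arithmetic described here is, at its core, simple: spread the monthly cost buckets across the VMs and compare against a public-cloud price. This back-of-envelope sketch is not vRealize Business logic, and all the numbers are invented for illustration:

```python
# Hypothetical monthly cost buckets for the private cloud
monthly_costs = {"compute": 6000.0, "storage": 2500.0, "network": 900.0, "licensing": 1600.0}

def per_vm_cost(monthly_costs, vm_count):
    """Naive showback: total monthly cost divided evenly across VMs."""
    total = sum(monthly_costs.values())
    return round(total / vm_count, 2)

on_prem = per_vm_cost(monthly_costs, 50)
public_rate = 260.0  # hypothetical per-VM price quoted by a public provider

print(on_prem)
print("cheaper on-prem" if on_prem < public_rate else "cheaper in public cloud")
```

Real tools weight this by per-VM resource consumption and SLA tiers rather than dividing evenly, which is the tuning Trevor alludes to next.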
If you want the kind of uptime that you expect from a private cloud for your own internal usage, you're gonna have a much higher cost if you go outside, and that's to be expected, right? Going to a public cloud, they expect more ephemeral type workloads, but your users may not be ready for that kind of compute architecture yet. So based on the cost that you have for their SLAs, that's how you can do these comparisons. And I think that was it for the management and troubleshooting, yeah, okay. So we have all these tools for the admins that are gonna help the admin keep the lights on in an OpenStack cloud deployment. What does all this mean to a developer? One of the questions that I typically get asked is: is this real OpenStack? Because people see all these tools that are not readily available upstream, and it kind of makes them think that this is a different fork of OpenStack, or maybe this is just an API layer on top of vSphere, or maybe this is a different kind of OpenStack or something. So to reassure developers out there who just care about getting the same standard OpenStack APIs, regardless of all the tools that the admins have to keep the OpenStack cloud up and running: the OpenStack APIs that you get out of VIO, VMware Integrated OpenStack, are the exact same set of APIs that you would get from any other distro of OpenStack, or if you were to get OpenStack from upstream and run it yourself. It's gonna be the exact same APIs with VMware Integrated OpenStack. And it's not just the APIs; even the services that we run as part of VMware Integrated OpenStack are the exact same OpenStack services that you would get from any other flavor of OpenStack. And Santosh, to underscore that, our control plane is actually made of Ubuntu VMs using the same packages. 
The only difference is that we're using the vCenter driver, the VMDK driver for the storage, as well as the NSX driver, all of this being open sourced, and it can be used by any of the deployment partners. And if you were at the keynote yesterday, there were talks about DefCore. DefCore is an initiative that the Foundation has taken to standardize different OpenStack clouds, so that developers who consume OpenStack from different vendors and different public clouds have the same sort of experience and get the same APIs across different clouds. DefCore is the process that tests, verifies and standardizes different OpenStack clouds, and gives a certification mark saying this OpenStack vendor, this particular distribution of OpenStack, has been verified with DefCore and has the standard set of APIs put forth by the Foundation. One of the first distributions of OpenStack that went through this DefCore process and got verified was VMware Integrated OpenStack. So if you go to the OpenStack Marketplace website and click on VMware Integrated OpenStack, you'll see the green check box there that says our distribution of OpenStack has been verified with DefCore, which means that all the APIs and all the recommended tests that DefCore has put forward have been verified and certified on top of VMware Integrated OpenStack. So what all this means for developers is that, regardless of the underlying tools that the admins have, the developers get the same experience and the same set of APIs on top of VMware Integrated OpenStack that they would expect from any standard OpenStack distribution. 
Just to show this, one of the things that developers can do is take, say, a heat template that orchestrates a bunch of virtual machines, downloaded from any source online, deploy it on top of VMware Integrated OpenStack, and see that the experience they get is the same as they would expect on any other OpenStack cloud. Trevor, if you could show that. Sure, so I'm not a Python developer yet, or any language developer. I learned how to spell YAML last week. So I'm gonna actually show you two tools that I love dearly, the first one being Vagrant and the second one being Heat. Okay, these are my videos, and I'll go ahead and open the one that shows consuming OpenStack. And I didn't have much post-production here, so I'll jump around a little bit on this. So just to underscore that standard APIs are in use, I took one of the open source OpenStack plugins that are available to the community. So that's leveraging just standard Python, or probably Ruby, because Vagrant modules tend to be written in Ruby. But the standard OpenStack APIs that are available to any user. Okay, there's no black magic. You don't see anything that's VMware specific here, but despite all that, I'm gonna bring up an instance in our cloud using the Vagrant plugin that we have pictured here. So you can see here is a Vagrantfile. This should be familiar to some of you who have used it. All I'm going to do is provide my capabilities. I've loaded my openrc into memory, so all my environment variables I'll pull down dynamically. All right, so I go back to the command line and I do my vagrant up. I select that I'm gonna use the OpenStack provider, reminding myself that I have to source my openrc, otherwise you get some interesting errors from Vagrant. And I can see here that it's going through and provisioning the workloads. One second. So things such as the network ID, what flavor I'm gonna use, the image that I'm gonna be using. 
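[Editor's note: the Vagrantfile shown on screen isn't reproduced in the recording. A minimal sketch along the lines Trevor describes might look like the following, assuming the community vagrant-openstack-provider plugin; the flavor, image, and network names are placeholders for whatever exists in your cloud.]

```ruby
# Hypothetical Vagrantfile sketch for the community
# vagrant-openstack-provider plugin. Credentials are pulled from the
# sourced openrc environment variables; the flavor, image, and network
# names below are placeholders.
Vagrant.configure('2') do |config|
  config.vm.provider :openstack do |os|
    os.openstack_auth_url = ENV['OS_AUTH_URL']      # Keystone, from openrc
    os.username           = ENV['OS_USERNAME']
    os.password           = ENV['OS_PASSWORD']
    os.tenant_name        = ENV['OS_TENANT_NAME']
    os.flavor             = 'm1.small'              # standard Nova flavor
    os.image              = 'ubuntu-14.04'          # any image in Glance
    os.networks           = ['private-net']         # Neutron network name
  end
end
```

After sourcing openrc, something like `vagrant up --provider=openstack` would then provision the instance through the standard OpenStack APIs, exactly as in the demo.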
Again, our image formats are a little bit different. We use OVAs, VMDKs, and ISOs, but it's pretty much the standard experience. And then also the name of my network. So then I head back over to Horizon, and when I refresh the view, I can see my new instance has come up. No fuss, no muss. Using the same APIs that a developer would use directly. Again, I'm just learning about this thing called YAML. So then I head over to my heat template. So this is a little bit more complicated than what I just showed. The text is kind of small, and I apologize for that. But pretty much I'm doing a three-machine deployment, trying to simulate a multi-tier infrastructure deployment. So I have my web tier, my app tier, my database tier, and then I'm just using standard Heat conventions. There's nothing VMware specific in here, again. So that's just showing you my YAML file to prove that I'm not making up some fairy tales. And then I actually jump through, and you can see here on the web tier that I'm not just going to provision an instance, I'm also gonna provision a floating IP and then associate that floating IP with my instance. And then my app tiers and my database tiers, they're communicating on a private network, so their deployment configuration is a little bit simpler. So then I go ahead over here and I'm gonna do my heat stack-create. It's a little bit cut off, so I'll go ahead here. I'll let it run for a bit so that title screen can go away. Or I could just properly full-screen it, that would help. Here we go. So I do my heat stack-create, use my YAML file, and give it the name of the stack that I'm going to be creating in Heat. So I can see here that for my stack, multi-tier, the create is in progress, and after I refresh the screen, I see my new heat stack has begun building. I can see the various resources that have been deployed. 
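[Editor's note: the multi-tier template shown on screen isn't reproduced in the recording. A stripped-down HOT template with the same shape (a web server with a floating IP, plus app and database servers on the private network) might look roughly like this; all image, flavor, and network names are placeholders.]

```yaml
# Hypothetical stripped-down version of the multi-tier stack.
# Image, flavor, and network names are placeholders.
heat_template_version: 2013-05-23

resources:
  web_port:                          # explicit port so the floating IP
    type: OS::Neutron::Port          # has something to attach to
    properties:
      network: private-net

  web_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04
      flavor: m1.small
      networks:
        - port: { get_resource: web_port }

  web_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: ext-net   # placeholder external network

  web_ip_assoc:                      # associate the floating IP with
    type: OS::Neutron::FloatingIPAssociation   # the web tier's port
    properties:
      floatingip_id: { get_resource: web_floating_ip }
      port_id: { get_resource: web_port }

  app_server:                        # app and db tiers sit on the
    type: OS::Nova::Server           # private network only, so their
    properties:                      # definitions are simpler
      image: ubuntu-14.04
      flavor: m1.small
      networks:
        - network: private-net

  db_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04
      flavor: m1.small
      networks:
        - network: private-net
```

Launching it with the Icehouse-era CLI would be along the lines of `heat stack-create -f multi-tier.yaml multi-tier`, after which the stack shows as CREATE_IN_PROGRESS, matching what the demo walks through.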
And then I can see my instances that were deployed: my database tier instance, my application tier instance, and then my web tier instance. So again, just providing that uniform experience for developers. And we have to wrap up, so I know Santosh has some things he wants to promote, and then we'll get to question and answer real quick. That's pretty much all the demos that we wanted to show you, and a quick note: if you want to find out more about our product, it's available for free download on our website, and if you want to look at some of the other features, you can swing by our booth there in the marketplace and we'll be able to give you a one-on-one demo of pretty much all the features that we have. And some of the other resources that you can find: we have hands-on labs where you can go and spin up your own instance of a tiny OpenStack cloud and play around with it. We also have a training that's gonna be up and running sometime next week. And if you have questions, we have an OpenStack community where you can post questions and discuss issues that you're facing, and we'll have folks from our company, OpenStack experts, also looking at the forums and answering your questions. That's pretty much all I had. I think we have a couple minutes for questions. It is ready for production. In fact, we have a few customers in the process of putting it through production. If you were here in the previous session where we had some of our customers talking about their experiences, they are rolling out into production, and it is ready for production. Sorry, Santosh. So there's two aspects to that, right? You're talking about the underlying infrastructure, and we're just using vSphere, right? There's nothing special about the VMware configuration that we're using, just standard VMware best practices on how you make your infrastructure highly available. 
We're able to leverage that with VMware Integrated OpenStack because the control plane is just provisioning workloads to your existing highly available, fully functional vSphere environment. And as you can see, the control plane has multiple instances, everything is load balanced in an HA configuration. So right off the bat, it's ready for production. Any other, oh, okay. I'll take one more question and then you can meet with us afterwards individually. Oh, he was next? Yeah. Okay, so I'll be able to answer part of it, and part of it is future. So my product manager might hit me over the head if I say something wrong. So just hit me. All right, Santosh, all right. So we're using Icehouse right now and we're going to, which release are we going to next? Kilo. We're going to Kilo. So it's a good thing that he actually said it first instead of me. And then, sorry, what was the second question? Oh, other hypervisors. So with VMware Integrated OpenStack, we know VMware very well. So when it comes to doing a multi-hypervisor strategy, we partner with the various distribution vendors out there. All our drivers are open source, again, so they're using the same goodness that we are. So we have a gentleman from Mirantis in the room. We partner with Mirantis. We partner with Red Hat. We partner with Canonical. Whoever has some VMware smarts, we can actually work together with them, even though there may be some co-opetition with some of the people I just mentioned. So yeah, we work together with the distribution vendors based on what the customers want. We'll have to take it afterwards because they're kicking us out of the room. But now comes- So we're going to stick around. Yeah, we're going to stick around because we have to give you the hoodies, all right? So if we can have people who are large size on this side, people who are extra large on this side, we'll gladly give them out.