All right, so good afternoon, everyone. Thanks for stopping by the demo theater. For the next little bit, we're going to talk about running vSphere underneath OpenStack. I'm going to be doing, fingers crossed, a live demo from the United States all the way across the Pacific Ocean, if the latency is not too bad, of what we have released with OpenStack Havana for integrating your VMware vSphere compute and storage infrastructure into an OpenStack cloud. We'll also show off some things that we haven't released yet, just to give you an idea of the direction we're taking and the sorts of things we're looking at and working on. So we're going to start out by taking a look at the infrastructure I'm using here. I'm just going to log in. This is the vSphere Web Client, and I'll show you real quick what we have in the lab that we're using for this infrastructure. OK, there we go. So this is a fairly straightforward vSphere 5.5 environment. What we're showing doesn't necessarily require vSphere 5.5, but that's what we're using here. I'm going to unpin this to give it a little more room, and we'll go in and take a look at the infrastructure. All right, so we have a couple of clusters here, but we're only going to be looking at this one cluster, called the NSX Cloud Cluster. Once the screen refreshes, you'll see this cluster is made up of three hosts running ESXi 5.5. The cluster itself is enabled for vSphere HA and vSphere DRS, which are, of course, the features that provide virtual machine resiliency and redundancy in the event of a host failure, along with load balancing to prevent hotspots on your compute workloads. So as I mentioned, you can see here that this cluster is already enabled for vSphere DRS and vSphere HA.
If I take a look at one of the hosts here, you'll see it's a pretty straightforward 5.5 host. You can note that we've enabled the shell and SSH; we have those little banners there telling us those have been enabled. But I want to draw your attention right here to the datastores, and I'm going to show you some of the datastores that are attached. Now, because we're running vSphere 5.5, we can take advantage of something new that we announced back at VMworld, and that is vSAN, a distributed storage model where we take local storage out of all of your ESXi hosts and combine it into a distributed file system. You can see that in this environment we're actually running vSAN, so we have a vSAN datastore that is spread across all three of these hosts. And we're going to be deploying OpenStack instances and persistent volumes straight onto that vSAN datastore through a Cinder driver that was released with Havana. We have OpenStack Havana itself running in a VM. So here you can see a VM that's just running Ubuntu, with OpenStack installed inside it. We'll give that a second to refresh. There you go; it's a pretty straightforward VM configuration. I'll pop over to this terminal window; hopefully the text is big enough for you to see. Just to show you, this is actually a live system. I had it recorded just in case the network connection didn't cooperate, but this is live. So I can show you that inside the Nova configuration, right here, this last section, the [vmware] block, is where we do the configuration to tell Nova to deploy workloads onto vSphere. We provide the name of an integration bridge, which will be a port group on a vSphere distributed switch, and which cluster to use. New in the Havana release for this code is the ability to specify multiple clusters in this file.
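As a rough sketch of what that [vmware] section might look like (the host address, credentials, and names here are hypothetical, and option names are from the Havana-era driver, so check your release's documentation):

```ini
[DEFAULT]
# Tell Nova to use the vCenter driver rather than libvirt/KVM
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = 10.20.30.40              # vCenter Server address (hypothetical)
host_username = administrator      # stored in clear text, as noted in the demo
host_password = secret
cluster_name = NSX-Cloud-Cluster   # the cluster exposed as a Nova compute node
# Havana allows repeating cluster_name to expose multiple clusters:
# cluster_name = Cluster-2
# cluster_name = Cluster-3
datastore_regex = vsan.*           # place instances only on vSAN datastores
integration_bridge = br-int        # port group on the vSphere distributed switch
```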
So right now, we just have one cluster listed, and this one cluster appears as a Nova compute node to OpenStack. But we can actually put in multiple entries. We could say cluster_name equals Cluster-1, cluster_name equals Cluster-2, cluster_name equals Cluster-3, and expose multiple clusters up through to OpenStack. Then we have a username and password. You can see these are in clear text right now, so we're looking at how we can integrate with Keystone and other authentication services. Then you have your IP address, and then we have a regex, a datastore_regex, that lets us specify a regular expression for where instances should be placed when they're created. You can see we've created a regex here that says: if it's a vSAN datastore, go ahead and deploy onto it. That makes sure all of our instances actually get deployed onto vSAN on the hosts. And there are similar configuration blocks inside Cinder, by the way, showing that we're using the Cinder driver to place VMDKs, the persistent block volumes for Cinder, on here as well. So let's flip over to OpenStack. I'm just going to log in here, all right? It's a pretty straightforward environment. You can see I've got a few instances, actually just one. So one instance running. If I go down here, you'll see more detail about this particular instance: it's got an IP address, all that kind of jazz. I can go in and look at the details for this instance, and you can see it's got the name, the ID, all that kind of jazz. I want you to note the ID right there. It's 38, or is that 36B9. I can go over here, do a search, and you'll find that virtual machine. We use the UUID that is assigned inside OpenStack as the name for the virtual machine on the vSphere side, so they're easily correlated, all right?
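As an aside, the datastore_regex filtering we just looked at can be sketched in a few lines of Python. The datastore names below are made up for illustration, and Nova's actual matching logic may differ in detail:

```python
import re

# Hypothetical datastores visible to the hosts in the cluster
datastores = ["vsanDatastore", "local-esx1", "local-esx2", "nfs-archive"]

# A datastore_regex along the lines of the one in the demo config
pattern = re.compile(r"vsan.*", re.IGNORECASE)

# Only datastores whose names match the regex are candidates for placement
eligible = [ds for ds in datastores if pattern.match(ds)]
print(eligible)  # ['vsanDatastore']
```

Anything that doesn't match the expression, like the local and NFS datastores above, is simply never considered for instance placement.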
And then there's the VM, you can see, and here it's running. Now, what we're showing off today that we haven't released is integration into the vSphere Web Client. What you see in this Tags section is information we're taking from OpenStack and populating as tags inside the vCenter database. That means you can go and search for flavors, tenants, projects, all that kind of jazz, and find all the VMs associated with that particular tag. You'll also notice that down here in the OpenStack VM section, right here, we're pulling information out of OpenStack to show you the name of the instance, the name of the project, the name of the tenant, the network it's associated with, and the IP address it was assigned. All of that is being populated dynamically out of OpenStack. Just to show you an example, I can go up here and type demo, and I'll look for the tag demo. This will show you all the instances associated with the demo tenant, in this case just one, but we'll be spinning up some more as we move forward. So let me flip over here and actually log into the console, all right, just to show you that the console integration works. OK, here you're logged in live to the instance inside OpenStack through the OpenStack dashboard. You could also, of course, go through vCenter if you wanted to, using the standard console access, but you can do it right here using the noVNC proxy. Let's go ahead and spin up an instance. All right, we'll give it a name, we'll leave it as the tiny flavor, we're going to boot from image, we'll boot from this Ubuntu image, and we'll attach it to this particular network. Launch. We'll see a new instance spin up right here, and it goes through the usual process. If we go over to the vSphere client while this is happening, I can show you what's going on behind the scenes.
So right here, you'll see it creates a virtual machine. It's actually creating the virtual machine that represents the instance we just launched inside OpenStack. I'll refresh here, and it'll go through and reconfigure the virtual machine a couple of times; that's to customize the parameters of the virtual machine to match the flavor we assigned inside OpenStack. And then the final step will be to actually power on the virtual machine. So it's done with the reconfiguration, and if we refresh one more time, we should see it powering the virtual machine on. There it goes. So now the virtual machine is actually powered on. We'll click on the link to follow to the VM that was created. Here's the VM we just created when we launched the instance inside OpenStack. This is the corresponding VM, and you'll note that it has populated the information down here live; it shows the name, the flavor, all that stuff I just filled in earlier. This error right here, by the way, is just because we're running vCenter Operations Manager inside the environment. This is a brand-new object, so vC Ops hasn't had time yet to gather any statistics for it, so it says it doesn't exist. That's perfectly normal. If I go back up here to the demo search again and look at the demo tag, I now have two VMs, because we now have multiple VMs represented for this tenant. And as I showed you before with the other instance, I could go into the console section and log into this guy. Underneath, the virtual disks, the storage for the instance, are all being stored on vSAN, right? So if I flip back over to the vSphere Web Client and take a look at one of these two VMs, it shows right here that it is being stored on vSAN datastore 01. So we're actually using the distributed storage model that vSphere 5.5 provides, OK? And the networking is being provided by NSX.
So all the network connectivity is being managed through Neutron as well. In addition, we provide support for persistent volumes. Here's a persistent volume that's already attached to an instance. I could create an additional persistent volume just by going in here and giving it a name. We don't use volume types in our driver, so we leave that blank and just say Create Volume. And you'll note it pops up very, very quickly that I've created a 15 GB persistent volume, OK? Now, the reason that was so fast is that we actually didn't do anything right then; we just created a pointer to the persistent volume. Now I go in, say I want to edit the attachments, and actually attach it to an instance. Let's say I'm going to attach it to the instance I just created, and I give it a path. It'll go through and say that it's now attaching to the instance, right? When we flip over to the Web Client and look at the tasks, I'll show you what's happening behind the scenes. It creates what we call a shell VM, a virtual machine that is designed to hold the persistent volume. You can see it creates it in a folder called cinder-volumes and uses volume- plus the volume ID as the name, so you always know that's a VM created to hold a persistent volume. Then it'll go through, and you'll see it do a reconfigure of the other instance we created to actually attach that volume. So now if I click this link, we'll see that the VM which previously had only one disk now has two disks: the original 1 GB ephemeral disk and the 15 GB persistent disk, which is stored as a VMDK on the vSAN datastore. So again, we're continuing to leverage the distributed storage functionality. And of course I could go in here and say that I want to detach the volume. It will detach the volume, but it does not delete the underlying volume, so the data stored in that persistent volume remains, OK?
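To make that naming convention concrete, here's a small Python sketch of how you might derive the shell VM's name from a Cinder volume ID. The "volume-" prefix is what the demo shows; the volume UUID below is generated purely for illustration:

```python
import uuid


def shell_vm_name(volume_id: str) -> str:
    # The shell VM that holds a persistent volume is named after the
    # Cinder volume's UUID, prefixed with "volume-", and lives in the
    # cinder-volumes folder in the vSphere inventory.
    return f"volume-{volume_id}"


vol_id = str(uuid.uuid4())  # a hypothetical Cinder volume UUID
name = shell_vm_name(vol_id)
print(name)  # searching vCenter for this name finds the backing shell VM
```

This is the same correlation trick as with instances: the OpenStack-side UUID is the one key you can join on from the vSphere side.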
If I flip back over to the Web Client and we look at the tasks again, you'll see a reconfigure where it detaches the volume from the VM, but it doesn't delete that volume. So that data is still there; you could reattach that volume to another instance and use it for whatever you wanted, OK? It's not until I go through here and actually delete the volume that you will see it disappear from your underlying vSphere infrastructure. So it shows that it's deleting there. Now it's gone. If I go over here and refresh, you'll see a Delete Virtual Machine task, and that points back to the volume- virtual machine that was created to store your persistent volume. Now, for other tasks in here, looking back at instances: I showed you earlier how you can hit the console. You can also suspend instances, which will suspend the corresponding VM inside vSphere, and then resume them. And obviously we can terminate instances. So if we terminate this instance and say OK here, it'll say the termination is scheduled, it'll run for a minute, and you'll see it disappear. Over here in the Web Client we can refresh, and you'll see that it powers off the virtual machine, which it just completed doing, and then deletes that virtual machine, removing it from your storage and from your inventory, just as you would fully expect on this platform. And because we are running Neutron and NSX, we could go in here, create additional networks and additional subnets, and attach instances to them. All of that would be transparent; it would all work just as you expect it to, OK? You can look at your network topology just like you would expect, and it'll show you the VMs, that is, the instances, and the networks to which they're attached, and of course we could change any of that. All right, so that is the demo.
What I've just shown you is vSphere 5.5 running underneath OpenStack Havana. I showed you integration with NSX via Neutron; integration with vSAN, the new distributed storage model that vSphere 5.5 supports; full integration with Cinder for persistent storage volumes; and integration with Nova for scheduling instances onto your vSphere infrastructure, while at the same time being able to take advantage of features like vSphere DRS and vSphere HA, because we have that OpenStack information being pulled from OpenStack and deposited right in the vSphere client. So what you really get are two views. You've got an operator view, right? Back here, this is the operator looking at the infrastructure and managing it. You could evacuate hosts, put one into maintenance mode, leverage vMotion to move workloads off; all that stuff will work. And then here you've got more of a user view, where we're consuming infrastructure to create instances, all that kind of jazz. So, two different views, with features supported for each set of users, as you would fully expect. Any questions before we wrap up and let the next group get up? Yes, right here. I'm sorry, ask that again. So the question was about bare-metal servers. This showcases integration with vSphere; there's no support in here for bare metal. We're just for vSphere, though you can run this hypervisor in an OpenStack cloud alongside Xen, KVM, something like that. Yes. So the question was, can you import existing VMs? There's no facility for doing that right now. It's something we're investigating, and if that's something you feel would be really important as a potential OpenStack integration, just let us know so that we can prioritize it with our developers and say, yes, this is something customers are really asking about. But there's no feature for it right now. Great question. What other questions do you have? Yes, sir. Can you manage multiple instances of vSphere with OpenStack?
Absolutely. You saw in the terminal, and I can flip back over here again, this is the configuration. You can actually have multiple instances of a Nova compute node, and each instance of a Nova compute node could represent a cluster or a group of clusters. That cluster or group of clusters could be behind a single vCenter Server or multiple vCenter Servers, so you can scale that however you need. You could have one Nova compute instance that represents three clusters, and a separate one that represents one cluster. The idea is that maybe you're going to embed some metadata, maybe use availability zones or host aggregates or something of that nature, to differentiate how you want to deploy instances onto those. I'm sorry, ask that question again. So there's a vCenter plugin that you configure to tell it, this is the API endpoint for Keystone, and this is how you log in. It logs in and gets the information to pull back in and say, here's the tenant, here's the project, all that kind of jazz. All right, it looks like my time is up, so I thank you all very much. I'll hang around here for a bit; if you have any additional questions, I'd love to talk to you. Thank you.