Hi, I'm Doug Baer. I work on the VMware Hands-on Labs team. We present the Hands-on Labs for the VMworld events, which are our conferences, as well as the online portal, where we make the labs from the shows available 24/7, 365 days a year, for free, to anybody who wants to take them. We basically have 40 minutes here, and I didn't think that was enough time to get everybody set up, logged in, and into a lab. So I wanted to scratch the surface and show you what we have available online that you can just sign up for and take for free, any time you want; they're available all the time. If you didn't get one of the URL cards, we can get you one, but it's labs.hol.vmware.com/hol. It asks you for your email address and your name, you answer a few security questions, and then you can get logged in and take a lab.

So basically what you have is a lab portal, and let me show you what that looks like here. It's going to be a little sluggish over the Wi-Fi. It gives you a list of all of the labs in our catalog that you can deploy and take. These are live running environments, not simulations; they are actual deployed environments. So when you take the OpenStack lab, we actually have some virtual ESXi hosts with our VOVA appliance, which is our non-production test deployment of OpenStack in a virtual appliance. If you want to get an idea of what OpenStack looks like running with vSphere on the back end, it's a really good way to get started. So you've got a catalog of labs you can browse here in the portal, and if you're looking for anything in particular, you can search the catalog. You can search for something like OpenStack and it'll show you the OpenStack lab, and you click a button.
You can see I'm already enrolled in this lab, but basically this will start up the lab environment and you can go ahead and take the lab. The labs are typically updated twice a year, for VMworld Europe and for VMworld US, and it's possible we'll get incremental updates during the year if we have product releases. So I'm going to resume the lab here. The way this works is you'll have a console on the left side of the screen and a manual on the right side; you do have an option to flip them if you're left-handed or prefer it that way. Usually you'll see this screen while the lab is being deployed in the background and you're being connected to it. These labs run out of Amsterdam, and out of Santa Clara, California and Wenatchee, Washington in the United States. If you feel the need to have a deeper discussion on any of these topics, there are a whole bunch of VMware folks outside you can ask, I can take some of the questions, and we do have a booth.

The main part that I really like about this is the lab environment itself. Like I said, it's a full running environment, so you can kind of do whatever you want; you can completely ignore our guidance in the manual and just log in and play around. But if you want to know what the environment was built to showcase, we have a manual over here that you can pop out and go through. There's a table of contents you can break out to see exactly what is covered in the lab. So if you want to know how to use OpenStack networking with NSX, there are different sections you can jump to. I'll start here: say you want to look at the vCenter Web Client if you've never used it before. You click on that and it jumps to that portion of the manual so you can take a look at it. Generally we have step-by-step instructions that show you how to do something, and usually there's some description of why you would want to do that, or the background of the feature we're trying to showcase.
So for right now, I've actually got this lab set up and logged in. This is Horizon, if you're familiar with OpenStack; how many people are OpenStack users? So you know this interface fairly well, or maybe you consume it through the APIs and the CLIs. Basically, on the back end we're running VMware stuff, but on the front end it looks just like anything else. If you want to start up an instance, you come to Instances, you launch an instance, we give it a name, we'll boot this guy from an image, and we'll drop it on our test network. So from a deployment perspective, it's Horizon. From a vSphere perspective, you'll see it starts deploying: if you're used to managing virtual machines in vSphere, that instance deploy is going to show up as a virtual machine in the VMware environment. So if you log into your Web Client and go to the resource tree, you'll see we've got an instance; we actually have a couple of instances running.

If we go back here, we can look at the progress: it says we've got an instance that's up and running. If we click on the instance name in Horizon, we can see here's an ID. That ID maps to the virtual machine name in vSphere, which seems like kind of a convoluted way to do things. But we've actually got some integration with our Web Client, so metadata gets pushed into vSphere and you can search by the name of the instance; I'll search for Paris. The VM gets tagged with the name of the instance, so now I can manage that instance here without having to go figure out what the secret decoder ring is. And there's actually more information that gets surfaced in here. We go to the Summary; I apologize, my screen's terribly small for this, but you'll see down here, once the tags populate, that we push in information like the tenant, the user, the virtual machine name, and the flavor that got deployed, so you can basically manage those as sets of virtual machines.
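The ID-to-VM mapping described above can be sketched as a small lookup. This is a minimal illustrative model, assuming the vSphere VM is named after the OpenStack instance UUID and carries the pushed-in metadata as tags; the function names and data shapes here are hypothetical, not the actual driver or Web Client code.

```python
# Toy model of the Horizon-to-vSphere mapping: the VM name in vSphere is the
# OpenStack instance UUID, and tags carry the human-readable metadata
# (instance name, tenant, user, flavor) pushed into the Web Client.

def find_vsphere_vm(instance_uuid, vsphere_vms):
    """Look a VM up by its name, which is the OpenStack instance UUID."""
    return vsphere_vms.get(instance_uuid)

def search_by_tag(tag_value, vsphere_vms):
    """With the metadata integration, search by instance name instead."""
    return [vm for vm in vsphere_vms.values()
            if tag_value in vm["tags"].values()]

# A toy inventory: one VM whose name is the instance UUID.
vms = {
    "a1b2c3d4-0000-1111-2222-333344445555": {
        "name": "a1b2c3d4-0000-1111-2222-333344445555",
        "tags": {"instance_name": "Paris", "tenant": "demo",
                 "user": "admin", "flavor": "m1.small"},
    }
}

vm = find_vsphere_vm("a1b2c3d4-0000-1111-2222-333344445555", vms)
hits = search_by_tag("Paris", vms)   # no decoder ring needed
```

The point of the second function is the one made in the talk: once the instance name is surfaced as a tag, the admin never has to translate UUIDs by hand.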
As you can see down here, I'll drag this over: I've got information about that OpenStack instance pushed into the vSphere client, so you can manage it from this side as well. From a Cinder perspective, we can also create volumes, as you would expect. I create a volume here, give it a name, give it a size, and we'll create a persistent volume. Now, we get a lot of questions at the booth about why you would want to run vSphere underneath OpenStack. Usually that's driven because your developers want to consume resources using the OpenStack APIs, but the IT department isn't comfortable or familiar with KVM, so they want to leverage their existing investment in VMware training and whatever VMware experience they have in the organization. Also, you get our Distributed Resource Scheduler and High Availability services as part of using vSphere underneath OpenStack. So if a host goes down, we can resurrect the instances on a surviving host in the cluster, and the Distributed Resource Scheduler can dynamically rebalance the instances across the hosts in the cluster, kind of a defrag for your instances as you're going along and running your OpenStack.

So once the volume gets created, we can attach it just like you would expect, to my instance. It's all kinds of fun with the screen resolution, running a nested virtual machine in a web page against a projector. You see it's attaching as you would expect, and if you come back out here to the vSphere Web Client, which is not confusing at all, jumping all around, I apologize, you'll see that there will be a second hard disk attached to this virtual machine. Let me make sure that's finished; it doesn't want to refresh for me. Okay, well, it'll eventually refresh. I'm sorry? It's possible, if I had two instances that I named the same with different tenants. Oh, you're correct. Thank you. Back here.
I think I have two instances deployed with the name Paris. The first one, this one? Okay, yes, there we go. Thank you. So in this case, you'll see I've got a two-gig volume attached to this virtual machine. One of the things that we do is live migration between hosts. Because we present a cluster up to OpenStack as a compute resource, we're able to move instances, or virtual machines, around within that cluster without disrupting anything from an OpenStack perspective. So vMotion actually works: you can take a virtual machine and migrate it. We see the correct virtual machine now is running on host one here; I don't know if you can read it in the back of the room. We can take this virtual machine and migrate it to the other host. Say we had to take host one down for maintenance or upgrades, or some task that would require it to go down; maybe we need to add memory. So we change the host. This is something the vSphere admin can do on the back end, someone managing the infrastructure, without necessarily needing to do anything from an OpenStack perspective. I'm going to keep it in the compute cluster, pick the second host, and say next. We'll perform a default vMotion, which will reserve the resources on the target host and then move the virtual machine. We can monitor the job from here and see that it's migrating the virtual machine right now.

While it's migrating, we can still go in here and prove that the virtual machine isn't going to die while we're trying to use it. I open up the instance... the nested console decided to jump. You can see in this case I'm attached, and it sees that new volume we attached. It doesn't have a valid partition table because it's a brand-new volume, but it's available. If I come back here, it's about 50% migrated, so it's still in progress. Yes? Sure, the Volumes page here in Horizon. Yes.
So in this case, the way it's implemented is it's a VMDK on a VMware datastore. One of the benefits of using VMware as your hypervisor is that any storage certified for use with VMware hypervisors can be presented up to instances in OpenStack, whether that's local storage or NFS or iSCSI or Fibre Channel, anything; we can even use vSAN datastores. Including vSAN, yes. Storage nodes? You mean like Swift storage nodes, or... Cinder. So in this case, the process is there, and Cinder needs something in the back: you need Cinder with a driver and something behind it. Here, Cinder is using the VMDK driver, which is essentially talking to vCenter and saying, hey, create a volume of this size, and we create it as a VMDK. But then it will be directly attached? Correct. Let me show you that virtual machine so you can see, once it wakes up here; you can see it's moved to the second host. But the volume isn't attached to anything yet; you can create a volume on its own. So what we actually do is create a shadow VM. It's just a non-powered-on VM, and we attach the disk to that temporary VM, which is just hidden. Then, when you attach the volume to something, we move it over and attach it to the instance. So you can see there are actually two VMs over here called "volume" and then an ID; that's what we call a shadow VM. There's no concept of an unbound VMDK in our API, so we create a VM that holds the disk while it's just kind of hanging around, so we have a handle to that disk. Right. So the VMDKs will exist on a VMware datastore, and the shadow VM is our way to get to that disk, and when we do the attachment through Horizon, we'll attach that disk to the instance VM. It's just a placeholder, because currently the disk can't be a first-order object in our API, which they are going to fix sometime. But you need a shadow VM for each volume? Currently, we create one for each volume. We are trying to do some optimizations to get around that, but for now we create a shadow VM per volume, and that's what we do.
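The shadow-VM idea described in that exchange can be modeled in a few lines. This is a hedged sketch, not vCenter API code: the classes and names are hypothetical, but it captures the stated design, that an unattached Cinder volume's VMDK is held by a powered-off placeholder VM until it is attached to a real instance.

```python
# Toy model of the "shadow VM" design: a VMDK can't exist unbound in the
# API, so an unattached volume is parked on a hidden, never-powered-on VM.

class VM:
    def __init__(self, name, powered_on=False):
        self.name = name
        self.powered_on = powered_on
        self.disks = []

def create_volume(volume_id, size_gb, inventory):
    """Creating a volume creates a shadow VM that holds the VMDK."""
    shadow = VM(f"volume-{volume_id}")          # stays powered off
    shadow.disks.append({"id": volume_id, "size_gb": size_gb})
    inventory.append(shadow)
    return shadow

def attach_volume(volume_id, instance, inventory):
    """Attaching moves the disk from the shadow VM to the instance VM."""
    for vm in inventory:
        for disk in list(vm.disks):
            if disk["id"] == volume_id and vm is not instance:
                vm.disks.remove(disk)
                instance.disks.append(disk)
                return disk
    return None

inventory = []
paris = VM("Paris", powered_on=True)
inventory.append(paris)
shadow = create_volume("vol-1", 2, inventory)   # "volume-<id>" VM appears
attach_volume("vol-1", paris, inventory)        # disk now on the instance VM
```

After the attach, the shadow VM is empty, which mirrors what you see in the vCenter inventory during the demo: the `volume-<id>` objects are just handles.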
So it's not actually consuming resources, but does it get powered on? Basically, the VM we create has one virtual CPU and 128 megs of RAM from an allocation perspective, but it never gets powered on unless some admin goes in and decides to power it on. At that point there's no operating system installed, so it doesn't really do anything except try to PXE-boot the first time, I guess. So, you were in the previous session? Yeah, so there are policies to guide the placement. If you want to create tiers of storage, like this is gold, this is silver, expensive storage, cheap storage, you can do that using storage profiles. But say you don't define any of those volume types: when you're allocating the storage to be used by Cinder at install time, we have a bunch of datastores it can go to, and we'll pick the datastore that is most beneficial for that particular VM. When you first create the volume, we don't actually create anything; we just know that we can create a 2G or 20G volume. Then, when you actually create the VM and attach it, we'll create the actual disk in the right spot. So you can configure a set of datastores to use, but we use internal algorithms to figure out where to place it, because over its life it keeps moving as well: you detach it from here, attach it to different VMs. There isn't much point in controlling the placement of the Cinder volume directly if it belongs to the VM in the end, but higher-level controls can be applied to the storage as well. Yeah, that's the service-level offering you can do with a storage profile. So how is that exposed to you, through volume types? I don't know if Horizon supports volume types, but you can use volume types for Cinder. So in the volume type you would say "expensive storage," and it will go and create it on your expensive storage rather than cheap storage.
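The lazy-placement behavior described there can be sketched as follows. This is an illustrative model under stated assumptions, not the real Cinder VMDK driver: creating a volume only records the requested size, and the backing disk is placed on a datastore at attach time by some internal algorithm, here simplified to "most free space."

```python
# Sketch of deferred volume placement: nothing is allocated at create time;
# the datastore is chosen only when the volume is attached.

datastores = {"ds-local": 50, "ds-nfs": 200, "ds-iscsi": 120}  # free GB
volumes = {}

def create_volume(vol_id, size_gb):
    # We just remember the size; no disk exists yet.
    volumes[vol_id] = {"size_gb": size_gb, "datastore": None}

def attach_volume(vol_id):
    vol = volumes[vol_id]
    # Placement decision deferred until now; pick the datastore with the
    # most free space (a stand-in for the real internal algorithm).
    best = max(datastores, key=datastores.get)
    datastores[best] -= vol["size_gb"]
    vol["datastore"] = best
    return best

create_volume("vol-1", 20)
placed_before = volumes["vol-1"]["datastore"]   # still None: created, not placed
chosen = attach_volume("vol-1")                 # disk materializes on a datastore
```

The design choice this illustrates is the one from the talk: since a volume can be detached and re-attached to different VMs over its life, pinning its location at create time would have little meaning.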
Yeah, so we don't have any types defined here, but you could pick one from this drop-down list if you had it configured, and you would say gold, silver, bronze, or whatever. And you can guide the root disk as well: Nova also takes the SPBM input, the storage policy. So if you've created tiers of storage, expensive, cheap, whatever, you can pass that to Nova as well; it uses the same trick. You pass it in the flavor, I think: the flavor has an attribute where you say "storage policy: cheap," and then it will go and create on the cheap storage. It's the same mechanism for Nova as well. I think the work itself is still pending review in the community, so it's still outstanding; if you search for storage profiles or something, you'll find it in Nova. Yeah, I don't know that that's implemented in the lab at this point, but you could probably go in and configure it and see what happens. I mean, the nice thing about these labs is that if you break something, you just end the lab and start a new one, and you get a fresh copy. It's a great place for messing around.

We will document what not to do from the vCenter side. Some things are fine: if you change the name of a VM, that's probably okay, nobody cares about the name of the VM. But if you resize it, then we have a problem, because Nova will not know that you just bumped this one from 2 vCPUs to 4 vCPUs. You can migrate it within a cluster, no problem; if you migrate it across clusters, that's a problem. So we will document this, but increasingly we will support these operations through the OpenStack APIs, so that you don't have to go to two different places or keep this map in your mind. Yeah, it's interesting.
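The tiering idea from that Q&A can be boiled down to a tiny lookup. This is a hypothetical sketch, not the SPBM implementation: a volume type name like "gold" selects the set of datastores carrying that storage profile, and placement is restricted to those.

```python
# Toy model of volume types mapped to storage tiers: the type name selects
# which datastores (profile members) are eligible for placement.
# All profile and datastore names here are made up for illustration.

profiles = {
    "gold":   ["ssd-ds-01", "ssd-ds-02"],   # expensive storage
    "bronze": ["sata-ds-01"],               # cheap storage
}

def place_volume(volume_type, size_gb):
    """Return a (datastore, size) placement restricted to the requested tier."""
    candidates = profiles.get(volume_type)
    if not candidates:
        raise ValueError(f"no storage profile for type {volume_type!r}")
    return candidates[0], size_gb

ds, size = place_volume("gold", 10)   # lands on the expensive tier
```

The same mapping is what the talk says Nova can reuse for root disks via a flavor attribute: the flavor names a policy, the policy names a tier.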
I was in a session with storage vendors, and they were talking about similar issues with storage arrays: things you can do that Cinder doesn't understand, where you could potentially do something that makes Cinder lose track of where the volume is. For instance, they were talking about different types of replication. So we have the same sort of problem, where you need to document operationally what you can't do at a certain level because it's handled upstream. Good question.

Also in the lab, once we get past the compute and the storage, we have the Neutron plug-in, so we consume NSX in the lab. If you take a look at what I've got deployed in here, it's probably nothing terribly exciting. By default in the lab, you basically have the green test network and the blue external shared network. In this case, I've created a web-tier network and assigned a router to it, so I've got the little router icon, and now I can drop a virtual machine on the web-tier network and have it talk out. If I wanted to create a new network, say a database-tier network, I can come in here, create my network, make something up here. This can get rid of potentially nightmarish IP-management issues, because multiple tenants can have overlapping IPs. It doesn't matter: everyone can create 10.0.0.0 networks and use .1, and it won't conflict, because they're all segregated and private to those tenants. Now I've got a new isolated DB-tier network, but if I want to be able to route to it, I need to come in here and add an interface to my logical router. I'll add my DB-tier network to the router and let it automatically set the IP address; it usually picks .1. And if we come back here, now I've got the two networks attached via a router. Now I can deploy instances onto those networks and have them talk, and I can do whatever I need to do from an application perspective.
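The overlapping-tenant-IPs point can be made concrete with the standard library. This is a minimal model, assuming only that each tenant's networks live in their own isolated namespace: two tenants create numerically identical subnets, and because lookups are always scoped per tenant, nothing conflicts.

```python
# Why overlapping tenant IPs are fine under SDN isolation: the subnets
# collide numerically, but they are never compared in the same namespace.

import ipaddress

# Two tenants each create 10.0.0.0/24 and expect a .1 gateway.
tenants = {
    "tenant-a": {"db-tier":  ipaddress.ip_network("10.0.0.0/24")},
    "tenant-b": {"web-tier": ipaddress.ip_network("10.0.0.0/24")},
}

a = tenants["tenant-a"]["db-tier"]
b = tenants["tenant-b"]["web-tier"]

def gateway(tenant, network_name):
    """Gateway lookup is always scoped to one tenant's namespace."""
    net = tenants[tenant][network_name]
    return str(next(net.hosts()))       # first host address: the usual .1

overlap = a.overlaps(b)                 # True: identical address ranges
gw_a = gateway("tenant-a", "db-tier")   # 10.0.0.1, in tenant A's world
gw_b = gateway("tenant-b", "web-tier")  # 10.0.0.1, in tenant B's world
```

Both tenants get their own 10.0.0.1, which is exactly the "everyone can create 10.0.0.0 and use .1" situation from the demo.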
In addition to doing that through Horizon, I can also do it through the APIs if I want. I mean, if you have developed any kind of cloud application, the first hurdle, compute, is almost gone, because you can spin up compute on any hypervisor and spread it across multiple hosts. Networking becomes the big bottleneck, because you run into issues of managing VLANs, managing IP conflicts, and managing performance on various VLANs, and that's where a lot of the hard work in Neutron and the software-defined networking products comes in. Most of the large-scale OpenStack deployments our customers have are using at least some form of software-defined networking, because it gives them freedom in managing IPs, which in a large-scale environment becomes a real burden to manage. This one is using NSX on the back end to create all the routers, all the hooks, all the private tenant networks, and all that.

One thing you'll notice in the lab is that you have a fairly small cluster, so you've got a little bit of space to work with, but you don't have a full-blown physical infrastructure. Everything we have running in the lab is actually running virtual; even the ESXi hosts in the back end are running virtually, so all the instances are actually nested virtual machines. From a performance perspective, we're not trying to show that it's the fastest thing out there; we're just trying to show you what you can do and what it looks like. Yes, please. I always have to say that, because we actually have a performance lab, but the performance lab is more about showing how to troubleshoot performance issues than showcasing, hey, this is how fast this is. So if you're using the labs, keep that in mind: it can be a little sluggish. Are there any questions? Anything anybody wants to see? Let me see, I can deploy another instance. I don't know if we have time for it to actually come up, but let's see what happens.
Once you create them, you want to access them without being on a VPN, or without being on the same tenant network or whatever. So most often you will assign a floating IP to that VM, and NSX will actually do all the NAT rules and everything behind the scenes; you don't have to worry about it. You just need to give it a pool of floating IPs that it can use, and it will assign IPs from there to the VM, and then you can just SSH in. You can block SSH as well: you can go to the security groups and say no SSH is allowed, or the other way around, nothing is allowed except SSH. All those things are done at a software level in this framework. You can insert firewalls in between if you need to; you can say that between the database tier and the web tier, and for any external access, there has to be a firewall. All those things can be created on the fly using software-defined networking.

Yeah, if you're interested in that sort of thing, in the table of contents you can go through and jump to different areas. The three main topics we've got here are the compute, the storage, and the networking, and then we have an estimate of how long we think it would take if you went through the manual step by step. Module three at the end is VMware Integrated OpenStack; it's currently in beta, but it's our distribution of OpenStack running on top of vSphere. Basically, for installation of OpenStack, you can go through at your own pace and see how the various components are created; it will explain the architecture in more detail as well. So let's see, I've got this guy up and running. My screen is a lot smaller than anything anybody else uses at their desk. I don't have an IP address available in the pool, so unfortunately I can't do the cool thing that he wanted me to do. Okay, any questions that anyone has? I think we are running close to the end of the session.
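The floating-IP mechanism described at the top of that answer can be sketched in a few lines. This is an illustrative model, not NSX or Neutron code: the operator hands over a pool of external IPs, and associating one with an instance just records a NAT mapping behind the scenes.

```python
# Toy model of floating-IP association: pop an external IP from the pool
# and record the NAT mapping to the instance's fixed tenant IP.
# The addresses are documentation-range examples, not real lab values.

pool = ["192.0.2.10", "192.0.2.11"]      # external floating-IP pool
nat_rules = {}                            # floating IP -> fixed tenant IP

def associate_floating_ip(fixed_ip):
    """Grab the next free floating IP and map it to the fixed IP."""
    if not pool:
        # Exactly the situation in the demo: no IP available in the pool.
        raise RuntimeError("no IP address available in the pool")
    floating = pool.pop(0)
    nat_rules[floating] = fixed_ip        # the SDN layer would program
    return floating                       # the DNAT/SNAT rules here

fip = associate_floating_ip("10.0.0.5")   # instance now reachable, no VPN
```

When the pool runs dry, the association fails, which is why the live demo at the end of the session couldn't assign one.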
Anything else anyone wants to know about how OpenStack works with vSphere? Compute, network, images, storage, authentication, anything? All good? Okay. If you want to poke around, go ahead and take one of the labs, and take it multiple times; it's really easy. When you're done, just hit End. And if you want to try VIO, we are doing a private beta: you can go to the products page and request access. We'll end the nomination on November 15th, because...