Then tenant one uses role one. I've differentiated all those names for separation's sake, but in general, if there's one user in a tenant or a project, then the project they're in and the username they have match. If there's a grouping of people, then each person would have their own user, and the tenant would have a more descriptive name like "my project team" or something like that. And then again, the roles are usually predefined; we don't create a lot of those.

If you have a new user, on the command line you can copy that RC admin file to RC_username, update the username and password, and then use that. So you can see here, I've created a new RC file with the new user. I've sourced it so that those environment variables get set, and then I can get a token as my new user. I can source the admin file, and then down here at the bottom, almost cut off, I'm back as the admin user again. So that's just a quick example on the command line of how you would get started to be able to do the rest of everything, basically.

So let's add a user. We're back in our installation of OpenStack; we just finished here. Down at the bottom there's an Identity panel. I'm going to create a user, and give it my email so everyone can spam me with lots of questions. Now, I've got two projects already existing: one here is for the administrator and one is a services project. The services project is for all those components in OpenStack. Like I said, everything is a member of a tenant, everything is a member of a project. So all those components have users that they use to authenticate to each other, and all those users for the components are members of that services tenant. The services tenant is kind of a general-purpose cluster project for things that are cluster related, and we're going to use it later in networking for something that ends up being general-purpose for the entire cluster.

So at this point, what I'm going to do, as I suggested, is create a tenant specifically for my user. There's a bug in RDO where the patch is already upstream but hasn't been released by the community yet, so I'm going to fix that real quick with a script that I threw in here. All it's doing is updating this member piece here; the default member role in Packstack has changed from this to that, and that's why I got that error, if you noticed that red box. So let's try it again.

The services tenant? Yeah. No, the services tenant is the same as all the other tenants. It's just created so that all of the components, all the services in OpenStack, are members of a tenant. So it's not better or higher or anything. Everything has to exist in a tenant, and that services tenant is where the services' users exist. Exactly. All services, like users, have to be authenticated, and so this is kind of a place you can put them so they can run through Keystone. They all have to be authenticated to Keystone just like a regular user would be, so this is a way to help do that.

So here the dashboard has given me a new window to create my tenant, and I'm creating a tenant with the same name as my user.
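As a quick aside on that command-line piece from a minute ago, here's a rough sketch of what that RC file workflow can look like. The file name, username, password, and auth URL below are example values, not the exact ones from this demo.

    # copy the admin RC file and edit it for the new user
    cp keystonerc_admin keystonerc_myuser

    # keystonerc_myuser ends up exporting something like:
    export OS_USERNAME=myuser
    export OS_TENANT_NAME=myuser
    export OS_PASSWORD=mypassword
    export OS_AUTH_URL=http://192.168.122.101:5000/v2.0/

    # switch identities by sourcing the file, then verify by asking Keystone for a token
    source keystonerc_myuser
    keystone token-get
    source keystonerc_admin    # and back to the admin user again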
By the way, again, it's a little confusing, but over the various releases of OpenStack people have gone back and forth between calling these groupings of resources a project or a tenant. So sometimes you'll see projects, sometimes you'll see tenants. Sometimes the CLI will say one thing and then the dashboard equivalent will say another thing, but they're actually talking about the same construct.

So I've selected — a question over there? Sure. Okay. I think the question is whether there's role-based access control. Right now there isn't; it's basically an admin and a user. There is a project — I think in Keystone v3 there's talk about creating specific role-based access to specific resources within a particular tenant, slash project. So at this point, admin or member is kind of where you're at, but there are plans to have different access for different users.

So I've selected the project that I've created, and I've selected the member role. Basically, if I gave myself the administrative role, then I'd be an administrator for the cluster; if I give myself the member role, then I'm a member of the project. So we'll create the user. This is again another bug that's fixed upstream and is going to be pulled into RDO. For this demo it shouldn't hurt us: it's basically trying to add the user to the tenant twice, so it succeeds and then it fails because the user already has the role.

So at this point, we should be able to log out from the administrator and log in as my new user, radez, with the password that I created. Now, as a non-administrative user, you see there are no tabs for the administrative pieces of OpenStack. You just have the tabs in the menu on that side of the screen, your left, that are for end users to create all the pieces of OpenStack to build out instances. So if you log into TryStack, this is the interface that you would get.

So next, we'll talk about Glance. Again, it's that registry for pre-built images, so that when you launch an instance you can just pull a copy of an image and launch off of it instead of having to do a whole installation of an operating system. You can customize these images: if you had, say, a web server, and you wanted to pre-bake an image with the web server and all the packages already installed — so that all you had to do was a few configuration pieces for that particular instance when it launches — then you could build those packages into it. This is an example of how you would import an image on the command line. I think the "create" name here is a little bit misleading, because you're not actually creating the image file itself; you're creating a record of it in the registry. And then you can list your Glance images after you've imported one.

There is a — sorry, I think we're going to talk about that next: creating the image. Yeah, so there are lots of different ways you can create these images. I used virt-install to do the images for this demo. Oz is one that's very similar; here's a bunch of them. It's something that takes time and patience, because you're building an entire operating system into a tiny little image. I actually have a blog post on this: if you search my name and image building on Google, there's a blog post on how I created the disk images for this demo.
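Going back to that command-line import example for a second, the Glance commands look roughly like this. This is a sketch; the image name and file name are just the ones that would match this demo's Fedora image, so substitute your own.

    # register a pre-built disk image in the Glance registry
    glance image-create --name fedora \
        --disk-format qcow2 --container-format bare \
        --is-public True --file fedora.qcow2

    # list what's now in the registry
    glance image-list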
The important thing if you're building your own image is to include cloud-init. cloud-init is a service that the image launches on boot, and it calls back into OpenStack to get post-boot information and configuration about the instance. This is very important for authenticating to your machines, because in the cloud we generally use SSH keys to get into the instances. cloud-init is the piece on the instance that calls back into OpenStack to get that SSH key and put it into the authorized keys so that you can then log in to your instance.

How do you authenticate to the Windows machines? Good question — I don't actually know. SSH keys? I guess you would bake the image with passwords and users that you know and are able to authenticate with. LDAP would be another way, yeah. A callback, like a cloud-init service for Windows, yeah. I mean, yeah, you could spin up a guest instance that would then connect to an Active Directory environment. I've just obviously labeled myself as a very narrow-minded Linux guy, haven't I? Yeah. So, great question; I don't have an answer for you though, I'm sorry.

So let's add an image. We'll jump back into the dashboard here; I'm going to select Images. I'm going to click my Create Image box. Can you guys see this okay? A little bigger. I'm going to call it fedora, and I'm going to select a file from my laptop and pull down the image. This is the same file that I put on those flash drives that are being passed around. So if you haven't gotten one of those flash drives, look for one and put your hand up so people can pass it to you. And this is an image that you can use if you do this later; you're welcome to use it. It's a 200 MB image of Fedora, and the root password is not very secure — all lowercase letters, all one word. In case you need that to debug. Because you do. I do.

So I've selected my Fedora image here, and I've selected its format. I'm going to label it as a public image. If it's a public image, it means that anyone can use it; if you don't label it public, then only the tenant that it's in can use it. And the protected flag is kind of a write-protect flag: if something's protected, you can't delete it. You have to unprotect it before you can delete it — even the owner has to turn off the protected flag before it can be deleted. So, Create Image: it's going to upload this file into my OpenStack cloud. And now I have an active image ready for me to do an installation off of — a launch, rather.

So next we'll look at Neutron. Before we can actually launch this instance, we need the Glance image to boot off of, and we need a network for it to exist on so that we can get into the instance. Neutron is our networking service, and it's networking as a service: virtual network appliances and networks. And it's intended to provide isolation. In the old style of networking, all the instances came up on a flat network together, so they could all talk to each other. When we moved to Neutron, because we have these virtual isolated networks, you can now isolate instances from each other. So tenant A can't talk to tenant B unless a route is specifically provisioned for those two to reach one another. It's, again, a very modular component. Under the covers here, I'm going to use Open vSwitch as the networking back end, which is a virtual switching appliance. But there are plugins for Neutron to be able to use your vendor of choice that supports OpenStack — switches and routers and other hardware — so that you don't have to rely on Open vSwitch if you choose not to.

So let's add a network. Select Networks, Create Network. I'm going to call this "internal", because this is our unroutable set of IPs, and I'm going to give it a 172.16.0.0/24 address. And I'm going to go ahead and put in a DNS server, Google's DNS server, 8.8.8.8. Because we're providing DHCP to the instances through this network, it'll provide those DNS name servers to the instances.
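For reference, here's roughly the CLI equivalent of the internal network I just created in the dashboard, plus the router and interface I'm about to add in the next step. It's a sketch using the names from this demo, not the literal commands I ran.

    # tenant-side networking: a private network and its subnet
    neutron net-create internal
    neutron subnet-create --name internal_subnet \
        --dns-nameserver 8.8.8.8 internal 172.16.0.0/24

    # the router for this tenant, with an interface on the internal subnet
    neutron router-create radez
    neutron router-interface-add radez internal_subnet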
So now we have a virtual network named internal. Everybody say yay. You guys are great.

The other thing that I need in networking before I can launch an instance is a router. A router is important because the metadata service that cloud-init calls back into functions through that router. So I create my virtual network for the instance to live on, I create a router, and I create that route from the internal network to the router. Then when the instance comes up, it goes out to DHCP and gets its address, and when cloud-init comes up, it's able to call into that router and therefore get forwarded into that metadata service to get the SSH key or post-boot configuration.

So I'm going to create a router for my tenant. I'll very creatively call it the same thing as my username and tenant name. I'm going to select my router and add an interface. An interface is a connection to one of the internal networks that we have, so I'll select my internal network. Then there's a cool little topology visualization here that you can look at: over here we've got our internal network, and we've got the router that it's connected to, and it'll give us information. We'll kind of build out the network on this visualization as we go, so you can see how the networking fits together. Networking is by far the most complicated part of OpenStack, so if you have questions about it, let's see how much time we have at the end, and I'll absolutely stick around to discuss it. We could rat-hole on networking for the rest of our 45 minutes without trying very hard. Correct — I covered that, so I'll give you a bye on that.

So we've got a network and we've got a router, which means we've got an image and we've got networking: we can now launch an instance. So let's get into Nova and actually look at our instance management. Nova knows about all the compute nodes, and it has scheduling to make an educated decision on where to put your instance. It's designed to scale horizontally. The idea here is that with the single-controller, single-compute-node architecture I'm building, if I had the capability to scale that out, I could add more compute nodes and go to 3, 4, 10, however many compute nodes. So physically speaking, you could imagine a server that's a controller and then 12 compute nodes underneath it, which is almost exactly the architecture of TryStack. And if you needed more compute nodes, you'd go beyond that. So it fits into the idea that OpenStack in general is intended to scale horizontally and to work on standard hardware. These servers that you run your compute nodes on — as long as they have the processor capability and enough memory and some disk space, that's fairly standard stuff you get in all computers. You don't necessarily need any special hardware, just off-the-shelf stuff, and because virtualization is so prevalent in computing today, really any server that you purchase is going to have the capability for OpenStack to run on it.

Yeah, so as an example, on Rackspace's public cloud we've got tens of thousands of compute nodes and several hundred thousand VM instances running across them, and it's one single cloud managed by one single set of control planes. What was that? "Are all of your hosts physical nodes?" he asked — all the nodes themselves. Yeah, so we do OpenStack on OpenStack: we've got an OpenStack environment that's managing an OpenStack environment that's all bare metal, and then we're running virtual machines on top.
So yes, your control node would be a physical server, your compute node would be a physical server. I'm doing it virtually here for the demo, but if you're actually going to run this in a way that you want end users to use, then each of these nodes should be a physical piece of hardware. And really, the most basic recommended architecture to deploy on bare metal is a control node, a network node, and a compute node, so that your API services are separated from your networking services and from your compute services, and you don't have resource contention between those logical separations of the pieces of OpenStack. Yes, exactly. The L2 and L3 agents are kind of a central piece that everything networking-related talks back into, so you really want that on its own node where it doesn't compete with the API services or compute services and can just do networking.

It can — it doesn't have to. The API server, you mean? You can put it in either place. It's very modular, so I think Red Hat's default installation puts the Neutron API server on the control node; currently on TryStack I have the API server running on the network node. It doesn't really matter.

So just to give you an example — obviously this is use-case dependent — you could go as small as just one control node and one or two compute nodes, but in what I would consider a true production deployment, we would probably recommend, for example, five controller nodes on bare metal. Three of them would be running the API servers, RabbitMQ, and the database in a quorum-based cluster setup, and then you would actually want two network nodes, because the network nodes act as the gateway for your VMs to get out to the outside world. So if you lose the one network node and you don't have a second one, that's basically as good as your VM being down, right, from the perspective of your client. Can I talk about live migration later? Thanks.

Right, so the compute nodes — again, compute nodes are just hypervisor nodes — all the instances that live on the compute nodes, as they need to talk to the L3, route their way to one of the network nodes, which is one of the L3 agents, and that is connected to an external provider network. So again, if that goes down, you've basically lost access to your L3. And that's a known limitation of Neutron. The Neutron team realizes that's kind of a central point of contention, for networking traffic to always have to go back through that network node, and so there is work going on to resolve that. To be honest — these will all get fixed eventually — some very large implementations actually use Nova networking instead, partly because of the way it works: you can make each hypervisor node basically an L3, like a gateway to the outside world, right? But then you don't get some other functionality. So again, Neutron is a maturing project; that's why I'd put it that way. Let's get into the details of that a little later. Let's keep moving here; we've got a lot to cover still.

Using the Nova command line, you can list flavors — we'll talk about flavors in just a minute. You can add a key pair; the SSH key pairs I was talking about, you can add those and list them. You can do a nova list here, and you can list your Glance images through Nova. Here's an example of booting on the command line, and there's an example of listing your instances on the command line.
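Pulling those together, the Nova CLI commands look roughly like this. A sketch only: the flavor, key pair name, and network ID are example values matching this demo's setup.

    nova flavor-list
    nova keypair-add --pub-key ~/.ssh/id_rsa.pub radez_key
    nova image-list

    # boot an instance from the Fedora image on the internal network
    nova boot --flavor m1.tiny --image fedora --key-name radez_key \
        --nic net-id=<internal-net-uuid> supertest

    # list your instances
    nova list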
So let's actually do that in the web interface. Select my Compute menu, jump to Instances, Launch Instance. How about we name it Supertest, just for fun? Or "demo", that would be exciting too. "My instance". Come on, help me out — what's a better one? I'm going to select an instance, and I'm going to boot from an image as my source. I'm going to select that Fedora image that I just imported.

Here's the idea of a flavor, which we can see visually: when you boot an instance, you need to define what resources are going to be given to it — how many vCPUs, how much memory is going to be allocated, how much disk. And so you can select these different flavors. These were all put in there by Packstack for us to choose from; you can create your own as well. One of the key concepts of flavors — one of the key tenets — is that as you manage the cloud at scale, the more customization you have, the harder it is to manage. If you want to manage at a very large scale, you need to simplify things as much as possible. What we don't want is 20,000 different types of VMs running in your cloud; you want to keep it to a very small subset. So that's the idea behind flavors: basically creating a service catalog of available resources and saying small, medium, large, extra large — take your pick. Reducing it to that very small subset makes it much easier for you to manage the environment.

So next I'm adding a key pair here. You can generate a key pair, similar to AWS, where it'll be generated in the cloud and downloaded, and then you use those keys. Here I'm just importing a key that I use on my laptop. Give me one sec, I'll get to you. The other thing to notice here, on this Access & Security tab, is that there's a default security group being defined and allocated to the instance that's being launched. A security group is kind of a cloud-level firewall for your tenant, and we'll see how that works when we connect to the instance in a little bit. Networking-wise, we're going to use the internal network that we created.

Post-creation is cloud-init. If you typed out a script here in this customization script box, then when cloud-init pulls down the metadata from the metadata server, this would be part of it, and it would get executed on the server. So you can see how you could invoke a configuration management engine, or run a couple pieces of bash code that would customize things to join a node into a cluster of some kind, something like that.

So I'm going to hit Launch here. In this spawning process, what's actually happening is that the disk image we put on the control node is being copied over and cached on my compute node, and then it's spawning a virtual instance using those virtual resources. Because we're doing nested virt, this will take a minute or so — in a real-world environment it goes much faster. But we should have a running instance here in just a little bit.
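On that post-creation customization box: here's a minimal example of the kind of script cloud-init could run for you at first boot. This is purely hypothetical — the package and the web page are just stand-ins for whatever post-boot configuration you'd actually want.

    #!/bin/bash
    # example user-data script: cloud-init executes this once after first boot
    yum -y install httpd
    systemctl enable httpd
    systemctl start httpd
    echo "configured by cloud-init at boot" > /var/www/html/index.html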
So let's move forward. What was the question you had over here? It is: if he has an ESX image, can he run it on KVM? So I'll let you take this. Why don't we chat about that later in the question session? Let's try and keep questions specific to what we're doing here, and for things beyond what we're doing, let's wait for afterwards, because we still have a long way to go. Did you have a question that can't wait, or...? Yes. Yeah, that would be part of that. You can do that in the Cinder block storage service component, so we can talk about that later.

So underneath the covers, we've got a virtual network that our instance is running on. But that's an unroutable network; we can't get to that network from anywhere. So what we have to do now is connect the outside world into this network. Let's walk through that. The way this happens is: eth0 is going to represent my physical interface on my node — on my network node — and br-ex is a predefined bridge in Open vSwitch. What we have to do is plug these two into each other. It's called adding a port to Open vSwitch: we're going to take eth0 and add it as a port in Open vSwitch. And to do that, we also have to move the IP address over, because what happens is OVS will take control of eth0 and still pass traffic through it, but br-ex will be the actual device that has the IP address you communicate with.

So what I'm going to end up doing here is, on eth0, strip out the IP information that I currently have on my machine, so it ends up looking like this — you'll see we're still going to bring up the interface on boot. And then br-ex: I'm going to move that IP information over to my br-ex device and make sure that it comes up on boot too. Then I'll run an OVS command, add-port, to add eth0 into br-ex as a port. And the service network restart over there is kind of a cheating way to bounce these interfaces; all you really need to do is bounce the interfaces. The reason you need to do that is that when OVS gets eth0 plugged into it as a port, OVS takes control of that device and we lose connectivity on it. And that's why Packstack can't set this up for us: Packstack is running over SSH, and it has an active Puppet session that it's using, so if we broke that SSH connection — and therefore the Puppet install — then Packstack would be dead in the water. So we're doing this afterwards. When I lose connectivity on that port, it will already be plugged into Open vSwitch, because I've completed my add-port command. Then I bounce those network services so that those two interfaces, eth0 and br-ex, get turned over and the IP information gets moved off of eth0 onto br-ex, and therefore traffic will flow all the way through.

This was the most complicated part of understanding Neutron for me. So if that was really quick and didn't make sense, come see me afterwards and I'll try and help walk through what took me a really long time to understand. Once we actually get this done, we can jump into OpenStack and create a virtual external network that will represent this connection we've made to the outside world on our public subnet, and then we should be able to ping and SSH into our instances.

So let's work through that. I'm doing this on my control node — to be more correct, this should be done on the network node, but my control node and my network node are the same machine in this demonstration. You cannot, because this is underlying networking infrastructure; it's not OpenStack management. It gets back to where I said 80-some percent of the stuff you can do in the GUI, 90-some percent in the CLI, and 100% in the API. So again, the idea at scale is that you don't really want to use the GUI very often, to be honest. So I'm editing my network script files for eth0 and br-ex, and you can see currently I've got the IP address on my eth0 device.
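Roughly, this is where those two files and the OVS command end up. A sketch only — the address matches this demo's 192.168.122.101 setup, and the gateway and exact file contents will depend on your own network.

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (IP information stripped out)
    DEVICE=eth0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-br-ex  (the IP details moved over here)
    DEVICE=br-ex
    BOOTPROTO=static
    IPADDR=192.168.122.101
    NETMASK=255.255.255.0
    GATEWAY=192.168.122.1
    ONBOOT=yes

    # plug eth0 into the bridge and bounce the interfaces in one shot
    ovs-vsctl add-port br-ex eth0 ; service network restart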
So I'm just going to take that IP address and gateway off. And now I'm looking at my br-ex device, which I've pre-populated for the sake of speed. All I did was take the exact information that was in eth0 — my IP, gateway, and netmask — and move it over into that br-ex device.

This is not related to instances; this is underlying networking infrastructure that the instances will use. No, this is a one-time thing for the cluster. Once it's set up, all the instances use it. It's essentially the admins setting up the underlying infrastructure. The users don't ever worry about this; the users just make an API call to, say, launch an instance, and then the requests go out and pull all the pieces together. Well, this is rather complicated, but it has to do with setting up security at the VM level — you need to use iptables. Yeah. The thing is, there are several places where services are being handled, so it creates a lot of layers; that's just the way I'll put it.

So I did the add-port here, and for a minute it kind of hung and didn't say anything. That's where we lost the SSH connectivity, and SSH was trying to reconnect. Then the network was restarted and those two interfaces came back up. So when my SSH connection here was re-established, that showed that moving the IP address off of the physical interface onto the virtual OVS interface worked properly, and that I'm now talking to this node through Open vSwitch instead of directly through the Linux networking. I'm sorry? You were quick enough to see it, yeah. So we've got — it's really big — you can see eth0 right there. It doesn't have an IP address on it, but its state is up. And then there's br-ex down there, which now has my 192.168.122.101 IP address that I connected to the dashboard with initially.

So at this point, we've done the underlying infrastructure for the networking, and we have to make OpenStack aware that we've done this. We have to tell it that there is a block of IP addresses that is routable from the outside world, and that it can now use a subset of those IP addresses — create an allocation pool — so that they can be distributed to the instances as floating IPs. I know that was a big mouthful, and we're going to actually work through what that means.

So I'm going to jump back into OpenStack's dashboard and log back in as the administrator. That's because external networks can only be added by the administrator, since they're related to the underlying infrastructure work we just did. As the administrator, I'm going to select Networks, Create Network, and I'm going to call this "external". Is that exciting? Everybody say yeah, you're so funny. And here's the place where we get back to that services tenant: the external network is a general-purpose external network for all tenants to use. Labeling it as external ends up giving it to everybody, but we don't want it to come up listed as a network that they can attach instances to. So what we do is put it in the services tenant, where no one's going to try and launch an instance, because it's specific to services. We use this generic cloud tenant for all of our subcomponents of OpenStack to house this external network, because everything in OpenStack has to be a member of a tenant. So I've created a network. And you saw, as the end user, we kind of did the subnet creation in the same swoop in that box; here you have to add the subnet separately.
So I've created an external network, but I'm going to go in and create a subnet with it now too. I'm going to use my 192.168.122 network, where I'm connecting to my dashboard — we're just going to kind of pretend that that's our externally, publicly routable set of IPs. Then on Subnet Detail, I'm going to disable DHCP, because OpenStack is going to assume that the IP addresses it pulls out of this allocation pool are statically assigned; they're not provided by any DHCP agent. If I left this checked, then OpenStack would try and launch a DHCP agent inside this virtual network, which we don't want, because it's provided to us by our network administrators. And then we have to tell it to use — sorry. I'm assuming after this Dan's going to show a topology map, and it'll probably be a little easier to figure out exactly what we're doing. Yeah, I'll go back to that visualization that we did before — I had planned to — so that we can see what it looks like. Thanks, Ken. I'm the administrator of the whole system right now — correct.

So I was telling this the wrong thing. What I've done now is say: on my 192.168.122 subnet, only use IP addresses 200 through 254. That would represent a static block of IP addresses provided by a network administrator for your cloud to use. So now I have a subnet inside of my external network, and it's labeled as an external network. And now what we want to do is go back into our end-user tenant and connect this external network into our tenant, so that we can route traffic from the outside into our instance. So I'm going to sign out from the administrator and log back in as me. The way we do this is to connect it by way of that router. We looked at the topology before, where we had an internal network — you can see our instance here, you can see our router — and we have our external network over here, but there's nothing routing traffic between the two of them. So what we'll do is attach our router to that external network, and therefore traffic will be able to flow from the outside to the inside.

Before you go — how many of you, is there anyone here who works with VMware technologies at all? A bunch of you, okay. Do you know vCloud Director? This is very similar conceptually. So, you want to go back to the topology real quick? I'm sorry, I'm getting so excited. Yeah, I know. So this external network is a provider network; it's basically a map to an actual, real external network that goes out to the outside world. And then each tenant will have their own internal network that allows the VMs to only access other VMs within that tenant. What we're basically doing is creating a route, using a virtual router, that will connect this internal network to this external network, which is really a virtual network that's mapped to a real, true external network. And so when VMs need to access the outside world, or vice versa, traffic flows from this internal network, through the virtual router, to this external network, which then connects to the real physical infrastructure and out to the internet.

And I think a good clarification was mentioned before: the external network only has to be done once. So if we went through the process of creating a new user again, and this user created an internal network and launched an instance, the state that we're in right now — with that external network already in place — is where all newly created users would start.
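For reference, the admin-side piece we just did in the dashboard looks roughly like this on the command line. It's a sketch using this demo's addresses; the services tenant ID is a placeholder you'd look up first.

    # as the admin
    source keystonerc_admin
    keystone tenant-list                      # find the services tenant's ID

    # an external (provider) network owned by the services tenant
    neutron net-create external --router:external=True \
        --tenant-id <services-tenant-id>

    # the routable block, with a small allocation pool and DHCP disabled
    neutron subnet-create --name external_subnet --disable-dhcp \
        --allocation-pool start=192.168.122.200,end=192.168.122.254 \
        external 192.168.122.0/24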
So we do this external network creation once, and all new users, all new tenants, will have access to this external network to create routes to it. The things the end user has to do are create the internal network and create the router to the external network; that external network will always be provided now that we've created it. Absolutely — with the caveat that if you're trying to do it over SSH, you might have a little bit of trouble. But absolutely, scripting that and providing that external network access is something that's done before you give internal users or end users access.

So to attach that router to my external network, I just click Set Gateway and select the external network. And if we look back at that topology, now the router has been connected to the external network, and we have the route from the external network into the internal network. So what we do now: the instance on the inside has a private IP that was unroutable before. We'll assign it a floating IP address on the public network, and then OpenStack will route traffic for us from that floating IP into that instance so we can talk to it.

So let's do that now. I'm going to go back, select my instance, and say Associate Floating IP. It tells me there are no IP addresses available. This is specific to the tenant: what that means is there hasn't been an IP address allocated to the tenant yet, because everything has to exist in the tenant before I can actually do the association. So I click my plus button, and it brings me to the Allocate Floating IP dialog. Allocate an IP. Now I have a list of IPs that have been allocated to my tenant. I select the one given to me — it's connected to the Supertest port — and click Associate. And now that IP address comes up down here.

So at this point, you might be thinking: yay, let's ping the instance, because we'd like to talk to it. Man, you guys are awesome. I'm here all week. So — it doesn't ping. And this is important, because we have to go back to those security groups that we talked about before. Remember that I said there's that firewall at the cloud level. If we go to Access & Security, I've already got that default group; here's that default security group that was allocated to my instance when it was launched. If we manage rules, then we can add a rule for ICMP — I'm selecting all ICMP traffic to be allowed here. And while I'm here, I'm going to go ahead and add SSH too, because if everything worked, then we should be able to SSH to it. So now I have two new rules: here is my SSH rule, and right above it is my ICMP rule.

And apparently I've done something horribly wrong, because it's not working. So for the sake of time, instead of debugging it: what was supposed to happen was that it would ping, and then we could SSH to it. This is actually a great place for us to talk about looking at the instance when you can't get to the network. If I select my instance in my project, there's a Console tab over here, which will give me — well, that's why I can't connect, it's not running. So look, I debugged on the console. Yay. In Hong Kong, there was a guy giving a networking demonstration, and he started a ping in the demonstration, and everyone erupted. And he was like, man, that was the best response I've ever gotten from a ping. So this is the best response I've ever gotten from a console. So let me try and re-launch this just for fun and see if it comes up. Yes?
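While that re-launches: here's roughly what the gateway, floating IP, and security group steps from a minute ago look like on the command line. A sketch using this demo's names; the floating IP shown is just an example of whatever address the pool hands you.

    # back as the end user
    neutron router-gateway-set radez external          # the Set Gateway step

    # allocate a floating IP from the external pool and attach it to the instance
    neutron floatingip-create external
    nova add-floating-ip supertest 192.168.122.201

    # open up ICMP and SSH in the default security group
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0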
For the sake of time, we're probably going to do the Cinder piece and then end there, and just leave some time for people to ask questions. Yeah, that's great.

Yeah, that depends. The underlying architecture is kind of a whole different topic; OpenStack takes care of that for you as the end user. Neutron in particular, I think, can leverage old-fashioned VLANs, but typically it's used to create virtual networks, which is a network overlay sitting on top of your physical network, which is using a VLAN. Does that make sense? So the idea is you can set up your VLANs as you would in a non-cloud world, but then Neutron is actually laying additional virtual networks on top. And the main reason you want to do that is, believe it or not, 4,000 VLANs is not enough for certain environments. Because we're looking to do very large scale, in theory, using Neutron with virtual networks you could actually scale to millions of virtual networks in the same VLAN space. As the end user, you don't need to worry about the VLAN. And I'm not using VLANs as the transport underneath; I'm using GRE tunnels. With GRE tunnels, the tags are all taken care of for you by the infrastructure. If you're going to do VLANing, then — again, another topic — you would use a different network transport between your nodes underneath instead of GRE, and then you'd be limited by it.

So let's keep going. We can do storage, hopefully, fairly quickly. And it appears that the instance is actually booting this time. So let me do all the storage slides, and then I can go back and do the demonstration on Cinder, and that'll be all I have. So hopefully we'll have a minute or two for lots of wonderful questions, because you guys are awesome.

All right, so Cinder is persistent block storage. You can do snapshotting. Basically, you're going to create a virtual device, as shown here, and then you're going to attach that device to your instance. It's going to show up just as a normal block device on your machine. So on my Linux machine, I'll have vda for my OS disk, and vdb will show up as the Cinder block device. And then you would use it as any other block device, right? You'd create a partition table and a file system on it, mount it somewhere, and use it. And then if you're done with it, you make sure it's unmounted and detach it from your instance.

So I'm going to skip this create-attach-mount real quick, let that instance finish booting up, and talk about Swift. Swift is our other storage; this is object storage, whereas Cinder is that block device that gets presented to your machine. With Swift, you would install a client and then make API calls out to Swift to get simple content. With the block device there's lots of metadata — it's a fully functioning block file system — while Swift is very simple storage. The magic of Swift is more under the covers: the backing store for it is going to do replication and mirroring and striping, and you can create very large storage infrastructure underneath Swift if you can live with having just that simple content-in, content-out interaction with it. Yeah, the way I think of it is: Cinder block storage is basically a SCSI device for your cloud instance slash VM. Swift, however, is storage for unstructured data that's presented to your applications, not to a specific VM.
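Before we get into Swift use cases, here's a rough sketch of that Cinder create-attach-mount flow on the command line. The volume name, size, and device names are just examples in line with this demo.

    # create a 1 GB volume and attach it to the instance
    cinder create --display-name test 1
    cinder list
    nova volume-attach supertest <volume-id> auto

    # inside the guest it shows up as a plain block device, e.g. /dev/vdb
    mkfs.ext4 /dev/vdb
    mkdir /mnt/data && mount /dev/vdb /mnt/data

    # when you're done: unmount in the guest, then detach the volume
    umount /mnt/data
    nova volume-detach supertest <volume-id>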
So a use case for Swift, for example, would be if you had to store millions and millions of images. You wouldn't want to do that in a block system, and you actually wouldn't even want to do it in a file system, because a file system can't handle millions and millions of images. Whereas with Swift, you can store all those millions of images in a single namespace and be able to access them. So here's an example of using Swift at the command line — and I'm going to do it in the web interface instead for you.

Looking at the console here, you can see we have actually booted, and you can see that cloud-init spawned. Down here at the bottom, it has pulled down my SSH key and put it onto the instance. So let's look. Does the image have to have it installed? It does — I think I installed it directly out of the Fedora repos.

So first I'm going to do Cinder here. I'm going to create a test volume, and I'm just going to make it a gig large. There are volume types, and there's more configuration you can do with Cinder depending on what kind of backing store you use for it — that's what all those other boxes are. We obviously can't get into it with the time we have left. So at this point, I can say Edit Attachments and select my test instance. And there's another message I have never seen before. Live demos. So let's just use the console.

So, you had a question? Yep, there is — it's a whole other topic, but yes, there is. We kind of skipped over it: whenever you launch an instance, there is a root disk, called an ephemeral root disk, that typically sits on the locally attached storage within the server — although it could also sit on an NFS mount — and that's where things boot up. You can choose to boot off a Cinder volume if you want, but that's not the default configuration.

So here you see I've got vda and vdb; vdb is that Cinder volume. The root password? Not very secure, all one word, all lowercase. If I go back and look at my volumes again, say Edit Attachments, and detach that volume, then vdb should go away. And the way to think about a Cinder volume, actually, is to think of it almost like a USB drive. It's not shared storage — it's storage attached to one instance at a time — but you can detach it at any time and basically attach it to a new instance at any point. So now you can see there's only a vda, now that it's been detached.

And then the last thing, quickly, is Swift storage. There are two things to understand about Swift: there are containers and there are objects, and an object is put into a container. So I've got a test container that I'll create, and then Upload Object — I'm going to just name it test, then browse and upload my Anaconda kickstart file, for fun. So now we have this object. The idea here would be that if you went out to that instance and installed the Swift client, you could then use that Swift client to connect to Swift and pull down that object. So you can see how multiple instances could then talk to Swift.
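And that command-line example of using Swift looks roughly like this — a sketch, with the container and object names from this demo (the kickstart file name is just an example).

    # create a container and upload an object into it
    swift post test
    swift upload test anaconda-ks.cfg
    swift list test

    # from any machine with the swift client and credentials, pull it back down
    swift download test anaconda-ks.cfg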
Well, let's review quickly. We started with Keystone at the bottom for authentication. We imported an image into Glance over here, and then created networking. We launched an instance with Nova, attached block storage to it, and uploaded an object to Swift. And in the top corner over there, we used the web interface to do it.

A couple of resources: openstack.redhat.com is where you can get all the RDO community bits. If you're a customer, we have docs for Red Hat OpenStack at that address. openstack.org; TryStack is that community cluster that I talked about before. Puppet and Django are underlying pieces that are used in OpenStack, and Puppet is used in Packstack. And then the last link at the bottom: I'll be posting the PDF of these slides out to that location for you to download, and I'll also put them on SlideShare. So if you search my name or Ken's name on SlideShare, you'll be able to find both of our slide decks out there. And the YouTube video will probably be up in the next day or two.

I realize we only have a few minutes, but actually there's a break between 10:30 and 11. So two things: if you want to ask a question and everyone to hear that question and the answer, please come up to the microphone in the middle if you can, so that everyone can hear it. And then afterwards, people can also come up and talk to us individually. So if you have a question right now, please go to the middle. Thanks. I'll also put the Glance image up in the same directory, so if you go to that Fedora People site, you can download that image. There's also a pre-built Fedora cloud image as well, though. The difference is that Fedora provides it and there's no root password on it; mine has that not-very-secure root password on it.

Okay, it looks like some folks — if you have a question? Yeah, somebody had asked earlier about live migration, and that's something I'm really interested in. Nova has support for it, and if you use shared storage, then it can be done. But would shared storage be the Cinder stuff? No, because Cinder is not shared storage, right? You essentially need — right now I think what's supported is an NFS mount. So yeah, the compute nodes would have to have shared storage between them. Or you could have some other file system layer in between and do shared storage that way. Okay, thanks.

So those floating IPs that we allocate on the external network — can everyone keep it down a little bit so we can answer questions, please? Yeah. Hello? So those floating IPs: there is a route to the external switch, right? So that we can actually access those IPs from outside, like a floating IP on Amazon, right? Same concept, right? Correct. Did you create the route in this demo? Did I create the route? Yeah. No — it's kind of assumed that your network administrator would give you a block of IP addresses that are routable to the hosts that you're building OpenStack on. So when you use that block of floating IPs and tie that physical interface into Open vSwitch, it's assumed that those routes already exist. And when you create that external network, you're making OpenStack aware of that block of IP addresses that are assumed to be routable to OpenStack. So then, when OpenStack puts that floating IP onto the interface and does the NAT back into your instance, it handles the routing from the floating IP into the virtual network, whereas your network administrator is going to handle the routing from the outside world to that floating IP. Got it, got it. Cool, thank you. You're welcome. Sorry, what was that? No? Thanks.

Hi, good morning. Can you use VirtualBox — run RDO inside of VirtualBox — assuming you have the resources? Sure, yeah. Yeah, in fact, my recommendation is VirtualBox and Vagrant; they make life so much easier doing OpenStack. Okay, thank you. I've been trying to get Dan to use it for a year now.
Get him to use Vagrant? To use Vagrant, yes. I'm just using generic libvirt on the Fedora machine. But it's okay, it's cool, right? Yeah. I mean, the trick to doing that, if you're going to do this setup, is making sure you can put two virtual NICs on your boxes. But if you want to do the all-in-one install, then absolutely, there shouldn't be any restrictions that I know of to using whatever virtualization technology you want. Also realize, though, that when you launch an instance you're doing nested virtualization, because your node is virtualized and your instance is virtualized inside the virtualized node. So things will be slow-ish. But I do it all the time for demos and stuff, and if you can wait a few extra seconds, it works really well to learn off of and to demo.

There was a question we had tabled about images. Specifically, if you're running ESXi and you want to pull that over to — I think the question was OpenStack — how do you do that? So, several ways to answer that. First of all, it isn't ESXi to OpenStack, it's ESXi to some other hypervisor, because the reality is OpenStack can be used to manage vSphere. So it's not an either-or. It isn't OpenStack versus vSphere; it's OpenStack versus vCloud, really, is the answer. That being said, the two biggest problems today in terms of portability of environments — you hear a lot about moving from one cloud to another cloud, or bursting workloads up from private to public — the two biggest problems today: one is data gravity, right? How do you move a lot of data, terabytes of data, from one cloud to another? I don't know, unless they're all in the same data center. Number two, though, is image portability. The reality is that VMware's VMDK image and KVM's qcow2 image are not at all compatible, and there are no good tools in place today to easily convert those images.

So the two ways around this: you would have to make sure that whatever clouds you're moving between, whether it's private-to-public or public-to-public or private-to-private, use the same images — use the same hypervisor. Or the other way to do it is to not use golden images. When I say golden images, I mean this concept of: I'm going to take an OS, load it up with every patch and every application I need, and then take a snapshot and use it as an image. If you do that, you basically lock yourself in. So the way around it is to not use golden images. Use the thinnest, smallest image you can possibly find, and then use a configuration management tool like Chef or Ansible or Puppet to do your configuration after the fact, because that is actually transportable across multiple clouds and multiple hypervisors. Then if you need to move something, you essentially just find an OS using a different image template, spin that up, and use your configuration management tool to configure that image. Does that make sense? Yeah — but what I'm talking about is, if you have an existing ESXi image, a VMDK, the conversion over is not very clean. That's the reality. But then again, you're basically recreating the thing anyway; it's no easier to import it and make sure everything comes over.

Yeah, so — is there clustering in the L2 agent? I don't know — well, I know there's no clustering in the L2 agent. Neutron is new enough a component that clustering and high availability are something that's still in the works, so it's actively being worked on. Red Hat's been spending a lot of time on testing, balancing, and making Neutron highly available, having multiple nodes.
And there are a lot of issues that are still being worked out. So I don't know that it's fully baked at this point — to the point where someone would say, yes, you need to run multiple Neutron nodes and it's going to work — but there's lots of active work to iron a lot of that out. A bunch of different OpenStack vendors have basically put together their own solutions to try to make Neutron more highly available. For example, I know several vendors have put HAProxy kind of on top, layered over the two nodes, so that when one fails you can kind of roll things over to the other node. But as far as I know, there's no HA built into the core of Neutron itself, which is actually one of the problems today with deploying it. So at Rackspace — good question — we actually do not use the L3 agent in production for our private cloud. We actually use physical switches and physical routers, because — you can use Neutron for the other pieces, but for production usage as an L3, we don't think that's a good idea. I thought you were going to say you use super secret sauce locked in a vault in the bottom of a... No, no, no. We've looked at using HAProxy, but there's a lot of work involved there that makes it difficult. Thank you, sir.

Not a question necessarily, just a statement. When you think about using ESXi images — I was going to say that OpenStack goes hand in hand with the whole DevOps thing about not relying on golden images. Yes, that's right. So it's better not to think in that direction at all. I agree. The only customization, as you guys said, we need is — we anyway need cloud-init and such, which is required for the metadata — but beyond that, everything should be handled by Chef or Puppet. I agree. Right, I agree. The thing is, I think most people who grew up using VMware are used to golden images, and in practice that's changing, right? Because if you look at how VMware is doing their vCloud Hybrid Service, they actually use Puppet. They're actually creating thin images and using Puppet to do the configuration. So they're kind of coming around to it too.

Oh, is that not okay? You need that one? Any other questions? It's too big for me. It's dependent on your situation. I mean, if you need something that's super easy to set up, then GRE and VXLAN are going to be the easier way to go, but they're tunnels, so the traffic has to be encapsulated. Every frame that goes through has a header attached to it, and therefore you have to monkey with MTU settings and things like that to be sure your frames are going to flow through those tunnels properly and not fragment. So you're going to get much better throughput out of using a VLAN back end than using GRE or VXLAN. The difference between VXLAN and GRE, which I actually just learned yesterday, I think, is that GRE is TCP based and VXLAN is UDP based. So there's a little bit less overhead in using VXLAN, because you don't have these long-running TCP connections; it's all fire-and-forget UDP traffic.

Yeah, so it's just — sorry. So the question was which network model to use: VLAN versus GRE, or essentially virtual overlay. The consultant's answer is: it depends, right? It depends on your use case. What I will say is that the future is clearly going to be virtual overlays, right? We know in our private cloud we actually support GRE. We use pieces of — no, I said we don't use Neutron for the routing; we're using other pieces of Neutron. Oh, great. Thank you. You still have it over here.
Thank you, I appreciate it. You can start over. But basically you can route it — you can actually talk to a physical router. But anyway, the point is: look at your use case, and be prepared for the expectation that overlay networking is going to be the future. And also, obviously, if you're planning on a large scale, VLANs could be a problem someday.

Does someone have one of the USB keys, to copy the image off of it for this gentleman who's looking to copy it?

First, an overview of overlay versus underlay. No — an underlay network is the traditional physical network with VLANs. A network overlay basically abstracts that away — I'm just going to explain it roughly — so you can actually have networks that aren't physically connected in the same space, and a network can actually span across multiple physical networks. Two phones were found, so if you're expecting a call from your distributor, I've got your phone. To be honest, I'm not a networking guy, so I know enough to be able to help architect some of this stuff, but if you want to drill down, talk to Dan, or there are actually some great Neutron sessions I would review.

I have a question. Yeah. So this is around the services tenant — tenants? I don't know. So the services tenant is another tenant, right, a project. Can there be multiple services tenants? I'm trying to create an abstraction where the services that I put in are controlled by security domains. If I have many security domains and I want to create a construct for who can get access where — say a network service is exposed to security domain A's services project, and then another services project for another network — is that something that's exposed? Is it doable? I'm not sure; in my head, I have to picture it. Right, so in other words, there can be many services projects. Right. And whatever that services project gets exposed to is — I'm losing the train of thought. I need to talk to you. Yeah, we should talk; we can help there. Okay, thanks.

So if there are no other kind of big global questions — I actually have to run in five minutes. Looks like Dan... you can come up and ask questions; it looks like Dan can stay a little longer. And thanks again.