Because I was having a hard time typing on it. Your keyboard. I put it right there. Is that OK? Can I? Oops, that's the one for the lab. Hey, how's it going? Can we get the screen on? Can we get the screen on? So how's it going, everyone? My name is Aaron Rosen, and today we're presenting a hands-on lab, basically showing the new features of Neutron that were developed in the Havana release. So we have a lab prepared for you. If you go to this bit.ly link at the bottom, we have a Google Doc set up. That link will take you to this page, and from there you need to go to these two links. That'll bring you to this spreadsheet, where you'll need to put in your email address to reserve a lab, and then from that, you'll use this IP address to SSH to. So I'll leave this up here for a little bit so you can all get to that page. It's important to note that these are actual, real labs that are deployed in our cloud back at VMware, so you want to make sure that you don't pick one that someone else is already using. Otherwise you're going to run into problems, because two people will be accessing it at the same time. Say it again? Cool. Has everyone been able to get to that page and reserve a lab? In order to get into the lab, I'm assuming everyone has an SSH client; if not, you can download something like PuTTY to use. The username to get into the lab is nicira. N-I-C-I-R-A. So basically what you're going to want to do is pull up a terminal. During the presentation, if you have any questions, we have some helpers here: Somic, Eric, Dan, Duffy, Ben, and Salvatore. So if you have any questions, just raise your hand and someone can come over to help you out. Cool. So we're all logged in. When you're signing into the Google spreadsheet, make sure to just use your regular Gmail account, but you should be able to access it without any account. Yep, the username is always nicira.
It's spelled right here, too. N-I-C-I-R-A. Cool. So if anyone's having any issues, just raise your hand and we'll keep coming around. Somic. Cool. While we're getting the login issues settled down, I'll go ahead and start presenting what we're going to try to do here today. Basically, we're going to deploy a multi-tier application using OpenStack. The components we're going to use to do that: first, we're going to leverage security groups, which are basically a construct to say which hosts are allowed to talk to which other hosts. In this lab, we're going to create a jump host, which I'll talk about more when we go to deploy it; it's basically going to be the only host that's publicly accessible from the internet, and you'll jump through this host to get to your other hosts. Next, we'll deploy a few web servers and a load balancer. And after that, we'll leverage the firewall-as-a-service component that landed in the last release in order to enforce ACLs on the router ports. So here's an overview of what we're going to create. We'll create a logical router that contains a firewall and is uplinked to a public network. This router is also uplinked to a private logical network which has these three hosts: web server 1, web server 2, and the jump host. The jump host allows TCP port 22 into it, and ICMP. The web hosts allow TCP port 80 into them in order to serve web traffic, and they also allow access from the jump host. So the web servers are not going to be publicly SSH-able from the internet; you'll have to reach them from the jump host. This is just a way to lock down security a little bit more. So what we've prepared for you is this. This is what the lab topology looks like. When you SSH to that IP address on the sheet, you'll land on this lab jump host, which is connected to the public internet.
From there, there are these four nodes that are deployed: two Nova compute nodes, which are going to be running our virtual machines; an OpenStack controller, which runs all the API endpoints (Nova API, Nova scheduler, Keystone, Glance, and Neutron); and a network node, which provides DHCP and L3 connectivity, in addition to the load balancer agent and metadata. Just to give you an overview of what all these services are: the L2 agent basically provides L2 connectivity between hosts. One interesting deployment option that Neutron allows is deploying compute nodes regardless of your physical network. For instance, in this topology, these two Nova compute nodes are actually on two different L3 subnets, and we're able to deploy VMs on top of those compute hosts that sit within the same logical L2 boundary. This allows you to simplify your deployment: you don't have to worry about trunking VLANs, because it's using GRE overlays. The interesting thing about this lab is that we're deploying it in our cloud on top of the Nicira technology, while in the lab itself you'll be using the Open vSwitch plugin, which is the open source reference implementation that Neutron provides. Just to give you an overview of how this looks as a deployment option: if you look at one of the labs, as you can see from this slide here, there are several networks. There's a management network, which you basically just use to administer VMs. There are these two data networks that are connected to your compute hosts, and then there's this external network that connects from the internet inwards. This is the landing page of the NVP manager of our production cloud, which we use to dogfood our product. Here you can see we have 101 hypervisors; several gateway nodes, which provide L3 access in and out of the cloud; and our controller clusters here.
So if we look at one of the labs I've pulled up, we can see all the ports that are attached to the VMs. We have 18 ports in total, and we have eight logical networks: the two data networks, the external network, and the management network. This one port here we've marked as down on purpose; that just allows us to administratively take the port down so that you don't have access into the internal VMware network. That's why it's down. Cool, so is everyone at the point where they're able to SSH into the jump host? The username is nicira, N-I-C-I-R-A. In order to complete this lab, it'll probably be helpful to pull up these instructions alongside so you can copy and paste them into the terminal. So, just to give you an overview of the Neutron abstractions: the first thing we're going to do is create a private network. This is basically accomplished via the neutron net-create command, which creates a layer 2 broadcast domain. So we'll go ahead and do that now. When you log into the jump host, the first thing you'll need to do is source the credential file. Basically, this file tells your client where all the API endpoints are and allows you to run commands against the OpenStack APIs. Then the first thing we'll do is create that layer 2 broadcast domain via the neutron net-create command. One thing I want to point out is that we've already deployed an external network, which will be internet-reachable for your VMs. So if you do neutron net-list, you'll see that there's already a public network deployed out here. So this is the first step we've done: we've logged into the machine and we've created a private network that we're going to connect our VMs to. The next step after creating that private network is to associate a subnet with it. So basically this... Is anyone else having issues out there? Okay. I'll mention it.
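Collected as commands, the login and network steps so far look roughly like this. This is a sketch: the credential file name and the network name "private" are assumptions, and the exact file in the lab may differ.

```shell
# Source the lab's credential file so the CLI knows where the API endpoints are
# (the actual filename in the lab may differ)
source openrc

# Work around the unsupported locale error some clients hit
export LC_ALL=C

# Create a layer 2 broadcast domain named "private"
neutron net-create private

# The pre-provisioned external (public) network should already show up
neutron net-list
```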
Anyone else having problems? Cool. So we've just created a layer 2 broadcast domain. The next step is to associate a subnet with it. That basically gives us IP address management for the VMs that are connected to that network. To do that, we'll run this command, neutron subnet-create. We'll tell it the network that we're creating the subnet on top of, and then the CIDR that we want associated with that network. So right now we have deployed a layer 2 broadcast domain, with a subnet associated with it that we'll use to hand IP addresses out to the instances. Cool. If you're running into this unsupported locale error, there's a command that you'll need to run in the terminal, and that command is export LC_ALL=C. All right. Our next step is basically going to be uplinking this network to a router, so that traffic can go in and out of the network. So the next thing you want to do is create a router. To do that, we'll use neutron router-create. So I'll go ahead and do that too. After creating the router, we'll need to uplink it to the outside world, which means attaching the router to our public network. You can have multiple public networks; in this case, we only have the one public network provided for you. To do that, you'll run neutron router-gateway-set. You'll tell it the router that you want to connect, and then the network. So this is what the logical topology looks like right now: we have a router that's uplinked to a public network. Our next step is going to be attaching the network that we created earlier to that logical router. This will allow the VMs we attach to this network to reach in and out of the cloud. To do that, we'll run this command here, neutron router-interface-add.
We'll tell it the router and then the subnet that we want to attach to that router. If anyone needs more time, just raise your hand and I'll wait a little longer. Can I get a show of hands for anyone who still needs more time? Cool. So the next step in deploying this multi-tier application is to start creating security groups and security group rules. The security group that we're going to create is going to contain rules for our jump host, the host that's going to be publicly accessible via the internet. So the first thing we're going to do is create a security group. Basically, this is just a container to hold rules. We'll run this command to create this container, neutron security-group-create, and we're going to call the security group jump, since we're going to associate it with the jump host. After we've created that security group, we're going to add these two rules to it. The first rule allows ICMP into the VM; this just helps us debug if you run into any issues. The second rule will allow TCP port 22 into the VM, which allows us to SSH to it. The next step is to launch this jump host. Just to give you a quick overview, I have the command here that does it. We're going to use the CirrOS image, and we're going to tell it flavor 1; that's the size of the VM, so it's going to have 512 megs of RAM and 1 vCPU. This is the name of the VM; we're just calling it jump host. And these are the security groups that we're going to add to that instance. So I've run this command, and if we do nova list, it will show us the status of the VM while it's booting. You can see right now the VM is in build state. One thing to note is that we're running this lab in a nested fashion. We have our physical cloud deployed with hypervisors.
And then we have this lab deployed. And now we're going to be deploying VMs on top of that, so there are two layers of virtualization. So it will be a little bit slow to boot these. But as you can see here, the VM eventually goes active; it's in running state. And it shows the private IP address that's assigned to that instance. So at this point you can see this VM has the private IP address 10.0.0.2. Right now we're not able to access that, because the IP address is attached to this logical network, and there's no way for us to route in and out of it. So in order to connect to that instance, we're going to need to create a floating IP address and associate it with the instance. Sure. So when you launch an instance, if you only have one network, by default it'll attach to that one network. There's only one network that's accessible to the instance, which is the private network; if you do neutron net-list, that's the only network the instance is able to connect to, so by default it'll automatically attach to it. But if you had multiple networks, you would need to pass --nic net-id= and give it the network that you explicitly want to attach the VM to. This makes typing it in a little bit easier; this way you don't have to copy and paste that network ID. The next step is to determine the port of the instance. We're going to use this command, neutron port-list, and we're going to pass it a filter to basically search for the ID of the instance that we just launched. This will allow us to find all the ports that are attached to the instance. So if you do nova list, that'll display this output with one VM; you'll want to copy that ID and paste it into this command. So this shows the port that's attached to the VM: here's its MAC address and the subnet that it's attached to, and it's probably still in provisioning state. It should come up. Has it still not shown up yet?
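Putting the steps so far together, a rough sketch of the commands. Names like private-subnet, router1, jump, and jumphost are illustrative, and the flags assume the Havana-era python-neutronclient; the `--` in the last command passes the device_id filter straight through to the API on that client.

```shell
# Subnet for the private network created earlier (CIDR is arbitrary RFC 1918 space)
neutron subnet-create private 10.0.0.0/24 --name private-subnet

# Router: create it, uplink it to the public network, attach our subnet
neutron router-create router1
neutron router-gateway-set router1 public
neutron router-interface-add router1 private-subnet

# Security group for the jump host: ICMP for debugging, TCP 22 for SSH
neutron security-group-create jump
neutron security-group-rule-create --protocol icmp jump
neutron security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 jump

# Boot the jump host and find the port attached to it
nova boot --image cirros --flavor 1 --security-groups jump jumphost
nova list
neutron port-list -- --device_id=<INSTANCE_ID_FROM_NOVA_LIST>
```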
Hey, Eric. Is anyone stuck still? Still booting? Well, while the VMs are still booting, our next step is to associate a floating IP with the instance that we just booted. This will allow us to access the instance publicly. To do that, we use this command, neutron floatingip-create. We pass it the port ID that we want to associate the floating IP with. So using that neutron port-list command to determine the port ID, we'll take that and pass it to this. After running this command, you should be able to ping the floating IP that it returned. Net-id? But you shouldn't need to pass it the network ID. Could I get a show of hands for everyone who was able to get to this step? Cool. Looks like we're getting there. What's that? Just one thing to mention: the lab that you're logged into is unique to you. It seems like some people have logged into the same lab as other people, in which case you're going to step on each other, because multiple people will create multiple networks, and things won't work correctly. Cool. So our next step is to create a security group for our web servers. To do this, we'll use the same command as before: neutron security-group-create web. This is going to contain the rules that dictate what traffic is allowed in and out of our web servers. For our web servers, we're going to allow TCP port 80 into them, so they can be accessible from the internet. We do that with this rule, neutron security-group-rule-create: we're allowing the protocol TCP on a port range of port 80, and we're adding it to the web security group. The next step is to create a security group rule that allows our jump host to access all of our web hosts. To do that, there's a self-referential rule here that specifies the remote group ID jump.
So basically what this means is that anyone who's a member of the jump security group can access members of the web security group on protocol TCP, port 22. One nice thing about this is that you can continue to add more and more web servers, and by default you'll automatically be able to access them from your jump host. This way, you don't need to keep adding and removing rules as you add more servers. Can I get a show of hands for who's up to this point? Cool. So after creating the security groups and rules, we're going to go ahead and boot two web servers. This is accomplished via these two commands here. Again, we're going to use flavor 1, each one is going to be called web server one and web server two, and we're going to tell it to use the security group web. After running those commands, you can run a nova list to see the status of the VMs and what state they're in. As you can see, the two VMs on my setup are in building state. By default, the direction flag on security group rules allows us to enforce policy on ingress and egress. If you don't specify the direction, the default will allow ingress traffic into the VM, so those rules will allow port 22 to come into the VM. We can also set up egress ACLs, to basically say the VM is not able to go out on port 22. So if we wanted to prevent an instance from SSHing out to the public internet, we could have rules to do that, and that's controlled via the direction flag. At this point in my setup, all of my hosts are up, and you can see web server one and web server two are up with IP addresses 10.0.0.4 and 10.0.0.5. Now we're going to go ahead and SSH into that jump host, and from there we'll be able to jump into web server one and web server two.
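Sketched as commands, the floating-IP and web-tier steps might look like this. Again, the names (web, webserver1, webserver2) are illustrative, the placeholder IDs come from the earlier port-list output, and the flags assume the Havana-era client, where the self-referential rule is expressed with --remote-group-id.

```shell
# Bind a public floating IP to the jump host's port
neutron floatingip-create public --port-id <JUMPHOST_PORT_ID>

# Web-tier security group: HTTP in from anywhere
neutron security-group-create web
neutron security-group-rule-create --protocol tcp \
    --port-range-min 80 --port-range-max 80 web

# Self-referential rule: members of "jump" may SSH to members of "web"
neutron security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 \
    --remote-group-id jump web

# Boot both web servers into that group
nova boot --image cirros --flavor 1 --security-groups web webserver1
nova boot --image cirros --flavor 1 --security-groups web webserver2
```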
So in order to figure out the floating IP address of the jump host, you can run nova list, and that will show the public address here, or you can run neutron floatingip-list. The username for the image is cirros, so you should be able to run this command from the jump host. And the password is cubswin:), cubs win with the smiley face; that's right here. When you get on the jump host, you should be able to ping the two web servers, 10.0.0.4 and 10.0.0.5. Actually, you shouldn't be able to ping them, because the security group rule from the jump host only allows SSH, not ICMP. But you should be able to SSH to them. So the next step is to SSH to both web servers and spin up a little dummy web server to handle requests. We're going to SSH to 10.0.0.4 with the same password, and from there we're going to use this little trick to spin up a dummy web server that responds with the web server's name when someone does a GET request on port 80. After you do that on web server one, you'll want to type exit, which will bring you back to the jump host, and then you'll want to do the same thing for web server two. So we'll SSH to 10.0.0.5, type in the password, but this time run this command so that it returns web server two instead of web server one. After doing that, you can type exit from that host, and if you run the curl command against each of these hosts, you should see the correct response returned: web server one, web server two. While everyone's getting up to this point, I'll just talk about what our next step is going to be. What we're going to do next is create a load balancer pool and put these hosts in that pool. So basically, when you make a request to the VIP, which is the IP address of the load balancer, it'll go ahead and load balance the request across both of our servers.
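The "little trick" for a dummy web server is typically a netcat loop along these lines. This is a sketch: the exact nc flags depend on the BusyBox build inside the CirrOS image, and the response body is just whatever string you want each server to report.

```shell
# Answer every TCP connection on port 80 with a minimal one-line HTTP response.
# Run this on web server one; on web server two, change the body string.
while true; do
  echo -e "HTTP/1.0 200 OK\r\n\r\nwebserver1" | sudo nc -l -p 80
done
```

From the jump host, `curl http://10.0.0.4/` should then return that string.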
So we'll make the request to one server, it'll return web server one, and then we'll make the same exact request and it'll return web server two; we're just going to use the round-robin method for now. Cool. So we'll go ahead and create that load balancer pool; we'll have to exit out from the jump host first. After creating the pool, we're going to add our two web servers to it. To do that, we're going to use this command, neutron lb-member-create. We're going to tell it the address of web server one and web server two, and then the port that we want it to listen on and keep track of. You saw we spun up our web servers on port 80, so we're going to use protocol port 80 here for our pool members. One thing to note: if you accidentally made a mistake or booted multiple VMs, your IP addresses might differ from mine. To figure out the correct IP addresses for your web servers, run the nova list command and it'll show you the IP addresses here. In my case, they are 10.0.0.4 and 10.0.0.5. So I've already added my first pool member, web server one, to the pool, and then I'll run the same command but update the address to be 10.0.0.5. After adding these pool members, our next step is to create a health monitor. Basically what this does is monitor the health of our instances at a periodic interval. If one of our instances dies, it'll remove it from the pool and no longer send requests to it. This gives us some kind of HA and scale-out mechanism. To create the health monitor, we'll use this command, neutron lb-healthmonitor-create, and then we're going to associate this health monitor with the pool that we created, so you'll have to copy this ID from the health monitor, and then you'll have to type mypool. So now you can see we have a health monitor that's associated with that pool.
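The load balancer steps above can be sketched as follows. The pool name mypool matches the talk, but the subnet ID placeholder and the monitor parameters are assumptions based on the three-second delay mentioned later; flags follow the Havana-era LBaaS CLI.

```shell
# Round-robin HTTP pool on our private subnet
neutron lb-pool-create --name mypool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id <PRIVATE_SUBNET_ID>

# Add both web servers as members listening on port 80
neutron lb-member-create --address 10.0.0.4 --protocol-port 80 mypool
neutron lb-member-create --address 10.0.0.5 --protocol-port 80 mypool

# HTTP health monitor polling every 3 seconds, then attach it to the pool
neutron lb-healthmonitor-create --type HTTP --delay 3 \
    --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate <MONITOR_ID> mypool
```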
So at this point we're going to create a virtual IP address for our load balancer. Basically what this does is, when requests go to that VIP, they'll be fanned out to our pool members appropriately. As you can see here, the address 10.0.0.6 was returned to us, so when connections go to that address they'll get fanned out to our two web servers. One thing to note is that this IP address is also on our private network. If we look at this topology diagram, this is the VIP address right here that's attached to the private network. So at this point, if we make requests to it, we're not going to be able to reach it from the public internet. The next step is going to be creating a floating IP and associating it with that VIP port. To do that, one thing to note: this is a typo here. This UUID won't actually be the UUID in your lab, so you'll have to make sure to use the correct port ID. From the VIP output, you can see the port ID that's associated with the VIP is displayed right here. So what we'll do is copy this ID and pass it to this command. Now we have a floating IP that associates the internal IP address 10.0.0.6 with the external IP address 172.16.1.5. When requests are made to the floating IP, they'll be NATed to the internal IP. So if everything worked correctly, you should be able to run curl against this IP address and see web server 1 returned and then web server 2 returned. It should automatically come up; the members of the load balancer should become active right away. If yours aren't active, my guess would be that the command we typed in to start the web server didn't get run correctly, because what actually happens is the monitor that we associated will poll those instances in order to make sure that they're active. Yeah, it should be almost immediate, within three seconds, because that's the delay on the monitor.
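The VIP and floating-IP steps can be sketched like this. The VIP name is an illustrative assumption; the port ID placeholder comes from the lb-vip-create output described above.

```shell
# Create the VIP for the pool on the private subnet
neutron lb-vip-create --name myvip --protocol HTTP --protocol-port 80 \
    --subnet-id <PRIVATE_SUBNET_ID> mypool

# The VIP output includes its port id; NAT a public floating IP onto it
neutron floatingip-create public --port-id <VIP_PORT_ID>

# Then, from outside, requests to the floating IP round-robin over the pool:
# curl http://<FLOATING_IP>/
```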
And you'll also have to have that service running on port 80 for the member to even be active, because it does health checking. So can I see a show of hands of people who were able to get this working? Awesome. So the next thing I'm going to demo is the firewall-as-a-service stuff that landed in this release. This is just a reference implementation; the API for it is probably going to change in the future. One shortcoming is that it's kind of a global default: when you create a firewall, it's applied to all of your routers, unfortunately, right now. There's no concept of zones yet, just to give you a heads up. Then there's the policy. A policy is basically the same thing as a security group: it's a container that you can put rules in. We're going to create that by running neutron firewall-policy-create, and we're just going to call it default-policy. After creating that firewall policy, we're going to create a firewall and associate it with the policy. To do that, we run neutron firewall-create and pass in the policy. We have a router, and since we don't have any rules in that firewall policy, by default everything is going to be dropped. So logically what we have is a router that's deployed, with a firewall on top of it that's blocking all access to everything behind it. If you run that same curl command to access your web servers, you'll see that it no longer works, and the reason is that we haven't added any rules to the firewall to allow it. So we're going to create a rule that allows our TCP traffic to work again. We'll go ahead and create the rule. Firewalls are a little nicer to work with than security groups: they let you specify both allow and deny actions. Security groups are basically just pinholes, where you specify what's allowed to go through. With firewalls, you can say: I'd like to allow all TCP traffic except for TCP port 80.
But if you were using security groups to accomplish that, you would have to write your rules in a more complicated way. So this created a firewall rule that allows the protocol TCP and the destination port 80; when traffic goes to that, we allow it in. We just call this allow-http. If you still want to have access to your jump host, you'll also have to create another rule for that, but you don't need to do it yet. I'll just do it for the sake of argument: allow-ssh, and I'll pass it port 22 for the destination port. So we've just created two rules. After creating those rules, we'll need to add them to our default policy in order for them to take effect. So here's our first rule that we're going to add, allow-http. This command takes that firewall policy and inserts the firewall rule into the default policy. And if you optionally created that SSH rule, you can insert that as well, with allow-ssh. After doing that, you should be able to run the curl command we were running before and see that you're now able to continue load balancing over your web servers. Sure. So what the firewall does is protect at the router layer, at the logical router. When you create a firewall policy, that's basically a container that contains rules; then after we created the policy, we created the firewall and associated the policy with it, and that by default is applied to all the routers that your tenant owns. One thing that we're going to do in the next release is work on an API that gives you more flexibility over where you want to apply it at the entry and exit points of your network. Yeah, right now the firewall is global for a tenant and applies across all of their routers. So after inserting that rule into the policy, you should be able to access everything again.
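The firewall-as-a-service steps above can be sketched as commands. The policy and rule names match the ones used in the talk; the flags assume the Havana-era FWaaS CLI.

```shell
# Empty policy: with no rules, the firewall drops everything behind the router
neutron firewall-policy-create default-policy
neutron firewall-create default-policy --name fw

# Allow HTTP and (optionally) SSH through the router
neutron firewall-rule-create --name allow-http \
    --protocol tcp --destination-port 80 --action allow
neutron firewall-rule-create --name allow-ssh \
    --protocol tcp --destination-port 22 --action allow

# Rules only take effect once inserted into the policy
neutron firewall-policy-insert-rule default-policy allow-http
neutron firewall-policy-insert-rule default-policy allow-ssh
```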
One thing we did earlier, when we created the load balancer, is associate a health check with it. What this allows us to do, if one of our instances crashes or gets deleted or dies, is pull that member out of the pool and take it out of commission, so when you make curl requests it will no longer display that web server. In order to demo that, we'll go ahead and delete one of the web servers. If you do nova list, just to show which web servers we have running, you can pick one to delete, and the command is nova delete; you pass in the name of the server you want to delete. I'll go ahead and delete web server one, and this will tear down the box. After a certain amount of time, once it's torn down, we'll be able to make requests and you'll see that only web server two responds. Well, actually, right now you can only create one firewall. If you create multiple firewalls, it won't do anything for you, because it's applied globally for a tenant. It's something we added in the last release just to prove out the API, to figure out how we want all the rules to work, but the zones and how things are applied on the router are not really fleshed out yet. So one thing we're going to work on is basically mapping firewalls onto what we have today. Can you repeat that? It's a question about the scope of the firewall. So you said it's applied to the router; I think you also said it applies to all hosts. Can you just confirm you mean it polices traffic from the outside in to those hosts, rather than intra-VM traffic, which is still policed by the security group rules? Exactly. So security groups are applied on the tap interfaces of the VMs; those are applied behind the router. The firewall-as-a-service stuff only applies to the router ports, just on the router, so it won't help you enforce communication between VMs on the same network.
But one thing that we could do is create a network for our web servers and a network for our jump host, attach them both to a router, and then you could enforce that cross-network communication. So, earlier I deleted web server one, and now when I curl that VIP you can see web server two is the only thing returned. You can order firewall rules. When you do neutron firewall-policy-show, the rules field that's returned here is actually a list. One thing you can do is pass in the -v option, which shows verbose output, and you'll see that the rules are returned in an ordered list. So one piece of functionality it has is a way to insert rules before and after other rules. If you grep the neutron help output for firewall, you can see that the insert-rule command has the ability to specify which rule you want to insert before and which rule you want to insert after. By default, if you just run it, it'll insert the rule at the end. So if you write rules that need to be in a certain order, that's how you would put them in the correct order. Those are all the steps for the labs, but if you have any questions as you go along, feel free to shout them out or come up and ask. We're going to leave the labs running for the next 24 hours or so, so if you want to continue playing with them later on, we'll keep those up for you as well. One of the things I showed at the beginning of the presentation was the underlying infrastructure that's running behind this lab, so I'm going to go ahead and show that again.
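Coming back to rule ordering for a moment, a sketch of inserting a rule at a specific position in the policy. The deny rule and its CIDR here are purely hypothetical; --insert-before and --insert-after take the name or ID of an existing rule in the policy, and allow-http is the rule name used earlier in the talk.

```shell
# A hypothetical deny rule for one source network
neutron firewall-rule-create --name deny-badnet \
    --protocol tcp --source-ip-address 192.0.2.0/24 --action deny

# Place it ahead of the allow rule so it is evaluated first
neutron firewall-policy-insert-rule default-policy deny-badnet \
    --insert-before allow-http
```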
One of the things that helps us operationalize the cloud is this NSX manager, which displays everything that we have going. Currently in our cloud we have almost 5,000 logical networks deployed; those are basically the Neutron network layer 2 broadcast domains. We also have 1,500 routers, which is what those networks are connected to, and eight gateways, which are basically what allow us to go in and out of our cloud; these sit at the edge of the cloud and connect our networks out to the physical internet. And here is one of the labs, to show you what it looks like in the manager. These are all the logical networks and the ports that are on them, so as you can see, we can go in and see which transport zone each is on. One nice thing, for debugging at this level, is that we have tools like port connection. I'll drill down into this access lab switch. On this switch you can see we have the three ports, and it shows us the physical transport nodes that they're deployed on, so it lets us know what hypervisor your virtual port is deployed on. One nice debugging tool is this port connection tool, which will actually show you how the traffic flows in the physical network. So I'll select which ports: port 1 on this switch, and then port 3 on this other switch. This one I did against one of the ports that was down, so you can see the connectivity between those. I'll go ahead and pick the other port, port 2, and this will draw out the path that the traffic takes in the physical network. So it's a helpful debugging tool that we use. One more time? What did I create the public network for? Oh, I created the public network beforehand because you need to know the IP addresses that uplink to the public network. As a normal tenant, this is not something that you'll usually know, so that's why we went ahead and pre-provisioned and created the public network ahead of time. Yep, usually that's how it would work: the admin would go ahead and
create the public network, or there could be multiple public networks and you choose which one your routers connect to. In Neutron it also depends on how you want to deploy it: if you expose that information to users, then you could allow them to bring their own networks or create them, but for the sake of the lab we pre-created it just to simplify things. Sure. We actually have the dashboard working here; we took it down only for bandwidth concerns. So if you take the IP address of your lab and point your web browser at it over HTTPS... I don't know, we might have taken it down. But one nice thing to use is Horizon, which is a UI that helps display all this information, and one cool thing is that it has a utility that shows what your networks look like visually, so it helps you see all of this.