All right. Everyone can hear me? Good afternoon, everyone. Thank you for coming. For the next hour and a half, we're going to go over some hands-on exercises to deploy applications on OpenStack. For the last few days, we've all heard great things about OpenStack and how it cloudifies your infrastructure. Now, how are you going to use it? And how do you use it reliably? So today, my colleague Xing Bu, and Ted Elrani, and myself, Conzi Zhang, we're from Big Switch. What we do is provide a unified physical and virtual networking solution for OpenStack. We have a Neutron back end providing L2 and L3 plugins for the OpenStack network. The main difference between the Big Switch solution and other SDN solutions is that we manage the physical switches, the physical infrastructure, together with the virtual infrastructure in a unified manner. You don't have to manage two networks; you don't have to manage the underlay and the overlay separately. We manage the entire network together, so you have the visibility, the configuration capabilities, and the operational capabilities to manage all your physical switches along with your virtual switches. On top of that, we provide the virtual constructs to create logical networks, logical routers, NAT, all those capabilities that we are familiar with from Neutron.

So let me get into today's content. First, today's environment: everyone should have a little slip of paper that describes your login credentials, your passwords, and the entry point for your sandbox. My colleague is going to go into a bit more detail about the environment, but in short, everyone has a virtual environment hosted on the public cloud. And at the end of the talk, we'll explain how, if you're interested in getting more information and more online time, we can arrange that.
So at your leisure, you can log in and get more familiar with the environment. Everyone, please go to this website, labs.bigswitch.com, and log in with the credentials you have. Once you log in, you should see on your upper right a link, a picture just like this, to take you to this Austin module. Once you click on that, you should get to this page, and on the lower right is the Launch button. Click on that, and your sandbox environment will launch. It takes about five to ten minutes for the environment to come up because everything is hosted on the public cloud.

So for today's environment, everything is hosted on the public cloud, and everyone has their own virtual instance. What this environment contains is an OpenStack deployment. It's a multi-node environment: it has two compute nodes and a controller node, and it has a simulation of the entire networking fabric. So it's a two-rack deployment. Yes, sir. Can I take a look? Also I have a colleague, Syed. There are four of us: one person will be talking in the front to walk you through the whole process, and three of us will be roaming around the room. So if you have any problems, any issues, just raise your hand and we'll reach you. Once you launch, just leave it there. It will take a few minutes to get everything prepped.

So let's talk about what exactly we're going to do today. I assume you already have OpenStack clouds up and running. Now you want to bring some applications onto the cloud, and there are many ways of handling this application onboarding process. OpenStack natively provides Heat, which has template capabilities: you can write a Heat template, publish it, and everyone can start using it. Or you can use Murano, the application catalog, where whatever is published is maintained by Murano and the tenant can consume it directly.
So for today's talk, we're using a very simple Heat template, just in a slightly different manner. At Big Switch, we provide a Heat Horizon plug-in, which frames the workflow a little differently. Working with customers, we've seen that usually the cloud admin knows the infrastructure best: he knows what services are available, how many resources there are, and what kind of topology or application it can support. So usually the admin is the one who puts these templates together, verifies that they work, and then publishes them to the tenants, so the tenants can consume those templates. Underneath, it's all Heat; but in terms of workflow, that's the workflow we're going through today. Basically, the admin is the one who defines the template and then publishes it.

However, as the admin, you need to make sure that the template you publish actually works. So today we're going to go through an exercise using some of the tools that help the admin debug: if the template doesn't work the way he expected, how to troubleshoot it. Another concept we're bringing in is a DevOps approach, where the admin writes a template along with a set of unit tests to verify the template works. Then, as he evolves the template going forward, he can continuously run those unit tests to make sure that the next version, and the version after that, continue to provide the same baseline functionality.

So let me interrupt. A few folks have had the problem of their environment appearing expired. We think that's because of the load; we never tested this infrastructure with so many testers launching at the same time. So be patient: wait a while, listen to the talk, and then try again. Sorry.
Yeah, let's probably pace ourselves a little bit; even though everything's on the public cloud, the capacity still sometimes gets congested. But anyway, back to the talk. We described how you write unit tests for your template, so that the next time you modify something, you can run the set of unit tests and make sure the template you publish still works. The next part is when the template works and the tenant consumes it: they may run into some issues. Maybe some network goes down, maybe some elements are not functioning the way they're supposed to. So what are the tools for the tenants to use to debug? To figure out: is it an application problem, is it a networking problem? What are the things they can do? Today we're going to go into those scenarios to help you understand, not only at the network level but also at the application level, what capabilities you have at hand to figure it out, okay?

So here's a slide that covers what I just described. Today's workflow is that my colleague Ted is going to walk us through template deployment and troubleshooting the template. My colleague Shane is going to go over troubleshooting once the template is deployed: networking issues and application issues, and how you troubleshoot them. Towards the end, we'll close by summarizing what we've gone through, and then go over the logistics of how to continue accessing these environments at your leisure. All right, thank you very much. Ted?

Okay, so sorry about the issues you're having. Can we get maybe a raise of hands of people who already have the setup working for them? Okay, and for people who don't have it yet, please be patient a little bit and try again in a few minutes. The system should be able to handle the load again shortly.
So let me get started on the hands-on part of the session. The setup that you have in front of you is using Big Cloud Fabric as the network provider for the OpenStack cloud, okay? What does this mean? It means that any time you go to, let's say, Horizon and provision a network, there is an ML2 driver and an L3 plug-in from Big Switch that will communicate with our controller and provision these networks. And what's really cool about Big Cloud Fabric is that it will provision the L2 and L3 state for your networks in both the virtual and the physical parts of the network, so you don't have to handle the two separately. And it's all automated, of course.

So that's the setup you have in front of you, and as you saw at the beginning, there is that page that you get. This will provide you access to Horizon. So go under the hands-on lab tab; make sure everybody who has the setup at least got to this page. When you right-click on the OpenStack icon, there is a drop-down menu item, Horizon. Click on it and it will take you to Horizon. Similarly, right-click on the Big Cloud Fabric icon for the controller, click on Big Cloud Fabric, and you get the Big Cloud Fabric page.

This setup really consists of two physical racks. Each rack has a single compute node; you see there is OS C1, that's the first compute node, and the second compute node, OS C2. Each rack has two leaf switches. Now, there's a virtual switch running on each of the compute nodes, and the leaf switches are interconnected at the top with spine switches to form a full mesh. So when you log into Horizon, and if you haven't done so, please do so, log in with the username dev user, DEV user, password BSN123. Once you log in as dev user, you'll see that you're already in the dev project.
The setup is really a single project that we have pre-created for you, plus a second project, test. The dev project is pretty much empty, there's nothing in it, and this is where you will be deploying your three-tier app using the Heat template. The test project is just there to help us do reachability tests against the externally facing, public web server in our three-tier app, so it's only for testing purposes. As you can see, the test project has this test server, and it has a floating IP address assigned from the external network. Again, we have already created the external network here, so you don't have to touch anything; it's already there, ready to be used and to serve floating IP addresses.

So this is the goal of the setup: we want to deploy that three-tier app in the dev project. We'll go into the details of what is in that app, but before doing that, let's go ahead and deploy it and let it run in the background while we talk about what's in the Heat template and what its main components are. So as dev user, in the dev project, navigate to Project, then Network, and then there is this Network Fabric tab. This is part of the Big Switch Horizon plugin. Click on it, it should take you there; then choose Network Template and click the Apply Network Template button. Here there should be a list of Heat templates that the admin user has created and verified for you. As a regular user of the OpenStack cloud, you can choose any of them and run them. The purpose really is to have the admin create templates that are verified to work, or common templates, and share them with the rest of the users, so that template development goes faster. For this session, we have developed this Big Switch template. Let's choose it and run it.
This shows you a list of the default values for the parameters in the template. Again, we'll go over them in a little bit, but we'll just keep the default values, say Apply Template, and continue from here. So now it should be on its way to provisioning the setup.

Okay, so while this is happening, let me go back and talk a little bit about what's in that template. Really quickly, because we're not here to talk about Heat templates specifically, but this is something we need to run before we get to the point where we can talk about troubleshooting your network and doing monitoring and that sort of thing. This Heat template, just like almost all Heat templates you see around, starts with the parameters section. In the parameters section, what we're really doing is defining, in this case, the names of the private networks. So if you see here, there's this outer net name parameter, and it defines a network name with a default value of webnet; that's what we'll be using in the setup. Then, with each private network, we associate a CIDR for its subnet and give it a default value. For example, the outer network, which really corresponds to the webnet private network, has a CIDR value of 10.10.20.0/24.

And I forgot to mention: if you go back to your main setup and you want to take a look at the whole Heat template, just click on Configuration Examples and scroll down through it. So I already mentioned the parameters section; we talked about the private network names, the CIDR values, et cetera. There is also, at the top, an external network name. Its default value is external, which maps to the external network that we have already created for you on the setup. Then if we scroll down to the resources section, which is really the meat of this template, we start defining the actual resources that will be provisioned, beginning with the outer network.
As you notice there, we are taking the name parameter from the user input; the default value for this one will be webnet. Then we create the subnet resource and associate it with the outer network. Okay. We do the same thing for the mid network and for the inner network. The second important part of the Heat template is the router that interconnects these private networks; the router has a default route to the external network. Again, this is a definition coming from the parameter list. Then we declare the router interfaces, so there are interfaces to all three private networks. Okay, so there they are, the three of them.

Then of course we have to create our servers. In each of the private networks we create a single server, so there will be a web server on the web network. Each of the servers has a port, a neutron port, defined for it, and we associate with the port a security group resource with its rules, and a floating IP in the case of the web server. The web server is a public-facing server, unlike the other two servers that we're defining here in this Heat template: the app server and the DB server. So that's pretty much what this Heat template is about.

You can go back and check now to see how far the provisioning is. Here it looks like it has completed. If you see some issues, if you see some failures, please remove the template and try to provision it again, okay? So how many people approximately have the template provisioned? Only three, four, five, six, a few people. It will take a few minutes for the Heat template; we're still at the launch. The password for dev user? It's BSN123, Big Switch Networks, 123. So once the template provisions, if you go to Instances under Compute, you will see the three instances that we defined in the Heat template.
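To make that structure concrete, here is a minimal sketch of what one tier of such a HOT template might look like. The resource and parameter names below are illustrative, not the exact ones from the lab template; the image and flavor are placeholders:

```yaml
heat_template_version: 2015-04-30

parameters:
  external_network_name:
    type: string
    default: external
  outer_net_name:
    type: string
    default: webnet
  outer_net_cidr:
    type: string
    default: 10.10.20.0/24

resources:
  outer_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: outer_net_name }

  outer_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: outer_net }
      cidr: { get_param: outer_net_cidr }

  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: external_network_name }

  outer_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: outer_subnet }

  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - direction: ingress
          protocol: icmp

  web_port:
    type: OS::Neutron::Port
    properties:
      network: { get_resource: outer_net }
      security_groups: [{ get_resource: web_secgroup }]

  web_server:
    type: OS::Nova::Server
    properties:
      image: cirros        # placeholder image
      flavor: m1.tiny      # placeholder flavor
      networks:
        - port: { get_resource: web_port }

  web_fip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: external_network_name }
      port_id: { get_resource: web_port }
```

The mid and inner tiers repeat the net/subnet/interface/port/server pattern, minus the floating IP, since only the web server is public-facing.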
Please note down the IP addresses, because we'll be using them a little bit later, including the floating IP for the web server. Okay, so... I saw quite a few folks started the wrong lab, so let me show how to start the right lab again from our portal. They started the wrong lab, the other lab. Oh, I see, I see. Okay, so let's back up a little bit here. Go to labs.bigswitch.com and log into this page. Don't click on any of these modules. The easiest way to get to the right module is from the top: click on that banner, the Austin summit banner, and it will take you to the last module at the bottom; launch that one. Now, if you have already launched a module and it's provisioned, please terminate it. The way to do that: go to the top, under the Modules dropdown menu, select your module, there will be only a single one, and it will give you this page; say Terminate, give it probably a minute or so, it will terminate, and then you can go back and launch the summit module at the end of the page. Okay, any questions on that?

So let's proceed here. As we have seen, we have provisioned that three-tier app, and we have declared security groups for each of the servers in the Heat template. When we declared these security groups, we had some reachability requirements in mind. We wanted, for example, the app server to be able to receive requests from the web server, but only from the web server; we don't want any other machine in the setup to be able to make calls on the app server. Similarly, we wanted the DB server to accept connections on the ingress only from the app server, from the app tier; we don't want calls coming from the web server directly onto our DB server, right? So we have those sorts of reachability requirements in mind.
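In a HOT template, requirements like these are typically expressed through ingress rules with a `remote_ip_prefix` (or a remote security group). A sketch for the app tier, using the lab's subnet values but a hypothetical resource name:

```yaml
# Sketch only: the app tier accepts ICMP solely from the web subnet.
# The resource name and exact rule set are illustrative, not the lab's template.
app_secgroup:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - direction: ingress
        protocol: icmp
        remote_ip_prefix: 10.10.20.0/24   # web tier subnet only
```

The DB tier would carry an analogous rule restricted to the app subnet; forgetting the `remote_ip_prefix`, or leaving it at 0.0.0.0/0, is exactly the kind of slip the reachability tests are meant to catch.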
The issue is that when you write your security groups, it's easy to miss certain rules, or to put the wrong remote IP on a given rule, and end up allowing reachability that you don't really want to be there, okay? So how do we put in place a set of tests that we can use over and over again? Tests that are really similar to the unit tests that developers use, that you can run any time you update your template, just to make sure that your reachability requirements are always satisfied. That's what we're going to talk about next, okay?

Let's start with, for example, the web server talking to the app server. We want the ingress of the app server to accept connections from the web server; we want that to happen. How can we verify that? One of the things you will see in the Horizon GUI, under Network, Network Fabric, is a Reachability Tests tab, and here you say you want to create a test, okay? I'm going to call my test web-app, to verify reachability from the web to the app, to make sure my security groups were configured properly. Then I want to choose a tenant. A tenant, in Big Cloud Fabric lingo, is equivalent to a project; in this case, the project is dev, okay? The source segment is the webnet; a segment is equivalent to a network in OpenStack, okay? Then my source IP is the IP address of my web server; if you've forgotten what the IP address is, please go back and look it up. In this case it's .3, 10.10.20.3; it has to be in the 10.10.20 subnet. And the destination is the app server, so 10.10.21.3. What we want here is for the test to pass for forwarded traffic: when the traffic is forwarded, we want the test to pass. So we select the expected result, Forwarded, and then create the test. The next step is to run the test, and it says here it passed. So that's good.
That means the rules we had put in the security groups for both machines, the web server and the app server, were correct, okay? The egress was fine at the web server and the ingress is okay on the app server. So we can take a look. If you click, sorry about that, go back here, there's that link. If you click on that link, it will show you some of the details of that test. At the end here, this is the first hop; this UUID represents the web server, this is the web server's UUID. The next hop is OS compute two, which is really the virtual switch running on the second compute node, okay? And then, directly connected to that, is the app server.

So let's quickly go to the next test. We want to write a test to make sure that the DB server is allowing ingress connections from the app server. Again, back to Project, Network, Network Fabric, and create a new test. This time we call it app-db. The tenant, or project, is still the same, dev. The segment this time is the app network, and the source IP address is 10.10.21.3, that of the app server. The destination is the DB server. Again, the expected result of this test is Forwarded; we want the traffic to be forwarded. So we run the test, and it passed again. That's great. We can take a quick look at the details; this time it looks like both endpoints are on the same compute node.

Finally, we want to write a test to make sure that the web server cannot directly talk to the DB server. So in a way, it's a negative test that has to pass. Let's call it web-db. Source tenant: we're still in the same project, dev. The source segment is the web network, and we put in the IP address of the web server. The destination is the IP address of the DB server. In this case, we want the traffic to be dropped by security groups, so we say that the expected result is Not Permitted by Security Groups. Let's create this test. We run the test.
So the test has failed. What we are reading here says that the expected result was Not Permitted by Security Groups, but the traffic was actually forwarded. Now, this doesn't give us a lot of information about where the problem with the security groups is: it could be at the egress on the web server, or it could be at the ingress on the DB server. So there is another test we can do, this time from Big Cloud Fabric itself. Go to Big Cloud Fabric; the login is admin, BSN123. Let me log out here and make sure that everybody gets in first. So admin, password BSN123. If you click on Fabric at the beginning, you'll see the fabric that you have in your setup, a fabric of switches, both physical and virtual. These are the spines and these are the leaves; this is the first rack, this is the second rack, and these are the virtual switches for each of the compute hosts. You can look at the connections: if you click on a leaf switch, you'll see that it's forming a mesh with the spines, and there is a peering connection with the other leaf switch in the same rack, okay?

So back to our reachability tests. Let's go to Edge and Endpoints. On this page, what you're looking at is really a list of neutron ports: all the ports that have been configured, including the taps for DHCP and the ports for the instances. This allows you to perform a reachability test between any two ports on your OpenStack pod. So let's repeat the test we just did. We started from the web server, so let's look up the web server; by IP address is easier, it's 20.3. That's the web server endpoint; this is the neutron port for the web server. If you go to the beginning here, there is a dropdown, and you can choose Test Path from that dropdown. Who's with us so far at this point? A few people, okay? Any questions before we continue from here? We're good, okay?
So I selected the first endpoint to be the web server. Now I can select my destination; you say Endpoint here, and then look for the destination, which is the DB server, so 22.3, that's the DB server. You select it, say Selected, and then at the top, run Simulate. Once the test runs, you can see on the right-hand side the segment interfaces, the tenant, which is pretty much the project here, the dev project, the OpenStack pod, and the endpoints. Okay, this is a logical view of this test path. Under this list here, you see the different steps the traffic has taken. At the web server, this IP address, you see that there's a rule that has allowed the traffic out; so on the egress, we are fine for the web server, okay? From there, we went to the router. And then here, at the DB server, it looks like the traffic was allowed in at the ingress too. This is not what we had in mind: we thought our DB would allow ingress traffic only from the app tier, okay? But it looks like there is a rule at the ingress allowing traffic from anywhere. So that points us directly to where the problem is.

Let's go back to Horizon and check what's happening with our security groups. Go to Compute, under Access and Security, and then look at the DB server's security group. Let's manage the rules and see what's in there. And it looks like at the ingress we're allowing traffic from anywhere; that's where the issue is. We can change that: let's delete this rule and add another rule that allows traffic only from the app tier. By the way, we're using ICMP here; it could be any traffic, but just for demo purposes we're using ICMP. So on the ingress, we're allowing traffic only from the app tier, 10.10.21.0/24. Let's add this rule. Then we can go back to the reachability test that we had already written under Network Fabric, so Network, Network Fabric, and run it again. Again, this is the test from web to DB, and this time it passed.
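Note that fixing the rule in Horizon only patches this running stack; to keep the next deployment correct, the template's own security group needs the same restriction. A sketch of the corrected ingress rule (resource name hypothetical, not the lab template's exact wording):

```yaml
# Sketch only: DB tier accepts ICMP solely from the app subnet,
# replacing the original allow-from-anywhere (0.0.0.0/0) rule.
db_secgroup:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - direction: ingress
        protocol: icmp
        remote_ip_prefix: 10.10.21.0/24   # app tier subnet only
```

Re-running the saved reachability tests after each template change is what makes them behave like a unit-test suite.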
If you look at the details, we can follow the traffic, and it looks like it has been dropped here, as expected. Okay, so we have verified the reachability requirements within the private networks, but we haven't done anything about the external network yet; we don't know if an external host can reach our publicly facing web server. Let's put that on hold for a little bit, and before we go there, let's look at some other possibilities for traffic control.

If we look at the Big Switch Horizon plugin again, you could use something like the traditional router ACLs that people usually use, to do pretty much the same thing. What this also allows you to do, and this is even nicer, is control traffic between subnets: you don't have to configure security groups on an instance-by-instance basis. So I could say, I don't want traffic to go between my DB net and my web net. I can go and say, okay, here the upper row is the destinations and these are the sources, so I say I don't want my DB net to talk to my web net, and block the traffic. Similarly, I don't want my web net to talk to my DB net. If you go back now and run the same test that we ran a while ago for these two instances, just say Simulate again and look at the last hop, you will see that the traffic has been denied. But this time we blocked it at the router level, not at the instance level; that's what the ACL did, okay?

So in the last part, let's try to see if we can externally reach our publicly facing web server. Go back to Horizon and log out from dev user, and let's go to the test project to see if we can ping the web server. Log in as test user, and the password is BSN123, yeah? Okay, and here we have already created the instances for you; the test instance is connected to the external network. If you don't have them up, start the instances. Please start the BSN-span instance as well; we'll be using it in the next section.
In this case, the floating IP of the web server ends in .130, so 20.20.20.130; start a ping to verify that you can reach the web server instance. At this point, I'm going to have Shin take over and go over the monitoring use case. Okay, Shin?

Yeah, let's slow down a little bit. I saw a few friends have some problems, so I'll just slow down for now and explain a few things. First, how to get into this lab. The way to get into this lab is to go to bigswitch.com, our homepage. On the homepage there is a Big Switch Labs link; click on it, and if you scroll down that page all the way to the very bottom, there is an OpenStack Summit module. This is the lab designed for this summit. Click on the Launch at the bottom, not the one at the top; the top one belongs to another lab. If you click on that, you'll start this lab.

Another thing: once you get into it, you'll get into this lab environment, and there are three tabs on the top right. The first tab is called Directions; that's where you can find the slides. It's a self-guided manual: if you follow the slides, you will accomplish whatever we show today. Take your time. The second tab is the physical topology of this lab. In this physical topology, we have two racks; each rack has two leaf switches, and there are two spine switches connecting the two racks. In each rack there is one compute node, and on top of each compute node there is a virtual switch, connected to both of the leaves. That is the physical topology we are working on today.

And logically, what we are trying to accomplish today, if you go back to the slides, is to use a Heat template to deploy a three-tier app into the dev project. So, I know many people logged on to Horizon using the username admin.
However, we didn't provision anything under admin, so the Heat template is not for admin. The username for the first set of labs is dev user, DEV user; the password is BSN123. BSN stands for Big Switch Networks. If you log on as this user, which is not admin, and you go to Project, Networks, and then Network Fabric, you'll see a tab called Network Template. That is where you apply the Heat template. It's already applied here, so we don't have the button anymore, but if you reach this point, you'll see Apply a Template; if you click on it, you'll see the template, which shows up here. If you go to the third tab in the lab environment, that's what I'm going to show here: this is the template that got applied for this dev user.

This template is applied in the project called dev, and it consists of three networks and one router. The three networks are attached to the router, and each of the networks has a server: the web, app, and DB servers. Then we associate a floating IP with the web server. That's the very first part of this lab. So what we have done so far is deploy this three-tier app, and we would like to achieve this connectivity requirement: the web server can talk to the app server, and the app server can talk to the DB server; however, the web server cannot directly talk to the DB server. That's what we're trying to achieve.

Given that the Heat template is complex like this, how can we even tell whether it is correct? To figure out whether our Heat template achieves the requirement, the approach we take is test-driven Heat template development and debugging. What we've done is, if you go to Project, Network, and then Network Fabric, there is a Reachability Test tab. As Ted has already done, we create three tests, web-to-app, app-to-db, and web-to-db, which exactly match the requirements here.
In each test, if we take this web-to-app example, we specify the source and the destination of the test. In this particular example, the source is the web server's IP address, and the destination is the app server's IP address. We expect that these two servers can talk to each other. If we go to the expected result, you can see there are actually quite a few possible expected results: dropped by policy, dropped by security group, quite a few possibilities. But in this particular test, we actually want the packet to be forwarded, so we pick the expected result Forwarded. Then we save this test and run it. The result is Passed, which means our Heat template satisfies the first requirement: the web tier can talk to the app tier.

Similarly, following the same logic in this Network Fabric reachability test page, we configure two other tests, app-to-db and web-to-db, and do the same thing, so we can verify whether our Heat template has really achieved our goal. In the earlier example, what we found is that app and DB can talk to each other, but web could also reach DB directly, because there was a security group configuration error in the Heat template. So we fixed that security group, and now the tests pass. That's what we have done so far.

When I was down there, some friends asked what this test exactly means: is it HTTP, or is it a ping? Here, we just use ping as a demonstration. However, you can configure much more complex tests: for example, you can configure UDP, you can configure TCP, you can configure different L4 ports, yes. But in this particular test, we are using ping for demonstration purposes.
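Since the three reachability tests together act as a unit-test suite for the template, it can help to keep them written down as data alongside the template, so they can be re-created or reviewed whenever the template changes. A hypothetical encoding, using the lab's addresses (the field names and file format here are illustrative, not a Big Cloud Fabric format):

```yaml
# Hypothetical reachability-test suite for the three-tier template.
reachability_tests:
  - name: web-app
    tenant: dev
    source:      { segment: webnet, ip: 10.10.20.3 }
    destination: { ip: 10.10.21.3 }
    protocol: icmp
    expected: forwarded
  - name: app-db
    tenant: dev
    source:      { segment: appnet, ip: 10.10.21.3 }
    destination: { ip: 10.10.22.3 }
    protocol: icmp
    expected: forwarded
  - name: web-db
    tenant: dev
    source:      { segment: webnet, ip: 10.10.20.3 }
    destination: { ip: 10.10.22.3 }
    protocol: tcp          # an L4 variant, e.g. a database port
    port: 3306
    expected: dropped-by-security-group
```

The last entry shows the kind of L4-level test mentioned above: the same negative check, but against a specific TCP port rather than ICMP.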
So that's what we have done so far. With the Big Switch solution — the Big Cloud Fabric solution — what we can achieve on top of that is this: if you go to the Big Switch UI, under Visibility there is a Test Path icon. Click into that, and you can configure exactly the same test as you did in the Horizon GUI, but we provide much richer information here. For example, we tell you exactly what the packet looks like — what the headers are — and exactly which routes and policies are in effect. In this particular example, this is the effective security group: it tells you explicitly which security group is in effect, and that's why this packet can get through this hop. That's for the instances. On the logical routers — Ted, just now, blocked the communication between two subnets in the router grid — the Big Switch GUI shows exactly which policy is in effect: the packet will get dropped because of this policy, and this policy is the ACL on the router. So if you come from more of a network admin background and you're more familiar with ACLs — more comfortable managing the network from a subnet perspective — this is the tool you're going to use. Are there any more questions about what we have done so far? Okay, I'll continue. So right now, with the help of a heat template and this powerful network debugging tool, we can provision the three-tier application, make sure the applications are connected correctly and behave as we expect, and even tell the exact physical path between any two instances — which interface the packet comes in on and which interface it goes out of. That's pretty cool. However, it's not enough. Let me give you an example.
Say you have a multi-tier application, and in your web tier you have tens or even hundreds of web servers with a load balancer in front of them. Some customers are complaining: how come your website has become so slow? But when you test, the load balancer always directs your traffic to the good web servers, so they look perfectly fine. Only occasionally does the load balancer direct traffic to some bad web server, and it's out of your control — you just don't know which web servers are the bad ones. Usually, the way to debug these scenarios is to really look at the packet level — at the trace — to see if there are any TCP packet drops, to see if the congestion window somehow never grows, to see if TCP delayed ACK was turned on accidentally, things like that. So how do you identify the problematic web server when you have many of them? In a traditional network, the way people debug this is: first, you have to locate each individual web server. You have to know, okay, web server one is attached to physical leaf switch one, web server two is attached to leaf switch five, or to some virtual switch — it doesn't matter, you have to locate it. Then you go to the switch that web server is attached to and configure a SPAN session to mirror its traffic to another port. On the destination port you attach a tool, for example Wireshark, and look at the packet trace. You have to repeat these steps for every single web server. That's just too much work — not a one-day or even one-week job. With the beauty of SDN's centralized control, we can really simplify this workflow: with centralized control, the controller knows exactly where each instance is.
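The packet-level symptoms mentioned above (retransmissions, a stalled congestion window, delayed ACK) usually start with a capture on the suspect server. An illustrative sketch — the interface name and port are assumptions, and tshark (Wireshark's CLI) must be installed for the second step:

```shell
# Capture the web traffic on the suspect server for offline analysis
sudo tcpdump -ni eth0 -w web.pcap 'tcp port 80'

# Then look for retransmissions with Wireshark's CLI, tshark
tshark -r web.pcap -Y tcp.analysis.retransmission
```

The pain point the talk is describing is not these two commands — it's getting the right traffic onto a machine where you can run them, for every suspect server, one SPAN session at a time.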
And because of that, you don't have to tap traffic based on a port. You tap traffic based on logical concepts: I just want to tap the traffic belonging to this subnet, or the traffic belonging to this tenant, or the traffic with destination port 8080 — it doesn't matter where it shows up in the network. You can use all these combinations and describe what you want from the logical perspective, instead of physically locating every endpoint. That's what we're trying to do here. So let's get back to this picture we're fairly familiar with. We created a three-tier app in this dev tenant — web, app, DB — and we associated a floating IP with the web server. It doesn't show up here, but we did that in the heat template. Then there is another project called test, which has been pre-configured for you, so you don't have to provision this network. The purpose of this project is purely to test whether the web server is reachable, and we've already shown that we can use this test server to ping the floating IP of the web server. Now I have a question: what if I want to tap all the traffic destined to the floating IP of this web server? In a traditional network, how would you even do that? There's basically no way to make it happen, even on a port-by-port basis, right? So what I'm going to do here is first start a regular OpenStack instance — a VM. In this case, we've already pre-created it for you; it's called BSN span, and it's a regular VM instance. We start a tcpdump on this instance, and now we're going to do some magic.
The result of the magic is that all the traffic going between this web subnet and whatever the source is will be spanned, across the entire fabric, to this VM. On this VM you can start whatever tool you like — for example, tcpdump — to really look at the packets; we capture all the traffic. So let's go ahead and do that. First, you go to the Big Cloud Fabric GUI. In the UI you'll see Visibility, and under Visibility there is SPAN. Click on that. In this portal, we see there are two kinds: one is called local SPAN, the other fabric SPAN. We're not going to talk about local SPAN — that's the traditional networking SPAN, configured on a per-port basis, box by box. We support that, but that's not the cool part. We'll start with fabric SPAN, which means we can very flexibly define the criteria for the traffic we want to span across the entire network — physical or logical, it doesn't matter — and specify the destination where we want the spanned traffic delivered. You've probably seen something already configured here in your setup; I'll just delete it so we can do it from scratch. The next step is to configure a fabric SPAN. You see this plus button — click on it. This is a wizard-style UI. For every SPAN session we need to specify a name; I've been calling it BSN span, so let's give it the same name. We set it to active and assign a priority, because SPAN sessions can have different priorities: if you define multiple SPAN sessions with overlapping policies, you may want some sessions to have higher priority than others. Here, we just give it priority one.
Okay, so next is what's called the destination SPAN fabric interface group — the destination, the port all the spanned traffic will go to. For the types of port we support: the port can be on a virtual switch, on a physical switch, or on multiple physical switches where those ports form a LAG, in which case the traffic is load-balanced across the LAG members. In this demo, we're going to use a virtual switch, because we already brought up a virtual instance somewhere in the network — we don't even know where it is. I have a BSN span port here already; you'll probably have it as well, but I'm not going to use it. Instead, I'm going to create another destination port. I'll call it BSN span — give it whatever name you like. We know this destination SPAN port will be on a virtual switch, so instead of a leaf switch, we choose the virtual switch. Now we really need to know where this packet is going. The way to figure that out: if we look at the high-level goal we're trying to achieve, this BSN span VM instance has already been brought up in the test tenant, and right now we're trying to find out where this instance is attached. So go to the Horizon UI and log in as the test user — the username is no longer dev user, it's test user; the password is still BSN123 (BSN stands for Big Switch Networks). Log in, go to Project, and look at Instances. You'll see there's already a BSN span instance here, created for you just for this lab. Go to that instance. It's an Ubuntu server; the username for this server is ubuntu, and the password is also ubuntu, all lowercase.
So this is a regular Ubuntu server. If we run ifconfig, we see two interfaces — the loopback and eth0 — so effectively there's just one real interface. What we want to achieve, again, is to span all the matching traffic, from anywhere in the network, to this instance. So right here, we run a tcpdump. In this demo we're going to use ICMP, so I filter on icmp to exclude all the other traffic. We run the tcpdump here, and at this point nothing happens — we just leave it running. Now, we know there's an instance we want to span the traffic to. From the networking perspective, we need to span the traffic to the exact port this instance is attached to. The way to figure out the port is pretty straightforward if you're fairly familiar with OpenStack: go to the Admin tab, and under Admin there is Networks. You'll see a network called BSN span — that's where the instance lives. Go in there and check out the ports; this is the port the instance is attached to. Go to that port, and here's all the information: it tells you the instance is attached to the virtual switch on the host compute node one, and the interface UUID is 3BB77-something. Note down all this information and go back to the Big Switch GUI, where we stopped just now. We want to use virtual switch one on compute node one; the port is the 3BB77 one — that's the qvo port. This is the destination port we want to span the traffic to. We submit it. At this point, we have configured a fabric-level SPAN session named BSN span, with priority one.
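As an aside, the qvo name shown here follows Neutron's OVS hybrid-plug naming convention: it is qvo plus the first 11 characters of the Neutron port UUID. A small sketch of the derivation, using a made-up port UUID (in a real deployment it would come from the port details page or `openstack port list`):

```shell
# Hypothetical Neutron port UUID -- not the lab's actual port
PORT_ID="3bb77abc-de01-4f23-9a45-678901234567"

# Neutron's OVS hybrid plug names the OVS-side veth "qvo" + first 11 chars
TAP="qvo${PORT_ID:0:11}"
echo "$TAP"    # -> qvo3bb77abc-de
```

Knowing this convention lets you match the qvo port shown in the Big Switch GUI against the Neutron port UUID you found in Horizon.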
And the destination goes to a port on a virtual switch — that's what we've done. Click Next. Here is where the magic really happens: we specify a few policies to define what kind of traffic we're interested in across the entire fabric. It doesn't matter where it shows up, and it doesn't matter whether it's a physical or a logical concept. There's a plus button; we click on it. You can see you can match on a whole lot of things, at different layers of the packet. At the top, you can specify the logical level: which tenant you're interested in, and within a tenant, a segment — a segment in the OpenStack world is called a network. So you can specify things logically here. If you scroll down a little, you'll see you can match on many more criteria — EtherType, IP protocol, all the L2 and L3 headers — and also at a higher level, all the way to layer four, the TCP and UDP ports. In this particular example, I'm going to span all the traffic on the external network to this VM. The reason is that this web server and the test server talk to each other using floating IPs, so the destination in both directions is some IP on the external network — that's why I'm tapping the external network. The external IP prefix is 20.20.20.0/24; you specify it here as the destination IP. You append it and save it. Now you have a SPAN session: all traffic destined to the external subnet will be spanned to that VM. Let's go to the OpenStack GUI to check what's on the VM — Instances, then that instance.
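On the capture VM, the same match that the span policy expresses can be narrowed further in the capture tool itself. An illustrative filter, assuming the 20.20.20.0/24 external prefix and eth0 from this lab:

```shell
# Show only ICMP whose destination falls in the external (floating IP) subnet
sudo tcpdump -ni eth0 'icmp and dst net 20.20.20.0/24'
```

This is useful because the fabric span may deliver more than the ping traffic you care about, and a BPF filter keeps the terminal readable.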
Let's see if there's any traffic going on. We go to the test server — the ping has stopped — so from the test server I start the ping again to the web server's floating IP. The ping goes through. Then we go back to the span instance and start the tcpdump again, to see if something happens. Nothing happens. So what's going on? Let's go back to the Big Switch GUI and check the destination port. I probably configured the wrong port. Let me delete this BSN span port, go back, and add it again, double-checking the destination port and the policies. We save it. So that was a misconfiguration — I spanned to the wrong destination port. Now we can see we're getting the traffic on this instance. What shows up here: this .3 address is the private IP address of the web server. The traffic between the web server and the test server uses the test server's floating IP as the destination, and this is the traffic from the private IP address of the test server to the floating IP of the web server — so we see both directions. This was a simple demo showing that, with centralized control, we can span traffic across the entire network without knowing where each server is, and then run tcpdump or whatever flow analysis using this powerful tool. So that's the end of my session; I'll hand it back to Connie. Thank you, Shane. So, to summarize what we have done today: first, as the admin, you can construct a template for your tenants to consume, and you can apply a DevOps model to your template creation, writing a set of test-driven unit tests to make sure that as your templates evolve, they stay consistent with your policies.
Shane also demonstrated the SPAN capability. What makes it different from the traditional world is that before, you had to physically know where you wanted to span from and where you wanted to span to. With this fabric approach, you still have the same capability as before — physical SPAN — but in addition, you can span based on your logical topology, because that's what the cloud is about: you don't know where your VM is, and nobody is going to schedule the VM exactly where you'd like, right? As an admin, you shouldn't have to carry the burden of figuring out where the VM is in order to do the span. What we provide is a logical construct: you just tell us which segment, which logical network you're interested in, and the fabric figures out where the VMs — where the endpoints — are. And it doesn't apply only to virtual endpoints. If you have physical servers connected to your OpenStack cloud — Ironic nodes, or your legacy databases, whatever — they can all connect to the same fabric, and you can use the same API and the same constructs to span any logical endpoint or physical location to a particular destination. If your tool is connected to the fabric as well, that tool can be leveraged by multiple tenants and by admins for different types of debugging. You can attach multiple tools to the fabric — for example, Wireshark, some sort of DDoS detection, some behavioral analysis tools — all attached to the fabric, and then span the traffic over there for additional analysis of your network. So that's what this demonstration was about: we embrace the logical network construct and bring those traditional debugging capabilities into the cloud, leveraging those logical constructs.
All of this is possible because Big Cloud Fabric manages not just the virtual networks but also the physical networks. We know both the virtual world and the physical world, so for the first case — the test path — because we know both constructs and can link them together, the two are not independent of each other; they are combined. The controller knows everything about the physical path as well as the logical path, so it knows, at every single hop, what happened in the logical world as well as the physical world, and it can tell you how the physical path is realized in your network. Same thing with the fabric SPAN: the fabric knows where all the locations are and can create a span session on demand, from the moment the packet enters the fabric, and span the traffic to whichever point in the fabric you want. That's the thing that really differentiates our approach from anybody else's: we manage the network — physical and virtual — together. The physical switches are under controller management, and your logical constructs are under that same single control plane. If there's one thing to take away, it's this different approach: there's no overlay network and no underlay network — there's just one network you need to manage. And just to conclude — sorry for today's unexpected load; we tested with 30 users and assumed 30 would be equivalent to 100, and that was very much not the case — you're always welcome to log on to labs.bigswitch.com and enter your email, and we'll give you access. In about 30 minutes you'll get an approval letter with a username and password, and you can do everything we did today, with step-by-step instructions, at your leisure.
So you can log in, get your own sandbox environment, and have your own experience of how OpenStack integrates with Big Cloud Fabric and unifies physical plus virtual. Thank you very much. If you have any questions, we'll be hanging around, so you're welcome to approach any one of us and chat. The sandbox has OpenStack, the Big Switch Big Cloud Fabric, and other products, so feel free to play with it. It expires maybe at the end of the week or something like that, but if you're interested, at any time you can just log in yourself, ask for access, and we'll grant it.