All right. So I guess this is the last session of the conference. Thanks for making it; I think you need to give yourselves a pat on the back. This is a really good turnout. Although 165 people had RSVP'd, given that it's the last session, we were expecting fewer people. But great, let's get started. The title of the workshop is Developing, Deploying, and Consuming L4-L7 Network Services in an OpenStack Cloud. This is a hands-on workshop, so it will be interactive. You'll be given an environment where you can poke around and play. We have a landing page for this entire workshop, so the content here will have life beyond just this session today. We're trying to make resources available to you so that you can go back and run through the exercises we will do at your own pace, and we'll try to keep it updated to the extent possible.

We have a team of presenters here. I'm Sumit Naiksatam from Cisco. And we have Igor and Hemant in the front here, Ivar, Jason right in the front, and David. They can introduce themselves when they come up. We're just waiting for people to walk in before we start handing out the access usernames for the cloud; we want to get an estimate of how many people are going to be here. We do, unfortunately, have constraints on how many independent users or tenants we can support. So if we have 40 or fewer in the room, everybody will get their own individual access. If not, I would request you to share; a few people might have to share.

We have a packed agenda. When we submitted the abstract, we were budgeting for a three-hour slot, but it turned out to be a 90-minute slot. We didn't want to cut down on the content, though; we didn't want to disappoint you guys. So there is a lot to cover in 90 minutes. And like I said, the idea of this workshop is to give you the resources you need so that you can do follow-up exercises by yourself afterwards, and then reach out to us.

We'll start with the intro and workshop logistics, which is happening right now. We'll give you a brief overview of where services in Neutron are today. From there, we'll move into the group-based policy abstraction, which leverages these services and helps achieve service chaining, which is the topic of today's workshop. We've split the workshop into three parts. Although the title talks about developing, deploying, and consuming, we'll actually run through the workshop in the reverse order: we will first see the tenant workflow, then the operator workflow, and finally how, as a vendor, you can develop services and plug into this framework. At the end we'll have an actual tour of a production setup, where we'll try to introduce you to more complex features like high availability and failover, which can be supported as part of this framework and which are currently in production in the SunGard environment. And hopefully we'll have time for Q&A as well.

So like I said, we have this home page set up for the workshop. The link to these slides and the workshop guide is on this wiki page. If anything changes over time, we will keep it updated, and you can come back and refer here. The workshop guide has the IP address for the lab that you will be accessing. I don't believe we have handed out the usernames yet; once we do that, you will be able to log in.
But like I said, what we have set up for you here is an environment which is very similar to what is in production at SunGard today. The exercises outlined here, though, you would still be able to go back and replay in a DevStack that you can install on your laptop. So with that, let's start with the first part of the agenda. Igor, why don't you go ahead and basically level-set on Neutron services?

Hello. My name is Igor, from Intel. I'll give a brief overview of Neutron's advanced services and how GBP interacts with them, including in terms of service chaining. So Neutron's advanced services allow us to use and place load balancers, firewalls, and virtual private networks. But for someone to use them, they need to know how to configure them and how to instantiate them. They need to take into account the networking constraints and requirements, and make sure everything is properly configured and tuned so that the services can actually be used, and used between the VMs that they want.

So group-based policy tries to, and actually does, abstract Neutron services by managing all of the deployment, configuration, and instantiation of these services itself, without the user (the tenant or admin) having to worry about the specifics of these services and how the network needs to be arranged so that they can work. What happens is that users, just by specifying policies and intent definitions, say: well, I want a firewall or a load balancer or a virtual private network between my groups of VMs, my different tiers of servers, without having to worry at all about networking configuration. They say: between these groups, I want these services. And group-based policy has all of the logic necessary to make that work between those groups. Hence the name group-based policy.

Besides instantiating and configuring these specific services, group-based policy is also capable of instantiating multiple of these services and connecting them together between those groups. So it does a kind of service chaining. The way it does so internally is that it decouples the configuration and instantiation of each individual service, like a load balancer or firewall or VPN, from the actual plumbing of them together, the actual chaining. So we can easily include a new kind of service through what we call a node driver. For instance, we can create a service VM driver; and actually, as we will see afterwards with the network function plugin, we can have services running in service VMs. And that's decoupled from the actual plumbing. We could introduce a new plumbing driver that, instead of just connecting these Neutron services together, would actually chain Neutron ports themselves. So a possible plumber would be one that interacts with the networking-sfc project, which is able to chain ports together.

So in summary, we have plumbers; you can see in the blue part of the diagram a service chain plumber. The plumber connects the services together, while the green side is the instantiation and configuration of the specific services themselves. And we have a specific kind of those green drivers, or plugins, which is the network function plugin, which has enhanced capabilities like monitoring, configuration, and the full lifecycle management of the service. So having said this, and having had this introduction to the services and the chaining of them together, I will now hand the mic back to Sumit.
And he will go and deep dive a bit more and talk about the remaining parts of group-based policy. Thank you.

Thanks, Igor. So that was a level setting in terms of what is available in Neutron, which we are leveraging as building blocks here. As Igor mentioned, the group-based policy abstractions sit on top of the other fundamental OpenStack projects, namely Neutron in this context. So let's see, how many people here are familiar with GBP? Oh, I see some heads nodding here, but probably not everyone. So I think this is a good introduction then.

Group-based policy is a policy framework which provides intent-based automation. To very quickly summarize: at a high level, we have a very simple model. You define something called groups, which are collections of entities with similar properties, and then you have policy rule sets which define how these groups can communicate with each other. At a very high level, if there's one thing you need to know from a resource model perspective in GBP, this is it. If you peel the layers, so to say, each of these in turn has certain sub-resources which add richness to the model and help us achieve the relationships between the groups. Namely, groups have what we call policy targets, or endpoints. The policy rule sets, or contracts, have policy rules in them. And each policy rule has a classifier part and an action part. An action could be as simple as: for anything matching this classifier, allow the traffic to go through. Or, very germane to the context of this discussion, you can have a redirect action which redirects to a service chain. So you can say: for traffic matching this classifier, redirect to a service chain; the service chain then takes over processing that traffic and sends it along.

So there are policy aspects to this model, there are service chain aspects, and there are infrastructure aspects. How the connection happens is the infrastructure part; who can talk to whom is the policy aspect. I won't go too deep into that; I just wanted people to have a rough, high-level understanding before we jump into the exercise. The document actually has a somewhat deeper summary of the GBP model, and there are other documents you can look at, or reach out to us and we'll be able to help.

So with that, at a high level, the exercise that we are going to do here: you will see at the end of it that essentially in one, two, three easy steps you can go from standing up your topology to having a service chain instantiated in it with multiple services. You define your service chains. You create a policy, essentially actions which redirect to those service chains; these are still definitions. And then you create groups and specify how these groups communicate, at which point the service chains are instantiated.

So that brings us to the hands-on part of the lab. At this point, does everybody here have either independent or shared access to the lab? Anybody needing access? Have you been able to log in? OK, nobody needs it. Awesome. So the objective of the exercise that we are doing today is to stand up this topology. This is the standard, oft-repeated use case of a three-tier app, wherein you have a web tier, app tier, and DB tier, and the web tier fronts the application and provides access from the external world.
And between the tiers, you need to be able to set up very specific connectivity and achieve traffic isolation for the rest of the traffic. So if you start walking from right to left: between the app and DB tiers, the requirement is that we allow TCP traffic, but we want a firewall to filter everything except ICMP and database traffic. Between the app tier and the web tier: in the app tier we have a bunch of servers which are running the app, so we need a load balancer that will balance the traffic coming from the web tier over these application servers. And the web tier is front-ended by, first, a firewall, which has a different set of rules from the one protecting the DB tier, and a load balancer, because we have multiple actual web servers behind a particular web service IP. And then the leftmost block is the modeling of the external world.

Any questions in terms of what we are going to try and do here today? A very simple example, right? The three-tier topology part is well understood. The need for having services inserted between the tiers is also well understood. The dynamic automation to achieve that is what is new and hard, and that is what is being achieved here with simple workflows. That's what we are trying to see here. Now, if we were doing a three-hour lab, we would actually have had you go through the steps for standing up each and every block in this diagram. But since we have a lot to cover and a short amount of time, what we did when we created these tenants for each of you is pre-create some of this topology for you. We'll go through exactly what is pre-created, and there is an exercise portion left for you to do. That way, you get a feel for what the workflow is, while we don't spend too much time on repetitive tasks, meaning similar things for each of the tiers.

At this point, I will switch over to the doc. If I start scrolling through the doc: I think by now everyone who needs it has access to the lab. You should be seeing your own tenant at this point. You will see a Policy tab, which you are probably not familiar with if you have not worked with GBP before. That Policy tab is the place from where you can essentially run all these workflows and configure the resources that are part of this. One note here: in this lab we are going to use the UI exclusively. All of this is equally well represented in the CLI, which is documented in this document, so you can use either of the two to perform your actions.

So we talked about what the exercise goal is. We are at the part where we are talking about the tenant-facing API and the user workflow. The first thing we do is stand up the three tiers: web, app, DB. That's as simple as saying: create the DB group, create the app group, and create the web group (a CLI sketch follows below). In here, you will actually see that we are referencing something called a network service policy. There is a specific reason why we are using that here, but we won't get into it right now; we'll cover it in the operator workflow. For now, think of these as just creating groups. These are groups such that the VMs which come up in them can talk to each other within the group, so they have connectivity within the group, but other than that, they are completely isolated, OK?
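For reference, here is roughly what that first step looks like in the GBP CLI. This is a minimal sketch: the group names mirror the lab, but the exact command and flag spellings should be verified against your gbpclient release.

```bash
# Stand up the three tiers as GBP groups (policy target groups).
gbp group-create web
gbp group-create app
gbp group-create db

# A member of a group is a policy target; booting a Nova instance on the
# policy target's port makes that VM a member of the group.
gbp policy-target-create --policy-target-group app app-pt-1
nova boot --flavor m1.tiny --image cirros \
    --nic port-id=$(gbp policy-target-show app-pt-1 -f value -c port_id) app-vm-1
```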
There are certain policies needed for bringing up these groups, in terms of what part or type of the infrastructure is used: what network, what subnets. But this is the part of policy-driven automation which kicks in here in GBP: these things are automatically, or implicitly, chosen when we create the groups. Of course, you have the option of explicitly specifying them. But if you don't care what subnet a group gets, you can just create the group, and implicitly you will find that it has been allocated a subnet from a pool. In this example, the screenshot shows the app tier. It has a specific L3 policy, and it has an L2 policy, which translates to a Neutron network, so it gets its own L2 network here. Moreover, if you've seen the document, it says that in this setup the IP pool, or let's say the supernet, set up for each of the tenants is from the 11/8 address space. So every new group that you create gets a subnet from that space, and I think we are creating /24 subnets by default.

At this point (and we have also already launched some VMs in those groups for you), this might be a good time to go and look into your tenants if you haven't already. I have a completely separate tenant that I'm going to go to. Here we can take a look: we have internal groups and external groups. To represent the external world, we have the notion of external groups; so far we've been talking about internal groups. We have the app, web, and DB groups created. If I click on the app group, I see that it has a few members. Members, in GBP terminology, are the VMs, that is, Nova server instances. And similarly for each of the tiers: app, DB, and web.

So we are done with the first part, where we have the three tiers established. Now we start going through the workflow of creating the policies for these different tiers to communicate. Like I mentioned earlier, we have the notion of a policy rule set, in some other places referred to as a contract. These are essentially rules comprising classifiers and actions. So we create an allow action for the ICMP traffic. And then, for each of the service chains, we create a redirect action which points to a service chain spec definition, which we'll go into in detail a little later. I'm sorry? Yes, a lot of this was already pre-created. In fact, we can go back to the screen: if you go to the application policy and look under policy actions, you will see that the allow action is created and the redirect actions are also created. The redirect actions in turn point to a service chain spec, which is a definition of a service chain; it's not an instance yet. We have late binding: at the point where two groups start communicating, a service chain is instantiated. This just defines what a service chain would be, and the spec essentially carries the configuration for the services in it. We'll go into more detail on that in the operator part of the workflow.

So we had policy actions, and we had policy classifiers. Again, these are all pre-created in your setup, so you can see that we have an HTTP classifier, an ICMP classifier, and a TCP classifier. If we go back to the diagram, these are driven by the requirements in the problem statement for that three-tier topology.
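For reference, the pre-created classifiers and actions map to GBP CLI calls roughly like this. It's a sketch with illustrative names and values; check flag spellings against the upstream GBP client documentation.

```bash
# Classifiers: what traffic a rule matches (protocol, ports, direction).
gbp policy-classifier-create icmp-traffic --protocol icmp --direction bi
gbp policy-classifier-create tcp-traffic  --protocol tcp  --direction bi
gbp policy-classifier-create http-traffic --protocol tcp --port-range 80 --direction in

# Actions: a plain allow, plus a redirect pointing at a service chain spec.
# The redirect references a *definition*; nothing is instantiated yet.
gbp policy-action-create allow-action --action-type allow
gbp policy-action-create redirect-to-fw-chain --action-type redirect \
    --action-value <firewall-chain-spec-uuid>
```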
And then we essentially create a policy rule comprising a classifier and the action that we want for that classifier, and then we create policy rule sets which have one or more of these policy rules. So the HTTP LB redirect policy rule set, which front-ends the app tier, says: allow ICMP traffic and allow TCP traffic, but have it load balanced, so it goes through the load balancer. The web firewall-LB redirect PRS is the one that front-ends the web tier; there we have traffic go through the firewall as well, so it's a service chain comprising a firewall and a load balancer. If we go back to our document now, we are at the point where we have talked about application policies, classifiers, and actions; we created policy rules; we created a policy rule set. And this is what it looks like, which we saw in your setup.

OK, so now we come to the exercise part of the workshop. If we go back to our groups: the way to get two groups to communicate with each other is to have one group provide a policy rule set and another group consume it, such that the consumer has access to the ports or type of traffic that the provider provides. If you see here, we have already set this up in the environment: the app tier is providing the HTTP LB redirect PRS, and the web tier is consuming it, so the connectivity between the web and app tiers is already set up. The web tier is providing this particular rule set, and the external group, which is called "external world," is consuming it. So again, going back to our diagram, this part of the connectivity is already set up, just by virtue of the provide and consume relationships established for those groups. And as an exercise, we are going to set up this remaining part. These are the CLI commands to do that (see the sketch below), but we'll use the UI.

So what you do is, in your app tier, you go and say Edit, and you consume the TCP firewall redirect PRS. Save changes. That sets up this side of the connectivity. The next thing you do is say that you want the DB tier to provide the same contract that was consumed. The moment I hit Save, it's going to start the process of creating the service chain. That service chain has a firewall instance in it; it's going to launch that firewall instance and configure it, which will take some time. During that, we'll cross over to the next part of the workflow, and then we'll come back in a couple of minutes to check what has happened in terms of the instantiation of the service chain. So at this point I'm going to hit Yes. Unfortunately, the UI is synchronous here, so it's going to wait until the service chain instance is completely created. Then we can come back, see what the service chain instance looks like, and ping and validate the connectivity.

But while we are waiting: is everybody able to do this, if you're trying? Yes, Sam. Done. Oh, good. I always like gratification. So far, no questions; nobody had any issues. Awesome. OK, so let me quickly hand over to Ivar, who'll walk you through the operator workflow, and then I'll come back.
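Here is what that rule and provide/consume step looks like in the CLI, as a sketch. The names mirror the lab, and the key=value syntax for the provided and consumed rule set arguments should be checked against the upstream GBP examples.

```bash
# Combine a classifier and a redirect action into a rule, and wrap it
# in the policy rule set used in this exercise.
gbp policy-rule-create fw-tcp-rule --classifier tcp-traffic \
    --actions redirect-to-fw-chain
gbp policy-rule-set-create tcp-fw-redirect-prs --policy-rules "fw-tcp-rule"

# The app tier consumes, the DB tier provides; once both sides are in
# place, GBP instantiates the firewall service chain between them.
gbp group-update app --consumed-policy-rule-sets "tcp-fw-redirect-prs=true"
gbp group-update db  --provided-policy-rule-sets "tcp-fw-redirect-prs=true"
```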
Thank you, Sumit. Hello, people. I'm Ivar, from Cisco, and a core reviewer of the group-based policy project. The part I'm going to cover, as Sumit already mentioned, is the operator workflow. So why do we need an operator workflow? Well, first of all, because we want to scare you. Second of all, because everything you see in group-based policy as a tenant seems pretty magical, in the sense that you just create groups, you don't care about subnets, and then you say, "I provide this policy, I consume this policy," and things happen. You get service chains, you get your subnets, you get internet access; you get everything based on policy. But who is defining those policies? It's usually the operator.

The reason it's important to have a clear separation of concerns when you deal with service chaining, or connectivity in general, is that you want to do the hard work once, depending on the needs of your data center and the type of policies you want to provide, so that your users can have that magical experience of just creating their groups and deploying their apps easily. The other reason for a clear separation of concerns is that in any decently sized company, there is no one person who can do everything. You usually have different teams: one takes care of the physical infrastructure; another team takes care of defining the policies that need to be consumed, like what kind of internet access the tenants should have. So you can get those separate teams to do those different operations, and the users can then use the results.

More specifically, based on the exercise we did today and on this workshop, what the operator needs to take care of is: external connectivity, which is usually pretty tricky; service chain policies, meaning defining the chains and giving users a good way to consume and instantiate those chains; and then the application contracts. What is important is that when you are an operator and you define those things, you have to share all those contracts with the tenants so that they can use them.

So let's go step by step, external connectivity first. When you think of external connectivity, you basically need to worry about two big things. One is the definition of what an external connection means in your infrastructure and data center. The other is defining how your tenants can go to the outside world, using public IPs or not.

Let's go to the document, and I will show you a couple of snippets from the UI to get a clearer understanding of what is going on here. Of course, this is all explained in the document, so when you go home and try it, you will be able to do it through the CLI or the UI. OK, so the first thing we see: do you see this, or do I need to zoom in a bit more? Is it better? What you see here under the Policy tab, Network and Service Policy, is the external connectivity section. In the external connectivity section, what you have to do is create, using Neutron (which is the back end on top of which group-based policy provides automation), an external network and its subnet. And then you can basically import that subnet from Neutron by referencing it. Many of you might be asking why you need to go to Neutron to do that. Well, the reason we decided to do it like this is that, if you think about external connectivity, you may not want to have too much magic around it.
You want to be able to specify your segmentation ID for connecting outside via your provider network, and the subnet you want to expose. All of that can already be done today through Neutron, and it's really pointless to reinvent the wheel. So what you do in this case is that, for this specific object only, you create it explicitly in Neutron and import it into GBP by just referencing it when creating the external connectivity. Yes? If you already do have one, you can just reference that provider network from GBP like that.

Once you do that, and of course share it (as you see, there is always a "shared" attribute over here), what you want to do is define how your tenants can go to the outside world, through NAT policies. I'm not sure I have a screenshot for this, so I will just show it from the CLI from now on. Even with your Neutron subnet in place, you may want to create a pool of IP addresses that can be used on that external segment for tenants' consumption. So you create a NAT pool, giving a subnet CIDR that can be whatever CIDR you like, even overlapping with the external subnet of your provider network. And what GBP is going to do is that every time a group requires external access, it will automatically figure that out, get an address from that pool, and use it, for your load balancer VIP, for example. There is also, of course, the possibility of port address translation: if you don't have too many addresses, you can have all the members of your group connect to the outside world using just one IP. And that is all you need to do for external connectivity.

Then you need to define the network service policy. The network service policy is basically a construct, kind of a label you place on a group, that tells GBP how the IP allocations should be done for that group. So, for example, you can tell GBP: hey, this web group needs to have external access only on the VIP, meaning only on the VIP policy targets, on the VIP ports. So all the other members of that web group will not get a floating IP, while all the VIPs you instantiate there will. And this is a pretty common use case on a web tier, if you think about it: you don't want the consumers of your application to access every single member; you want them to reach the VIP. You don't want to waste floating IPs. From there, traffic will reach the other web servers through the load balancer, if any. And this is how you create it; once again, remember to share it when you do. This is all documented in the CLI part as well.

Now that we've covered the external connectivity part, we can go to the service chaining part. What we need to do here is define what the service chain is made of. The service chain, first of all, is made of services, of course. They are seen, in this case, as nodes in a template, and this template for us is the service chain spec. The service chain spec is composed of nodes, and what those nodes are is defined by the service profile that you can see here. So as an operator, it depends on the services you have available in your cloud: they can be open-source services, or they can be vendor-specific services, provided there is integration for those.
And what you want to do in this case is create a service profile in order to describe what kind of node you want in the service chain. "What kind of node" means you can say: hey, I want this firewall to be a layer 3 firewall of a specific flavor; it can be tiny, or small, or a medium firewall. And I want this firewall to be from this specific vendor. Or: I want this firewall to be from any vendor, I don't care, just pick the first one that matches the definition. Based on this definition, which is the service profile, you can then go, share it, and create the service chain node.

The creation of the service chain node is pretty much defining which node will be part of a specific service chain. You specify the service profile that you already created, and you can also provide a configuration template. That can be a script, it can be a Heat template; it's basically some generic configuration you can provide to the service chain node (which the service needs to understand, of course), so that it can be the common denominator across all your tenants. Obviously, when a chain is instantiated, depending on which group is providing the chain, the configuration of the node will be different: it will be placed on a different network, it will have different routes. But there are some things that you really want to have set in common across all the services you run, and this is what the configuration template is about. You can specify all the common configuration there.

So you create your nodes and you share them. And the last step for the operator is to create the service chain specs. You have defined your services, you have your LEGO bricks; now it's time to put them together in order to define what a chain is for you. Depending on the needs of your users, or on what you want to provide to your users, you can decide: OK, for the web type of groups, I want a firewall and a load balancer, with the firewall in front, so that I can protect myself from internet access. Then for the application groups, I may want only a load balancer, because maybe I don't need too much security for the east-west traffic happening over there, but I still need a load balancer because I want to be able to scale my application tier, kill nodes that are no longer functional, and create new ones. So I get a load balancer. But then there is the database tier. The database tier is super critical, so I want to put a firewall in front of it, because I want to be protected (because I'm paranoid), and even for east-west traffic I want to make sure that my data is safe.

So you define the specs. It is very simple, actually: you click on Create Service Chain Spec and you compose your nodes in whatever order you want. What the order defines is how the traffic hits the services, starting from the consumer side. In this case, for instance, you have node 1 and node 2. If traffic is coming from a consumer of the contract (of the policy rule set) to the provider, the traffic will first hit node 1, then node 2, and so on until the end node. For traffic in the opposite direction, when it's actually the provider sending traffic to the internet, for instance, or to a consumer, the traffic goes from node 2 to node 1, and so on. That is what the order defines.
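Pulling the operator steps together, here is a condensed CLI sketch of this whole section: external connectivity, the NAT pool, the network service policy, and the chain definition. Names, CIDRs, and template paths are illustrative, and the exact flag spellings (for example, underscore versus hyphen in the subnet argument) should be verified against your GBP client release.

```bash
# 1. External connectivity: create the external network/subnet in Neutron,
#    then import the subnet into GBP as an external segment.
neutron net-create ext-net --router:external True
neutron subnet-create ext-net 172.16.0.0/24 --name ext-subnet
gbp external-segment-create ext-segment --subnet_id <ext-subnet-uuid> --shared True

# 2. NAT pool and network service policy: floating IPs only for the VIPs.
gbp nat-pool-create ext-nat-pool --ip-pool 172.16.0.0/24 \
    --external-segment ext-segment --shared True
gbp network-service-policy-create vip-only-policy --shared True \
    --network-service-params type=ip_pool,name=vip_ip,value=nat_pool

# 3. Chain definition: service profile -> chain node (with a config
#    template) -> chain spec (node order = consumer-to-provider order).
gbp service-profile-create fw-profile --servicetype FIREWALL \
    --insertion-mode l3 --shared True
gbp service-profile-create lb-profile --servicetype LOADBALANCER \
    --insertion-mode l3 --shared True
gbp servicechain-node-create fw-node --service-profile fw-profile \
    --template-file ./fw.template --shared True
gbp servicechain-node-create lb-node --service-profile lb-profile \
    --template-file ./lb.template --shared True
gbp servicechain-spec-create fw-lb-spec --nodes "fw-node lb-node" --shared True
```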
Once you do that, extra steps an operator can take are to create the policy rule sets, these interfaces that the groups provide and consume, and the policy rules, in which you set up a redirect action. You're saying: in this contract, this policy rule set that is supposed to be used by web tiers, there is a rule that says "redirect the traffic to this chain." By doing that, you are still not creating the chain; the actual services are not created, so you are not wasting resources. But you are creating a resource that is shared with all your tenants, so that when a new tenant comes and wants to create their application, they just create the group. And once the policy rule set is provided, all the magic happens. For the users, it will be like: whoa, that was super easy. That's because of the great deal of automation that exists in group-based policy, of course, but also because the operator went and defined all these constructs according to the needs of their cloud. I think we can now go back and check the rest of the exercise, so I will hand it back to Sumit. Thanks.

Thanks, Ivar. I hope that while Ivar was showing you the document, you were able to go and poke around in your tenant and actually see these constructs. The other thing to note here is that the service chain specs, and the PRS itself, are very dynamic. If you change the definition of the PRS, the change automatically takes effect wherever that PRS is deployed. That's the whole idea: you define the policies once, in one place, and wherever you apply them, if you dynamically change the policy, it applies everywhere else.

So let's go back here and see where we are. If I go to Network Services (and I'd encourage you to click through in your setup as well) and go to service chain instances, what you see here is that the service chain instance we expected to come up is here now. If you recall, the DB tier was providing the TCP firewall redirect PRS, and this service chain instance corresponds to that. So now the actual VM has been instantiated, a firewall VM, an ASAv in this case. The traffic path has been stitched so that traffic goes through this firewall, and there is communication between the app tier and the DB tier through this service chain.

So we could do the validation exercise here. How are we running on time? I will leave it to you to try out; a simple validation is described in the document, and I think there are other more interesting aspects to cover. At this point, if you look at the part about validating the service chain creation: we should be able to go to the app group, pick any VM, and console into it. This is just a basic CirrOS image. The app tier has the 11.0.1.x subnet, and I know the DB VM is at 11.0.0.4 because I've seen this before; you can check in the DB tier, you will have a VM with 11.0.0.4. The ping should go through. And I can try telnet, and it should allow this; you will get "connection refused" because nothing is listening on that port, but essentially that shows the traffic gets through. If you try to do the same thing on a different port, the firewall should block it, so you'll see that it basically hangs.
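From the CirrOS console, the validation boils down to something like the sketch below (addresses as in the lab; the port numbers here are illustrative, since the allowed set depends on the firewall rules in the chain spec).

```bash
# Run from a console session on an app-tier VM.
ping -c 3 11.0.0.4     # ICMP is allowed by the firewall, so replies come back

telnet 11.0.0.4 3306   # a port the firewall allows: "Connection refused"
                       # (nothing is listening on the DB VM), which still
                       # proves the traffic traverses the chain to the VM

telnet 11.0.0.4 8080   # a port the firewall blocks: packets are dropped
                       # silently, so this just hangs
```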
So the firewall, the ASAv in the service chain, is dropping the traffic that isn't allowed through. That kind of validates the service chain that we created. Removing the service chain would be as simple as going here, having the app tier stop consuming the PRS, and then having the DB tier stop providing it. This will again take a little bit of time, because now we are tearing down the resources. But if you do this and check back in a little while on the Network Services screen, where you were seeing the service chain instances earlier, you will see that that particular TCP firewall service chain instance has gone away. So if I go here: we are not seeing it anymore. The chain which was dynamically launched, and the resources that were created, are gone now. Everybody with us up to this point, yes?

So the question, to paraphrase rather than quote verbatim, is: how about logging into the service chain instances themselves? You should be able to see the instances here. Well, there are things which are shared with the tenant and there are things which are not. In this case, the service chain instances are actually managed by the operator; there is a separate services tenant. So the question is: are the resources, the instances, which are launched for a particular tenant when they do the provide/consume, in a separate tenant? For the service VMs specifically, we are following a model where, like I said in the beginning, this is how this specific production deployment wanted it: in this case, SunGard wanted those service instances to be controlled by the operator and not by the tenant, so that's the case here. The GBP resources are still tenant-facing, but internally, because we provide a level of abstraction over the service chain and the service instance lifecycle management, the user does not actually see the service instance, because they don't have to manage it. There will be cases where you would want that level of access as well, and that can be supported, but it's not set up that way in this particular lab.

So if I go (and I think the question was about the instances): there is a services tenant here. It's a good question. Since we have about 40-odd tenants here, it takes a little while to come up. I'm inside a common services tenant which has admin privileges, and here you will see the VMs spun up by the service chains each one of you created. As I was mentioning earlier, the firewall VMs are ASAv, and then we have some HAProxy VMs; you can see all of them here, and they have multiple interfaces. What we do in this particular deployment is share service instances: one service instance is shared per tenant, across any service chains being used by that tenant. Hence you see the multiple interfaces on that VM, because we instantiated the firewall in two places and the load balancer in two places.

OK, so I want to leave enough time for the developer workflow, so we're going to switch gears. You're welcome to poke through the lab while we're doing this part. This is the part we like to call "bring your own function." All the magic that you've seen here: we'd like to show you how you can take essentially any service that you have and incorporate it into this framework fairly easily.
I'm Hemant Ravi, from One Convergence. Most of the presentation so far has focused on the northbound API, which is the GBP API. I'm going to walk through a little of what happens on the southbound side, where the actual services are being deployed. So this is the use case I'm going to walk through: how somebody implementing a firewall service could plug into the southbound APIs to insert their own service. This is what we've used internally to deploy VyOS and HAProxy. The API is fairly simple in terms of how somebody providing a service could leverage it and implement their own service.

I have this deployed on my laptop in a DevStack environment, and I'll walk through the APIs in it. But before I do that, I just want to spend a couple of minutes on the architecture and what NFP, as a framework, provides. At a high level, these are the pieces in the architecture. GBP is the northbound API. That sits under the cloud, on the infrastructure side of the deployment, and it communicates with the components of NFP, of which the orchestrator runs under the cloud; that's the component that holds the state. The orchestrator communicates via REST APIs with the service VMs. In this case it could talk directly to a service VM, or you could have a controller piece running there that manages a number of service VMs; that's another model you could use, and it's actually the way we have it deployed: a single controller talking to HAProxy or VyOS, or ASAv for that matter. Today I'll just walk through the direct case, but the APIs are similar in how you interact over the cloud.

One of the key points is that we try to put a lot of the logic over the cloud; the under-the-cloud part stays a stateless component, and that's what brings out the power of the framework in what it can achieve. The framework itself provides all the connectivity, all the mechanics necessary to establish that connectivity, which takes the pain away from a service developer. So as we've gone through: the northbound API is GBP, and the southbound API essentially covers three things. There are the things needed for the insertion of a service, where you have to configure some networking in the service. There is the actual service configuration: in the case of a firewall, you configure the firewall rules; if it's a VPN, the VPN tunnels, and so on. And then there's a health monitoring piece, where a service reports back to the infrastructure when it goes down or comes back up. The idea is that the framework should provide all the primitives needed for any service to be inserted easily.

So with that, I'll walk through a bit of what I have deployed in a VM running on my laptop. All the steps to do this are documented in the document published at the link, and it's fairly straightforward to bring up the DevStack environment and replicate what I have here. So this is the VM that's running it. I hope everybody can see the screen; at the top, that's the address of my VM. It's a similar service chain to what we've walked through so far: just a chain with a firewall, deployed between two groups. This is going to be a little slow running in a VM on my laptop, but yeah.
While this is connecting, I can come back to it. So like I said, of the instances that you see, one is the actual service VM, and the other two are the VMs acting as the provider and the consumer. And here I'm actually logged into the service VM. This is just a Linux VM that uses iptables to implement a firewall. Basically, the test chain is what's configured here: allow SSH traffic, log traffic to port 80, and so on.

But what I wanted to show is the reference configurator code that is submitted as part of the open source project, which gives a reference for how somebody can hook their own service VM into the framework. This is something built using the Pecan framework, and it has a few APIs that any service VM, or a controller for a bunch of service VMs, would have to provide. This POST API is one API that consumes the REST calls coming from the orchestrator under the cloud. That includes things to configure, like I said, the interfaces and the routes; the health monitoring API; and consuming the service configuration being sent from under the cloud. In this case, the configuration is just a simple JSON string that is sent to the service VM, and the controller can consume it. That's one API a service VM has to provide. The next API is a GET API, which enables the orchestrator to receive notifications from the over-the-cloud components. So those are the two APIs that have to be provided, and this can serve as a reference for anybody developing a service VM of their own. It's about as simple as it gets.

So if there are any questions, I can answer those. And, like I said, this page has come up now: as I showed in the UI, these are the two groups that are created, with the firewall in between, and each of those groups has a VM, with traffic going through. I could walk through the data path, but it's a little slow going on my VM. This is something you can try using the DevStack workflow that's published: the document has a service developer workflow section that documents all of this; basically, these sections document what I've been talking through. This is the service chain that was deployed: a service chain with a firewall between two groups. This is the configuration that was passed from the orchestrator to the service VM, which is a description of what needs to be configured in the service VM; it could take different formats, based on what the service VM can accept. And the service API that I walked through is also described here: the POST to consume the REST calls, and the GET for the orchestrator to get the notifications from the over-the-cloud piece.

What's published in the open source repository also has a disk image builder. If you have the software for your application, there are components that the disk image builder can use to actually build a VM image and deploy it, creating a flavor and uploading the image to Glance in the OpenStack installation. And once you've done that, you can create a profile which makes that service available to any of the service chaining APIs. The question is: could it be any image that boots in OpenStack? Could that be used as a service? I think that's true.
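To make the shape of that southbound contract concrete, here is a sketch of the two hooks as they might be exercised with curl. The URL paths and the JSON payload here are hypothetical placeholders, not the configurator's actual endpoints; the reference configurator in the group-based-policy repository is the authority on both.

```bash
# Orchestrator -> service VM: POST a vanilla JSON service configuration
# (hypothetical path and payload, mirroring the "port 80, log" style
# config shown in the demo).
curl -X POST "http://<service-vm-ip>:8080/v1/nfp/service_config" \
     -H "Content-Type: application/json" \
     -d '{"rules": [{"protocol": "tcp", "port": 22, "action": "allow"},
                    {"protocol": "tcp", "port": 80, "action": "log"}]}'

# Service VM -> orchestrator: the orchestrator GETs pending notifications
# (e.g., health state changes) from the service VM.
curl "http://<service-vm-ip>:8080/v1/nfp/notifications"
```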
The only requirement is that it provides those APIs that I showed earlier; otherwise it can be any image that boots in OpenStack. In fact, this one is just an Ubuntu Wily image on which we've put this Pecan server application that consumes these APIs. And on the fly, the DevStack process essentially puts the init scripts in there and copies over the implementation, the Pecan packages and this implementation of the hooks, plus a small bit of code to configure the iptables rules; it builds that image, puts it into Glance, and goes from there.

So you had a question: in order to use group-based policy, does the service have to be supported through Firewall as a Service? Not necessarily. Group-based policy allows you to consume services that provide those Neutron advanced-services APIs, as well as services that provide the APIs I just walked through. If you have services that do expose Firewall as a Service as an API, group-based policy can use them as a service. But it's not a requirement. Right, and that's exactly what we've done here: we've taken a standard image, put this application on it, and provided the APIs that I walked through.

Yeah, that's a very good question. Actually, if you can scroll up and show the JSON config: we very intentionally used a configuration here which is a very vanilla config; it's not tied to any standard or any API. All it says is (we cooked this up, right?) port, TCP traffic, port 80, log, and so on and so forth. So both models are equally supported. Yes, that's true; there would be some restrictions, and as Ivar talked about, we have supported modes in the plumber. Really, what the service does is say: hey, I operate in this particular mode and I need this kind of plumbing; do this for me, get me the ports. We pretty much support L3 insertion and L2 insertion, and as long as the service can work with that, it doesn't really have to care about the plumbing aspect. And at the end of it, everything gets codified in policy.

Yes. So the question is: rather than directly talking to a service VM, you alluded to a model where you could have a service controller; how do things change with that? The way that works, at least the way we have it, is that you think of the over-the-cloud service VM box being replaced by a controller, and that controller provides a plugin framework. The service has to expose what it provides via a REST API, and you have to have a driver that talks to that REST API. That way you abstract all of these into a common controller, but you need a driver that works in that framework, similar to a *-as-a-service plugin driver framework: there's a framework where you can plug in a driver that talks to a REST API.

One other thing I wanted to mention about NFP: right now, the primary northbound APIs are the GBP APIs. As a rendering mechanism, we could also use it to render the *-as-a-service APIs, that is, use those as the northbound but use the same framework to render the services. That's something we're looking at.

Hey, sorry, I'm just switching my laptop. My name's Dave Grzanti. Just switching things over and trying not to show my email.
So I guess there was a slide I was going to show, but I don't have it up now. We're using some of the services that Hemant and others have been talking about, specifically GBP and some of the L4 and L7 services. So what I wanted to do is show an example of that, and for some of the questions people were asking, I can show in our environment what Horizon looks like, since people want to see that. Give me two seconds; let me just make some of this bigger.

For those who aren't familiar with SunGard, we are building a platform on OpenStack, and a big part of our offering is managed services. Companies come to us to support their IT infrastructure and offer advanced services like firewall, load balancer, and VPN, while also having self-service. With the GBP model, we're able to offer cloud-native networking with floating IPs and VMs, but we're also able to offer some of the more advanced services like firewalls.

I've got two browsers open here, and I wanted to show what the difference looks like between being a customer (a user) and an admin, and what the instantiated services look like from either perspective. So this login (I think this is big enough) is me as a user, in Horizon. The UI might look a little different because we've customized it, so I won't go over that too much, but I see this one web VM here, and it's inside of a GBP group called web. I don't necessarily see the firewalls, but if I go to Network, I can see some of the information about the firewall: I don't see the instance, but I'll see what my firewall rules are and the fact that I have one. If you looked at the network topology, you wouldn't see it anywhere, but I know that it exists. There, it loads.

The other thing, which I guess I didn't mention, is that I'm doing a live demo; this is actually on one of our production sites, so something could go wrong. This is what I see: these are all the rules. One of the things we offer is monitoring and managed services on top of this, so we need access into the VMs through our management network. The reason there are so many firewall rules is that a lot of it is just management traffic, not necessarily things the customer would want, but I just wanted to show that you can see all this.

And then, from the admin perspective, all of the service VMs are deployed in the admin project inside the default domain. This one in particular I wanted to show is an HA firewall pair, an ASAv pair. What you see here are the two ASAs, one as standby, one as active. The names don't necessarily change, but if one of them went down, HA would be handled by the ASAvs. And the service controller that Hemant was talking about takes care of launching these and configuring them. In our case, we're using the ASA for the premium services, and we're using a VyOS firewall for what we call self-managed; that's the non-premium offering, so it's less expensive. In either case, the service controller handles configuring the VIP, and each of those firewall services handles rule management differently. The ASA is capable of syncing rules between the two, the primary and the secondary: if you push configuration out to one while the other is down, it will re-sync when it comes back up.
The service controller takes care of pushing the configuration to both of the firewalls in the VyOS case. So I wanted to show (I know someone asked about looking at the instances and accessing them) both consoles; I was going to take one down and show that the failover happens. That's really an ASA construct, though, not necessarily a GBP thing. So I just wanted to stop and see if there are any questions, because I went over that stuff quickly.

Sorry, I couldn't quite hear you; I think I heard some of it. In this case, we actually don't have a load balancer configured; it's just a firewall. The question is: can the firewalls be part of a group? They're not really part of the group. The VM instances which are used to bring up services are managed separately; we showed you the NFP framework earlier, right? They are not part of the group construct that is exposed by GBP. But we do have a notion of a cluster. In this case, there is GBP plumbing involved in making this active/standby pair work. Actually, I'll take that back: we do create groups in which we bring up these service VMs, but there is also an internal notion of a cluster that applies to this kind of HA pairing. That is not exposed to the user. And that's how we handle the fact that different services sometimes allocate the same MAC address and sometimes different MAC addresses; all those things are taken care of by this clustering notion. Ivar here is the person who actually wrote this code, so feel free to sync up with him afterwards. Sorry, David, go ahead.

That's OK. I was going to log in and just show this real quick, get it set up so I can show it if you want to see it. So you can see, I'm just going to show the failover state of each of these. It'll tell you that this host is the secondary and it's standby, and the other host is the primary and it's active; and on the other console, that this host is the primary and it's active. So what I'm going to do is just restart this one. Let's do a hard reboot. OK, so the UI didn't refresh, but the console disconnected. What should happen, hopefully, is that this one eventually recognizes that the primary is no longer active, and then it becomes active; it's coming up on the screen now. And then, eventually, when the standby starts back up, it will stay in standby; the primary will know that it's running again and resynchronize anything that's needed. So that's it; that's all I wanted to demo. But if anyone has any questions for me... I think we're done in, what, eight minutes?

So yeah, for the ASA specifically: at SunGard that's part of the managed services offering we have. The customer's ability to control it, or even our ability to control it, is entirely through what's exposed in the UI, what's available in the GBP interface. When you do the service node configuration, whatever's exposed there, that's all you're supposed to touch. That's very different from the traditional "log in to the ASA and configure whatever you want for the customer." So we get a lot of questions about features the customer wants that may not yet be available through GBP, and we have to say no, you can't do that, because if the ASA dies and gets re-spawned, none of that configuration is going to be saved.
So our model is that the ASAs and the managed appliances are controlled by us; an operator or implementation person at SunGard would be doing this. For the VyOS, though, it would be the customer, and the interface to the two is exactly the same: they see the same UI; it's just a different appliance on the back end. Any more questions?

So we actually finished. On time. On time, yeah. We'll be around in case you want to chat and follow up. Otherwise, like we mentioned at the beginning, this particular lab setup will go away, but feel free to pull the DevStack. There are two versions of the DevStack setup: the one without NFP, which is the one that is merged in-tree right now. Using that, you can run the tenant workflow, the operator workflow, and even the service chaining part. But if you want to develop a new service using the NFP framework, there is a separate DevStack setup; we point to it, and you can use that. It's just some additional configuration on top of the base GBP DevStack. We're hoping that what you learned here is not constrained by this lab environment; you can try it out beyond this setup.

And as part of the GBP project, we have weekly IRC meetings. So if you have questions, or you have specific requirements, or you need clarifications... Or if they want to contribute. If you want to contribute, of course. Spoken like a true upstream developer. You know that's the way it works. Right. But yeah, please feel free to bring us your feedback; that's how we're driving the project, like any other OpenStack project. Yep. Thank you. I'd like to thank all my co-presenters as well. Thank you.