Can you hear me? Hello, everyone. I'm Rukhsana Ansari, Principal Engineer at One Convergence. In today's talk I'll cover an overview of our product, which provides policy-based network services for the OpenStack cloud, and in the second part I'll walk you through a showcase demo that uses an infrastructure-as-a-service use case to demonstrate the versatility of our solution. One Convergence's policy-driven network service delivery framework enables the deployment of network services in a virtual environment. The key feature set we provide includes lifecycle management and configuration of network services, as well as traffic steering to a set of chained, inserted services. In addition, we provide for high availability and scaling of these network services. The entire solution is driven through a policy framework. The solution works with a variety of network fabrics and network controllers, including One Convergence's own network controller, which uses a tunneling-based network virtualization solution. We also support vanilla OpenStack Neutron and Cisco's APIC-controlled ACI fabric. In a nutshell, the brain of the solution is our service delivery controller, which takes in the policy intent expressed by the user and consumes it to render a set of network services at the network edge. The solution itself is based on OpenStack's group-based policy. The key differentiating aspect I want to call out here is the layering of the approach: the user specifies the application needs in terms of a simplified application policy, and the infrastructure operator takes care of how that application policy is realized, or rendered, by mapping it onto the infrastructure in place.
The other unique aspect of the solution is that, unlike other vendors which have very specific solutions for a certain kind of service, such as a load balancer or a firewall, we normalize the approach by grouping services into the categories of tap, L2, and L3, and we enable the insertion and chaining of any of these services. So you could have a load balancer that works in L3 mode, or a web application firewall in L3 mode, or a firewall working in L2 mode. We also provide for the configuration of any of these services. Another key point is that we provide a mix and match of both open source services and leading vendor network services. And finally, the versatility really comes across when we think about how the solution adapts to changes in the network. A classic example: with a load balancer, if the user adds another back-end server, the user does not need to explicitly configure that in a load balancer resource. Rather, the policy engine detects the change and pushes the updated configuration to the service VM, so the load balancer's back-end server pool stays current. I also wanted to touch upon some key points about group-based policy. As I mentioned earlier, our policy engine is based on OpenStack's group-based policy. Group-based policy is a comprehensive set of extensions that let the user express application needs in terms of policy abstractions. Instead of adopting a very network-centric model, the application user is free to express things simply in terms of groups. So he would define a set of groups; a classic example is a three-tier application with an app server group, a web server group, and a database group.
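The back-end pool behavior described above can be sketched in a few lines. This is a minimal illustration of the idea, not One Convergence's actual engine; all class and method names here are hypothetical.

```python
# Hypothetical sketch: when a VM (a policy target) joins a group, the
# policy engine, not the user, re-renders the load balancer's back-end
# pool. Names are illustrative, not One Convergence's actual API.

class Group:
    def __init__(self, name):
        self.name = name
        self.members = []           # IPs of policy targets (VMs) in the group
        self.listeners = []         # services watching this group

    def add_member(self, ip):
        self.members.append(ip)
        for svc in self.listeners:  # engine pushes the change to each service
            svc.sync(self.members)

class LoadBalancer:
    def __init__(self):
        self.pool = []

    def sync(self, members):
        # Derive the back-end pool directly from group membership.
        self.pool = list(members)

app_group = Group("app-servers")
lb = LoadBalancer()
app_group.listeners.append(lb)

app_group.add_member("10.0.1.10")
app_group.add_member("10.0.1.11")   # user adds a back-end server...
print(lb.pool)                      # ...and the pool follows automatically
```

The point is the direction of the data flow: the user touches only group membership, and the service configuration is a derived artifact.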
And then he defines the connectivity between those groups, to say, for example, that he only wants TCP port 80 traffic to traverse between the app server group and the web server group, in both directions. Notice that in my description the user did not have to specify which subnet he wants these groups to work on, and he doesn't specify the routers; that aspect is implicitly defined by the policy engine. The second aspect is what I alluded to earlier: the application policy is expressed independently by the application user, and the infrastructure deployer then specifies how he wants that policy to be rendered. So there is a separation of concerns, enabling a layering of policies. The third aspect is the overall simplification of configuration. The user is freed from the onerous task of defining networks, subnets, and, to some extent, much of the configuration of the advanced services. This happens because, as I mentioned, the L2 policy, meaning the network and the subnets, is implicitly created by the policy engine, and the L3 policy, which refers to routers, is implicitly created as well. And the fourth aspect is the derivation of context from the policy. One example is a firewall: the user does not have to explicitly configure the firewall's rules. The allow policies within a policy rule set get translated into corresponding rules for the firewall, so for the user it becomes very simple; ultimately we will move to a one-click user model. Another example of derivation is what I talked about earlier, the pool members of a load balancer: all the VMs, the policy targets, within a group get automatically added as pool members in the load balancer's configuration. The other aspect that GBP provides is that it's highly automatable.
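The firewall-rule derivation just described can be illustrated with a small sketch. This is not the actual GBP implementation; the dictionary fields and the function name are assumptions made for illustration.

```python
# Illustrative sketch of deriving firewall rules from the "allow"
# policies in a rule set: the user never writes firewall rules
# directly. Field names are hypothetical, not GBP's real schema.

def derive_firewall_rules(policy_rules):
    """Translate allow-action policy rules into firewall permit rules."""
    fw_rules = []
    for rule in policy_rules:
        if rule["action"] == "allow":
            c = rule["classifier"]
            fw_rules.append({"action": "permit",
                             "protocol": c["protocol"],
                             "port": c["port"],
                             "direction": c["direction"]})
    # Implicit default: deny everything not explicitly allowed.
    fw_rules.append({"action": "deny", "protocol": "any",
                     "port": "any", "direction": "bi"})
    return fw_rules

rule_set = [
    {"action": "allow",
     "classifier": {"protocol": "tcp", "port": 80, "direction": "bi"}},
    {"action": "redirect",   # steering rule: no firewall rule derived
     "classifier": {"protocol": "any", "port": "any", "direction": "bi"}},
]
fw = derive_firewall_rules(rule_set)
print(fw)
```

Note that the redirect rule contributes nothing to the firewall configuration; only the allow intent is translated, which is exactly the separation the talk describes.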
And the configuration uses a Heat-based template that enables self-service chains. Now, you must be wondering: we've talked about groups and we've talked about policy, but how does this tie in with network services? Group-based policy achieves this through a redirect action; the user here, I'm referring to the infrastructure side. The way it works is that a set of service nodes is specified for a service chain, and the redirect action sees to it that the traffic is steered to that chain. And finally, group-based policy, as all of you know, is a community-based effort, and One Convergence has been a key contributor from the start. With that, I'll move into the second part of my talk, where we'll walk through the showcase demo. This slide captures what the user has expressed as intent. We have a web server group and an app server group, and the user has expressed the intent that the only connectivity between these two groups is TCP port 80 traffic. The infrastructure operator then adds a redirect rule to say that all traffic traversing these two groups needs to be redirected to a chain. And what do we have in the chain that we will demo today? We have three components, covering the three service categories I talked about earlier: a tap component, which is an intrusion detection system, for which we've used Snort; an L2 firewall, for which the example we've chosen is pfSense; and a load balancer, which is an L3 service, which I'll demo using HAProxy as a VM. So with that, I should be able to move to the demo itself. So as not to leave it to the demo gods, we've actually launched the services already, and I'll walk you through the screens and step through the traffic tests.
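The tie between a policy rule and a service chain via the redirect action can be sketched as follows. This is illustrative Python, not GBP code; the rule matching and the chain names (Snort tap, pfSense L2 firewall, HAProxy L3 load balancer, per the demo) are simplified assumptions.

```python
# Sketch of how a redirect action ties a matched policy rule to a
# service chain: traffic that matches the classifier is handed the
# ordered list of service nodes to traverse. Hypothetical structures.

def apply_policy(packet, rules):
    """Return the action taken for a packet under a list of rules."""
    for rule in rules:
        c = rule["classifier"]
        if (c["protocol"] in ("any", packet["proto"])
                and c["port"] in ("any", packet["port"])):
            return rule["action"]
    return {"type": "drop"}  # implicit default when nothing matches

rules = [
    {"classifier": {"protocol": "any", "port": "any"},
     "action": {"type": "redirect",
                "chain": ["snort-tap", "pfsense-l2-fw", "haproxy-lb"]}},
]

pkt = {"proto": "tcp", "port": 80}
action = apply_policy(pkt, rules)
print(action["type"], action["chain"])
```

The infrastructure side owns the `chain` list; the application side never sees it, which is the layering point made earlier in the talk.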
I just wanted to very quickly run through the Heat template that we use and call out the key aspects of group-based policy here. Essentially, we define service nodes: you can see that we have three service nodes, for a tap service, for an L2 service, which is the firewall, and for the load balancer service. We then define a service chain using all three services. The following resources concern how the policy rule set itself is defined, but I'm going to skip through that and just show you the policy rule set itself, which demonstrates the layering of policies. This part is where we say that there are two rules in the set: the application side specifies the allow-HTTP rule, which is TCP port 80 traffic, and the infrastructure side says redirect to the chain we just walked through. So now let's go to the screens. Since we've already launched the services, I will walk you through the tabs in group-based policy and all the additional extensions that we at One Convergence have provided for operational visibility into the chain. Group-based policy in OpenStack itself provides only static configuration of the service nodes and service chains. So if you go to the orchestration tab, I'm sorry, if you go to the Policy tab and click on Groups. I've clicked on the Groups tab, and as you can see, two groups have been created: the app group and the web group. The app group provides the HTTP rule set, and the web group, which consumes that policy, has a consuming relationship to the HTTP rule set. If you look at the rule sets themselves, the rule set has two rules defined in it. I'm going to click on Network Services. Let's first look at the service chain nodes. There are three service chain nodes that we define: tap for the IDS, load balancer for the L3 service, and L2 for the firewall node.
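The two-rule, two-layer rule set shown in the template walk-through can be captured in a compact data sketch. This is a hypothetical, simplified representation; the field names are illustrative, not the template's actual resource schema.

```python
# Hypothetical sketch of the layered policy rule set just described:
# the app side contributes the allow rule, the infra side the redirect,
# and both share one classifier for TCP port 80. Names are illustrative.

classifier = {"protocol": "tcp", "port": 80, "direction": "bi"}

app_rules = [{"classifier": classifier, "action": "allow"}]
infra_rules = [{"classifier": classifier,
                "action": "redirect",
                "chain": ["tap-ids", "l2-firewall", "l3-loadbalancer"]}]

# The policy engine merges the two layers into one rule set.
policy_rule_set = {"name": "http-rule-set",
                   "rules": app_rules + infra_rules}

print(len(policy_rule_set["rules"]))  # two rules, as seen in the UI
```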
We then have specified a chain that says all three services are included in the chain. We've covered this. So this is the static configuration of the nodes. One other point I wanted to mention: if you go to the service itself, you can see how its configuration is specified here via the Heat template. We will now look into the operational visibility, which is the tab that we provide; these are our extensions. If you go to the Tenants tab and click on Elastic Services, the services that were created are of three types: the first is the firewall, of type security; then a load balancer, of type L3; and finally the IDS, of type monitor. If you click on the Service Routing tab, this shows the running instances of the policy. The running instances we have are, again in terms of group-based policy, the two groups, the app group and the web server group; at this point we have one VM in the web group and one VM in the app group. Then there is the classifier, which classifies the TCP port 80 traffic; the actions, which are the allow action specified by the app user and then the redirect action; and then the policy rules that tie all of this together. So let's now go into the traffic tests. The first test I will do is a wget to the load balancer VM. Recall that we said we enable port 80 traffic, and as you see here, the wget goes through. The next test is to show that all traffic goes to the tap service. And the third is that the firewall permits only port 80 traffic and denies everything else. So toward this, here is a wget to the back-end server; as you can see, it goes through. And this is the console of the tap, so I'm going to do that again; I'm running tcpdump here, and as you can see, all traffic is redirected to the tap VM. I will now try an SSH, right?
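The three traffic tests can be summarized in a toy simulation: the tap mirrors every flow, while only HTTP gets an end-to-end response. This is a behavioral sketch of the demo's expected outcomes, not real traffic handling.

```python
# Toy reproduction of the three traffic tests: the tap sees every
# flow, and only port 80 traffic gets a response because the firewall
# drops everything else. Purely illustrative.

def run_test(port):
    tapped = True               # the tap mirrors every flow
    allowed = (port == 80)      # the L2 firewall permits only HTTP
    return {"tapped": tapped, "response": allowed}

print(run_test(80))   # wget: tapped, and the response comes back
print(run_test(22))   # ssh: tapped, but one-way only, no response
```

This matches what tcpdump on the tap VM shows: the SSH packets appear in one direction with no replies, exactly as in the console output.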
And as you see, the SSH doesn't go through, because the firewall has taken the allow rule and blocked all traffic other than port 80. You can see that here, right? If you look carefully, you can see that there is traffic in only one direction; there are no responses coming back for the SSH traffic. So that was an overview of the traffic tests. I also wanted to say that we provide a comprehensive solution in terms of monitoring, statistics, and logs for our services as well. If we go back to the Tenants and Elastic Services tab and, say, click on the HAProxy service tab, we'll click on Instances here. It just took a while for the graph to load. You can see we have statistics on the CPU utilization, the total packets in, and the bytes in and out, and we also have a graph of how the traffic pattern has evolved, because we've been running the test for a while now. These statistics are available for all of our services. So with this, I've covered the traffic tests and the showcase demo as well, and we have time for questions. Let me just go back. I don't have the slide where I invite all of you to come to our booth, which is, I think, T254, where we have a lot more demos we can show you with different kinds of use cases, including a security-as-a-service use case, and we'd be happy to discuss the solution further with all of you. Are there any other questions from the audience? OK, thank you, everyone.