Hello, everyone. My name is Kathy Zhang. I'm a principal engineer at Huawei USA. Mohan cannot make it today, but he has recorded a demo video, which I'm going to show in this presentation. Today's topic is how we integrate the OpenStack service function chain feature with the ONOS SDN controller to realize service chaining functionality. Here, SFC means service function chain.

You may wonder what you can take away from this session. I'm going to first give a brief overview of the OpenStack Neutron service function chain feature: its flexible architecture, how it can integrate with different types of SDN controllers, what the API looks like, the current code status, and the second-phase features. I already gave a presentation on this at the last summit, so here I'm just going to give a very brief overview. Then I'm going to give a brief introduction to the ONOS controller and its distributed architecture for scalability. And then, of course, I'm going to talk about how the OpenStack service function chain can integrate with the ONOS SDN controller to realize the service function chain functionality.

First, what is a service function chain? I guess most of you already know. What we mean by a service function chain is that, through a centralized management and control platform, different tenants' traffic flows can be automatically steered through different sequences of service functions. Those service functions can run on VMs, in containers, or on physical devices.

Here is the OpenStack Neutron service chain architecture. At the top is the OpenStack Neutron server, and at the bottom are a few compute nodes. At the northbound of the Neutron server, there is an OpenStack service function chain API for the user to specify the service chain requirements.
Then we have a common southbound service chain driver API, so that different controller drivers can plug in to realize the service function chain functionality. Currently we have already implemented the OVS service chain driver path and the ONOS service chain driver path, which I'm going to go through in later slides. We're going to add more support in the second phase: like I said, we already started the ODL service chain driver, and we're also going to write the Dragonflow service chain driver and the OVN service chain driver.

On the compute nodes you see the OVS switches. The OVS switches are programmed by the upper-layer control plane to set up the traffic steering path for the service chain, based on the user's specification on the OpenStack northbound API. Once that traffic steering path is set up, the traffic from the original source goes through each OVS switch, which steers it to the required service functions.

Here is the service chain API. It consists of two parts: one part is the flow classifier, and the other part is an ordered sequence of service functions. Let me first talk about the flow classifier. The flow classifier specifies the classification rules used to classify the flows that will go through a chain. The sequence of service functions is basically the chain that the flow will go through. Each service function in the chain is represented by a port pair: an ingress port and an egress port, which may be the same single port for a bidirectional service function. These are Neutron ports. As we know, each service function running on a VM has Neutron ports, so we specify the service function according to its Neutron ports, which are logical ports. If there are multiple functionally equivalent service functions, you can group them together for load distribution purposes.
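To make the two halves of the API concrete, here is a small sketch of the resources just described, written as Python dictionaries. The field names follow the networking-sfc resource model (flow classifier, port pair, port pair group), but the values and names here are made up for illustration.

```python
# Classification rules: packets matching these fields enter the chain.
flow_classifier = {
    "name": "fc1",
    "source_ip_prefix": "20.0.0.3/32",
    "destination_ip_prefix": "20.0.0.6/32",
    "protocol": "icmp",
}

# Each service function is represented by a port pair of Neutron logical
# ports (ingress may equal egress for a bidirectional service function).
port_pair = {
    "name": "pp1",
    "ingress": "ingress-port-uuid",   # hypothetical Neutron port UUID
    "egress": "egress-port-uuid",     # hypothetical Neutron port UUID
}

# Functionally equivalent service functions are grouped for load
# distribution; the chain later references groups, not individual pairs.
port_pair_group = {
    "name": "pg1",
    "port_pairs": [port_pair["name"]],
}
```

The key design point is that the chain never names a concrete VM: it names logical Neutron ports, so the same chain definition works whether the function runs on a VM, a container, or (in the second phase) a physical device.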
In that case you specify a port pair group, which consists of multiple service functions.

This slide gives information on the project. We completed the first release in February, and we have an architecture and API specification; you can go to this link for more information. We also have a project wiki page which describes all the project information, and a weekly IRC meeting, every Thursday. If you are interested in what has been discussed before in the IRC channel, you can go to the meeting log link.

So what are the second-phase features we are going to do in the Newton cycle? Currently we support chains of service functions hosted on VMs, including service chain creation, deletion, and modification; for example, you can dynamically add a service function into the chain or delete one from the chain. In the second phase, we are going to add support for chains of service functions hosted on containers, and also for service chains with service functions hosted on physical devices. We have already started to integrate with a VNF manager, the Tacker project. We have also started writing an ODL service chain driver, and we are going to work on the OVN service chain driver and the Dragonflow service chain driver, to support the implementation path on three open source SDN controllers so that you can select whichever you want to use. We are going to add support for IETF NSH encapsulation, and we will also add support for symmetric service function chain paths.

Now I'm going to give a very brief introduction to ONOS. This slide shows a typical SDN architecture. The top layer is the application layer, and the middle layer is the control layer.
The bottom layer is the infrastructure layer, which consists of virtual or physical network devices. How do we map this to the service function chain components? The OpenStack networking-sfc component maps to the application layer. The ONOS controller maps to the control layer. And the virtual or physical switches, as well as the service functions running on VMs, containers, or physical devices, map to the infrastructure layer.

Here is the ONOS architecture. Its key characteristic is a distributed architecture. By distributed, what we mean is that there are multiple ONOS controller instances, and those instances combine to form a cluster. From an external point of view, it feels like just one ONOS controller instance, but internally there are multiple, and they coordinate to sync information between each other, to support the scalability requirement. At the northbound there is a northbound core API. Then, of course, it has a distributed core, which is the key part. And it has a southbound core API, which can interface with different adapters to support different protocols for talking with data plane components.

Here is how the service function chain components fit into this architecture. As we mentioned before, in the application layer there is the OpenStack networking-sfc component with the ONOS service function chain driver. At the northbound there is an ONOS northbound for service function chain functions. Then there is an ONOS service function manager in the core, and a southbound API for service function chain provisioning on the devices.

Now I'm going to go through the demo scenario. In order for you to better understand the demo, I'm going to show which scenarios we are going to demo and also the demo topology.
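The common southbound driver API mentioned earlier can be pictured as an abstract contract that every controller driver (OVS, ONOS, ODL, ...) implements. The sketch below is a simplified assumption for illustration: the method names mirror the networking-sfc driver idea, but the `_FakeRest` client and the `/sfc/portChains` endpoint are hypothetical, not the real ONOS REST API.

```python
import abc

class SfcDriverBase(abc.ABC):
    """Contract each SDN-controller driver plugs into (names illustrative)."""

    @abc.abstractmethod
    def create_port_chain(self, context): ...

    @abc.abstractmethod
    def delete_port_chain(self, context): ...

class OnosSfcDriver(SfcDriverBase):
    """Forwards chain details to the controller, which stores them and
    later installs flow rules on the OVS switches."""

    def __init__(self, rest_client):
        self.client = rest_client

    def create_port_chain(self, context):
        return self.client.post("/sfc/portChains", context)  # hypothetical endpoint

    def delete_port_chain(self, context):
        return self.client.delete("/sfc/portChains/" + context["id"])

class _FakeRest:
    """Stand-in for a real REST client, just to exercise the driver."""
    def __init__(self):
        self.calls = []
    def post(self, path, body):
        self.calls.append(("POST", path))
    def delete(self, path):
        self.calls.append(("DELETE", path))

client = _FakeRest()
driver = OnosSfcDriver(client)
driver.create_port_chain({"id": "pc1", "port_pair_groups": ["pg1", "pg2"]})
```

Because Neutron only talks to this abstract contract, swapping ONOS for ODL, OVN, or Dragonflow in the second phase means writing a new driver class, not changing the northbound API.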
The demo topology consists of a source VM, VM1, where the traffic originates, and a destination VM, VM4, where the traffic ends. There are two service functions in this demo, running on VM2 and VM3. We're going to show two scenarios. The first scenario is ping traffic from the source VM to the destination VM before we install the service function chain, so without going through it. You will see traffic activity only on the source VM and the destination VM; there will be no traffic activity on service function VM2 or service function VM3. The second scenario is the same ping traffic, but after we install the service function chain. Then you will see the traffic forced to go through VM2 and VM3 before reaching the destination, so traffic activity shows up on VM2 and VM3 in the second scenario.

For scenario one, as I described before, the traffic goes from VM1 to VM4 directly. In the demo we first create a Neutron network for this. Then we create the VMs — the source VM, the two service function VMs, and the destination VM — through the Nova CLI, and associate the ports with the VMs. We do not do any service chain creation, so you just ping directly from VM1 to VM4 without going through the service function chain.

In the second scenario, we create the service function chain using the OpenStack networking-sfc CLI, and after that you will see the traffic go through the service functions. First we create the service function port pairs: one port pair for service function one, and then another port pair for service function two.
We also create the port pair groups, so that each service function type can have multiple instances. Once those CLI commands succeed, Neutron sends the create requests to ONOS, and ONOS stores the information in its database. We also create a flow classifier, and same thing: once it is created, Neutron sends it to the ONOS controller, which stores the flow classifier details in its DB. The last CLI command creates the port chain. The port chain associates the flow classifier created before with the sequence of port pair groups created before. Once it is successfully created, ONOS stores the port chain details in its DB, and only at this step does it initiate an event to generate and download the flow rules to the data plane components — the switches — to set up the service function chain traffic steering path. That is the process of creating the service chain. Then we trigger ping traffic from the source VM, and this time you will see the traffic go through the service functions specified in the chain creation.

Now I'm going to switch to the demo video, which Mohan has recorded. Sorry, there seems to be a problem with the audio — the video was prerecorded with voice, but the sound is not playing. So I'm just going to explain it myself, and I hope I can time this to match the video.

So here this shows the topology: we have the destination VM, the source VM, and the service functions. First we are going to create these service function instances.
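The important detail in this sequence is that port pairs, port pair groups, and the flow classifier are only stored when created; nothing touches the switches until the final port-chain create. A minimal sketch of that behavior, with all names illustrative:

```python
# Sketch of the creation sequence described above: only the port-chain
# create triggers flow-rule generation for the data plane.
def create_resource(db, kind, resource, switch_rules):
    """Store the resource; if it is a port chain, emit steering rules."""
    db.setdefault(kind, []).append(resource)
    if kind == "port_chain":
        # Only this step programs the traffic steering path, one hop
        # per port pair group, in the order the groups are listed.
        for group in resource["port_pair_groups"]:
            switch_rules.append(("steer", resource["flow_classifier"], group))

db, rules = {}, []
create_resource(db, "port_pair", {"name": "pp1"}, rules)
create_resource(db, "flow_classifier", {"name": "fc1"}, rules)
assert rules == []   # nothing installed on the switches yet
create_resource(db, "port_chain",
                {"name": "pc1", "flow_classifier": "fc1",
                 "port_pair_groups": ["pg1", "pg2"]}, rules)
# now there is one steering rule per port pair group
```

This matches what the demo shows: the first ping (before the chain exists) bypasses the service functions entirely, because no steering rules have been pushed yet.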
You see that we have a source VM instance, a destination VM instance, and service function one and service function two instances, with their IP addresses shown. Once these instances are created, we show this window layout. The upper window is the source VM window, with tcpdump enabled so you can see the activity. This is the ingress port window for the first service function, and the right one is its egress port window. Then this is the ingress port window for the second service function, and this is its egress port window. And this is the destination VM.

Now we ping. Once you enter the ping, you see the activity shows up only on the source and destination windows; there is no activity on the service function windows. Now we clear the windows.

Next we go to the controller window. Here it shows all the ports for the VMs — the different ports for the service function VMs, the source VM, and the destination VM. We go to the ML2 plugin to set the mechanism driver to the ONOS driver, just to show that it's hooked up to ONOS.

Now we create the port pairs, one for each service function. This uses the networking-sfc API syntax: you specify the ingress port and egress port for the port pair, that is, for that service function. Here it shows that the port pair has been created. Now we create another port pair for the second service function: we create port pair 2 and specify its ingress port and egress port, and you see the second port pair is created successfully.
It also shows the port pair group and port IDs it was created with. Next, we create the port pair groups: one for the first service function type and one for the second. I'm not sure how well you can see the screen, but basically that's what it does. Port pair group 1 has been created successfully — here it shows that. Now we create the second port pair group for the second type of service function, and here it shows port pair group 2 has been created.

Now we create the flow classifier, using the Neutron flow-classifier create command. We specify the classification rules: the source IP prefix for this flow, which is 20.0.0.3, and then the destination IP prefix for this flow.

Eventually we create the port chain. In the port-chain create CLI we specify which flow classifier will be associated with this port chain, and also the sequence of port pair groups that will be associated with the chain. Here we say this service chain consists of port pair group 1 and then port pair group 2, so there are two service function groups associated with the chain, plus the flow classifier we created before. We give it the name port chain 1, and you see that it is created. The previous screen showed it created in the OpenStack database, and this screen shows it created in the ONOS database, together with the flow classifier and the port pair groups.
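Putting the whole CLI walkthrough together, the final port-chain create is essentially assembling one record that ties the earlier resources together. A sketch of that request body, with illustrative names (the real CLI resolves names to UUIDs):

```python
# The port chain ties the flow classifier to an ordered list of port
# pair groups. Order matters: matching traffic traverses pg1's service
# function first, then pg2's, then continues to its destination.
port_chain = {
    "name": "port-chain-1",
    "flow_classifiers": ["fc1"],          # which traffic enters the chain
    "port_pair_groups": ["pg1", "pg2"],   # the sequence it must traverse
}
```

Note that the chain references groups rather than individual port pairs, so adding a second instance of a service function for load distribution later does not require recreating the chain.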
Now we do the traffic ping. Oh — first we enable the flow dump; it's showing quite a lot of detail from the data path. Now we go to the window that shows the source VM and destination VM, and we ping. Once you enter the ping, you see the activity showing up on the destination VM and on all the service function windows. This shows that once we use the service function chain API to configure the chain, the ping traffic is forced to go through the service functions specified in the CLI. You can see the traffic activity on the service function one and service function two windows — not like before, when there was no activity on those windows. That's it. Thank you. Let me see how much time I have. Okay, any questions?

Did you have any dependency on the version of Open vSwitch that you were running, and are you using Liberty or Mitaka?

Yes. Actually, we have implemented the path where the OVS driver directly programs OVS — that's our reference implementation — but this talk is about the other path, which goes through the ONOS controller. So we have implemented both paths. Thank you. Any more questions? If not, that's all. Thanks, everyone.