Yeah, we're on. Hey, everyone. Welcome to our session on orchestrating forwarding graphs using Tacker, Neutron Networking-SFC, and OpenDaylight. Let me introduce myself: I'm Sripriya Seetharam from Brocade. Today I'm filling in for Sridhar Ramaswamy, who could not make it to the summit this time.

Hello, I'm Brady Johnson. I work at Ericsson, working on OpenDaylight SFC and also on OPNFV SFC.

Am I on? I think I'm on. I'm Louis Fourie, working at Huawei, and I'm going to be talking about Neutron Networking-SFC.

All right, let's get started. On the agenda today, we are first going to quickly go through some of the basic service chaining concepts. We are then going to look into the TOSCA data model, how we can use it to describe some of the SFC constructs, and then use a TOSCA template itself to onboard and deploy a forwarding graph, using Tacker as an NFV orchestrator. Louis is going to talk more on the Neutron Networking-SFC project, and Brady is going to talk more on OpenDaylight SFC and the IETF SFC implementation. Finally, we will wrap up the session with a demo and take some questions at the end.

So what is service function chaining? Given a source and a destination VM on two hosts, connected directly using OVS, suppose we bring in a third VM on a different host that is hosting some network function, such as a firewall. For some NFV use cases, we want to classify and steer the traffic through this VNF and only then forward it to the destination. This, in very basic terms, is service function chaining. Of course, the forwarding graph itself can be much more complex. Here you are seeing just a single VNF stitched together with the source and the destination. A forwarding graph can contain more than one VNF; a VNF can span multiple hosts; the VNFs themselves can be part of multiple service function chains; and finally, a chain can have multiple classifiers based on different classification requirements. We have used the ETSI terminology here to represent the network function: ETSI calls it a VNF, and the actual chain that gets deployed between the source and the destination is called a VNF forwarding graph. We will learn more about these concepts when we talk about the TOSCA forwarding graph descriptor.

So how do we model these forwarding graphs? A forwarding graph has complex requirements: there are different classification requirements, and there are different paths that need to be created in order to deploy the service function chain. So how do we model this using the TOSCA data model? As defined by ETSI, there are three main elements used to represent a service function chain. The first is the VNF forwarding graph (VNFFG) itself: it defines how traffic meeting certain criteria is steered through a set of network functions in a given network connectivity topology. The second is the forwarding graph descriptor (VNFFGD): this is the main TOSCA template used to represent the SFC policies and the forwarding graph itself. And finally, we have the forwarding path.
So the forwarding path is an ordered list of connection points belonging to the VNFs that are part of the overall forwarding graph. A forwarding graph can have more than one forwarding path, and these are used to describe the service function chain.

In the last slide we covered the components that make up a forwarding graph. Here we see the TOSCA forwarding graph descriptor, which is used to represent the service function chain using those components. In this network topology, VNF1, VNF2, and VNF3 are interconnected using virtual links, and there are three traffic flows entering the service function chain. The green and blue lines belong to the first forwarding graph and represent two forwarding paths: the first forwarding path goes through VNF1, VNF2, and VNF3, and the second goes through just VNF1 and VNF3. Then there is a second forwarding graph, which we call VNFFG2 here, which has a single forwarding path that also goes through VNF1 and VNF3. So we can represent different kinds of forwarding graphs using the TOSCA descriptor, built from these forwarding graph components.

Now, how do we describe these SFC policies in the TOSCA template? The two main node types we use to describe the SFC are the forwarding graph group type and the forwarding path node type. If you zoom in on the forwarding graph group node type itself, it contains the metadata you want to record for the forwarding graph, plus information such as the connection points and the constituent VNFs that are going to be part of the service function chain. Finally, it has the members section, which is an important piece of this group node type; here it contains a single forwarding path, called Forwarding_path1. When we look into the Forwarding_path1 node type itself, it contains the actual SFC information, represented under the policy key. The policy key has a criteria section, which defines the matching criteria to be applied to the ingress traffic. We can include many ACL match criteria; I think right now the Networking-SFC project supports around eight ACL match criteria, but any number of match criteria types can go into this section so that the ingress traffic is classified based on these requirements. Then we have the path section, which represents the actual connection points to be stitched together to create the service function chain, given as an ordered list of forwarder and capability entries. Here we have two forwarders: the chain is created using two VNFs, which expose their capabilities as CP12 and CP22, the two connection points of the two VNFs.

So now that we have seen the forwarding graph descriptor as a YAML template, where we have defined the SFC policies, here is the end-to-end workflow where all three projects are in action to create and deploy the forwarding graph. The user first creates and onboards the forwarding graph template in Tacker, using the NFV orchestrator component. Once the template is onboarded, the user can go ahead and deploy the forwarding graph within Tacker.
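For reference, here is a minimal sketch of what such a VNFFGD template can look like, patterned after the sample templates Tacker shipped around this time; the IDs, names, connection points, and match values are illustrative assumptions, not taken from the talk:

```yaml
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Illustrative VNF forwarding graph descriptor

topology_template:
  node_templates:
    Forwarding_path1:
      type: tosca.nodes.nfv.FP.Tacker
      description: path through VNFD1 and VNFD2
      properties:
        id: 51
        policy:
          type: ACL                        # classification of ingress traffic
          criteria:
            - ip_proto: 6                  # TCP
            - destination_port_range: 80-80
            - ip_dst_prefix: 192.168.1.2/32
        path:                              # ordered forwarder/capability pairs
          - forwarder: VNFD1
            capability: CP12
          - forwarder: VNFD2
            capability: CP22

  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: forwarding graph metadata
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 2
        dependent_virtual_link: [VL12, VL22]
        connection_point: [CP12, CP22]
        constituent_vnfs: [VNFD1, VNFD2]
      members: [Forwarding_path1]
```

The groups section carries the metadata and constituent VNFs described above, while the forwarding path node holds the classifier criteria and the ordered path.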
So the forwarding graph is first validated to check that the node types match the TOSCA NFV profile: the node types should belong to the forwarding graph group type and the forwarding path node type. Once validation is done, the VNFs, which are represented as abstract types within the template, are mapped to actual VNF instances. This information is queried internally, the corresponding ports, represented as capabilities of the VNFs, are associated, and the port ID information is queried as well. The request body is then built by the Tacker NFV orchestrator and sent downstream to the Neutron Networking-SFC component. Networking-SFC further sends the request to the back-end driver; Networking-SFC has a pluggable back end, so it can use any driver. Here we are using OpenDaylight as the back end for Networking-SFC. The request is sent to the ODL SFC driver, which acts as a shim layer between Neutron and OpenDaylight itself and makes the REST calls to the Neutron northbound residing in OpenDaylight. The OpenDaylight SFC and NetVirt components do the job of translating this request and creating the actual service function chain and the forwarder functions: the SFC component is responsible for creating the service function forwarder for the VNF, and the NetVirt component goes ahead and creates the tunnels for the end-to-end deployment of the service function chain.

So here is the high-level overview of the interacting components for the deployment of a service function chain. In summary, from the NFV orchestrator point of view, just two commands are required to deploy a forwarding graph. The first is tacker vnffgd-create, which is used to onboard the template we just saw; then there is the tacker vnffg-create command, where you just specify a name for the VNFFG and use the onboarded forwarding graph template to go ahead and deploy it. That's the NFV orchestrator piece of the whole workflow. Now Louis is going to talk more on the Neutron Networking-SFC side.

Right, so I'm going to talk about Neutron Networking-SFC, which is currently a stadium project as part of Neutron. The way we model this as an API is with a set of resources that are available as extensions to Neutron; essentially, they allow you to create what we call a port chain, which consists of an ordered list of port pair groups and classifiers. If you look at the bottom of the diagram, we show the different instances of service functions, or VNFs, that run from a source to a destination. The way we actually set up the service chain is with port pairs that represent the service functions: each port pair represents an ingress and an egress to the service function. Those are then grouped together as a port pair group, which essentially allows you to do load balancing across all the port pairs in that group. So there you see several stages of port pairs and VNFs in the service chain, and in addition to that we have a classifier that allows you to select the traffic that actually goes into the port chain. The port chains are unidirectional, and they represent the networking forwarding path shown in the previous diagrams. The port pairs are based on Neutron ports.
So as I said, there is an ingress and an egress port. We group them together in port pair groups, which allows us to do scaling and load balancing across all the similar port pairs, that is, service functions, within a group. We can also do dynamic updates of those port pair groups, adding and removing port pairs or service functions in the group, which allows us to address varying traffic loads. We can also dynamically update the port chains themselves to add and remove port pair groups, so you can actually reconfigure the chain to insert or remove various service functions. We also support multiple classifiers, so you can add, remove, and select the traffic that you want going into the chain. In addition to that, we have the abstract API, which is essentially the Neutron API supporting REST requests, and a back end that supports multiple different drivers, so we can interface southbound to, say, ODL, OVS, OVN, et cetera.

So this is just briefly the resources. This is the API at the northbound interface; it handles CRUD requests (create, read, update, and delete) for each of the available resources, allowing you to create, update, and delete a port chain and so on. So we have port chain resources, port pair group resources to do the load balancing, and then also the port pairs and the flow classifiers. There are a number of parameters associated with these different resources; for example, the port chain parameters allow you to select symmetric chains, or to select the kind of correlation or encapsulation you are going to use in the data plane.

This is briefly the architecture. The northbound is essentially the API we have for port chaining; this interfaces with the Tacker driver and supports the requests coming from it, which would be to add and remove port chains or any of the other resources. Sitting below that, we have the plugin, the Neutron service chain plugin manager, and into that manager, that common service chain API, we can add different drivers for back-end implementations. At the bottom, it shows the compute nodes and the actual chaining through, for example, an OVS switch.

This is a bit more detailed; this is really looking at the data plane, as the IETF has defined it. Essentially, we have the service function forwarders and the classifiers. A service function forwarder is essentially implemented as an OVS switch, and it handles the traffic in the data plane. What happens at the classifier is that the network service header (NSH) is attached to the packets. The network service header includes a chain ID and also an index to indicate which hop, or service function, the packet is at in the chain. You can see there, for example, a blue chain and a red chain with the IDs 1 and 7, and the 255, 254, et cetera are the hop counts, the index that is decremented as you go through each service function. A couple of interesting aspects we have to deal with: either you have NSH-aware service functions, where the header is actually delivered to the service function itself, or you have to go through a proxy in the case where the service function is unaware of NSH. I think that's probably all I have for now; I think Brady will talk to the ODL aspects. Thank you.
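To give a feel for how those resources compose, here is a sketch of driving Networking-SFC directly through the Neutron CLI extensions of that era; the port names, IP prefix, and resource names are hypothetical:

```bash
# Port pair: the ingress and egress Neutron ports of one service function VM
neutron port-pair-create --ingress sf1-in-port --egress sf1-out-port PP1

# Port pair group: the load-balancing unit; more port pairs can be added later
neutron port-pair-group-create --port-pair PP1 PPG1

# Flow classifier: selects which traffic enters the chain
neutron flow-classifier-create --protocol tcp --destination-port 80:80 \
  --destination-ip-prefix 192.168.1.2/32 FC1

# Port chain: an ordered list of port pair groups plus the classifier(s)
neutron port-chain-create --port-pair-group PPG1 --flow-classifier FC1 PC1
```

In the Tacker workflow described earlier, these calls are made for you by the orchestrator rather than by hand.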
Okay, so I'm going to cover the OpenDaylight aspects of service function chaining and how it is implemented. This slide talks about what SFC is, and we have basically covered that already: you define an abstract, ordered list of network services, and these services are then stitched together to create the chain. OpenDaylight SFC gives us the infrastructure to configure this chain and create end-user applications. The main point on this slide is that OpenDaylight SFC is an implementation of the IETF SFC and NSH specifications; you have the links there.

So what features do we get in the OpenDaylight SFC application? In the top left, you can see that we are currently integrating with the OpenDaylight Genius project, and one thing that gives us is what we call application coexistence. We already have that implemented in previous releases, but integrating with Genius will improve it. Application coexistence allows multiple applications to exist on the same OpenFlow switch, so you can have something like SFC and the classifier, which in our case would be NetVirt, on the same OpenFlow switch, with the tables that get written coordinated so we are not stomping on each other. That is actually not a trivial thing to achieve. So we will have improved application coexistence, which will simplify the configuration for the service functions and service function forwarders, plus dynamic service function insertion, load balancing, failover, all those sorts of neat use cases.

Also, with OpenDaylight SFC we have what we call a pluggable classifier, and I think it's important to point out that it is pluggable. Ideally, I would imagine the classifier would be something like a DPI, something traffic-aware, so you can do some cool things there: you can say, all right, BitTorrent, for instance, I want to put on one particular service chain, and HTTP on a different service chain; there's no point in sending BitTorrent traffic to HTTP service functions, et cetera. We can achieve that with the pluggable classifiers we have. Basically, the internals of SFC don't really need to know about the classifier: you can have pretty much any sort of classifier, and as long as it can do the NSH encapsulation and send the packets to our service function forwarders, we can work with it. Right now, as classifiers, we have OpenDaylight NetVirt, which is basically a Neutron back end; we have Group Based Policy as a classifier; and then we have an SFC standalone classifier.

Two other important points here. First, switch independence: as of now we are somewhat tied to OVS, but that is going to change very soon. We are going to be able to use OVS and FD.io. FD.io is Fast Data I/O, a new virtual switch and an open source project; the switch is also called VPP, Vector Packet Processing. So it's not only OVS. If you look in the OPNFV project area, there is a FastDataStacks project, which is basically integrating FD.io with OpenStack, and in the current release of OPNFV SFC we are going to integrate with FastDataStacks and be able to use FD.io. And as you see there, we can also have an IOS-XE renderer. Second, virtual infrastructure independence: there is nothing that says we have to be on OpenStack. Currently we are only on OpenStack, but we could use any virtual infrastructure. So here is a bit more detail about the OpenDaylight SFC data model.
I tried to capture the concepts here. First, the service function chain: an abstract, ordered list of service function types. If you look in the blue box, you have SF Type 1, 2, and 3; for example, you could say, I want a DPI, a firewall, and a NAT. Then in the service function path (SFP) you get into the concrete details of that service function chain: here in the second row of light blue boxes, I mention concrete service functions, and in the SFP you would also specify the concrete service function forwarders. Then you have the rendered service path (RSP), which is the actual service chain; it is a combination of the SFC and the SFP, but it can also include dynamic runtime information if you are doing things like load balancing or failover. Another important thing here is the service chain classification, the pluggable classifier I mentioned on the previous slide: basically, all the classifier does is map subscriber or tenant traffic to a particular pre-configured service chain. We can use different transports, such as VXLAN-GPE, Ethernet, or MPLS, and what we call the service chain encapsulation is NSH, which is what we have implemented now.

So here is a somewhat complicated use case, something that shows the power of SFC and where we can go with this. Notice I have three different service functions here: a DPI, a quality of service function, and an HTTP service function; it could be whatever, header enrichment or whatever. I used to work in DPI, and the biggest push in DPI is to make it faster. I always said that the easiest way to make DPI faster is: don't do DPI. Here is how you can achieve that. You could initially map all of your tenant traffic to the green service chain there, which goes through your DPI, and the traffic keeps going through the DPI until it is able to figure out what type of traffic it is, say BitTorrent or HTTP. Once it determines the traffic type, it can send feedback into SFC. Here I call it reclassification, though I was scolded earlier by an IETF author who is out here: it's not strictly reclassification as defined there, where the actual service function would be changing the chain on the fly and sending information to the service function forwarder. This is a different way of looking at reclassification: you send feedback back to SFC, which then feeds back into the classifier. So once I determine that the traffic for a particular five-tuple, that flow, is, for instance, peer-to-peer, I can send feedback to the classifier and tell it: this flow is no longer on the green chain, now send it to the blue chain. And likewise with HTTP. I think that's a pretty powerful use case that we can achieve with SFC.
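Before moving on, to make the data model above a bit more concrete, here is a rough sketch of defining an abstract chain directly against the OpenDaylight SFC RESTCONF API. The endpoint and payload follow the general shape of the project's YANG models around that time, and the credentials, function names, and types are illustrative assumptions, not taken from the talk:

```bash
# Illustrative only: define a chain as an ordered list of service function types.
# Endpoint and payload shape per the ODL SFC YANG models of that era.
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  http://localhost:8181/restconf/config/service-function-chain:service-function-chains/ \
  -d '{
    "service-function-chains": {
      "service-function-chain": [{
        "name": "dpi-qos-chain",
        "sfc-service-function": [
          { "name": "dpi-abstract", "type": "dpi" },
          { "name": "qos-abstract", "type": "qos" }
        ]
      }]
    }
  }'
```

A service function path would then reference this chain and pin it to concrete service functions and forwarders, and the rendered service path is derived from both at runtime.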
Now, some future improvements. What we can do here with the Tacker VNF manager and OpenStack, integrating that with OpenDaylight, is actually really cool, but there are some things we need to take care of going forward. I have kind of a long list here, but it all boils down to the fact that we are starting with a feature-rich API, the VNF forwarding graphs, which Tacker then funnels down (I often call it a funnel) into Networking-SFC, which is still getting started and has less information available. So from the VNF forwarding graph down to Networking-SFC you are losing some information; not all of it maps directly. Then on the other end, you come down to OpenDaylight SFC, and there are certain fields there that we are not sure exactly how to fill out. So these are some future improvements we have to take care of. For instance, how do you specify the transport: is it Ethernet plus NSH, or VXLAN-GPE plus NSH, et cetera? And there is another list here of things we need to be able to specify: how to specify what I call an SFC proxy, an NSH proxy; how to expose the rendered service paths; load balancing; and so on. It's important, though, that the different communities are working together on this. Is it Thursday or Friday? It's Friday. We have a meeting where we're going to get together and say, okay, these gaps have been identified, we're all on the same page; that's the goal, that the gaps that have been identified get worked on together. Yeah, to add to that, there's a meeting on Thursday on next steps for Networking-SFC. Yeah. So, okay, that's all I have. Should we open up for questions now or after the demo? I think we should do the demo. Yeah, go ahead.

So this is the setup we have for running this demo. We have a simple client VM and a server VM; the server is a simple HTTP server listening on port 80, and the client and server are connected directly, so the client can talk to the server and send GET requests. These two VMs are already created. Now we will create a simple VNF, which acts as a forwarder between the client and the server, and then use this VNF to deploy a service function chain between the source and the destination. We can then see how the packets are forwarded through the VNF before finally reaching the destination.

So we'll just run the nova list command. We have already created the client and server VMs here; they are just basic, simple instances running the client and the server. Here we have the VNFD catalog list containing a simple VNF descriptor. We will now use this descriptor to create a VNF, which is again basically a simple instance connected to a network. Now we have all three instances running: the client, the server, and the actual VDU instance; the VNF contains just one VDU, so that is spawned here in Nova, and we can see all three instances running. Let's go ahead and access the consoles of all three instances and first try to see whether we can reach the server from the client. On the server we are simply running a simple HTTP server listening on port 80, and on the client we will do a curl GET request to the server. We should receive the connection back from the server, and if we go back to the server itself, we see that the GET request from the client was received. At this point the client and server are talking to each other directly; there is no forwarding graph yet.
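In rough terms, that baseline check looks like the following; the address and exact commands are assumptions for illustration:

```bash
# On the server VM: a minimal HTTP server listening on port 80 (Python 2 era syntax)
python -m SimpleHTTPServer 80

# On the client VM: with no forwarding graph deployed yet, this request
# travels directly from client to server over OVS
curl http://192.168.1.2:80/
```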
Now let's access the console of the VNF we just created. Here we run a simple VXLAN tool which acts like a forwarder: it listens for incoming traffic, dumps it on the console, and also forwards it back out the egress port. So we have the tool running here, ready to dump any packets it receives onto the terminal and then forward them on to the egress.

Now that we have all three VMs in place, let's create the forwarding graph descriptor as a TOSCA template. Here you can see the TOSCA template for the forwarding graph. As we already discussed, the two main components are, first, the groups: the group contains the forwarding graph metadata, and it also contains the important piece, Forwarding_path1, which is the only member of the forwarding graph; it's a simple setup we have here. When we look into the Forwarding_path1 information itself, we have the main policy section, which contains the criteria and the path information. The criteria define what kind of traffic the forwarding graph acts upon and steers to the VNF chained between the source and the destination; here in the criteria section we are putting in some parameters: the source port range, the destination port range, and other parameters like the destination IP prefix and so on. In the path section, we have the forwarder and capability information: here we have a single VNF running, which exposes CP12 as the connection point used to build the forwarding graph and create the chain.

Now that we have provided these requirements as SFC policies in the TOSCA template, we can onboard the template using the tacker vnffgd-create command. This onboards the template into the Tacker NFV orchestrator. Here we see that the template has been onboarded successfully, and now we can use this onboarded forwarding graph template to deploy the actual forwarding graph. We use the vnffg-create command, providing the onboarded template as input and giving a name for the forwarding graph. We run vnffg-create to deploy the forwarding graph, and we can see that the forwarding graph has been initiated from the Tacker side. What happens in the background is that Tacker validates the template, as we already saw, and finds all the information necessary to provide the port information to Networking-SFC: it validates the abstract VNF types, finds the actual VNF instances that are running, and then finds the port information for the capability and forwarder entries we specified in the template. Once this information is fetched, it creates a database entry for the lifecycle management of the forwarding graph, forwards the request to Networking-SFC, and calls the underlying Neutron commands for the port pairs, the port pair groups, and the chain creation. Now that the forwarding graph has been initiated from the Tacker side, we can go to the client terminal and see whether packets are forwarded through the VNF when the client sends traffic to the server. So let's initiate the curl command again from the client.
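Put together, the demo's control flow is roughly the following; the file and resource names are hypothetical, and the flags follow the Tacker client of that era:

```bash
# Onboard the TOSCA forwarding graph template into the Tacker catalog
tacker vnffgd-create --vnffgd-file vnffgd-chain.yaml demo-vnffgd

# Deploy the forwarding graph from the onboarded template
tacker vnffg-create --vnffgd-name demo-vnffgd demo-vnffg

# From the client VM: this traffic now matches the classifier and is
# steered through the forwarder VNF before reaching the server
curl http://192.168.1.2:80/
```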
We see that the connection was received, and on the server side we see that the GET request from the client arrived. Now, in the VNF console itself, we can see that the packets have been received at the VNF: the packets are being dumped onto the terminal here, then forwarded back out the egress. So we now know that the VNF has actually been inserted as part of the chain, and it is receiving packets and forwarding them on. Now let's send more requests from the client and see whether the packet count increases on the VNF console. If I do a curl request again from the client to the server, we see that the packet count has increased; you can see the packet index has gone up to 10 now. This way we can see that the packets are forwarded from the client through the VNF based on the classification requirements we provided, such as the destination port range and the destination IP subnet; with this information, the traffic was steered into the VNF, which was able to forward the packets on to the server.

So we have seen how the vnffg-create command can deploy a forwarding graph in one single command. Now, when we delete this forwarding graph, the system should be reset back to its initial state: all the OpenFlow rules should be deleted, and the client and server should be able to talk to each other directly again, which was the initial state of the system. Let's run the tacker vnffg-delete command. What this does is make all the necessary lower-layer calls to delete the service function chain, the classifier functions, and all the OpenFlow rules that were created across these machines. Now let's send a request again from the client to the server: we should not see any packets flowing through the VNF, and the connection should hit the server directly. You can see that no packets are received on the VNF, and the client and the server are talking to each other directly. And that's a wrap on the demo.

Okay, so thank you. I'd like to open the floor for questions. Do you want to? Go ahead. I'm skipping the summary. Sorry, sorry. That's all right. Good. So to wrap this session, some of the takeaways we want to leave you with: we can now deploy complex forwarding graphs with a simple, easy-to-read TOSCA template, using Tacker as an NFV orchestrator, with just the two commands. And again, when the user initiates the SFC request, the user need not worry about which back-end driver is used to actually create the SFC: Networking-SFC itself can handle multiple back-end drivers, like ODL as we saw here, OVS, and other back-end drivers that are being integrated into Networking-SFC. And finally, the forwarding graph descriptor we saw is based on the TOSCA NFV profile; this is based on open standards accepted by the NFV community, and it will help us evolve the forwarding graph further in future iterations. Thank you. We'll take some questions now. Go ahead. Is there a microphone? Anyone? Do we have a microphone? Hello? Yes. Nice presentation, guys. A small question.
In the TOSCA YAML template and in the SFC APIs, there's a discontinuity in how you talk about things. Is there any work to try and make them similar? When building a TOSCA template versus talking about port pairs or port chains, it seems a little incongruous. Yeah, I think we're certainly going to start looking at resolving those differences in future versions of the port chain API. Like I say, there's going to be a meeting on Thursday, I think at 11 o'clock, where we're going to be talking about future directions for Networking-SFC, and that's probably part of the discussion.

Okay, I have a question about the classifier. You said in a commercial deployment it might be a DPI box. In my organization we are doing an assessment of how to utilize service function chaining, but we found that most of the vendors doing DPI or doing the gateways are not yet ready for NSH. How do you comment on that? Who is cooperating with you? Yeah, that's an excellent question; that's the main problem. OVS does not support NSH yet, and lots of vendors aren't supporting it yet either. That's one of the main reasons we decided to start using FD.io: the FD.io VPP already has NSH implemented. So yes, it's new, it's leading edge; I wouldn't call it bleeding edge. But the thing is, if you don't implement NSH, you have to ask yourself: how are we going to do service chaining without NSH? You can do it, but you're going to have to classify at every hop, and depending on how many tenants or subscribers you have, that's not going to scale very well. Every time you come into the SFF, every time you come back from a service function, et cetera: if you don't have some sort of what we call service chaining encapsulation, you're going to have to classify everywhere, and that gets expensive real quick. So that's the trade-off. And that proxy we saw there is really the device that's going to deal with any legacy service functions that don't understand NSH. Hopefully, over time, we're going to see NSH implemented by service functions, so the need for those proxies, with the reclassification at each hop, is going to go away.

Okay, just one comment and a follow-on question, because normally we do the classification differently. I'm coming from a telco background, so for mobile broadband, users have different services and different packages, and classification is not done based on the IP and port; it's based on the subscriber or subscription profile. So how can we map this into reality? Yeah, ideally, I think I mentioned it on the slide but didn't explicitly say it: another way to do it is to have a PCRF interface, which is what you're referring to. The subscriber awareness is what you get from your PCRF, the Policy and Charging Rules Function. So that's one way to do it.

More questions, please. Go ahead. Over here. Yes. Yeah, so, great presentation, thank you. Some of the slides seem to show the vSwitch being controlled directly from Networking-SFC, and some from ODL. Do you support both methods, and if you do, how do you characterize the advantages or disadvantages of the two approaches? I didn't understand the question; you were cutting out a lot. The control of the vSwitch, the OVS vSwitch. Yes.
Some of the slides seem to show the switches controlled via ODL, and some seem to show them controlled by Networking-SFC directly, without ODL. So do you support both paths, and what are the advantages and disadvantages if you do? I think in a typical deployment, you would only use one method: you would likely just go through an SDN controller to control all the devices in the network, while in a different deployment you might go directly from Networking-SFC to the OVS switches. So I think it really depends on each deployment and on what the carrier has in mind for the usage of SDN controllers. He also asked about the pros and cons. Understand that the Networking-SFC driver layer is more API-driven and not so much an implementation; it is indeed intended to have some sort of SDN controller as a back end. Right. The reason we focused initially on the OVS back end is that it is needed as a reference implementation within OpenStack. That doesn't mandate that you have to use the OVS driver; you could just as easily use some other back-end driver, say an ONOS controller or an ODL controller. That's perfectly feasible. And I think it's just one driver at any point in time that controls the whole workflow; it's not Networking-SFC controlling some components while ODL controls some of the OVS-related actions. If we have ODL as the back-end driver, it is ODL that manages all of the classifier functions and the actual service function chain that gets deployed on all these nodes.

There's a question here. I have a question. I know we're in the age of abstractions, but this whole technology is meant to improve and give us the ability to handle complex networking, new networking services, et cetera. And this whole abstraction layer, where Tacker is talking to, what was it called, Networking-SFC and so on, means that we're hiding the functionality that is available in ODL, OVS, ONOS, et cetera. Essentially, I'm afraid it will dumb things down, meaning that the differences between ODL, OVS, ONOS, and so on will disappear. What is the reason? Well, I can sort of understand the reason, but I still find it a bit strange that Tacker isn't talking directly to ODL and exposing all the capabilities. That's my question, more or less. Yeah, I agree there is an issue there with not being able to expose all the functionality, but the other side of that is the question of why you would want to expose all the feature-specific, controller-specific features up to Tacker. So you have to decide what you want to do there. We are aware of the capabilities provided by OpenDaylight or any other back-end driver; for us, from the Tacker side, it's about how we expose these capabilities to the user in the template itself. Right now, since Tacker goes through Neutron, the Networking-SFC interface, which is still evolving, the capabilities or requirements you can express in the template are quite constrained.
But going forward, as Networking-SFC supports more capabilities and interacts with more back ends, which have the potential to support complex requirements, Tacker can, in coordination, translate those requests in the TOSCA template and push them down to Networking-SFC to be handled in the lower layers. So the Tacker side should not limit us from exposing any of the capabilities provided by the back end. We are aware of the capabilities that ODL or any other back end provides, but it is a matter of how we can translate and provide them through Networking-SFC itself, because Networking-SFC is the interface between Tacker and OpenDaylight, given that it provides the Neutron abstractions in OpenStack. Yeah, I would like to mention, though, that if you take a look at the OPNFV SFC project today, Tacker is indeed talking separately to OpenDaylight and to OpenStack, and it's working quite well. There are other projects we're considering for OPNFV SFC; I don't know if we'll be able to get to it in the D release or in the E release, but one is OPEN-O. OPEN-O is an orchestrator, which I consider more of a complete orchestrator: it talks separately to Tacker as a VNF manager, to spin up VMs and such, and then it can have several different SDN controller back ends, with a little driver per SDN controller back end. That's another possibility, and depending on how you look at it, it seems to make a bit more sense to me to go with something like OPEN-O, where you'd have that division.

Any other questions? More questions, please. Yeah. Okay, I think we'll wrap this up. All right, thank you, guys. Thank you.