So you just use the arrows. Hello, good morning, and welcome to this session on service function chaining. Tim and I will give a brief overview of service function chaining technology and analysis, and Tim will give a demonstration of what has been developed and achieved in the SFC project in OPNFV. So let's get started. I'm Ben, from AT&T, and Tim is from Red Hat.

We will talk a little bit, from a technology perspective, about the different approaches to implementing SFC, including the MPLS/BGP VPN approach and the NSH approach. Following that, Tim will give a demonstration, and then we'll summarize the different projects that have been worked on in OpenStack, OpenDaylight, and OPNFV, along with some key takeaways from that analysis and implementation.

So let's start with some key concepts. These key concepts apply to both implementation approaches. The first is classification: the basic, policy-based function that identifies, selects, and matches different traffic flows against a specific service function chain and its requirements. The classifier captures the specific service-provider and customer policies needed to construct the service chains.

Another concept is the service function chain, also known as a service chain. The chain itself is an ordered list of service functions, especially virtualized network functions, with ordering constraints based on policy that must be applied to the packets and frames selected as a result of the classification function. Basically, a traffic flow goes through the different functions in order; that is the chain itself. It can be as simple as a linear chain or as complex as a service graph. There is a project in OPNFV dealing with VNF forwarding graphs for service functions, led by Kathy here.

The third key concept is the service function forwarder. Depending on the approach, it can be implemented as a virtual router with virtual routing and forwarding (VRF) functions, or as an NSH service function forwarder. Its basic function is to forward traffic to one or more connected service functions and to transport the traffic flow to the next service function or the next virtual router. At the end, it terminates the service function chain and the traffic returns to the normal flow.

As mentioned, there are two different approaches to implementing service function chains. One approach is based on Layer 3, using existing network functions built on the MPLS and BGP protocols that are already deployed in today's network infrastructure. In this approach we use an IP VPN, based on Layer 3, as the overlay encapsulation method for routing the traffic flow over the whole service function chain topology, with the different VMs attached to the L3 VPN. A controller manages the service topology and the instantiation of the different service functions in the chain. It also creates and configures the virtual routers: the routing tables, the ingress and egress VRFs, and the installed routes, using BGP to advertise those routes and build the routing tables across the connected VPNs.
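To make the three key concepts above concrete, here is a minimal, purely illustrative Python sketch, not taken from any of the projects discussed in this talk: a classifier selects a flow, a chain is an ordered list of service functions, and a forwarder carries the flow through them. All class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FlowClassifier:
    """Policy-based match criteria that select traffic onto a chain."""
    src_port: int
    dst_port: int
    protocol: str  # e.g. "tcp"

    def matches(self, pkt: dict) -> bool:
        return (pkt.get("src_port") == self.src_port
                and pkt.get("dst_port") == self.dst_port
                and pkt.get("protocol") == self.protocol)

@dataclass
class ServiceFunctionChain:
    """An ordered list of service functions (e.g. firewall, NAT, DPI)."""
    name: str
    functions: list = field(default_factory=list)

@dataclass
class ServiceFunctionForwarder:
    """Forwards a classified flow through each function of its chain in order."""
    chain: ServiceFunctionChain

    def forward(self, pkt: dict) -> dict:
        for sf in self.chain.functions:
            pkt = sf(pkt)   # each service function is just a callable here
        return pkt          # chain terminates; traffic resumes normal forwarding

# Usage: classify a packet and, if it matches, send it through the chain.
firewall = lambda p: {**p, "seen_by": p.get("seen_by", []) + ["firewall"]}
chain = ServiceFunctionChain("my-chain", [firewall])
sff = ServiceFunctionForwarder(chain)
classifier = FlowClassifier(src_port=2005, dst_port=80, protocol="tcp")
pkt = {"src_port": 2005, "dst_port": 80, "protocol": "tcp"}
if classifier.matches(pkt):
    pkt = sff.forward(pkt)
print(pkt["seen_by"])  # ['firewall']
```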
This approach supports the use of existing protocols and deployed infrastructure, including the provider edge devices already in today's networks with their current capabilities. BGP is used for route advertisement, and existing mechanisms such as NETCONF or XMPP can be used by the controller to configure the virtual routers: setting up the route targets and installing the routes toward the service functions. It supports both physical and virtual deployments on existing infrastructure, multiple control plane protocols, including L3 VPN and EVPN (with the Layer 3 functions in EVPN), and multiple data plane encapsulations, for example MPLS over GRE or VXLAN.

Here is a simple diagram describing a chain based on the MPLS/BGP VPN method. You can see the controller, which is responsible for creating and building the service function chain topology, including the different service functions, the number of instances, and the connectivity over the overlay networks. It is also responsible for creating the virtual routers for the different service functions. In each routing system where a service function is connected into the chain, it needs to create the ingress and egress VRFs, create the interface that connects to the service function, and initially install a static route whose next hop points to the interface connected to the service function, with the prefix of the destination address. For the ingress and egress VRFs of the next logically connected service function, it needs to use the same route target. The same route target ensures that when routes are advertised from the ingress VRF to the egress VRF they will match, and the egress VRF will install the next hop based on the route advertisement.

In the route advertisement you can see the next hop, which will be the IP address of the ingress VRF, along with the tunneling capabilities supported at that next hop, typically MPLS over GRE, or VXLAN. It will also advertise the MPLS label that identifies the interface connected to the service function. So in this case, when a packet arrives at the ingress, it is simply decapsulated and, based on the MPLS label, switched to the service function, which improves performance and makes forwarding faster.

As you can see, the controller interacts with the virtual routers using existing configuration mechanisms, for example NETCONF with those virtual routers, or XMPP for the vRouters and NETCONF for the edge routers. And of course BGP is used for route advertisement, with a route reflector advertising the routes between the different virtual routers, and load balancing can be used to balance across different instances of the functions. So that is the L3 approach: using existing network functions to create the service function chains.
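As a toy illustration of the route-target matching just described, here is a conceptual Python model: the ingress VRF advertises a route carrying a route target, next hop, and MPLS label, and the egress VRF installs it only when the route targets match. This is purely a sketch of the control-plane logic, not code from any controller; all names are hypothetical.

```python
# Conceptual model of BGP VPN route-target matching along a service chain.
# Not controller code; class and field names are invented for illustration.

class Vrf:
    def __init__(self, name, ip, export_rt, import_rt):
        self.name = name
        self.ip = ip                  # tunnel endpoint address of this VRF
        self.export_rt = export_rt    # route target attached to advertised routes
        self.import_rt = import_rt    # route target this VRF accepts
        self.rib = {}                 # prefix -> (next_hop, mpls_label, encap)

def advertise(ingress: Vrf, prefix: str, sf_label: int, encap="mpls-over-gre"):
    """Build a BGP VPN route advertisement from the ingress VRF."""
    return {
        "prefix": prefix,
        "next_hop": ingress.ip,       # packets are tunneled to the ingress VRF
        "label": sf_label,            # identifies the interface toward the SF
        "encap": encap,
        "rt": ingress.export_rt,
    }

def receive(egress: Vrf, route: dict):
    """Egress VRF installs the route only if the route targets match."""
    if route["rt"] == egress.import_rt:
        egress.rib[route["prefix"]] = (route["next_hop"], route["label"], route["encap"])

# Usage: two VRFs sharing a route target, chained around one service function.
vrf_in = Vrf("sf1-ingress", "10.0.0.1", export_rt="65000:100", import_rt="65000:99")
vrf_out = Vrf("sf1-egress", "10.0.0.2", export_rt="65000:101", import_rt="65000:100")
receive(vrf_out, advertise(vrf_in, "0.0.0.0/0", sf_label=300))
print(vrf_out.rib)   # {'0.0.0.0/0': ('10.0.0.1', 300, 'mpls-over-gre')}
```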
Then there is a newer, very innovative approach called NSH, the Network Service Header approach, which focuses on virtualized service function deployment. Everything is based on virtualized deployment on an overlay: the encapsulation adds the network service header, and the tunneling is based on VXLAN-GPE for L3, or on GRE or Ethernet. The VMs are attached to OVS at L2, and a proper setup needs to be available and supported; I think there is already a patch applied to OVS to support this, and multiple tunneling protocols can be used in addition to VXLAN-GPE, et cetera. A classifier is also needed; this is flow-based classification with flexible classification criteria. For example, GBP, Group Based Policy, is usually used as the classifier here.

Here is a similar diagram. Conceptually and architecturally it is similar to the MPLS/BGP VPN approach, of course with a different encapsulation method and a different way to construct the service function chains and paths; it is mostly based on L2 and on the different packet headers. You still have a control plane function to install the rules and encapsulate the packets at the beginning, then you need the classifier to determine the traffic flow, and the service function forwarder in between, which plays a similar role to the virtual router but is an OVS-based L2 forwarder.

It also considers legacy service functions that may not support the NSH service function header, by introducing the concept of the SF proxy. The proxy decapsulates the NSH encapsulation and forwards the traffic to the legacy service function; once that finishes, it re-encapsulates the packet with the NSH header and sends it back to the service function forwarder, and so on. The traffic flows through the whole chain until it reaches the end of the chain.

As you can see, different actions are defined along the path, including inserting the header, removing the header, selecting the service function path, decrementing the service index, and updating the context header, et cetera, and the different entities in the path perform different functions; this table summarizes them. This is how the service header is defined: there is a base header, a service path header, and context headers, with the different metadata types. One very important feature of this method is the metadata that can be added and carried in the context headers by the different service functions and forwarders, so you have more context-based information, which allows extension to new services; that is very powerful in this method.

Okay, so that's the brief summary, and now I'll hand over to Tim, who will give the demonstration of what has been implemented.

I got this one, yeah. Thanks, Ben. Good morning, everyone. My name is Tim Rozet. I work for Red Hat, and I mainly focus on upstream projects like OPNFV, OpenStack, and OpenDaylight. If you've heard of OPNFV but you're not familiar with what it actually is, it's an integration project where we take different NFV features, like service function chaining or SDNVPN, and different SDN controllers, like OpenDaylight, ONOS, or Contrail, and try to integrate them into OpenStack.
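Going back to the NSH header Ben just summarized: the service path header carries a 24-bit service path identifier and an 8-bit service index. Here is a minimal Python sketch of packing, parsing, and decrementing that header. Only the service path header is packed precisely; the base-header bit layout differed between IETF draft revisions and the final specification, so it is left out here, and the overhead figures in the comment are approximate.

```python
import struct

# Minimal sketch of the NSH service path header: a 24-bit Service Path ID
# plus an 8-bit Service Index, carried in one 4-byte word. On the wire this
# sits inside a transport encapsulation (e.g. VXLAN-GPE), so the outer
# headers plus NSH add roughly a few tens of bytes of overhead that the
# underlay MTU has to absorb.

def pack_service_path_header(spi: int, si: int) -> bytes:
    """Pack SPI (24 bits) and SI (8 bits) into 4 network-order bytes."""
    assert 0 <= spi < 2**24 and 0 <= si < 2**8
    return struct.pack("!I", (spi << 8) | si)

def unpack_service_path_header(data: bytes) -> tuple:
    value, = struct.unpack("!I", data[:4])
    return value >> 8, value & 0xFF          # (spi, si)

def decrement_si(data: bytes) -> bytes:
    """What each service function (or its proxy) does before re-forwarding."""
    spi, si = unpack_service_path_header(data)
    return pack_service_path_header(spi, si - 1)

# Usage, mirroring the values seen later in the demo: path 394, index 255.
hdr = pack_service_path_header(394, 255)
print(unpack_service_path_header(decrement_si(hdr)))   # (394, 254)
```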
In OPNFV there is a sub-project, the Service Function Chaining project, where we wanted to take OpenDaylight SFC and integrate it into OpenStack. To do that, we came up with this proof-of-concept demo that we threw together. In order to integrate into OpenStack, we needed an entry point into OpenStack that could actually create service function chains and talk to OpenDaylight. We looked around OpenStack and found Tacker. Tacker is a VNF manager project in OpenStack that is ETSI MANO compliant, and we decided that Tacker gave us enough information describing what a VNF actually is that we could implement service function chaining and get the properties of the VNF we need to create a chain. A VNF description in Tacker contains attributes like what type of service it is, for example a firewall, and it can have properties like whether this service function is NSH aware and what encapsulation type it uses, such as VXLAN-GPE or Ethernet with NSH. So we decided to go with Tacker and implement a service function chaining plug-in and extension in Tacker to orchestrate this proof of concept.

We are obviously using the OpenDaylight SDN controller with the service function chaining features loaded. For this demo we are using the OVSDB NetVirt features to drive Neutron network creation into OpenFlow, down into OVS, and we are also using the NetVirt SFC feature, which allows us to create a service function chain and classifier, since we need to classify tenant traffic onto that chain. We are doing all of this on the OPNFV Apex installer platform; in OPNFV there are a bunch of different installers, and we are using the Apex installer for this demo, which is a TripleO-based installer. We are also using an Open vSwitch custom-built with NSH patches from upstream: currently there is no official NSH support in OVS, so we have built a custom OVS version from upstream patches for this demo.

As I mentioned, everything is running on the Apex installer, and in a bit I'm going to do a live demo to show traffic going through a service function chain. The deployment is TripleO based, and if you haven't heard of TripleO, it is OpenStack on OpenStack. It's a little weird at first, but you are using OpenStack to install your OpenStack. In this case there is the concept of an undercloud and an overcloud. The overcloud is your target system, what you are actually interested in installing, so that will have Tacker, OpenDaylight, and the custom OVS installed on it. You use the undercloud, which is a VM you can just picture as the installer VM, to install that overcloud. For this demo we will have a single controller node and a single compute node; you'll see it when I run the demo, we'll log into the undercloud and then go into the overcloud, so it's good to have that background.

For Tacker, creating service function chains: what is the workflow, and how did we integrate this proof of concept? The first three items here are creating a VNF descriptor (a VNFD), creating a VNF instance from that VNFD, and the Tacker VNF manager using Heat or Nova backends to actually create that VNF instance. Those first three things are already in Tacker; we didn't add any code to do that. Tacker is already great at managing a VNF.
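As a rough illustration of the VNF properties the proof of concept needs from a descriptor, here is a hypothetical descriptor expressed as a Python dict. The real Tacker VNFD is a TOSCA YAML template with its own schema; the field names, image, and flavor below are invented for clarity only.

```python
# Hypothetical illustration of the descriptor attributes the SFC extension
# cares about. Field names are invented; the real Tacker VNFD is TOSCA YAML.
vnfd = {
    "name": "test-VNFD",
    "description": "virtual firewall used in the demo chain",
    "service_properties": {
        "type": "firewall",        # what kind of service function this is
        "nsh_aware": True,         # can the VNF parse/decrement NSH itself?
        "encap": "vxlan-gpe",      # transport encapsulation toward the VNF
    },
    "vdu": {                        # the VM that Heat/Nova will boot
        "image": "sf-demo-image",   # placeholder image name
        "flavor": "m1.small",
        "monitoring_policy": "ping",   # keep-alive check on the VNF
    },
}

def chain_relevant_properties(descriptor: dict) -> dict:
    """Extract just the attributes needed to build a service function chain."""
    props = descriptor["service_properties"]
    return {"nsh_aware": props["nsh_aware"], "encap": props["encap"]}

print(chain_relevant_properties(vnfd))   # {'nsh_aware': True, 'encap': 'vxlan-gpe'}
```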
The VNF descriptor itself is a TOSCA template: a YAML file that describes what a VNF actually is. As I mentioned, you can define the service type in there, whether it is NSH aware, those types of things. It does a lot more than that: it can define a monitoring policy to make sure the VNF is still up, and it can have auto-scaling and auto-healing properties. Those first three items, as I mentioned, are already done by Tacker; the items after that are the pieces we added.

In Tacker right now there is a VNF manager extension, and we added a service function chaining and classifier extension next to it that is able to talk to the VNF manager. In addition, as you can see in step four, we added commands to the Tacker client CLI to be able to say SFC create or SFC classifier create, to create chains or classifiers. So step four is that you invoke the regular Tacker client CLI to create the service function chain.

Step five is a REST call down into OpenDaylight. We have an OpenDaylight driver in Tacker that can talk REST to OpenDaylight SFC and say: create me a chain, create an RSP. An RSP, in OpenDaylight terms, is the rendered service path; that is what actually pushes flows down into OVS. In the demo we don't specify a particular SPI, or service path ID, and when we don't, OpenDaylight automatically gives us back one that is dynamically generated, so you'll see that after we create the rendered service path, OpenDaylight gives us back that ID.

Step six is that we create a classifier using the CLI. That talks to a different driver, the NetVirt SFC driver, which talks to NetVirt SFC and says: create this classifier. The classifier is really an access list that says match on this tenant traffic and put it onto this RSP, this rendered service path. That also pushes the classifier flows into OVS, so that our traffic can actually match and get punted up to the service function chain.

This is an architectural overview of what we just talked about; the numbers correspond to the pieces of the workflow on the previous slide. At the top, on the left-hand side, there is Tacker with its VNF manager extension, and on the right-hand side the service function chaining and classifier extension and plug-in. It uses its own database and has its own drivers to talk to OpenDaylight SFC and NetVirt SFC. Step one is that you come up with these VNFD templates, the TOSCA YAML inputs that define what a VNF is. You import that into Tacker, Tacker puts it into a catalog, and you say create a VNF from this description, this profile. That makes a call to a Heat driver, Heat then calls Nova and places that VNF onto the compute node; we have one compute node and one control node in this setup.

For this demo, the compute node shown there is an accurate representation of what the demo will be: we will have one VNF, a virtual firewall, plus an HTTP client and an HTTP server. We will use the HTTP client to send curl requests to the HTTP server and see whether the traffic actually goes through the chain or not. On the other side of that, you have the SFC driver talking to ODL SFC to create the chain, and the NetVirt SFC driver talking to NetVirt SFC to create the classifier. So now we'll go and try to do the live demo here. Let's see.
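Step five above is, in essence, just an HTTP call from a driver to the controller. Here is a hypothetical sketch of that interaction using Python requests. The controller address, resource paths, and JSON keys are placeholders, not the actual OpenDaylight SFC RESTCONF API, which varies by release; only the overall shape (store the chain, ask for a rendered path, read back the assigned ID) reflects what was described.

```python
import requests

# Hypothetical sketch of step five: a driver asking the controller to store a
# chain and render a service path. URLs and JSON keys below are placeholders.
ODL = "http://odl-controller:8181/restconf"                        # placeholder address
CHAINS_URL = ODL + "/config/<service-function-chains resource>"    # placeholder path
RENDER_URL = ODL + "/operations/<create-rendered-path RPC>"        # placeholder path
AUTH = ("admin", "admin")                                          # demo-only credentials

def create_chain_and_rsp(chain_name: str, service_functions: list) -> int:
    """Store the chain (ordered SF list), then ask the controller to render a path."""
    chain_body = {"name": chain_name, "service-functions": service_functions}
    requests.put(CHAINS_URL, json=chain_body, auth=AUTH)

    # No service path ID is supplied, so the controller assigns one and
    # returns it (394 in the demo); the response key here is illustrative.
    resp = requests.post(RENDER_URL,
                         json={"input": {"parent-chain": chain_name}},
                         auth=AUTH)
    return resp.json()["output"]["path-id"]

# Usage against a running controller (commented out here):
# spi = create_chain_and_rsp("mychain", ["testVNF1"])
```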
I'm running all of this on a single host server. We've talked about the undercloud and the overcloud; in this instance, and I hope you can see it in the back, the instack machine there is actually our undercloud VM. That undercloud VM has installed these other two "bare metal" VMs: one of those is our controller, one of those is our compute node. So if we log into the undercloud, switch to the stack user, and source its credentials, you can see this from the undercloud's perspective: the undercloud has installed this controller and this compute node. By that token, I can SSH into these control and compute nodes; I've already done that down here, so I'll get rid of this guy.

If we go and look at our controller, I can do a nova list here, and you can see there is an HTTP client and an HTTP server. Now we are on the overcloud; we were on the undercloud, now we are on the overcloud. This is our client, our server, and our Tacker-created VNF: that long, weird UUID down there is the Tacker VM. And over here, this tab right here is my compute node, and I can dump out all the OVS flows on br-int. OpenDaylight, with OVSDB NetVirt, has installed the regular tenant flows to allow the client and the server to talk to each other, but if we grep on NSH, you'll see there are no NSH flows here at all; there is no service function chain set up. I can also go into the controller and do a tacker SFC list, and you get nothing, because no service function chain has been created. But if I do a tacker VNF list, you can see there is a test VNF 1 there, which is our virtual firewall.

For the purposes of the demo it's not really a firewall: it's a VM running a Python script which will take an NSH packet, decrement the header appropriately, and send it back out. We'll be able to see that, because up here I have the noVNC console set up; I might have to log back in. On the left-hand side, this noVNC console is just a simple Python HTTP server; I'll kill it and restart it so you can see. I'm just running an HTTP server there on port 80. And in this other window, this is actually our VNF; remember that weird UUID for the instance, this is the console to our VNF. It's important to see here, I don't know if you can see it or not, but this packet 673 up here: that was the last packet the VNF received from when I was trial-running the demo. If a packet comes to that VNF, we are going to see it pop up here.

So now if I go over to my HTTP client and I curl that server address, you can see there is nothing hitting the VNF. If we look at the HTTP server itself, you can see that it is actually getting hit; it's getting a response back. And we have no service function chain right now. So what we can do is go on the controller and do a tacker SFC create, with name mychain and chain testVNF1. Now you'll see that down here there is some output: it says pending create, and you can see the instance ID there is mychain, path 394. I had mentioned that we didn't specify a specific SPI, or service path identifier, so OpenDaylight generated one and gave it back to us; that's the instance ID we get back from OpenDaylight. So if I do a tacker SFC list here, you can see the status is active, the driver is OpenDaylight, and there is another attribute here, symmetrical: false.
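The Python script Tim describes on the VNF simply receives an NSH-encapsulated packet, decrements the service index, and sends it back out. This is not the actual demo script, but a minimal sketch of that loop under a couple of assumptions: VXLAN-GPE arrives on UDP port 4790, and the NSH service path header starts right after the 8-byte VXLAN-GPE header and the 4-byte NSH base header.

```python
import socket
import struct

# Minimal sketch of what the demo's "virtual firewall" does: receive an
# NSH-encapsulated packet, decrement the service index, send it back out.
# NOT the actual demo script; offsets and port assume VXLAN-GPE on UDP 4790
# with the NSH service path header 12 bytes into the UDP payload.

VXLAN_GPE_PORT = 4790
NSH_SP_OFFSET = 8 + 4   # 8-byte VXLAN-GPE header + 4-byte NSH base header

def decrement_service_index(frame: bytes) -> bytes:
    sp, = struct.unpack_from("!I", frame, NSH_SP_OFFSET)
    spi, si = sp >> 8, sp & 0xFF
    print(f"packet on path {spi}, service index {si}")
    patched = struct.pack("!I", (spi << 8) | (si - 1))
    return frame[:NSH_SP_OFFSET] + patched + frame[NSH_SP_OFFSET + 4:]

def run():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", VXLAN_GPE_PORT))
    while True:
        frame, (src_ip, src_port) = sock.recvfrom(65535)
        # send the modified packet back toward the service function forwarder
        sock.sendto(decrement_service_index(frame), (src_ip, VXLAN_GPE_PORT))

if __name__ == "__main__":
    run()
```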
Symmetrical is something you can set to true if you want to. What that does is, when you make that curl request to the HTTP server and it goes through your chain, the return traffic, if you define the chain as symmetrical, will flow back in the reverse order through that chain. If you don't, it flows back just like normal tenant traffic.

And so now if we go over to our compute node and dump the flows again and grep for NSH, you can see there are NSH flows here now. So we've created the chain in OVS, essentially, by talking to OpenDaylight through Tacker. And now if we curl again, you'll see the curls are coming back, but nothing is going through the VNF, because we don't have a classifier yet.

This command is a little long, so I'm just going to use one from history. In this classifier create command, you see we are creating a classifier named myclass onto mychain, which is the chain we just created. We're matching on source port 2005, destination port 80, and protocol TCP. That should be enough to classify the traffic from our HTTP client and force it through the service function chain. So I'll go ahead and create that. You can see it says pending create; here is the match criteria up here, which is what we just talked about, and the infra driver here is NetVirt SFC.

If I go and look at OVS now, I can grep on tp_dst, looking for port 80 there, and you can see there is a classifier flow here now for port 80; you can see no packets have hit it yet. The first action here is to move the tunnel ID into the NSH context header, and that allows us to preserve the tenant traffic after it gets through the chain. If you picture a bunch of different tenants with different traffic being classified onto a chain, how do you know, when you get to the end of that chain, where that traffic should go? So we store the tunnel ID, which gives us the tenant network information, so that once we egress out of the chain we know which tenant network to put that packet back onto.

And so now if we go back to our HTTP client, now that we have that, and we curl again, you can see over here that packets are hitting this VNF. I don't know if you can see it in the back, but it's incrementing the packet counter there. You can see it says NSH base, NSP 394, which, if you remember, was the exact same ID that OpenDaylight gave us back for the service path identifier when we did the tacker show command. And you can see here the NSI, which is the service index, is 255. What the service function does is decrement that SI and send the packet back out. You can see here we got the curl request back; sometimes this can be a little slow because we're running nested virt with QEMU and this Python script is not particularly fast, but it worked. And if we go and look at the compute node, if we grep on tp_dst 80, we can see there are 19 packets that hit that rule, and if we grep on NSH, you can see there are also 19 packets that hit the NSH flows. So our service function chain worked.

And so that's just a proof-of-concept demo to show how we can do service function chaining with OpenStack using IETF SFC and NSH. So if we go back to the slides now, the next question is, after doing a PoC demo like this, how do we get this upstream?
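The tenant-preservation trick Tim describes, copying the tunnel ID into an NSH context header at classification time and restoring it at chain egress, can be modeled in a few lines. This is a conceptual sketch only, not OVS action syntax; field names are illustrative.

```python
# Conceptual model of the tenant-preservation trick: the classifier copies the
# VXLAN tunnel ID (tenant network) into an NSH context header, and the last
# hop copies it back before returning the packet to its tenant network.

def classify(pkt: dict, rendered_service_path: int) -> dict:
    pkt["nsh"] = {
        "nsp": rendered_service_path,   # which chain the packet is on (e.g. 394)
        "nsi": 255,                     # starting service index
        "c1": pkt["tun_id"],            # remember the tenant network (VNI)
    }
    return pkt

def chain_egress(pkt: dict) -> dict:
    pkt["tun_id"] = pkt["nsh"]["c1"]    # restore the tenant network
    del pkt["nsh"]                      # strip NSH; packet resumes normal forwarding
    return pkt

# Usage: a packet from tenant network (VNI) 70 goes through the chain and
# comes out still belonging to VNI 70.
pkt = {"tun_id": 70, "dst_port": 80}
print(chain_egress(classify(pkt, 394)))   # {'tun_id': 70, 'dst_port': 80}
```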
So we talked to the Tacker folks and the Neutron folks and tried to figure out how to get some of this work upstream into the respective projects. Neutron already has a networking-sfc extension and project, and that networking-sfc project is supposed to solve exactly this problem: being able to do service function chaining. They don't do NSH right now, because it is not officially supported in OVS, but they're moving in that direction. So we felt that, to make this a proper solution, the service function chaining piece and the classification piece should actually be handled in Neutron by networking-sfc. We wouldn't be making calls from Tacker directly to OpenDaylight; we'd be going through Neutron, in a proper flow, down into networking-odl and talking to the Neutron northbound in OpenDaylight.

That left us with the question: what value, then, does Tacker bring? Because for service function chaining you have to know properties about the VNF. In the ETSI MANO documents, and I mentioned that Tacker is ETSI MANO compliant, they have the concept of VNF management and of network service orchestration and management. A piece of network service orchestration is the VNFFG, the VNF forwarding graph. If you picture a bunch of VNFs all connected together with different paths through the graph, that's what a VNFFG is. There is also a TOSCA definition and YAML format to do this exact same thing. So we felt it is much more proper and correct to be able to create a VNFFG descriptor.

These first three items here are the same as before: we create a descriptor using a YAML template that is already specified in TOSCA. We then have a VNFFG plug-in and extension that processes that input. When you instantiate a VNFFG, it figures out which service function chains to create by making calls down into Neutron to say: create these different service function chains and these different classifiers. That allows you to do some different things. It allows the VNFFG layer to find common service function chains, common paths through the graph, to optimize service function chain creation. For this first iteration in Newton we won't have anything like that, but down the road there can be chain optimization there. You can have algorithms to choose which VNFs to use in your forwarding graph: is this VNF overwhelmed, does it have too much load, should I use this other one instead? Or should I choose a VNF based on how many nodes it can scale to, how many instances it can load-balance across? So that's the value of putting the VNFFG into Tacker: it makes it ETSI compliant, and then it has a proper flow into Neutron.

This is that same diagram, well, it's not the same diagram, it's modified, but it's the exact same workflow we just talked about and showed previously with the demo. The first steps are the same: you have the VNFFG descriptor, which is a YAML template, you feed that into Tacker and the VNFFG is stored in a catalog, you then say create a VNFFG, that makes a call into Neutron, into a networking-sfc driver, to say create a chain, and networking-sfc then has drivers to talk to OpenDaylight. That's what we're planning to do for Newton with Tacker, Neutron, and the networking-sfc project. And so with that, I'll turn it back over to Ben to talk about the comparison between MPLS/BGP VPN and NSH and go over some of the projects. Thank you, Tim, thanks.
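As an aside on the chain-optimization and VNF-selection ideas Tim mentioned for the VNFFG work, here is a toy sketch of what such logic could look like: find the longest chain of VNFs shared by every path through the graph, and pick the least-loaded instance of a given VNF. It is purely conceptual and not Tacker code; the load figures are made up.

```python
# Toy illustration of VNFFG optimization: paths that start with the same
# sequence of VNFs could share one common service function chain, and a
# forwarding graph could pick VNF instances by load. Not Tacker code.

def common_prefix(paths: list) -> tuple:
    """Longest chain of VNFs shared by every path through the graph."""
    prefix = []
    for hop in zip(*paths):
        if len(set(hop)) == 1:
            prefix.append(hop[0])
        else:
            break
    return tuple(prefix)

def pick_instance(instances: dict) -> str:
    """Choose the least-loaded instance of a VNF (e.g. from monitoring data)."""
    return min(instances, key=instances.get)

paths = [
    ["firewall", "nat", "dpi"],
    ["firewall", "nat", "video-optimizer"],
    ["firewall", "nat"],
]
print(common_prefix(paths))                          # ('firewall', 'nat')
print(pick_instance({"fw-1": 0.82, "fw-2": 0.35}))   # 'fw-2'
```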
That was a great demo, and hopefully you enjoyed it. So in the last few minutes I'll give a brief summary of the different approaches. Here is a summary table comparing the BGP VPN approach and the NSH approach. As you can see, they have some similarities and some differences. For example, a classifier is required in the NSH approach but not in the BGP VPN approach, where steering onto the chain is destination based. One is L2 and the other is L3, and one targets virtual deployments while the other supports both physical and virtual deployments. So that's a very brief summary of the approaches.

As Tim showed in the demo, a lot of different projects are involved in implementing SFC, and there are other projects that are, or could be, beneficial for creating service function chains in the different types of implementations. For example, in OpenStack there are several projects. The Neutron MPLS VPN-as-a-Service project basically provides APIs to create, delete, and list the different VPN services. There is another project, Neutron networking-sfc, which Tim also just mentioned and which will be used for the next step of the upstream work; it basically defines two concepts, the port chain, which is the ordered list making up the chain itself, and the flow classifier, which provides the classification function, and it also provides APIs to create chains. There is also the BGP VPN extension for Neutron networking, which basically provides functions and APIs to create BGP VPNs, and the Tacker project for orchestration and bringing up the VNFs.

In OpenDaylight, Group Based Policy can serve as a classifier, and the Service Function Chaining project is the backend implementation of service function chaining itself. The VPN Service project also provides APIs to create different types of VPNs, including L3 VPN and L2 VPN. And obviously the NetVirt project, as Tim just mentioned, is used to implement the virtual networking and can also be used as the classifier in the setup of the service function chain in the demo.

And in OPNFV, the focus is primarily on integration. We have a VNF forwarding graph project that provides the graph description and requirements so they can be implemented in the OpenStack projects, and we have the Service Function Chaining project, which basically integrates the backend components from ODL with the APIs from OpenStack; the SDNVPN project plays a similar role but focuses on the L3 VPN part.

So, having seen these different projects and technologies, what are the key takeaways? From the operator perspective, from the telco perspective, we are very glad to see diversified implementations in this space; they provide different options and are open to innovation and new service functions in this area. But we want diversity, not fragmentation: diversity means the solutions are interoperable and compatible, not siloed.
And from an end-user perspective, we are most interested in, first, common APIs that can leverage the different backend implementations. Whatever the approach, you need a common API, a common data model, and a method that can support inter-domain, or end-to-end, SFC use cases across heterogeneous networks, which means across the access network, the core network, the backbone, and the backhaul, so that we can have this type of end-to-end service function chain; I think China Mobile also asked about this in the last presentation. That is a common goal, and we want to see all the different types of solutions work toward it. We also want to see deployment models that can leverage existing network capabilities to minimize the total cost of ownership. That is very important, because we don't want today's investment in the network to become obsolete, and we have to serve our end users and give them an enhanced experience without interrupting their services. Okay, so that's all for the presentation today, and thank you for your attention; we'll take questions. Tim, come up. Questions? Yes.

Hi, thank you for the presentation. I'll start with my observation, then the question. Service function chaining mainly defines the involvement of VNFs for a particular traffic flow, right? It could be multiple VNFs, it could be one or two VNFs, which process that incoming traffic one after another. So in that sense it's a path, yes. You're creating a path by inventing a new header and a new encapsulation; that's why you need to modify OVS, and that's why you depend on the underlay and overlay network. So this solution will not work without an overlay network, because of this new header: none of the top-of-rack or spine switches will understand what it is or know what to do with it. That's why the overlay is a must. And in reality, how many service function chaining paths will we actually be using in the field that we need a new header? That's another question. So why, and don't get me wrong, but for me it's like we are hunting a turkey with an RPG. Why not use another approach that reuses existing headers to create a path? For example, in Neutron we assign VLAN ID ranges to tenants, and some of them sit idle; you could easily use those and create an on-the-fly VLAN tunnel for those on-the-fly service paths. Thank you.

You want to take it, or you want me to take it? Okay. So if you think about a path, like you were saying, a service function chain can be described as a path of VNFs, right? If you have this path of VNFs, the beauty of NSH is that it preserves path information through those VNFs. If you know that a packet comes into the third or fourth VNF in your path, you know exactly where it came from; you know it has hit the other parts of that path, because you use the service index to decrement and know where you are in the chain. You also know, based on the path ID, which path you're on. So you have more metadata about the chain than you would if you just sent a packet to each VNF and set up manual flows to make it go in and out. In addition to that, it helps you distinguish chain traffic from tenant traffic.
So if you have, like in the current networking-sfc implementation, they're not using NSH, but if you have multiple VNFs on the same OVS node alongside tenants, and traffic comes into OVS, how do you know which service function chain the traffic coming out of a VNF belongs to? You have to reclassify it at every point. With NSH, you classify once and send it through the chain.

To your point about VNFs that are not NSH aware, that everyone would have to add an NSH-aware protocol or whatever to their VNF, to their product: in NSH, that's the concept of the NSH proxy. So you don't have to. You have an OVS, it can be an OVS, acting as the NSH proxy. It reads off the NSH header and strips it, then sends the packet to the VNF behind it, so the VNF is able to function as it normally would, and the proxy in front of it does the NSH work so that your VNF doesn't need NSH awareness. That's kind of the answer to why NSH, I guess. And I think, in addition to that, we can also use the BGP VPN-based approach, creating the routing tables and connected L3 VPNs for the different paths along the chain. So that's what's different about having diversified solutions you can use.

Forgive my ignorance, just a couple of quick questions. One of the things I'm curious about is how SFC chains might be load balanced between different applications; I'd like to hear you talk about that just real briefly. And the second situation is when I have the potential for per-user SFC chains, can you talk about that a little bit? I have an application where users sit on a VPN and they want to be dropped off into an OPNFV network, and then each user is treated differently with their own service function chain. So can you just talk about those two possibilities a little bit?

Sure, let me take it. So the load balancing part is a problem that we're going to solve later down the road, but how you can do load balancing is, as I mentioned, you can ask for it in a couple of different ways. The VNFFG extension can ask the VNF manager to give me a VNF that's scalable, so you know it can auto-scale to a certain amount depending on load. You can monitor the VNFs that are part of that chain. So maybe you create a chain that is part of many paths: you have many paths through this graph, and maybe the first couple of hops are always the same for every path, so you create one common service function chain, and maybe you're worried about those VNFs not being able to scale due to heavy traffic load from certain paths through the graph. In that respect, the VNF manager already has monitoring in place to watch the usage and to scale out the VNFs that are part of that chain. So that's kind of my thinking along the lines of being able to load balance the chain.

The second part, I think your second question, was being able to choose different service function paths based on tenant information. In OpenDaylight, you can match on tenant ID, you can match on the tenant network, different things like that. So you could do that in classification, but once the packet gets into the chain, onto its path, it follows the path until it egresses. There is the concept of being able to do more graph choice, so that you reclassify inside of a service function.
So maybe a service function figures out: oh, this packet is from this tenant, I need to reclassify it to go through a different path. That's something NSH also provides for, but it's something we haven't thought about solving yet in this initial implementation. And also, in the BGP VPN approach, when you do load balancing, you can install a load balancer in the egress VRF so that the traffic can be scheduled to the different next hops of the SFs. Or you can have different instances connected to a single ingress VRF, and the ingress VRF can install the load balancer and balance the load across the different instances of the service functions. So that's how it is solved in the BGP VPN-based service function chains.

Okay, Mr. Jenner. Hi, good morning, thank you. I have a couple of questions, or really just one major question. What is the strategy in the NSH approach for dealing with path MTU discovery for the applications? Is it something that's kind of reset? Is the NSH header expandable in size? How does that affect the path MTU that the end applications experience? And if it's path MTU discovery, which component of the NFV infrastructure would be responsible for that, or is it everyone for themselves?

At this point, it's kind of everyone for themselves. So you have the NSH header, and then you're going to have some type of encapsulation header, because NSH isn't a transport header; you need something on top of it to transport the packet on the overlay. So you're going to have additional header bytes there. Modifying the MTU based on that is not something that I've thought about, at least for now, so I'm not sure. Okay; I mean, it's a very common problem in a non-virtual environment, MPLS backbone, WAN, and especially if the VNF itself might be a third party: the NFVI owner doesn't really own that, it's a partner providing that processing, so how do you enforce them to do something? Whose customer is it? So I don't know, it just sounds very challenging. Yeah. Basically. So thank you very much. Yeah, thanks.

Cool demo, by the way. Thank you for that. Oh, thanks. So, the concept of an SFC controller, basically your CLI when you instantiate that graph: is there anywhere in OPNFV or the NFV MANO work where they're going to standardize where that function lives? Is it going to be part of the orchestrator, or is it going to be BSS? Do you have any thoughts about that?

It's not going to be; so in OPNFV, there are many different solutions, right? We're not going to standardize one way to do service function chaining, and you saw the other approach here, an alternative to NSH. In OPNFV you can do pretty much whatever types of solutions you want to do. So if you want to try out this service function chaining PoC that we did, you can go download the Apex installer and run it on your CentOS 7 machine. But there's no standardization for it. In OpenStack, we're hoping to be able to do this VNFFG ETSI MANO orchestration in Tacker, and then, as I mentioned, Neutron will handle the networking-sfc part, the service function chain creation, and that can have different backend drivers, so it could use OVS or ODL or another SDN controller. Does that answer your question, or...?
Well, I was looking more northbound, like... Oh, to the OSS? Yeah. I mean, that instantiation where a subscriber clicks on a portal somewhere and says, I want a firewall in my chain now: does that function get standardized anywhere? It's going to be probably some custom software at some point. Yeah, that's not going to be standardized, I don't think. So they would have to integrate with Tacker's REST API to be able to do that. Thanks. And for now, I think OPNFV is currently focused on the infrastructure part, but we are expanding the scope to cover MANO and all the different projects there if we see the need from the market, and I'm sure there will be some projects proposed to solve this problem in this area. Yeah. Thank you. Any more questions? Okay, all right. Thank you. Thank you.