Okay, let's get started. My name is Kathy Zhang, I'm a principal architect at the Huawei Silicon Valley Center. My name is Paul Carver, I'm a principal member of technical staff at AT&T. My name is Art Fratica, I'm an architect at Deutsche Telekom, caring about data center networks.

So today we are going to go through the following topics. First we are going to introduce what a service chain is, and then we're going to talk about what problems service chains solve. Then we're going to give a brief introduction to the OpenStack service chain solution, and later we're going to deep dive into some example service chain use scenarios. After that we are going to summarize the benefits of the new OpenStack service chain solution, and at the end we're going to give our project information and do a demo.

So, how many people know what a service chain is? Raise your hands. Okay, very good. By service chain, what we mean here is that through a centralized, OpenStack-based service chain management and control platform, different tenants' flows can be automatically provisioned to go through different sequences of service functions. Service functions include things like NAT, firewall, intrusion detection service, video optimizer, load balancer, etc.
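As a toy illustration of that definition — every name below is invented for this sketch, none of it is an OpenStack API — a service chain can be thought of as a per-tenant mapping from a class of traffic to an ordered sequence of service functions:

```python
# Toy model only: different tenant flows are provisioned through different
# ordered sequences of service functions (all names are made up).
service_chains = {
    ("tenant-a", "web"):   ["nat", "firewall", "intrusion-detection"],
    ("tenant-a", "video"): ["video-optimizer"],
    ("tenant-b", "web"):   ["firewall", "load-balancer"],
}

def provision(tenant, traffic_class):
    """Return the ordered service functions this class of traffic traverses."""
    return service_chains.get((tenant, traffic_class), [])

assert provision("tenant-a", "web") == ["nat", "firewall", "intrusion-detection"]
assert provision("tenant-b", "ssh") == []  # unclassified traffic is not chained
```

Everything that follows in the talk is about expressing this kind of mapping through a real Neutron API and enforcing it in the data plane.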
These service functions can run in a VM, in a container, or on a physical box. A service function chain generally consists of several of what we call virtual network functions. These can be things like firewalls and load balancers, or they can be more specialized things.

From my perspective, coming from AT&T, we're very interested in telco functions. We're not going to get into the details of what those are in this talk, but we have a need to provision a flow of traffic that goes through a variety of very specialized applications, and these applications are generally doing multiple packet handling functions, whether it's security, caching, or other things, and we have to handle a large amount of traffic. So what we've illustrated in this diagram is a flow of traffic that is not something we can simply pass through a load balancer on a VM, because the flow is many tens or hundreds of gigabits per second. It's not as simple as "I'm going to spin up a load balancer, run it on a VM, run all my traffic through that VM, and let that VM load balance other VMs." We need to direct the traffic at the network layer in order to have the capacity, and we need to be able to scale that out.

This is the same picture, but what we've shown here is that each of these functions, VNF 1, 2, and 3 (virtual network functions 1, 2, and 3), really consists of multiple virtual machines. At each hop in the chain those virtual machines may be doing different amounts of CPU work, and therefore we may need more instances at one hop than at another, and we need to be able to scale those independently. We also need to be able to force the traffic through consistent paths through these service chains, because some of these service functions may be stateful and may need to see both sides of a flow. So we may need consistency where one flow passes through, as shown here, the first VM in the first function and the second VM in the second
function and the first VM in the third, while a different flow of traffic follows a different pattern.

Now, a number of AT&T people have spoken about our initiative called Domain 2.0, and to us this is the problem that service function chaining helps us address; we have published a white paper on this Domain 2.0 strategy. What you see here is just a selection of physical network functions. We have historically engineered these boxes into the network using physical cabling between the boxes to make the traffic go the way we want. Sometimes we substitute VLANs, and just provision VLANs around from box one to box two and then back to box three. Our Domain 2.0 vision is really about migrating from physical to virtual. Well, we can't do that if the virtual network doesn't have the constructs that the physical network does.

We also use the term policy-based routing, so anyone from a network background will know what that means. If you're not from a network background, you can think of policy-based routing, in simplest terms, as any time you want to direct traffic on something other than the destination IP address. Directing traffic to a destination is easy. Any time you want to direct traffic based on other criteria, and control the flow so it goes somewhere other than where the destination IP address would imply, that's policy-based routing, and again, service function chaining allows us to address that problem. If we can't fit the traffic to the physical network topology, we're out of luck. So I'm going to go to the other slide; I don't want to talk too much about that one. Real quickly, the way we used to do these things is network diagrams (Visio is a big part of that), spreadsheets showing cabling plans or VLAN plans,
how it is all going to be organized; run books or network description documents, long, lengthy documents describing in laborious detail how to configure each piece of equipment so that the traffic goes the way we want; emails ad infinitum discussing it; PowerPoint presentations to explain to people how it all works; and then these documents get passed around to people to implement.

Okay, so to solve the problems which Paul just described, the networking-sfc project was initiated in OpenStack Neutron. With this project, through a few API calls, the service chain can be automatically set up and the service paths can be automatically provisioned. I will not go into much detail on how the whole model works or the API details, because we have another session at 5:30 that deep dives into the architecture and the technical details, so here I'm just going to give a very high-level introduction.

The API consists of two parts. One part is the flow classifier: basically, we need to specify what flows will go through this chain. There can be multiple flows associated with the chain; if multiple flows go through the same chain, there will be multiple flow classifiers. The other part is an ordered sequence of port pair groups, where each port pair group is a group of functionally alike service functions. For example, the bottom of the slide shows a chain where we want the traffic to go first through an IPS, then a firewall, and then a video optimizer. We need to specify a port pair group for the IPS, a port pair group for the firewall, and a port pair group for the video optimizer. If we have two IPS service VMs providing the IPS service, then the IPS port pair group will consist of those two IPS service VMs, so that the traffic can be automatically load balanced across them. Same for the firewall port pair group and the video optimizer port pair group. So here are the API calls; there
are four API calls to set up the chain. The first API is called neutron port-pair-create. Basically, we need to create a port pair for each service function. For example, if there's a firewall service function, we need to create a port pair for that firewall service function, specifying the ingress neutron port and the egress neutron port of that service function.

After that, we need to create the port pair group. A port pair group can include one or more port pairs, with each port pair corresponding to a service function, so this groups the functionally alike service functions together.

Then we need to specify the flow classifier, and there are two ways to do that. One way is to specify the n-tuple of the flows; it could be a five-tuple or a seven-tuple. The other way is to specify neutron ports: the source neutron port and the destination neutron port, or just one of them. If you specify only a source neutron port, any traffic originating from that neutron port will go through the chain. If you specify only a destination neutron port, any traffic destined to that neutron port will go through the chain. Of course, you can specify both source and destination, which means the traffic between those two ports will go through the chain.

After that, we create the port chain, in which we specify the sequence of port pair groups. In the creation of the port chain you can specify a flow classifier, but you don't have to: if you don't specify one, the chain will be created but it will not be active, and later you can associate one or more flows with the chain by updating the port chain. Again, we're going to talk about this in more detail in the other session. Okay, so now we have heard a lot of theory and how the model looks.
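To make that sequence of calls concrete, here is a sketch that simply builds illustrative request payloads mirroring the four steps just described (port pair → port pair group → flow classifier → port chain). The field names follow the talk's description of the API; the UUIDs and names are invented, and nothing here contacts a real Neutron endpoint:

```python
# Illustrative payload builders only; no real API is called.
def port_pair(name, ingress, egress):
    # one port pair per service function instance (ingress/egress neutron ports)
    return {"name": name, "ingress": ingress, "egress": egress}

def port_pair_group(name, pairs):
    # groups functionally alike service functions; traffic is balanced across them
    return {"name": name, "port_pairs": [p["name"] for p in pairs]}

def flow_classifier(name, **match):
    # match may be an n-tuple (protocol, ports, IPs) and/or logical neutron ports
    return {"name": name, **match}

def port_chain(name, groups, classifiers=()):
    # without classifiers the chain exists but carries no traffic yet;
    # classifiers can be associated later via an update
    return {"name": name,
            "port_pair_groups": [g["name"] for g in groups],
            "flow_classifiers": [c["name"] for c in classifiers]}

ids1 = port_pair("ids-pp-1", "uuid-ids1-in", "uuid-ids1-out")
ids2 = port_pair("ids-pp-2", "uuid-ids2-in", "uuid-ids2-out")
fw   = port_pair("fw-pp-1", "uuid-fw-in", "uuid-fw-out")

ids_group = port_pair_group("ids-ppg", [ids1, ids2])  # two IDS VMs, load balanced
fw_group  = port_pair_group("fw-ppg", [fw])

fc = flow_classifier("web-fc", protocol="tcp", destination_port=80,
                     logical_source_port="uuid-client-port")

chain = port_chain("chain1", [ids_group, fw_group], [fc])
assert chain["port_pair_groups"] == ["ids-ppg", "fw-ppg"]  # ordered hops
```

The real resources and their attribute names are defined by the networking-sfc API; this sketch only shows how the four pieces reference each other.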
Let's look now at some examples. First, an example from the old world. Assume that you are running a web server and you want to place, for example, security in front of it: an IPS and a firewall. Everybody maybe knows what that means if you have a physical data center: you need to bring in devices, you need to create VLANs, you have a lot of actions to do. If you're talking to a network administrator: okay, you have to create VLANs, you have to connect the firewall and the IPS, you have to connect the firewall to the segment where the web server is, and at the end you have to reconnect the router to another VLAN so that the traffic can flow from the router to your web server. That costs a lot of time and is very inflexible, and you need to hope that everything works as expected, because the fallback also costs a lot of time. There are other ways to insert such service functions in the old world, as Paul mentioned; you can use policy routing, for example. But what's really missing is that you cannot select the traffic that should go through such a service chain. These devices are a bump in the wire; they help you solve your problem, but they don't help you speed things up.

With the proposed model of the service chaining API, the same thing can be done with Neutron, but in a much more flexible way. Instead of putting the devices directly into the path between the router and the web server, you create a path alongside, and you can select the traffic that passes, on the lower side, through the IPS and the firewall. So if you take the OpenStack Neutron approach: okay, like Kathy showed with the OpenStack API, you create neutron ports, you create the port pairs which specify the ingress and egress for each service function, you create port pair groups.
I will come back to that on the next slide. You need to boot your firewall and your IPS, you create your classifier that classifies the traffic that should pass through the service chain, and at the end you create the port chain, which means that the classifier is deployed in the network and the traffic is redirected through the firewall and the IPS to the web server. And if you do that only for port 80, only port 80 traffic is redirected. If you need to do it, for example, for port 443 as well, you could add a second service function chaining path with completely different elements in it, and you can do that without changing your application or the network attachment of the application. That creates a lot of flexibility, because if you deploy such a thing you also have an easy fallback.

So then the next thing: scaling. You can also scale along the path of the service functions. What happens if one IPS is not enough, or one firewall is not enough? Then you can use the port pair groups: you can place multiple VMs into one group, and the traffic is load balanced at the network level across these service functions. So there's no need for a load balancer that you have to deploy on the application side; you just use what the network provides, but the network has to provide such an SDN function to distribute traffic to the different service functions.

So, continuing the theme I started earlier on this physical-to-virtual migration: this enables us to move away from those documents, the myriad of diagrams and spreadsheets and so forth, to machine-readable code. And this is sort of the vision that we're pursuing in terms of physical-to-virtual migration.
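The scaling point above — spreading flows across the VMs of a port pair group while still letting stateful functions see both directions of a flow — can be modeled with a symmetric flow hash. This is only a conceptual sketch, not the actual networking-sfc OVS driver logic:

```python
import hashlib

def canonical_flow(src_ip, dst_ip, src_port, dst_port, proto):
    """Order the endpoints so both directions of a flow get the same key."""
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return (proto,) + (a + b if a <= b else b + a)

def pick_instance(flow_key, group):
    """Deterministically pick one service VM out of a port pair group."""
    digest = hashlib.sha256(repr(flow_key).encode()).hexdigest()
    return group[int(digest, 16) % len(group)]

chain = [
    ["ips-vm-1", "ips-vm-2", "ips-vm-3"],  # hop 1 scaled to three instances
    ["fw-vm-1", "fw-vm-2"],                # hop 2 scaled to two
]

fwd = canonical_flow("10.0.0.5", "10.0.0.9", 40000, 80, "tcp")
rev = canonical_flow("10.0.0.9", "10.0.0.5", 80, 40000, "tcp")
# both directions of the flow land on the same VM at every hop,
# so a stateful VNF sees both sides of the conversation
assert [pick_instance(fwd, g) for g in chain] == \
       [pick_instance(rev, g) for g in chain]
```

The design choice being illustrated: per-flow hashing gives load balancing without a dedicated load balancer VM and without per-flow state in the switch.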
We want all of these elements, and various parts of OpenStack figure very importantly here. We want a catalog of service function software, as opposed to an order list of physical boxes. We want the network topology to be described in code, and that is largely working out as YAML templates using Heat, describing the topology as something that a computer can parse rather than a human being parsing a Word document to implement it. And service function chaining is a critical component of allowing us to do the policy-based routing and create the complex network topologies that we already have in the physical world. If all we had were simple networking constructs — flat networks, or tenant networks behind a floating IP — we couldn't build the kind of robust network topologies that we need in order to replicate what we have in the physical world and complete the physical-to-virtual migration.

So, to summarize the benefits of the new service chain solution. First, low operational cost and high agility: we do not need to do advance business planning anymore, because that is replaced by real-time provisioning, and we no longer need manual configuration of all the network boxes, because that is replaced by automatic provisioning. Weeks of provisioning time are replaced by minutes. Another key point is that it is application aware: you can now go to the granularity of the application level, so even for the same tenant, different application flows can go through different service chain paths. Another is low capital expense: we do not need to allocate for peak-time capacity, because we can allocate resources as needed. And we have received requirements for this feature from many use cases: public cloud, hybrid cloud, wireless access to the internet (that's basically the Gi-LAN scenario), voice over IP, and video conferencing. Excuse me. So
here's some information about the OpenStack Neutron service chain project. This is an officially approved Neutron project, so it has a public repo, called networking-sfc, and the code has been uploaded to that official repo. We started the work in the Liberty cycle, and we're going to release in the Mitaka cycle. We have already implemented the CLI, Horizon, and Heat code to support service function chaining. We have also implemented the code on the Neutron server, which includes the API, DB, driver manager, common driver API, and OVS driver, and we have also implemented the OVS agent on the compute nodes. Maybe when I talk about this you're a little bit confused about which module is which; in the 5:30 session we're going to have an architecture diagram which shows very clearly what the components are and how you can plug different drivers into it — different SDN controller drivers, or just the OVS driver — to implement the data path and the control plane. And here are some information links: if you want to know more about this, you can follow these links. We have weekly project meetings; you're welcome to join if you would like to know more.

So now I'm going to do a demo of this new service chain solution. The demo will show three chains. The first chain is for an ICMP flow from one source to one destination; it's going to go through an IDS service function and then a firewall service function. The second chain is also for an ICMP flow, but the source address is different from the first flow; it will only go through the IDS service function. The third chain will be for HTTP traffic over TCP port 80.
It will only go through the firewall service function. So let me go to the demo now.

This slide shows the topology of the first chain. The traffic will originate from the source client VM. It will go through port pair group one, which consists of two IDS service VMs (let me turn the playback speed up a little bit), and then it will go through the second port pair group, which consists of one firewall VM, and it will reach the destination server VM. We're going to turn on tcpdump on the firewall console, so that if the traffic goes through the firewall VM we will see packets showing up in that window. We are also going to turn on the Snort console window on the IDS, so that when the traffic flow goes through the IDS we will see packets showing up in that window. The traffic will be triggered by a ping to the destination server VM from the source client VM.

Now let's go to the Horizon screen. First we create a port pair for the IDS VM. We give it a name and description, then we specify its ingress neutron port and its egress neutron port, and then we do the create. We see that this IDS port pair has been created for that service VM. Now we create a second port pair for the second IDS service VM: again, give it a description, specify the ingress neutron port, specify its egress neutron port, then do the create, and we see that this second IDS port pair has been created. Now we create a port pair for the firewall service VM: we give it a name and description, specify its ingress neutron port and egress neutron port, and we see this firewall port pair is created. Now we're going to create the port pair groups.
We're going to group the service functions together. First we create the port pair group for the IDS. We give it a description, then we put IDS one into this port pair group, and IDS two also goes into this IDS port pair group. Now we see this port pair group is created with two IDS port pairs. Now we create the second port pair group, which is the firewall port pair group: description, and we put the firewall one port pair into it, so there's only one instance in this port pair group. Each port pair group can have one or more service function instances. Now we see this firewall port pair group has been created too.

Now we're going to create the flow classifier. We create the first flow classifier and give it a description. It will be ping traffic — ICMP ping traffic from that source to that destination — so we specify its source IP address and destination IP address. Now we see that flow classifier one is created.

Now we're going to create the port chain. We give it a name, port chain one, and a description, and we do the create: we specify the port pair groups associated with this port chain, the IDS port pair group and then the firewall port pair group, and then we specify the flow classifier associated with the chain, which is flow classifier one. So now we see that the port chain has been created with two port pair groups, IDS and then firewall, and a flow classifier. So now, these are the windows.
This is the client VM window, this is the server VM window, this is the IDS service VM window, and this is the firewall service VM window. Now we're going to ping: the traffic will be triggered by the ping command from the client VM to the destination server VM. We see the ping command, and then we see the traffic showing up in the IDS service VM window, and the traffic flow also shows up in the firewall service VM window. Now we're going to stop the ping. We see that 24 packets have been transmitted and received; this was a ping to the destination server VM, and we see that this packet flow went through the IDS service VM, and we also see that it went through the firewall service VM — here it shows the packets.

Now, this is the second port chain. Again, this one will originate from another client VM, so you can see the source IP address is different from the previous flow. It will go through the IDS port pair group, skip the firewall, and then go to the server VM — so in this chain the traffic only goes through the IDS service VM. We're going to again turn on tcpdump on the firewall VM, and then turn on the Snort console output on the IDS service VM. The packets will be triggered by a ping command to the destination server VM from this source client VM.

So here we're going to create a new flow classifier for this second port chain. Give it a name and description. This will again be an ICMP flow; we specify its source address, and this source address is different from the first chain's flow source address. We create this flow classifier, and now we see that the second flow classifier is created. Now we're going to create the chain: we are going to create the second port chain, and we're going to put the firewall — the IDS —
Sorry the ideas prepare group into this port Port chain we're going to put the second flow class file associated with this port chain So we see that the second chain Port chain is created with only ideas On port pair group Again, this is a client VM window. This is source client VM window destination server VM window ideas service VM window Then firewall service VM window And the traffic will be triggered by the pin command From to the destination server VM From a new client Vm Now we see that this traffic Shows up only on the ideas service VM window Now we're going to stop this pin command We see the package 23 packets has been transmitted and received Yeah, this packet is go to the destination server VM And we see on the ideas service VM window. We see this package there. So it goes through this and we didn't We didn't see any packet showing up on the firewall service VM Which means the traffic flow does not go through the firewall just as the api is specified So this is six third port chain the last port chain we're going to show again It's going to go so this time it's going to go through the firewall service VM It will skip the ideas service VM And this is so will be a htp traffic We're going to turn on htp server on the server VM Again, we turn on tcp down on the firewall VM and snort console output on the ideas service VM The traffic will be triggered by w get command to that htp server Now we're going to create a third class file associated with this third chain So we give you a description. 
It's HTTP traffic over TCP. We give it the source address and destination address, and then we give it the TCP port, which is 80. Now we see that the third flow classifier has been created. Now we create the third port chain: we put the firewall port pair group into this port chain, and associate flow classifier three with it. So now we see that the third port chain has been created with the firewall port pair group and the flow classifier associated with it.

Same as before: client VM window, server VM window (on the server VM we turn on the HTTP server), then the IDS service VM window and the firewall service VM window. We trigger the traffic through a wget command on the source client VM. This wget command goes to the destination server VM. We see that the server received this HTTP traffic, and we also see this traffic show up on the firewall service VM. There's nothing showing up on the IDS service VM, which means the flow only goes through the firewall service VM and does not go through the IDS service VM, just as specified by the API. We do the wget command again — same thing, we see all of this showing up. That's it.

So now we are open — yeah, thank you — now we're open to questions. I hope we have time. Okay. And yeah, what's your question? [Audience question] Kathy, if I can repeat the question, because I'm not sure it'll show up on the recording: the question is, how do we attach the service functions to the chains, and what was done there?
What you may not have quite gotten from the description is that we're attaching neutron ports to the chain. The service function would be a VM with either one or two ports; the ingress and egress port can be the same if the service function supports that, but in the typical scenario we might have a service function that has an ingress port and an egress port. So you create that through Nova, through your Heat template, however you create the VM; you get back two neutron port IDs, and then you pass those two neutron port IDs, in the correct order, to the service chaining API. That way the SFC knows what port to send the traffic into and what port to take it out of, and it's all neutron ports representing VM interfaces.

[Audience question] Outbound is what? Oh, so for that one, you're saying its outbound is multiple. Let me repeat the question again: the question is, what if you have a service function with one inbound port and multiple outbound ports? My question back to you: we're talking about neutron ports representing vNICs — are you talking about a load balancer where it actually has eth2, eth3, eth4 and will send traffic out different ports? I think the short answer is we haven't really considered that use case.

Okay, let me take that question. The thing is, if you think of the load balancer as one of the service functions, then how the load balancer distributes the traffic to its different downstream applications is internal to the load balancer. We model the load balancer plus its downstream entities all together as a whole: traffic goes into the load balancer, and it balances to whichever application, right?
That's internal, and then when the traffic goes out, that out port will be the outbound neutron port, and from there it goes to the next service function. So in our model, the load balancer's ingress port is the ingress neutron port, and the egress is the port where traffic exits the load balancer toward the next hop, connecting to the switch. The load balancer's connections to all those different applications are internal and transparent. Actually, this problem has been discussed in the IETF community. Okay, sure.

Good question; it could be done two ways. You can centralize it; in our reference implementation it's distributed, because we do not always know the situation. For example, if we specify that the flow comes from a neutron port, then we know where it is located; but if you specify the flow classifier as a five-tuple, we don't know where the traffic will come from, so it's distributed. We implemented the reference implementation in OVS, but that's just the implementation: you can implement it as centralized, but then you have to really know that the traffic will go through that centralized place. In the Gi-LAN scenario, for instance, there's a PGW, so you can implement it there; but in a data center, east-west traffic could originate anywhere. Okay, this lady.

[Audience question about the flow classifier, e.g. when inserting a new service function] Okay, I'll take that. First question — oh, repeat the question, okay. So the question is: in Neutron core there's an initiative for a flow classifier, and here in this project we implemented a flow classifier; is there any plan to put this back into Neutron core?
That's a very good question. Yes, we have a plan to do that; we have actually discussed this. Our classifier is already quite comprehensive, but we can evolve it into a common classifier that could be used by service function chaining, by QoS, or by any other feature. There is a spec review for what's called the common classifier. It hasn't happened yet, but there is absolutely an intent that these various functions that need to classify flows should be consolidated; it's just a time-frame question. This project implemented one, QoS implemented one, but yes. That's why we put the flow classifier in a separate plugin, which is very independent from the service chain, so it's quite portable.

Oh, sorry, second question. The second question is about whether we can dynamically change the chain path based on a service function's processing result, right? Yes, that's supported by the design, although we haven't implemented it yet. The way to support it is that when a service function has some processing result, it can feed that back through metadata. That brings up another topic: we need to support the network service header, which is defined in the IETF, and then we can define different types of metadata. For example, a firewall can feed back and say, after processing a few packets, "you don't need to send all the packets of this flow to me again, it has already passed," so the traffic can just bypass this firewall. It can send that back through either the metadata or a bypass bit, which is defined in an IETF draft we're working on. So a lot of rich functionality can be enabled through this metadata passing mechanism.
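As a toy model of that bypass idea — the flag name and the chain logic below are invented for illustration; the real encoding would come from the IETF NSH work just mentioned:

```python
# Invented model: a firewall marks a verified flow so the chain skips it later.
class Firewall:
    def __init__(self, verify_after=3):
        self.seen = {}
        self.verify_after = verify_after

    def process(self, flow_id, meta):
        n = self.seen.get(flow_id, 0) + 1
        self.seen[flow_id] = n
        if n >= self.verify_after:
            meta["bypass_fw"] = True   # hypothetical metadata / bypass bit
        return meta

def traverse(hops, flow_id, meta):
    """Send one packet of a flow through the chain, honouring the bypass bit."""
    visited = []
    for name, fn in hops:
        if name == "firewall" and meta.get("bypass_fw"):
            continue                    # chain skips the firewall hop
        meta = fn(flow_id, meta)
        visited.append(name)
    return visited, meta

fw = Firewall()
hops = [("firewall", fw.process)]
meta = {}
for _ in range(3):                      # first three packets are inspected
    visited, meta = traverse(hops, "flow-1", meta)
assert visited == ["firewall"]
visited, meta = traverse(hops, "flow-1", meta)
assert visited == []                    # fourth packet bypasses the firewall
```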
It's important to note that this is an initial release; it'll really be in the Mitaka time frame that this becomes something — I don't think you would want to deploy it right now. In fact, Kathy referenced all this code being in the repo, but not all of it has merged yet. So when we say it's in the repo: it's on review.openstack.org, and not all of the pieces have merged, although all of the pieces are open for review. So if you do have specific code comments, many of the reviews are still open; some of them have merged, but it is still open.

But I would like to emphasize that the API has considered all the questions that have been raised. That's why we have a service function parameter specification: you can extend it to include those new functionalities. We just haven't had time to implement them yet, so we didn't fully define them yet. Again, in the 5:30 talk we're going to talk about the roadmap. Okay, so this lady.

[Audience question about flow policy] I'm not sure what you mean by flow policy; I don't think this is related to that. No, because in the ONF we have an L4-L7 working group on service chaining — I'm part of that group and an author of that specification — so it's separate. We have a flow classifier; yeah, that's different. Probably not, because OpenFlow is more a southbound API: we have the northbound Neutron API, which is then translated southbound to program the switches — that's where OpenFlow sits. So yeah, it's different. Okay, last question. Maybe I'll take that; the gentleman over here — we've run over time. To repeat the question: can we classify on things other than IP address?
Oh yeah, of course. The answer is yes; the examples all showed classifying based on IP address, but the full list of what you can classify on was visible, though possibly not easy to read from the back of the room. As to whether it's a comprehensive list: we would welcome feedback if there are things we have not included as classification elements that you think need to be included. QoS flags are not one of them currently, so that could be some feedback. But the intent of the flow classifier is to be able to classify on some combination of criteria, and you don't have to specify them all: in the demo, you saw that a bunch of fields in the Horizon dashboard were skipped over, and that was because we weren't classifying on those things in that particular flow classifier. I want to add that we also have classification up to the L7 level; you can classify on a URL, for example. Okay, but — I think they're kicking us out. Oh yeah, okay. So, okay, thank you everyone.