Hello, I think we can start. Welcome to our session today. We will discuss how to use Kata Containers in order to deploy an NFV MPLS EVPN. My name is Marian; I am a system architect at 1&1 IONOS. 1&1 IONOS is the largest hosting company in Europe, with more than 90,000 servers in operation and around 12 million domains under management. We are a member of the United Internet group and we have been in this industry for more than 20 years now. Last month 1&1 combined its web hosting, applications and server product lines with the ProfitBricks cloud infrastructure, becoming 1&1 IONOS, so the name is quite new. I have next to me my colleague, Alexander Bogdan Pica. I will let him introduce himself, and after that we can move forward with our presentation.

Hello everyone, my name is Alex. I'm a software developer within 1&1 IONOS, and that's about all I can say about it.

Here are the topics we will be covering today. First of all, there's Kata Containers. How many of you know what Kata Containers is? Okay, well, for the others: Kata Containers claims to provide the security of a VM with the boot times of a container, so it's some sort of lightweight VM. We are going to show you how an FRR router can be run inside a Kata container; we are going to use FRRouting as our NFV in order to build an EVPN topology between two OpenStack data centers. That is the scenario we will be covering in our demo. We will also discuss other possibilities where Kata Containers could prove quite useful.

Kata Containers mainly relies on two components: a kernel image — a Linux guest kernel — and a root filesystem image. So in order to use a different kernel, one that you, let's say, compile yourself with certain flags enabled, all you have to do is change the configuration file, as you can see on the screen. According to the docs, this is what you have to do in order to create a guest kernel; Kata Containers gives you the tools and instructions to build a custom kernel. So let's see how that looks in real life. In order to account for platform-specific inconsistencies, we decided to compile our kernel inside a Docker container, using a script that we put together. We also had to adjust a few of the kernel flags, namely to enable MPLS and VXLAN support. For the future we are also planning to integrate this script into the Kata Containers ecosystem — they have a repository called osbuilder that provides this kind of tooling, so we will probably contribute it there.

As you can see here, the kernel is now compiling. This kernel is then going to be pushed to the compute nodes, and here are the kernel images. So that's about it.
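To give a rough idea of the two pieces involved, here is a minimal sketch: a few guest kernel options of the kind we are talking about, and the sort of change made in the Kata runtime configuration file to point it at a custom kernel and rootfs. The exact option set and the file paths are illustrative assumptions, not the ones from our build.

    # Illustrative subset of guest kernel .config options for MPLS and VXLAN support
    CONFIG_MPLS=y
    CONFIG_NET_MPLS_GSO=y
    CONFIG_MPLS_ROUTING=y
    CONFIG_MPLS_IPTUNNEL=y
    CONFIG_VXLAN=y

    # /etc/kata-containers/configuration.toml -- point the runtime at the custom artifacts
    # (file names below are assumed; adjust to wherever the built kernel and rootfs image live)
    [hypervisor.qemu]
    kernel = "/usr/share/kata-containers/vmlinuz-custom"
    image  = "/usr/share/kata-containers/kata-containers.img"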
In order to be able to run the kernel we built earlier, we also had to enable some sysctl flags in our lightweight VM. To make them persistent, we enabled them directly within the root filesystem image. Here is how that looks. After enabling them there, they are persisted whenever you spawn containers using that kernel image. To do that we used the guestfish tool, and that's about all there is to it.
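As a rough illustration of that step — assuming the rootfs image is called kata-containers.img and eth0 is the interface name inside the guest, with the flag names shown purely as examples of networking-related sysctls rather than our exact list — the guestfish invocation could look like this:

    # Append the desired sysctls to the guest rootfs image so they apply on every boot
    guestfish -a kata-containers.img -i <<'EOF'
    write-append /etc/sysctl.conf "net.ipv4.ip_forward = 1\n"
    write-append /etc/sysctl.conf "net.mpls.conf.eth0.input = 1\n"
    write-append /etc/sysctl.conf "net.mpls.platform_labels = 100000\n"
    EOF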
Okay, so about FRRouting — I will invite my colleague Marian to tell you more about it, as he is more accustomed to the networking side of things; I'm more of an automation kind of guy. Thank you.

Okay. The kernel flags that we enabled inside the Kata containers are, as Alex mentioned, mostly related to MPLS and VXLANs, and we did that in order to run our FRRouting NFV inside Kata Containers. What is FRRouting? FRRouting is a collection of IP routing protocols that can be installed on Linux and UNIX platforms. It integrates very easily with the native Linux and UNIX IP networking stacks and can be used to provide different network functions related to layer 2 and layer 3 routing and forwarding. FRRouting is an open source project and is part of the Linux Foundation. Some of the features available in FRRouting you can see on the slide right now — features like MPLS, BGP, EVPN, OSPF and so on. The features highlighted in blue in this diagram are the ones we used in our demo, and you will see later how we configured them.

What is this demo actually about? It is about interoperability between a cloud-ready infrastructure and a legacy infrastructure. We want to create an EVPN network between two OpenStack data centers, using FRRouting installed inside Kata containers.

This is the topology for our demo. As you can see, we have two OpenStack data centers interconnected using an MPLS-over-GRE network. On the Zun compute nodes we provision the Kata containers — the spines and leaves of our infrastructure — and on top of that we create the EVPN setup. The VMs provisioned on the OpenStack compute nodes are able to communicate with the EVPN setup using Open vSwitch. The goal is to have VMs in VXLAN 1001 in data center A able to communicate with VMs in the same VXLAN in the other data center, and similarly for the other two VXLANs I added to this diagram.

Okay, so let's take a look at how this can be implemented. This is a script that basically does the provisioning of the Kata containers. As you can see, we use Zun for that. The FRRouting images are taken from Glance, and the containers are started with the kata-runtime, as you can see in this output. The containers are started with the privileged flag because we need to add different interfaces for the VXLANs and the GRE interface, as you will see later. Of course, we attach to each container the network interfaces needed to create the topology shown in the diagram before. At the end we have a setup with four Kata container nodes — two of them in data center A and two of them in data center B — with all the naming, host names, IP addresses and so on.

This will take a couple of minutes because, as you may know, in Kata Containers we are not yet able to hot-plug network interfaces, so we need to restart the containers once we add interfaces. For this last container, where we added three interfaces, we will wait for it to restart. While we are waiting: for our setup we created some dedicated availability zones that are used only for the NFV compute nodes.

As you can see, we have all the nodes started now. After that we apply the network configuration: the GRE tunnel is configured between the spines, creating the link between the spine interfaces; then we apply the VXLAN interfaces on the leaves, towards the VMs; and at the end we push the FRRouting configuration specific to our EVPN setup.

Let's take a look at how this looks on all the Kata container nodes. What you see on the screen now are four CLIs; the top ones are the spine nodes and at the bottom we have the leaves. I SSHed into them. We are running OSPF in order to advertise the loopback IP addresses. MPLS is up and enabled; we are using LDP in order to distribute the MPLS labels between the nodes. And we are running BGP between our leaves. The IP addresses used for the BGP sessions are the loopback addresses, and over that BGP session we advertise the EVPN address family — as you can see, we advertise all VXLAN network identifiers over it. As I mentioned, for the BGP session we are using the loopback IP addresses, as you can see in this output.

Now, if we look at the VXLANs configured in our network: as I mentioned, we have three VXLANs — 1001, 1002 and 1003 — that are advertised over our BGP session. The VXLAN configuration looks like this output: we have the VXLAN ID and the local VXLAN tunnel endpoint, while the remote one is discovered dynamically through the EVPN setup.
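For reference, the leaf-side VXLAN plumbing in a setup like this is plain Linux networking, which FRR then picks up. A minimal sketch for one VNI — the interface names and the loopback address are assumptions, not our exact configuration:

    # VNI 1001 on a leaf: a VXLAN device bound to the loopback, bridged to the VM-facing port
    ip link add vxlan1001 type vxlan id 1001 local 10.255.0.1 dstport 4789 nolearning
    ip link add br1001 type bridge
    ip link set vxlan1001 master br1001
    ip link set eth2 master br1001      # interface towards the Open vSwitch / VM side (assumed name)
    ip link set vxlan1001 up
    ip link set br1001 up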
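And the FRRouting side on a leaf essentially boils down to a BGP session between loopbacks with the l2vpn evpn address family activated. Again, this is only a sketch with an assumed AS number and neighbor address:

    ! /etc/frr/frr.conf (leaf, illustrative)
    router bgp 65001
     neighbor 10.255.0.2 remote-as 65001
     neighbor 10.255.0.2 update-source lo
     !
     address-family l2vpn evpn
      neighbor 10.255.0.2 activate
      advertise-all-vni
     exit-address-family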
If we look at the routes that are advertised, we can see the prefixes for two MAC addresses which, as you will see later, are the MAC addresses of our VMs — one in one data center, the other in the other data center. If we look at the next hop, at the routing for the next-hop IP address — I will stop here for just a moment, because here is the information about the MPLS labeling. As you can see, from data center B the next hop is advertised with label 19; the packet is then sent with that label to the spine in the same data center. On the spine, label 19 is swapped with label 16 and the packet is sent to data center A, and there — because the next hop is the loopback IP address of the leaf — the label is removed on the spine and the packet is sent to the leaf in that data center. What you can see here is the MPLS table, with more or less the same information: the inbound label 19 on the spine in data center B is swapped, and label 16 in data center A is removed.

Okay, now if we look again at the prefixes that are advertised over the BGP session: as I mentioned, we have two MAC addresses. The one with 8a:7e is the one from data center A, as you will see in this output — this is the OpenStack environment, so 8a:7e is the MAC address from this data center. The other MAC address is the one from data center B; I didn't catch it earlier, but we can see it here — a1:5f is the MAC address in data center B. As you can see, it is mapped to the IP address ending in 94.132; that is actually our VM in this data center. And as you will see now, I am able to ping from that VM to a VM in the other data center that is in the same VXLAN, the IP address ending in 94.37; if we go to the other data center, we can see that IP address on the VM there.

What else can we see here? We can see the containers that are created inside OpenStack. As I mentioned, we have one leaf and one spine in each data center, and as you can see, the runtime for our containers is kata-runtime.

Okay, one more thing. On this slide we can see a traffic capture on the interface between the spines. As you can see, we have the MPLS encapsulation there, with the MPLS labels — label 16 for this packet — then the VXLAN encapsulation with the VXLAN tunnel endpoints and the VXLAN identifier, and on top of that, encapsulated, the IPv4 packet with the source and destination MAC addresses and the ICMP packet for the ping that was running. Here is a similar output, this time in Wireshark: MPLS with the labels, the VXLAN, IPv4 with source and destination, and of course the ICMP.
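If you want to reproduce that capture, something along these lines on a spine should show the same stack of headers; the interface name is an assumption, and 47 is simply the IP protocol number for GRE:

    # Capture the GRE-encapsulated traffic between the spines
    tcpdump -n -vv -i eth1 'ip proto 47'
    # Layering shown in the capture for each ping packet:
    #   outer IP / GRE / MPLS label (16 or 19) / VXLAN (VTEPs + VNI) / inner Ethernet / IPv4 / ICMP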
I think this was all about our demo. What we want to discuss now are other use cases that can be implemented using Kata Containers, with a focus on NFVs. FRRouting also supports VRFs, so a setup similar to what I already showed can be integrated, and we can have VRFs at the end of that setup. We can run other routing protocols — IS-IS, for example, or protocols for IPv6, and so on. Of course, we can integrate other NFVs inside Kata containers, for example for IDS/IPS (intrusion detection and intrusion prevention), like OPNsense and other open source technologies, and of course we can integrate NFVs for firewalling — here I would also mention VyOS, or why not other commercial NFV solutions.

That was all from our side. If you have any questions, please let me know. If not, please send us your feedback using the application. Thank you.

Hello, can you hear me? Is it working? Yes — my question is: you said earlier that you were using a custom kernel image for the Kata containers. On an earlier slide you mentioned that you needed kernel version 4.18 for EVPN support, is that right? I mean, I've been using EVPN on all kernel versions; I'm just curious why you had to build your own kernel image.

By default, in Kata Containers, the kernel flags are not enabled for most of these options. They are available in the kernel, but they are not enabled, so we needed to enable them using the script that Alex showed. At the end we had a compiled kernel with all the options we needed enabled.

What kind of flags were you enabling?

Just to give an example: the flags for MPLS, and we enabled some flags for VXLAN; we also played with 802.1Q, but for this use case we didn't use it — for other use cases we enable that as well. Mostly the flags related to networking that made sense for our use case.

Thank you.

In your demo, were you linking it up with Neutron to some extent, or was this also standalone, running on its own, to learn the MAC addresses of the end systems? Because you have MAC address learning at the VXLAN tunnels, but the MAC addresses need to be learned somehow.

They were learned dynamically. In the end I just needed to ping from one VM to the other and it worked; I didn't need to add MAC addresses manually or anything like that. Everything was done by the EVPN technology behind it.

But you didn't integrate with Neutron specifically for that?

No.

So which component manages the VXLANs in this demo — Neutron? Each tenant network has a VXLAN identifier, right? So who manages which VXLAN identifier each one gets?

For this demo, the VXLAN IDs inside Open vSwitch can be different from the VXLANs we configured for our EVPN solution — they are two different technologies, let's say, and two different pieces of software: one is FRRouting, the other is Open vSwitch. For this setup we added them manually, and we decided to make them the same as the ones in Open vSwitch, but that was done manually. This was done only to prove that we are able to communicate between something in OpenStack and the real world, meaning some legacy network infrastructure — switches, MPLS nodes and so on.

Thank you for the presentation. I have two questions, actually. Do you use this in production, or do you plan to use it in production? And if so, what are the performance figures?

As I mentioned, this was done at this stage only to prove that it is possible. Right now in Kata Containers we have that limitation regarding hot-plugging of network interfaces. Once that is solved — and I saw some work is being done in that direction — I think we can consider taking this into production. Regarding performance, we haven't performed any testing in that direction so far.

Any other questions? I think that's all. Thank you for your presence here, and enjoy your summit.