Okay, let's get started. Hello, everybody. I'm Rossella Sblendido. At SUSE I lead the team that takes care of SDN and NFV, and I was a core reviewer for Neutron a few years ago. With me today is Michal.

Yeah, I'm Michal Rostecki. I work as a software engineer at SUSE. Nowadays I'm mostly working on the Cilium project, but I also used to be active in OpenStack Kolla some years ago. Today we are going to tell you about improving security in your container infrastructure on OpenStack using Kuryr and Cilium, about our plans to integrate those technologies, and about a little experiment we did to prove that there is a good path to try.

We will start our presentation by explaining what a BPF filter is. Are you familiar with it? Who knows BPF, and eBPF? Okay. So BPF is the Berkeley Packet Filter. It's a mechanism in the kernel by which you can run programs on a virtual machine inside the kernel. Those programs can monitor syscalls, and they can monitor and filter packets. In the case of our presentation, BPF is used mostly for filtering out packets based on rules we defined beforehand. BPF is very programmatic and very general: you write source code in C, compile it with LLVM and Clang into bytecode, and that bytecode can be loaded into the kernel, where it goes through the verifier and the JIT. After that, the program runs in the kernel's virtual machine.

Cilium is a project which brings BPF to containers. Cilium has several components. The biggest of them is the Cilium agent, which is a daemon running on each node. It translates rules regarding some container or some network namespace into a BPF program which filters traffic specifically for that container. Cilium also provides a CLI for the user to define those rules, and there is an integration with Kubernetes. Kubernetes has a concept of network policies, by which you can define which IP addresses, which labels, or which namespaces can connect with each other. So Cilium has the functionality of watching your Kubernetes cluster and translating network policies into BPF programs.

Cilium as a project started at a time when the kernel still didn't have a defined path to start replacing iptables with BPF. Nowadays there is an initiative in the kernel called bpfilter, which aims to be an alternative to both iptables and nftables and to replace the whole firewalling stack in the kernel with BPF programs. But Cilium started earlier, and it's a fully ready and usable solution to bring security to your containers with BPF programs instead of iptables.

So, how many of you have heard about Cilium before?
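To make the compile-and-load flow just described a bit more concrete, here is a minimal sketch that builds a tiny packet filter from C at runtime and attaches it to a raw socket. It uses the bcc Python bindings (which drive Clang/LLVM underneath) rather than Cilium's own loader, and the interface name and the drop-TCP rule are purely illustrative assumptions, not anything from the talk:

```python
# Minimal BPF packet filter: C source is compiled to BPF bytecode at runtime,
# checked by the kernel verifier, JIT-compiled, and attached to a raw socket.
# Requires the bcc package and root privileges.
from bcc import BPF

bpf_src = r"""
#include <uapi/linux/if_ether.h>

int drop_tcp(struct __sk_buff *skb) {
    /* Assumes a plain IPv4-over-Ethernet frame (no VLAN tag): the IPv4
       protocol field sits 9 bytes into the IP header. */
    unsigned char proto = load_byte(skb, ETH_HLEN + 9);
    if (proto == 6)     /* IPPROTO_TCP */
        return 0;       /* keep 0 bytes, i.e. drop the packet on this socket */
    return -1;          /* keep the whole packet */
}
"""

b = BPF(text=bpf_src)                             # Clang/LLVM -> BPF bytecode
fn = b.load_func("drop_tcp", BPF.SOCKET_FILTER)   # kernel verifier runs here
BPF.attach_raw_socket(fn, "eth0")                 # "eth0" is an assumption
print("filter loaded on eth0; Ctrl-C to exit")
```

Cilium generates and attaches far richer programs, per container endpoint and at tc and XDP hooks, but the basic pipeline of C source turned into verified bytecode running inside the kernel VM is the same one described above.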
Okay, I will try not to scare you with this microphone, the volume is pretty high. So let's now introduce the other component of this integration, which is Kuryr. Actually, the idea for this talk came out during a team brainstorming. You know, Michal has been contributing to Cilium for a while, and the rest of the team is pretty familiar with OpenStack, so we just thought it would be great to use the power of Cilium in OpenStack. How can we actually do that? Then we thought, okay, we can try with Kuryr. So today we'll explain how we did that, and the future work that we still need to do.

So, going back to Kuryr, let's take a historical perspective. Many of you might remember that a few years ago containers were not as popular as they are now. OpenStack has been great at dealing with VMs and bare metal, but containers were a kind of new thing, so there wasn't a unified infrastructure to connect VMs and containers. One of the earlier solutions was the approach that Magnum took: basically, to run containers inside a VM. Which, you know, gets the job done, but it's not optimal, because some SDN solutions are already container-native, and you lose that when you run them inside a VM. The other drawback is that in Neutron we already use tunneling to take care of network isolation, and most container networking solutions, for example flannel, also use tunneling. So you end up with double tunneling: the Neutron tunneling, and inside that the flannel tunneling, for example. We'll see in the next slides how Kuryr avoids that.

I'll go back, sorry. Yeah, I just wanted to say a few words about the main components of Kuryr. We have the kuryr-controller, which runs on the node where the Kubernetes API is running. Its main task is basically to watch every CRUD operation that is performed through the API and then take care of creating the corresponding Neutron resources. For example, you create a pod in Kubernetes, and the kuryr-controller will make sure the corresponding Neutron port gets created. The other important task is that it will then annotate the Kubernetes resources with the Neutron information. This is done because the other component, the Kuryr CNI, which I think is an outdated name now, it's the CNI daemon, runs on the worker nodes, where the kubelet is running, and watches for annotations on the resources. When it sees a new one, it performs the interface plugging, and we'll see in the next slides how that works. Thanks.
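The controller-side flow just described, watch a pod, create a Neutron port for it, then write the Neutron information back as an annotation, can be sketched roughly like this in Python. This is not kuryr-controller's actual code; the cloud name, the network ID, and the annotation key are assumptions made for illustration:

```python
# Rough sketch of a Kuryr-style controller loop (illustrative only):
# watch pods, create a Neutron port per pod, annotate the pod with the result.
import json
import openstack                                   # openstacksdk
from kubernetes import client, config, watch

VIF_ANNOTATION = "openstack.org/kuryr-vif"         # assumed annotation key
POD_NETWORK_ID = "11111111-2222-3333-4444-555555555555"   # placeholder

config.load_kube_config()
v1 = client.CoreV1Api()
conn = openstack.connect(cloud="devstack")         # cloud name is an assumption

w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces):
    pod = event["object"]
    annotations = pod.metadata.annotations or {}
    if event["type"] != "ADDED" or VIF_ANNOTATION in annotations:
        continue

    # 1. Ask Neutron for a port on the pod network.
    port = conn.network.create_port(network_id=POD_NETWORK_ID,
                                    name=f"pod-{pod.metadata.name}")

    # 2. Store the Neutron details on the pod, so the CNI side can find them.
    vif = {"port_id": port.id,
           "mac_address": port.mac_address,
           "fixed_ips": port.fixed_ips}
    v1.patch_namespaced_pod(
        pod.metadata.name, pod.metadata.namespace,
        {"metadata": {"annotations": {VIF_ANNOTATION: json.dumps(vif)}}})
```

The real controller also deals with security groups, subports for the nested case, and cleanup on deletion; this sketch only shows the create path that the talk walks through.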
So in the diagram here I put two containers. They are on different networks, the blue one and the red one, and they are running on the same VM. How do we achieve network isolation using Kuryr? Kuryr is utilizing the trunk port, which is a relatively new concept in Neutron, introduced two years ago, to be able to get tagged traffic up to the VM, because before that it was not possible.

So using the trunk port, you define the trunk port and then you can define several subports, and every subport is associated with a specific VLAN. In the network topology we have br-int, the usual Neutron integration bridge that takes care of tagging and untagging the traffic according to the network. Then we have the trunk bridge, which is the bridge created by Neutron to represent the trunk port. On that trunk port, Kuryr will create two different subports: one to take care of the traffic of network one, and the other one for network two, and they will use different VLANs. So what happens when a packet goes through the trunk bridge and exits through one of the subports? It will be tagged correctly, it will reach the VM tagged, and it will be delivered to the proper container. (There is a small sketch of this setup below.)

So why are we trying to integrate Cilium and Kuryr? The first reason is that we would like to bring better support for network policies. Currently Kuryr has support for network policies, but it's using a mapping to security groups, and we think that integrating with Cilium, which filters the packets with BPF programs on the host or VM where the containers are, is an interesting idea to try out.

So we did an experiment. In this experiment the kuryr-kubernetes controller is running as a deployment. We didn't use the kuryr-kubernetes CNI, the CNI daemon; instead we rewrote all the functionality that the kuryr-kubernetes CNI provides into the Cilium CNI plugin. And in Cilium we used the direct routing mode, so we told Cilium not to take care of routing the packets between nodes. Why did we do it like that? Of course, copying functionality from the kuryr-kubernetes CNI into the Cilium CNI doesn't sound good, but at the time we tried this experiment Cilium was pretty monolithic, and it still is, but that will change in the next version. So the Cilium CNI takes care of IPAM, creating the network, creating an endpoint, and generating the BPF programs for that IP address, and we had some challenges with that approach. Yeah, we needed to read, inside the Cilium CNI plugin, the information which Kuryr saves in the annotations about networking, and create the Open vSwitch VIFs.

The schema of how the whole solution works looks like this. When a user creates a pod, it gets created in the Kubernetes API as an object, the kubelet creates the sandbox and then calls the CNI plugin to set up the network namespace. At the same time the Kuryr controller, which observes the objects created in Kubernetes, requests the port creation for the pod. Neutron is called, Neutron creates a port, then the Kuryr controller gets the information about that Neutron port and saves that data inside the pod annotations in the Kubernetes API. That's how, at the end, the CNI plugin is aware of all the networking configuration.
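Here is a rough sketch of the trunk and subport setup described a moment ago, using the openstacksdk Python client. Kuryr does this internally through its own drivers; the network names, the VLAN IDs, and the assumption that the VM's parent port already exists are placeholders for illustration:

```python
# Illustrative only: create a Neutron trunk on an existing VM port and add
# one subport per container network, each carried on its own VLAN ID.
import openstack

conn = openstack.connect(cloud="devstack")         # cloud name is an assumption

# Parent port: the port already plugged into the VM (placeholder name).
parent = conn.network.find_port("vm-eth0-port")

# A trunk turns that parent port into a link that can carry tagged traffic.
trunk = conn.network.create_trunk(name="pod-trunk", port_id=parent.id)

subports = []
for net_name, vlan_id in (("network-one", 101), ("network-two", 102)):
    net = conn.network.find_network(net_name)
    port = conn.network.create_port(network_id=net.id,
                                    name=f"subport-{net_name}")
    subports.append({"port_id": port.id,
                     "segmentation_type": "vlan",
                     "segmentation_id": vlan_id})

# Attach both subports; traffic for each one reaches the VM tagged with its
# VLAN ID and can then be handed to the right container inside the VM.
conn.network.add_trunk_subports(trunk, subports)
```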
But I mentioned that the monolithism of Cilium is going to change. It will happen in version 1.4, which will be the next version. The idea is to decouple Cilium into three layers, networking, load balancing and policy, and to make it possible to use only some bits of Cilium. So Cilium will allow using any other solution for providing the networking, and will allow providing just load balancing or just security policies on top of that. There is also ongoing work in Cilium to use that kind of decoupling for a Cilium-flannel integration, because there are a lot of people who are using flannel in their Kubernetes clusters, and they would prefer to use only those parts of Cilium which deal with network policies instead of replacing the whole network stack in their cluster.

After that decoupling is done in Cilium, it will be a perfect opportunity to also implement Kuryr support upstream, in a clean way, which should look like this. The mechanics of creating the port, and how the Kuryr controller talks with Neutron, stay the same, but the Kuryr CNI would take care of creating the VIFs and reading the Neutron data, and then Cilium, on top of that, getting the data about which IP address was assigned by the Kuryr CNI, will be able to create the BPF programs for filtering packets and observe the Kubernetes API for network policies. And now Rossella will talk about the packet traversal.

Yeah, it's the same graph with some modifications, just to give you a visual understanding of what's different when using Cilium. As I was saying a few minutes ago, with vanilla Kuryr the packet traversal is, let's say the container is trying to send a packet outside: the TCP packet will be created in the container, it will go through the virtual interface, it will go to the trunk bridge, it will go through the subport and be tagged, it will go to the integration bridge, and at that point it will be filtered, either using iptables or the OVS firewall. If you're using Cilium, then, as Michal was explaining, you have this BPF program that is hooked in the kernel. So when the container issues a syscall, let's say a connect, that program will be executed immediately. So you see that we are skipping a few steps, and the other advantage is that the BPF program is much more flexible compared to iptables, for example. It's also extensible: you can basically stitch a few programs together, and you could also make use of XDP to increase the performance further.

And now it's time for the demo. In our demo we have a Kubernetes application which simulates Star Wars: the space station, the Death Star, and the star fighters, a TIE fighter and an X-wing. The TIE fighter and the X-wing will run as pods in Kubernetes, and we label them accordingly to make it clear that the TIE fighter is an Empire star fighter and the X-wing is an Alliance star fighter. From those pods we will try to use the Death Star API to land the ship, and Cilium will take care of preventing Alliance star fighters from landing on the Death Star.

This is the installation on DevStack. We have installed DevStack with Kuryr, and Kubernetes is running alongside, so that Kubernetes installation should be considered like a bare-metal installation, but using OpenStack. So we created the pods, and we are waiting for those pods to be created. Is that font size good for you? Okay. The X-wing is running, the Death Star is running, the TIE fighter is running, and all of those pods have IP addresses assigned. We will now check whether each of them has a corresponding Neutron port. First we check whether there is a Neutron port for the Death Star pod, and as we can see, we have such a port. Now let's check the star fighters, and they also have Neutron ports. So the communication between Kuryr and Neutron was successful.

Now we have no network policies applied, so everything is permitted. But first we will try to ping all the pods from each other, to see whether we really have network connectivity, and now we will try to land the X-wing on the Death Star application. It's successful, because we have no network policies yet; we can land any ship.
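The cross-check from the demo, confirming that every pod IP has a matching Neutron port, could be scripted along these lines. During the talk this was done interactively on the console; the cloud name and the namespace below are assumptions:

```python
# Cross-check: every pod IP known to Kubernetes should appear as a fixed IP on
# some Neutron port, which shows the Kuryr -> Neutron path worked.
import openstack
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
conn = openstack.connect(cloud="devstack")          # cloud name is an assumption

neutron_ips = {ip["ip_address"]
               for port in conn.network.ports()
               for ip in (port.fixed_ips or [])}

for pod in v1.list_namespaced_pod("default").items:  # namespace assumed
    ip = pod.status.pod_ip
    status = "has a Neutron port" if ip in neutron_ips else "NO Neutron port!"
    print(f"{pod.metadata.name:<12} {ip or '<no IP yet>':<16} {status}")
```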
But now we are going to create a network policy which will prevent it. So let's try to do this curl again: from the TIE fighter it's still successful, but from the X-wing it will hang, because the BPF program is filtering out the traffic from the X-wing pod to the Death Star application. So this is the end of the demo.

I will show you the YAML file definitions of the applications. The Death Star is a service and a deployment, and the TIE fighter and the X-wing are just normal pods. They have the corresponding labels, organization "empire" and organization "alliance". The definition of the network policy allows connecting to the Death Star only if the organization label matches "empire", so pods without that label cannot connect to the Death Star. (A sketch of such a policy appears at the end.) And the future work we are going to do: in this very simple experiment we didn't touch the topic of load balancing, and of course, after the decoupling of Cilium into the bits I mentioned, we will try to implement everything upstream. That's all we wanted to show you today. Thank you very much for listening. And yeah, do you have any questions?

Yeah, I've seen that the kind of network policy that you're using doesn't look like a default one, it's a Cilium network policy. Is that some kind of CRD you are applying yourself, and what's the difference with the default one?

It's a CRD. You can use normal network policies, they will work as well with Cilium, but Cilium also provides a CRD for network policies which additionally supports L7 rules, because normal network policies in Kubernetes only support L3 and L4 rules. With the extended Cilium CRD you can also define rules for L7.

But for Kuryr, is there any kind of modification you had to do, for instance to the default network policy driver, to run this? How did you implement it?

Well, I didn't use the Kuryr CNI at all, I rewrote everything into the Cilium CNI, and I used a version of Kuryr which didn't have network policies at all, the stable one.

Okay, then maybe we'll get to speak about that later, because we have basically refactored all the network policy support. Anyway, we'll talk offline after that.

Yes. As far as I know, and please correct me if I'm wrong, Kuryr nowadays has an implementation of network policies based on mapping to security groups in OpenStack.

We do definitely have that, but we have refactored it, so we are now using a CRD, basically to minimize the amount of calls to Neutron. And I was thinking that maybe we could be combining that, but that's more of a deep dive, so we might be discussing this offline, because maybe not everyone here would be interested. Okay.

Yeah, that sounds like a good idea to consider. Thank you. Are there any other questions? Okay.

Hello. I have one question. You're using pod annotations for transporting, basically, the Neutron information. Are there any security implications there, like, if you modify the annotations as a user, can you move into other networks or something? Because I'm not sure how that works.

If the user is able to edit pod annotations, then yeah, there is a risk, I think. So it's all about setting up RBAC and the possibility to edit things, to prevent unauthorized users from modifying annotations.

But I think that's a little problematic, because with RBAC you can't prevent people from modifying only the annotations; they just have to be able to modify the pod.
I mean, they are the ones creating the pod. Yeah, so everyone who can modify the pod can also modify the annotations, unfortunately. Okay, thank you.

So yeah, that's a good question and a good problem to consider in the future.

Just one question: can we use other Neutron plugins along with Cilium as a Layer 7 policy engine, or do you plan to? In that experiment we were using only the OVS plugin, but of course, when implementing this upstream we'll try to support everything that Kuryr supports. That experiment was OVS only.

Any other questions? If not, then thank you again for listening to our presentation. Thanks, everybody.
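For reference, the kind of CiliumNetworkPolicy CRD discussed in the demo and in the first question can be created like this. The labels and the L7 HTTP rule follow the public Cilium Star Wars demo and are assumptions here, not necessarily the exact manifest used in the talk:

```python
# Illustrative CiliumNetworkPolicy: only pods labeled org=empire may reach the
# deathstar endpoints, and even then only via POST /v1/request-landing, an L7
# rule that plain Kubernetes NetworkPolicy cannot express.
from kubernetes import client, config

policy = {
    "apiVersion": "cilium.io/v2",
    "kind": "CiliumNetworkPolicy",
    "metadata": {"name": "deathstar-landing-rule"},
    "spec": {
        "endpointSelector": {"matchLabels": {"org": "empire",
                                             "class": "deathstar"}},
        "ingress": [{
            "fromEndpoints": [{"matchLabels": {"org": "empire"}}],
            "toPorts": [{
                "ports": [{"port": "80", "protocol": "TCP"}],
                "rules": {"http": [{"method": "POST",
                                    "path": "/v1/request-landing"}]},
            }],
        }],
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="cilium.io", version="v2", namespace="default",
    plural="ciliumnetworkpolicies", body=policy)
```

With a policy of this shape applied, a request from a pod labeled org=empire, like the TIE fighter, is allowed, while the same request from the X-wing is dropped by the generated BPF program, which matches the behaviour shown in the demo.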