 So, yeah, hey folks, so it's me, Shahryan, out here. So this is one of our students, that's what it is. So yeah, welcome to Cloud Native Cloud, and where we will dive into the code behind Cloud. I am your host, Shahryan, Mr. Chemicals. You can call me Abdul Shahryan in socials. I'm a science ambassador, and I will be your host. So every week we bring in new set of presenters to showcase how to work with Cloud Native technologies. They will build things, they will basically break things, and they will answer your questions. So in today's session, I am stoked to introduce Paul. And we'll be presenting on securing your classroom with CNM network policies. So this is an official livestream of the CNSF, and as such is subject to the CNSF Code of Connect. So please do not add anything to the chat or question that would be in violation of the CNSF Code of Connect. Basically, please be respectful to all of our fellow participants and visitors. So with that, I will hand it over to Paul and his staff of the session. So yes, hey Paul, how are you? Hi. I'm all good. Are you doing today? Yeah, I'm fine too. So yeah, I'm really stoked to have you in the session. So I think you can introduce yourself and we can just start the session, right? Yeah, feel free to introduce. Great. So my name is Paul Aira. I work at Isovelia into the company behind Celium. I'm a community builder focused on security. We have a couple of security focused products like Tetragon. And basically, my job is basically to get the community involved in everything security related with you. OK, great. So yeah, let's now, I think we can hop into the session, right? Let's start. OK. All right. So today, we're going to be talking about Celium, securing your clusters with Celium network policies. And Celium, so my name is Paul Aira, like I previously said, I'm a community builder focused on security at Isovelia into the company behind Celium. 
And basically, I help basically put out everything related security out to the community. So you would see me writing technical blogs, doing live streams like this, and helping out in the community. So if we ever cross parts, anyway, be sure to say hi. So before we just go ahead and talk about Celium network policies, right? There's Celium in the network policies. So I think it would be fair to start with Celium before we even jump into the network policies, right? So what is Celium? Celium is an open source cloud-native solution for providing, for providing and securing and observing basically entire cloud-native network. And what Celium does differently that hasn't been previously done is introduces a new revolutionary technology called EVPF. So basically, what EVPF does is EVPF allows you to program the canal. So there's a very common analogy that people like to use. And I think it takes very perfectly. We basically say EVPF is what JavaScript is to the browser. EVPF is the same thing, all right? So EVPF allows you to program the canal. So before we just delve into Celium network policy in detail, I just want to take a step backward and just talk about the Kubernetes network and model, right? So when Kubernetes came out, some designs, decisions were made from a networking perspective. And this totally made sense. So if you think of it right, so if you're familiar with Kubernetes, basically the Kubernetes network model basically specifies that every port in the cluster can talk to every other port in the cluster, right? And without NAT, so NAT is network address translation, right? So this sort of translates into simplicity for an operator. So if you think of maybe back in the day when you would have VMs, you have maybe two separate VMs. And you would need to figure out the networking that talks a bit how these two VMs will talk to each other. But the Kubernetes just has a very flat network topology. 
Has a flat network topology, allows every port in the cluster to talk to each other, right? So this is very simple for operators because operators don't have to think about all the complexity within, right? And also each port in Kubernetes gets a unique IP address, right? So if you think of, say, you have, let's say, a MongoDB database hosted somewhere, and already you have a web application that connects on port 80 and ports maybe, or port 3 for HTTPS, right? So this model basically allows you to transfer the same thing into Kubernetes, right? So you can have the port using the same very familiar networking model that we are all familiar with, right? So you can put a port, and then allow the port connect on port 80, and port 443 just like the same way you would do if you were using a regular VM. All right, so why do we need network policies? So if we, a bit back to the Kubernetes, to the Kubernetes networking model, right? This basically allowed us to, a bit back to the Kubernetes networking model, we could basically, every port can talk to each other, right? Then what this also introduces is like a problem, right? You basically have a place where everyone can talk to each other, right? So if you think of an office complex, right? There are certain places, for example, you're not allowed to access. But because Kubernetes has a flat networking model, basically you can access everything. So this is one of the reasons we need network policies, right? We want to be able to isolate traffic from certain ports. And there are so many reasons you want to do this, for compliance reasons, for security reasons, right? You also want to prevent lateral movement attacks. So for example, if someone compromises one part of the cluster, you don't want them to be able to move from one part of a cluster to another, right? So just in case you have a potential security bridge, you want to limit the blast bridges of that. 
And also you want predictable and controlled network traffic within your cluster. So these are one of the things that network policies help us do, right? So basically, network policies, they provide a construct for us to define how traffic moves within our clusters, right? Between the different ports and our clusters, we can say, hey, we only want traffic going from port A to port B, right? So then what does Selium bring to the table? I think it would also be nice to give you some background on what Selium sort of does separately. So before, and by the way, if you have any questions, just put the questions in the chat. I will be listening. I can always pause and answer your questions. But Selium basically does some approaches, how approaches are density in a very different way. So previously you would have, if you're familiar with Kiproxy, right? Kiproxy has like, Kiproxy has a, Kiproxy is based off IP tables. And this IP table rules basically allow, basically define how, define how like traffic moves between like one node to another. And basically it is like a whole virtual networking underneath, working underneath, right? So Selium brings a new approach to this and basically decouples the whole of networking, decouples network addressing from a density, right? So Selium, for example, previously if you, let's say you had a node, you had a port, you had a port label, the port backend, right? And you had a group of port label, port frontends, right? And if these ports want to talk to, say another group of port, right? You need like say IP table rules on each of the nodes that would say, hey, this port has the specific IP address and I want you to allow like this IP address to assess the specific, to assess the specific, this specific service, right? So but Selium takes that away and basically uses identity for basically identifying like your workloads. And why is this, and why is this very important? 
When you have IP tables, you've got like this whole like a lot of IP table rules in your cluster. And if you use that for like identifying your workload, this becomes like various unscalable like enlarge clusters. And also then for every time like you create all, you start, you turn off a new port, you have to like update this IP table rules all over on all every node that's in the cluster, right? So this can be, this can be sort of introduces like a scalability bottleneck. And that's one of the things that Selium does differently. So basically the way Selium approaches this is, each port basically is, basically is an, think of a port as sort of like an endpoint. Endpoints is basically a way like a port is basically represented in the network, right? So you can sort of think of it as sort of like one-to-one mapping, right? So if you think of something like a service, right? It's the same sort of concept, the way the same way you can think of an endpoint, right? So, and then this, basically you have for each endpoint an identity is generated for this endpoint. And then this is the coupled and stored in a separate key value store, right? So for every time like you have a change to a new port is added or you have a change, costs just have to resolve the identity from a key value store, which is far much simpler than having to like update IP table rules across every node, right? That's one of the things that's the main thing that Selium does. This sort of removes like the scalability bottleneck when it comes to addressing. And we're gonna see like in a minute, like why this is important. So, and just sort of take a mental note of this. You will see the concept of endpoint come across again as we talk about Selium network policies. So just at a glance, you're basically two CRDs. You define Selium network with Selium network policies and CRD means Kubernetes custom resource definitions. 
And basically a way to extend Kubernetes and create your own like custom Kubernetes resources. So the two major CRDs you see Selium network policies defined with is the Selium network policy. Selium and the Selium cluster wide network policy CRD. So the Selium network policy CRD basically has all the, you can basically find Selium network policies using the Selium network policy CRD. And the Selium cluster wide network policy CRD is basically the same thing as the Selium network policy CRD. The only difference is this now applies cluster wide. So you take the same Selium network policy model you're familiar with and extend it to the host, right? So we'll go through that like in a detail. So a couple of ways types of policies. So you have policies based on endpoints and I think just a good way to just map. So map this is basically think of this as policies based on like ports, right? So and the other way you can also define your policies of policies based on service names. And this is if you actually look at, if you've taken a look at Kubernetes, the limitations of the standard Kubernetes network policy. So one of the things that's missing is being able to define policies based on service names, right? So you can define Selium network policies based on the service names. So entities, so entities based policies. So entity based policies are basically a way to group like a group of a think of a group of entities that do not have like a specific IP address. So for example, if you say world you can't possibly say, this is the IP address of everyone that's gonna come into my, that's gonna access my cluster, right? You can get the IP address of everyone. You can account for IP address of everyone in the entire world, right? When you put out a public facing application, right? So it's just like a logical grouping of basically entities that you can't use like IP addresses for, right? So you can also have policies based on IP, CDIR rule sets. 
So basically you have an IP address and you can say, hey, this IP address is able to access this. This CDIR range is able to access this. You can also have DNS based policies which basically this DNS based policies basically resolves to or you have this DNS names that get resolved to an IP address and you can define your policies based on this. All right. So this is at the layer three. So at the layer four, you can define policies like based on ports, right? You can say, hey, this service or this port should be able to only access this port using this specific port, using this specific port. Every other, if that's not the case then drop this policy, right? You can also define policies using ICMP and ICMP v6 rule. So this is basically like the same protocol for the same protocol, ICMP that applies to IPv4. This ICMP v6 is like the same thing, but for IPv6. So at the layer seven level, you can also define rules based on a higher level if API protocols like HTTP, Kafka and GRPC. So you can define policies based on say, hey, I want a specific HTTP, therefore we can say you should be able to only make a port request or you should be able only to make a post request. And if you make this post request, maybe this post request should have this specific header, for example, the photorization header or maybe a specific referral header or something like that, right? So you can define basically a higher level policies, right? So with Kubernetes network policies, you can define policies, but this mostly happens at the layer three and the layer four, right? So you define policies based on the specific layer three protocol, the specific low level protocol like TCP and UDP. You can define the ports, but you can't define higher level protocols like HTTP, right? So Selim network policies basically just extend that same model and allows you to do more complex things. And we know from like, if you think of a typical Kubernetes users, right? 
You would want to do something as advanced as you want to do something like this. Because in reality, this is like at the scope most applications operate at, right? So this is one thing that, that's one thing that Selim network policies offer. So the cluster wide network policy. So it's the same consistent, the same network policy model we're familiar with, it just carries over to the host. So think of a use case, you could do something like this, for example, you would restrict SSH access to a node or say, okay, I haven't known that I don't want, I don't want any like incoming HTTP at all. I want all the every connection coming into this node to be HTTP PS, right? So that's something you could do. So it's simply the same Selim network policy we're familiar with, it just now applies to like the host. So just to make like some comparisons between Selim network policy and Kubernetes network policies. So you could, yes. So Selim network policy extends Kubernetes network policy but provides like more advanced features, more advanced non-linearity and more advanced features, right? So there are some similarities but there are also very major differences. So for example, in the traffic direction, you can specify ingress, egress traffic, egress traffic, right? So the same, if you have a null row, it goes into like a default deny and there's no policy, it's a white list there. So you can access any service by default. So you also have some low level, like the layer four protocols, you can do that, that's some of the similarities but I will mostly focus on the differences, right? So one of the very first differences, the Selim now has the concept of endpoint, right? So you make your selection when you select a network policy, you do that based off endpoint, right? You can also apply policies to node, right? That's something you'll typically not be able to do with the Kubernetes network policy, right? 
You can make up, you can filter your traffic based off entities, you can filter your traffic based off DNS names, you can filter the traffic based off services and you can also do layer seven protocols like HTTP, JRPC and Kafka, right? So you also have the ability to do advanced row querying. So if you have like maybe for example, you have a specific use case where you want to be able to accept HTTP or HTTPS only from a very specific domain name, you can do that with network policies, with Selim network policies. Okay, so I think all the question was like from Taylor, like so Selim network policies supports L3, L4 and L7 layers, right? Network policies. So that's from Conrad, yeah. So Selim network policies support with L3, the L4 and L7 policies, yeah. Is Selim or Calico used more? This will be, so this is not because I work, I obviously, I work for exavilions, but also if you look around like all the three major cloud providers of all standardized on Selim as their preferred CNI. So for AWS, for Azure, and for Google, you can get like Selim straight out of the box with them. So this also shows that Selim network policy is Selim like by default as a CNI is becoming like the preferred choice that everyone uses. So I don't know if the answer is your question. Yep. I think, I hope so, yeah. So that's such, at a glance, that's basically a summary of what Selim network policies and I'm just gonna be working you through like a demo of basically how Selim network policy works. And I would say, so we've got a couple of resources where you can learn about Selim network policy. So we've got like the documentation is a very good starting place. We also have a couple of equal lifestreams. So equal lifestreams are sort of, they happen like every now and then on the Selim YouTube channel, you can check them out. They have like very good resources for learning about network policy. 
We also have the labs from, we have the labs from Isovolent where you can, we have the labs from Isovolent where you can also learn about Selim network policies. So you can like try them, they have like a very hands on approach for you for learning about network policies. So if you don't wanna go through like the whole stress of like bootstrapping like a local Kubernetes or the Kubernetes cluster somewhere in the cloud, you've got that, you can just play around with the lab. So you can, they have like specific features of Selim. You can basically get hands on practice with the labs. So let me, like we have two questions, I guess, for that. From Kafka, we have questions like, does Selim able to integrate with service mesh like Istio? Yeah, so you can use Selim alongside a service mesh like Istio, but also there's Selim also, we also have a Selim as a self, there's a service mesh component of Selim you can use. So basically one of the goals of Selim is to reduce something called tools sprout. That's basically to reduce your, you're using like different tools. So you have maybe you have like a service mesh separately there. You have maybe like a different tool for runtime enforcement. So Selim is trying, one of the major goals of Selim is trying to like consolidate all of that into like one single thing you have to work with. So that reduces complexity for you and you just have one tool Selim to deal with and you don't have to like deal with other external tools. But if you wanna use Selim and Istio together, you can very much do that. Okay, so yeah. Another question from Conrad, Conrad. So the question is, are there any examples of how to use GRPC with Selim network policies? Yep, so a good place to look at the docs, you would find like some very good examples on the Selim documentation. And on the Selim repository, you've got like some examples embedded. There's like a specific directory where you can find like examples for specific use case you're trying out. 
Okay, so again, from Grace's way of question, like to clarify, if I use Azure C&I, it is using Selim or Amazon BPCC and C&I. That's a bit. This is Azure and this is Amazon BPCC. I think you're sort of like conflicting the two things, but yeah, by default now most Kubernetes is like from the all the popular cloud providers they come with Selim by default. And I think we've seen adoption from also other people like a digital ocean. Yeah. Okay, so like from Andrea's way of question, how about service network mesh? I think Selim can work on the correct. Yeah. Oh yeah, he just was clear about that. Okay, so I think we've done with the question, so I think you can just want to do it again. All right, so I'm just gonna walk you through like a demo of how Selim network policy works in practice. So to start with this, there's a free tool from the same creators of Selim called the network policy editor. And I'm gonna be sharing that with you. So you can find it on the URL, editor.networkpolicy.io. So I'm gonna switch to a different screen now. So the network policy editor is basically a tool for writing Kubernetes network policies and Selim network policies, right? So you've got this, you've got this network policy editor and what it does is it does quite a couple of things or some of the major features so you can visualize policies, right? So it provides you, if you've used Hubbell, Hubbell is the observability component of Selim that allows you basically, that provides observability to give you stubble before this UI will not be unfamiliar to you, but it looks very, very similar visually. So the network policy editor allows you, visualize network policy, allows you create an edit network policy. Why was this thing, why did we even bother creating this thing in the first place? So if you've worked with, and this is not even a network policy thing, right? If you generate a work with Kubernetes, right? You would have, you're writing, working with YAML files, right? 
One day you're battling with YAML and maybe some incorrect syntax and something is breaking somewhere and you're not exactly sure what's breaking or you have some mistake in your syntax. That gets very complex and cognitively, cognitively demanding to work with, right? So this was basically created to ease that cognitive overhead of working with network policies. So you can easily create a network policies that are less prone to error. You can see exactly what the network policy is supposed to be doing and you can also validate a network policy. So I'm just gonna work it through like a couple of features the network policies, the network policy editor has. So you can create a new policy. Basically, you can start from scratch. If you wanna start from scratch, you can create like a new policy. And so what makes this quite interesting is, so as opposed to like your wrangling YAML files, right? So on this, if you look at your rights on the right pan of the editor, right, you scroll down. So the first thing is we've got like very good tutorials. So if you read through this, if you can see my screen, if you read through this, you would see what part do you wanna secure. So as opposed to you maybe manually type in this in a YAML files, let's say we have a, let's call this a demo policy, right? And we wanna just secure like the default namespace. We have a worker deployed on the default namespace. And we wanna apply this policy to our front end ports, right? So as opposed to manually like putting this in the YAML file, you can just simply input this in the network policy editor. And you can see like on the left pan of the editor, like the corresponding like fields for the network policies automatically generated for you. And so if you walk through this, you could see, if you wanna specify your ingress rule, for example, you can, if you enable default deny, you have a default deny. If you enable default deny for ingress and egress, you all have that done for you, right? 
So this sort of like makes it very, makes it quite relatively less stressful to work with network policies, right? So, and this is something that affects even experienced Kubernetes users, right? So you might be very experienced with Kubernetes, but you can spend like some minutes just wrangling YAML files. And this sort of like helps like sort of do with that. So you've got different tutorials here, right? That's one of the feature. Another feature we have on the policy of Detroit is you can, if you don't like creating a policy like this, you can download the policy. And as you can see, like the policy is automatically downloaded for me and I can just take this policy and apply it to my Kubernetes cluster, right? You can also allow, you can share a policy. So you can share a policy. So this will require you to go through like the GitHub authorization flow, right? So you would have to like link your GitHub account to this and you can share a policy as a GitHub gist, right? So another thing you could also do is this allows you, the network policy of Detroit allows you to translate between Kubernetes, network policy and Cilium network policy, right? So sometimes you might have like a Kubernetes network policy you want to migrate from Kubernetes to Cilium. So if you click on the tab here, you can see the same exact Kubernetes network policy but then it's translated now to Cilium network policy, right? So the only caveat with this is currently because Kubernetes network policies do not, because there's some sort of divergence between Kubernetes network policy and Cilium network policy, some of the features don't map one to one, right? So for example, layer seven policies, right? So you can specify layer seven policies. So in Kubernetes network policy, so you won't be able to like directly translate that on their Ditto to Cilium network policy, right? So another thing the network policy Ditto has is policy rating, right? 
So you can see like everything about how secure on a scale, like in the scale, this is on a scale of one to five, how secure your network policy if either created manually or created on the Ditto, right? You can see how secure it is. So there's a policy rating on their Ditto. So that's like the major features of their Ditto. So another feature I forgot to mention. So if you have like flows from Hubble, so Hubble is the observability component of Cilium. You can basically take your flows from Hubble and upload them to the network policy of Ditto. And network policy of Ditto will basically analyze like this flows from Hubble and see how like traffic moves within your cluster and then generate like specific policies based on the flow, right? So that would just ease you, that would take away like sort of easier, the work you would have to do to start crafting all of that network policy from scratch. So I think that's like a very, if you have maybe like an existing cluster you have Cilium installed and you have like things going on in the cluster, that's like a very good starting point for creating network policies, right? So you can start with the things you already like your, your like previous, your penultimate like traffic and then like move forward from there and start, yeah, ticking up all the security boxes. All right, so that's sort of like the network policy at Ditto in a nutshell. Yeah, you should check it out and we open if you have any feedback for us on what we can do to improve the network policy at Ditto or you have any specific features you'd love to see. Just hop on the Cilium Slack channel and just give us, let us know. All right, so today I prepared a very tiny small demo for us and you can find this demo on GitHub. It's on the, we have a specific GitHub repository. So I'm gonna share my whole screen now. We have a specific GitHub repository that's dedicated to, that's dedicated to network policy and I'm gonna show you that in a minute. 
All right, so this is, you can find it on GitHub, networkpolicy.io, so GitHub network policy, right? So this is a specific REPL, we keep everything network policy related. So examples, demos, you can find them all, yeah. So also community resources related to network policy, you can find them here. All right, so today's our example we're gonna be working with. You can find it here in this specific REPL called demo. It's actually part of a blog post series we've been working on about network policy. So you can find all the examples I'm gonna use here today and yeah, you can try them out. All right, so we have a demo application. This is like a standard 3D application. We have a front end, we have a front end deployment. We have a backend deployment. So we have a database deployment. We also have a corresponding services for each of these deployments. And I'm gonna be using kind. So kind is sort of like a minimalist way for you to have a local Kubernetes cluster running. And I'm gonna be using that for this demo today. All right, so just to check if I have a kind later. All right, so the first thing we're gonna do is first create a kind cluster. Create a kind cluster. So if I show you what my, I have a kind config here that basically has a two, it's basically a two node Kubernetes cluster. We have a control plane and a worker node, right? And then we have the, so kind comes like a default network plugin. We are gonna disable that so we can use Celium instead. All right, so I'm gonna create a kind cluster, create a cluster and then point this to my config. So basically what kind does this kind actually obstructs like your Kubernetes clusters, but then using Docker, right? So you have like each, instead of having like a VM, you would have like a Docker container as the nodes with kind. So it's a very easy way to just get hands on with Kubernetes locally. And you would see kind being used a lot in Celium. 
So even on the Celium repository, you see like kind being used for like integration test, you would see kind being used on the labs too, because it's just a very good way to get started with Celium. All right, so the next thing we're gonna do is we're actually gonna install Celium. So Celium comes with, it's a CLI. And if you look at the docs, you have the instructions on how to install the CLI, but I'm just gonna, but for my use, for my case, I already have like the CLI installed. But if you wanna install it, you can just go through it like the docs and install the CLI locally. So I'm just gonna install Celium. So the CLI just makes it easy for you to install Celium. So yeah, yeah, all the ways you can install Celium. For example, you can install Celium with Herm, but the package manager for Kubernetes, but in this case, I would use the CLI because it's very easy. So all you have to do is just do it with Celium install. And the CLI will detect what kind of environment you have and basically install Celium in your environment without you going through any hassle. So it takes like, you can see here, it takes like, we're using like a kindge. And yeah, and that's easy. So you can check the status of your Celium installation with a Celium status. And this might, this takes like maybe like a couple of seconds to get like ready. So you might just add like the weight flag just to be sure that everything works correctly. All right, so while we are doing that, I'm just gonna put this, we're gonna basically deploy like the demo we have. We have here. So it's the same, it's everything is online and openly accessible to anyone. So if you wanna follow along, just go to the network policy GitHub repo and clone it and get started. All right, so I'm gonna apply this. So I've got this deployment. It's not currently on my mission, it's remote. So I'm just gonna apply this to my Kubernetes cluster. So if I check now, I should get my deployments. I should get deployments. 
Yep, let's, I think I get deploy. Yep, so we have a deployment of a backend. We have a database, we have a frontend if we check our services. So K is just me having an alias for Kube-CTL. So if you get the ports, you would see we have a two-port spread deployment and if we get the services, you see we have a backend service, a database service, a frontend service and the default Kubernetes service. All right, so we'll start crafting network policies to see how you would secure like a typical like three-tier application. So this is like a very common deployment like most people have. So you have some kind of a backend HTTP API, you'd have a database somewhere, you have a frontend deployment, right? Maybe you could have something like, I don't know, like Kafka or Broker or something running, but this is like a very typical, this is like a very typical scenario for your application deployment. All right, so just to go back here and see that you wanna see that Selium is properly installed. So you might have to wait a bit just to be sure that Selium, this might take a while depending on your bandwidth to pull like all the containers and have like the Selium agents running properly on each node. But yeah, you might have to like just give that a minute to be sure like that works correctly. All right, so we'll just go back while we're waiting for that, we'll just go back to the network policy editor and that's the preferred way, that's my preferred way for creating network policies and you should also adopt that. All right, so let's think of a, we have basically, we have a three-tier application, right? That we wanna secure, right? So from a typical application, right? On the frontend side, you want your frontend to be publicly accessible to anyone that wants to try and access your application, right? And on your backend, but you typically would not want your backend API to be accessible to anyone on the internet, right? 
And even if you are exposing your backend API, you'd expose it only to the specific services that need to interact with it. Same thing with your database: you wouldn't want to expose your database to the whole world. You'd want the database accessible only by, say, your database administrator, or the specific people who are actually supposed to access it. So that's the scenario we're going to work through. In summary, we want a traffic flow that looks like this: the frontend should accept traffic from anywhere, because we need it accessible to anyone who wants to reach it; the backend pods should only be accessed by the frontend; and the database should only be accessed by the backend, plus a very specific set of people coming from a specific IP. Typically in a company you'd proxy people's access through a single point, like a VPN, and then use that VPN to define how people access resources within the cluster. That's what we're going to implement here. All right, so let's start with the frontend policy. Let's start a new policy from scratch in the network policy editor, and this time we'll switch to a CiliumNetworkPolicy. So which pods do we want this to apply to? If we go back — I'll just put it here in this tab — you can see that our frontend deployment has a label that marks it as tier: frontend.
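For context, the frontend Deployment in the demo carries a label along these lines. This is an illustrative sketch, not the actual manifest from the demo repo — the image and names are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: default
spec:
  replicas: 2                  # the demo shows two pods per deployment
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend         # the label the network policies select on
    spec:
      containers:
      - name: frontend
        image: nginx           # placeholder; the demo repo ships its own image
        ports:
        - containerPort: 80
```

The backend and database Deployments would carry `tier: backend` and `tier: database` labels in the same way.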
So we're going to apply the policy based on this label: we have a label here, tier: frontend, and we'll use that. Back in the network policy editor, let's call this frontend-policy. Notice we haven't specified any namespace — by default Kubernetes deploys to the default namespace, and I think if you leave this field empty in the editor it will automatically be the default namespace too. We'll specify the label, because we want this policy to be applied to the frontend pods. All right, so that's the first thing we do. Another thing to know is that by default, both Kubernetes and Cilium network policies have an allowlist model: once a policy applies, anything you haven't explicitly allowed with a rule is denied. So, if we check now, we've got our pods. Let's pick one of the frontend pods and do a `kubectl exec` — I think it needs the `-it` flag. Just give me a minute. All right. So we've picked one of our frontend pods, and we're going to try hitting the database service. I'll add an -S flag. And if we ping the database service from the frontend pod, we can see it actually goes through: we ping it and it responds with a PONG. It's a sort of stripped-down HTTP version of Redis running on the database, just for demo purposes. But this isn't supposed to be the case, right?
Your frontend is not supposed to be able to talk directly to your database. I know that with the way the web development world is moving, that line is becoming a bit blurry, but that's not the case in this specific scenario. We don't want our frontend pods to talk directly to our database: if the frontend pods need any data, they should query a specific endpoint on the backend, and then the backend talks to the database. So we don't want this. A good place to start is to enable default deny. So let's take the policy as it's currently defined and enable default deny for both ingress and egress. I'll save this as frontend-policy.yaml, paste it here, and apply it. By the way, `cnp` is just a short name for CiliumNetworkPolicy — you could write out the full name, but as you can see, I prefer doing things a bit shorter. So if you get the frontend policy, you'll see it's applied, and like every other Kubernetes resource, we can also describe it: you can see this is the frontend policy, and it matches the tier: frontend label. Just to be sure there isn't a typo, I'll check. Yep, all right. So let's try accessing the same database service as we did previously. And you can see that this time the request doesn't go through, because now we have a frontend policy with default deny for ingress and egress traffic. So we can no longer reach the database.
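A sketch of the default-deny policy applied at this point, assuming the tier: frontend label from the demo. Note that in a CiliumNetworkPolicy an empty rule (`{}`) switches on enforcement for that direction without allowing any traffic — unlike in a plain Kubernetes NetworkPolicy, where an empty rule allows everything:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-policy
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      tier: frontend
  ingress:
  - {}    # enforce ingress policy, allow nothing yet
  egress:
  - {}    # enforce egress policy, allow nothing yet
```

With this applied, the `kubectl exec` ping from a frontend pod to the database service fails, as the demo shows.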
So in our specific case, that's not quite what we want: we do want our frontend to be accessible by anyone who tries to reach it. So we'll make a slight modification to this rule. One other thing: pods will typically need to query DNS. kind uses CoreDNS — and generally, if you have a Kubernetes cluster bootstrapped with kubeadm, you'll have CoreDNS as the DNS solution. A pod would typically reach out to the DNS pods to resolve DNS names; in this case, it would resolve the database service name to the service's IP address. We want that to keep working, so we'll allow egress to Kubernetes DNS. Next, let's allow ingress from anywhere — from any source — on ports 443 and 80. So this is an example of how you'd specify the policy. You can just copy it, and — I'm going to delete the old CNP, or you could replace it, whichever works for you, but I'll just remove it and apply this from scratch. So this is how we specify a policy that lets anyone access our frontend pods. All right, now for the backend pods — I'm going to clean this up and start from scratch. We want a policy that allows traffic only from the frontend pods. It's the same process: we'll call this backend-policy; the workloads are deployed in the default namespace, so we use default; and we want this one applied to the backend pods. Same thing here: enable default deny for ingress, enable default deny for egress — that's usually a good starting point — and allow traffic to Kubernetes DNS.
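Putting the frontend steps together, the finished policy might look roughly like this. It's a sketch, not the exact YAML from the demo: the tier labels are as shown in the session, and the DNS egress rule assumes CoreDNS running in kube-system with the usual k8s-app=kube-dns label:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-policy
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      tier: frontend
  ingress:
  - fromEntities:
    - all                      # publicly accessible frontend
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
      - port: "80"
        protocol: TCP
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns      # allow name resolution via CoreDNS
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
```

Because the policy now has explicit ingress and egress sections, everything not listed — including direct frontend-to-database traffic — stays denied.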
And now we want the backend pods to receive traffic only from our frontend pods. So we'll add that rule and apply it to the backend pods. And I think the backend — let me look at the Kubernetes definition for a second — yeah, it listens only on port 80, so we'll scope this to port 80. So now we have a policy with a rule that allows traffic only from the frontend pods to the backend pods. We can take this back to my terminal, save it as backend-policy, and apply it. Now let's actually try hitting the backend pods. A small caveat: some of the images used for these pods are very minimal and don't have curl, but you can use BusyBox and wget as a slight workaround. All right, I see we're running out of time, but in a nutshell, this is what you can do with network policies. There's also another part: host policies, also called the host firewall. The host firewall lets you take the same way you would write Cilium network policies and apply it to the hosts themselves. I was actually playing around with this a couple of days ago in my small home lab: instead of using Uncomplicated Firewall (ufw) to define rules on each node, you can take the same network policy model you've seen here, extended to the host. So instead of separate tools applying separate firewalls on each of the nodes, this applies the network policy rules on the actual nodes themselves.
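A sketch of the backend policy assembled above, under the same assumptions as before (tier labels from the demo, CoreDNS in kube-system):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-policy
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      tier: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        tier: frontend         # only frontend pods may call the backend
    toPorts:
    - ports:
      - port: "80"             # the backend's only listening port
        protocol: TCP
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
```

A corresponding database policy would follow the same pattern, allowing ingress only from `tier: backend` (plus whatever admin CIDR the scenario calls for).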
So it applies not just to the pods, but to the actual nodes in the cluster. So, to round it up — correct me if I'm wrong, but I think we have about three minutes left. Yes, we have. And I think we have a few questions also. Yeah. So, are we ready to take the questions? I'll just read them out. Hey, can you hear me? Yeah, I can hear you. Okay, so let me start with the questions. From Basant: from a security standpoint, are there any selling points that Cilium has over Calico? Yeah, so I could pull that up for you later — I think there's a side-by-side comparison with Calico somewhere, and I can't list out all the differences here. But Cilium offers a couple of things beyond network policies, right? There's the service mesh, there's a load balancer you can use. If you go to the Cilium website, you can see all the use cases Cilium covers. You have a replacement for kube-proxy, which eliminates the use of iptables rules. You also have support for BGP; there's the Gateway API, the service mesh, the egress gateway, all conformant with the new Kubernetes networking APIs; you've got a service map, and the ability to export your flow logs to an external tool. So just check out the Cilium website — it has all the details and all the different use cases. Okay, another question from Basant: if I'm using a macro-segmentation solution to secure N-S (north-south) and E-W (east-west) traffic, how do Cilium policies help or contribute? You can see the screen, if I'm not wrong. Yeah, I can see it. I'll just share that here.
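The host firewall Paul mentions uses a CiliumClusterwideNetworkPolicy with a nodeSelector instead of an endpointSelector, and requires Cilium's host firewall feature to be enabled. A hypothetical sketch — the CIDR and port here are illustrative, not from the session — that keeps cluster traffic working and admits SSH only from a private admin range:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: host-fw-ssh
spec:
  description: Allow cluster-internal traffic; SSH only from a private range
  nodeSelector: {}             # selects every node in the cluster
  ingress:
  - fromEntities:
    - cluster                  # keep cluster-internal traffic flowing
  - fromCIDR:
    - 10.0.0.0/8               # illustrative VPN/admin range
    toPorts:
    - ports:
      - port: "22"
        protocol: TCP
```

This replaces per-node tooling like ufw with the same declarative policy model used for pods.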
So for north-south traffic and for east-west traffic — how traffic moves within your cluster — you can define network policies that control exactly that. And for traffic beyond a single cluster: people have multi-cluster setups where things get a bit more complicated. Let me pull this up for you. This is the Cluster Mesh feature, which lets you connect different clusters together. We actually see this very often: people run clusters on-prem, maybe for compliance reasons, and then run workloads that require less compliance in the cloud. You can mesh all of that together, and the same networking and policy model you're already familiar with simply applies across the entire setup. I hope that answers your question. So I think we have one more question, the last one here: are there any plans to allow one to apply or create a policy for your cluster directly from the Cilium network policy editor? I don't think so — I don't think that's the case at the moment. But if you create a policy... let's see. The closest thing I can think of is, if you're using Minikube for example, you can just grab the generated YAML here and copy it across, but applying directly from the network policy editor — I don't think that's possible. Yes. Yeah, I think that's everything. So there are no questions left, I guess. So yeah, thank you so much. One final note — yeah, just one final note.
We couldn't go through all the examples on this livestream, but the network policy repo has all the examples and the different scenarios where you'd want to apply policies — you can go and look those up. We're also currently working on a set of recipes for network policies: ready-made examples you can take and apply to your cluster. That's open for contributions from the community, and if you want to help out, just give us a shout on the Cilium Slack. Okay, so yeah, thank you so much. Ah, one more question, from Dushan: can Cilium enforce network policies based on both L4 and L7 attributes simultaneously? Yes. You can have network policies that work at both L4 and L7, and that's actually a very common use case — you'd typically want to secure both at L4 and at L7. You can use them either individually or together. Okay, so yeah, I think we're done. Thank you so much, Paul, for being here and talking with us — you've given us a lot of insights, and it was a very insightful session. Yeah, thank you. So, it was really awesome having Paul, and he showed us a lot about how to work with these network policies. So thanks, everyone, for joining the latest episode of Cloud Native Live. We enjoyed the interaction and the questions from you all, thanks for joining us today, and we hope to see you next time.
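As a closing illustration of that last answer: a single CiliumNetworkPolicy rule can combine an L4 port match with L7 HTTP rules. A sketch reusing the demo's labels — the /api path is illustrative, not part of the demo app:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-l7
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      tier: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        tier: frontend         # L3: only frontend pods
    toPorts:
    - ports:
      - port: "80"             # L4: only port 80/TCP
        protocol: TCP
      rules:
        http:                  # L7: only GETs under /api/
        - method: GET
          path: "/api/.*"
```

Cilium enforces the L3/L4 part in eBPF and transparently redirects the matched traffic through its proxy for the HTTP-level checks.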