Yeah, welcome everyone to the enterprise track of Kubernetes Days Africa. I'm super excited to introduce our first speaker, Madhu. Madhu is a DevOps engineer at Demos Cloud, and will be speaking to us about reducing the attack surface using network policies. Madhu, the floor is yours, please feel free to share your screen. All right, good afternoon everyone, thank you for having me. Let me know if you can hear me. All right, cool, I'm just going to start presenting now. Good afternoon once again, and welcome to our very first edition of Kubernetes Days Africa. I'm here to talk about reducing the attack surface using network policies. First of all, very briefly: my name is Manasi, and I work as an SRE at Demos. At Demos we work on infrastructure design, security and implementation. For those of you who don't know what Demos does, Demos is a DevOps company that helps guide other companies on their path to cloud adoption. We offer a range of services, from cloud migration to infrastructure setup, and we also offer Google Workspace as a service.
All right, cool. Today we'll be talking about security, which is a very important facet of infrastructure management. As much as most people don't like to hear it, it's one of the basic things, one of the major things that affects both the performance and the security of your application. Basically, security is ensuring you adhere to the confidentiality of data, the integrity of data, and the availability of data to your customers. And we get this question a lot: how do you make your system secure? Before we proceed, I just want to state that there is no single solution to security; you have to keep implementing layers of security defenses, and that improves the security of your system as a whole. All right, let's dig deeper. Cloud security, basically, is implementing a whole range of policies and services that help you protect your data in the cloud: you want to protect your data from attackers, and you want to ensure your data is constantly available. All the methods you apply to make this possible together constitute cloud security. It goes from managing your IAM permissions, ensuring each user has the appropriate permissions, to setting up firewalls; all of these constitute security in the cloud. One of the major ways of improving your security is identifying ways to reduce your attack surface. An attack surface is basically what an attacker can do when he gains access to your system. For example, when you walk into a compound, what can you do? You see a lot of rooms, and you can walk into any of them. The number of rooms you can walk into is like the attack surface an attacker has access to when he compromises your security and gains access to your
infrastructure. So one major thing to do to improve your security is to reduce what that attacker can do. We'll be talking mostly about Kubernetes, and the basic component of Kubernetes is the pod, or the container as most people see it. So what can an attacker do when he compromises your pod or your container? That's something we want to know. One of the major things: he can access the kube-apiserver if the service account token is mounted. This is very important, because accessing the kube-apiserver means the attacker can understand what is deployed in your cluster: the namespaces, the applications you have; basically everything about your cluster is available once you have access to the kube-apiserver. You really don't want to give him access to your kube-apiserver. Another thing: if your pods are running in privileged mode, the attacker can gain access to the underlying node the pod is running on. We've seen cases of what we call container escape, where the attacker compromises the node that the particular pod is running on. Another thing: if you have access to a pod in the cluster, the cluster is also a network, so the pod can reach other workloads and other components of your network. If you have a database on your network, or other workloads running on it, the attacker can access all of these once he compromises your pod. Another part, which is very important, is data exfiltration. When an attacker compromises a particular pod, he can export data to another location, and depending on what data he has access to, that can be very dangerous; imagine he exports user information or, say, credit card details to some external location.
So what can we do? We've identified what an attacker can do once he has access to a pod, and we want to reduce that attack surface; that's basically what this talk is about. We want to limit what the attacker can do once he compromises a particular pod. Most architectures are such that we have front-facing applications and then applications at the back. Your front-facing application can be your front-end, and then you have back-end applications that power that front-end. We want to limit the pods or the services the attacker can access once our front-facing application is compromised, for example. So, enter network policies. Network policies are basically a set of rules that determine how pods in our cluster communicate. Say a pod is in your cluster and you want to restrict it to communicate only with a particular subset of pods. Take the case I gave earlier: you have your front-end, your back-end and your database; the front-end talks to the back-end, and the back-end talks to the database. You don't want direct communication from the front-end to the database. So you can implement network policies, and they restrict the communication a particular pod can do in the cluster. You can also look at network policies like iptables. For those of us coming from a Linux background, iptables is a way of setting firewall rules on Linux VMs, and network policies are akin to iptables on Kubernetes. So why do we need network policies? Some people might say, okay, we have iptables; we could just take inspiration from iptables and do something around it. But one thing with Kubernetes is that the IP addresses are quite dynamic.
When your pod spins up, it has an IP address that will change when it spins up again. And we know from deployments that this is bound to happen: when you scale up and scale down, new pods get created, and all of these pods have different IP addresses. That's one of the major reasons we needed something tailored to Kubernetes specifically. Another thing to note, and most people don't know this, is that Kubernetes by default doesn't come with all the security features implemented; you have to do this yourself. By default, all the pods in every namespace can communicate with each other. So if you have a test namespace, a staging namespace and a production namespace, staging can communicate with production. That's how Kubernetes works by default; there is no isolation of pod traffic at all. Now you see why we need network policies. So how do network policies basically work? Network policies use labels. From Kubernetes services and deployments, we've seen that when you define your deployment or your service you use what we call selector labels. Selectors are just labels that target specific pods. It's the same with network policies: they use labels, and these labels target the specific pods the policies apply to. Another thing you have to know about network policies is that they are defined by Kubernetes but implemented by your network plugin. So if your network plugin doesn't support network policies, then I'm sorry, you won't be able to use this functionality. Some examples of network plugins that do support them: Calico, Weave Net, Cilium and kube-router, and there are others.
Those are not the only network plugins, but if you want to implement network policies, you have to confirm that your network plugin actually supports network policy implementation. You also need to know that network policies are namespaced; that means they are limited to a namespace. If you apply a network policy in a particular namespace, it doesn't affect pods in another namespace. You can have extensions, but the basic network policies are namespaced. As we've said, though, network policies are implemented by the network plugin, so Calico, for instance, can choose to extend this by adding support for global network policies. Just know that network policies by default are namespaced, unless you opt in to the customized network policies offered by network plugins, which provide other enriched features. Here is a quick link to a site you can use, built by Cilium; it helps you design a network policy and visualize it properly, so you can work with network policies more easily. Cool. Let's look at the basic definition of a network policy. We're now going to run through how to create one, which is very important. The first and most important part of a network policy is the pod selector. The pod selector determines which pods the network policy applies to; it is akin to the selector labels on services or deployments. The labels you define as a pod selector target a specific set of pods, and the policy is applied to those pods. Then the policy type. We have two types of traffic: ingress and egress. If we want to target only ingress, that is traffic coming in, we specify a policy type of Ingress. If we want to target traffic going out of the pod, we use an Egress policy type. And you can also define both Ingress and Egress policy types for your network policy.
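Putting those two parts together, a minimal skeleton might look like this (the `app: backend` label and the `demo` namespace are illustrative assumptions, not taken from the talk):

```yaml
# Hypothetical skeleton showing the two parts just described:
# the pod selector and the policy types.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: demo
spec:
  podSelector:           # which pods this policy applies to
    matchLabels:
      app: backend
  policyTypes:           # which traffic directions it governs
    - Ingress
    - Egress
```

Because no ingress or egress rules are listed yet, a policy like this denies all traffic to and from the selected pods.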
After you've defined the policy type, the next thing is to specify the rules this policy uses during communication. As you can see here, your rules can have four basic parts. You can define an IP block: an ipBlock basically says, this is a particular IP range I want to limit communication to or from, whether you're using it in an ingress or an egress rule. A namespace selector says, I want to limit communication to or from a particular namespace. This is very important when you are deploying isolated environments: say you have a test environment and you want to isolate it from, say, a demo environment, and they're all on the same cluster. You want to limit the traffic that can enter a particular pod in the test or the demo namespace. Then pod selectors, which are very granular: they specify the particular pods you want to target, a specific set of pods in your cluster. When you specify a pod selector under an ingress rule, only traffic coming from those pods will be accepted; any traffic coming from any other pod will be dropped by the network policy. The last part is ports. Ports specify the ports we want the policy to apply to. For example, you have a MySQL database, and MySQL normally listens on port 3306. You don't want the policy to allow access on other ports, so you limit the ports it can communicate on. And as a rule of thumb, when you're creating your network policies, you should start with a default deny all. What that policy means is: drop every traffic in that particular namespace; no traffic comes in, no traffic goes out.
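That "default deny all" starting point can be sketched as follows (the `demo` namespace name is an assumption for illustration):

```yaml
# Deny all ingress and egress for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # No ingress or egress rules are listed, so nothing is allowed.
```

Applying this first means every allowed flow afterwards has to be stated explicitly.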
Once you've done this, you can build on it gradually and define granular network policies for your pods. You want to do this because you don't want to miss any traffic. It's just like permissions: you want to grant granular permissions, not excessive ones. So by default you first drop everything, and then you add specific policy definitions to allow specific traffic. All right. I talked about this briefly when I mentioned isolating environments: network policies are very important in that they help you isolate environments. A couple of architectures have dev and staging environments on the same cluster; some even have all three of dev, staging and production on the same cluster. You don't want a vulnerability, or an attacker, to be able to reach your production environment from your staging environment; that would be very bad. So you want to be able to shut down traffic going from dev to production, or from dev to staging. You use network policies to cut off that traffic, and that helps you isolate your environments properly, if you go for a multi-tenant kind of approach in deploying your cluster. Okay, now we're going to see some basic demos. I'm not sure we'll be applying them; we'll just look at how they work. So I'm just quickly going to change to another screen. All right. I'm just going to share this again real quick. Can you hear me now? Yes. Nice. Bad day to have a bad network. All right, I'm just going to quickly run through this; I think we've lost a bit of time trying to resolve network connectivity issues.
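The environment isolation just described might be sketched with a namespace selector like this (the `env` label on namespaces is an assumption; you would have to label your namespaces yourself for the selector to match):

```yaml
# Pods in the production namespace accept ingress only from
# namespaces that are also labeled env=production.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-env-only
  namespace: production
spec:
  podSelector: {}        # applies to every pod in production
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              env: production
```

With this in place, traffic from a `staging` or `dev` namespace would be dropped before it reaches any production pod.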
So, I was just talking about the default deny policy, and noting that when applying network policies you want to start with this: drop all traffic first, then build on it. From there you can go on to allowing DNS, because DNS is very important in Kubernetes, so you want to allow DNS resolution in the pod. That's the next policy you go for. As you notice, we are only allowing an egress rule; that means the pod should only be able to send traffic out to port 53. And notice we are not specifying any destination pod labels or anything. That is because, as I said earlier, network policies are namespaced: the DNS pod is in another namespace entirely, so we cannot target it from here. All we can do is target port 53 and say, let us allow all traffic to port 53 to go out of this particular pod. The next thing you do is build on this; you keep building on these existing policies. Say you have a backend deployment like this, very basic. As you can see, it has labels here that let us target this particular backend, and we can build on that and create other policies. Here is another policy that says we should allow backend access for a pod: we are selecting any pod that has this particular label, allow-backend-access set to true, and we are going to allow it to send egress to any pod that has the tier called backend. Remember, we are denying both ingress and egress by default. That means if you want to create a network policy on top of that, you have to allow ingress and you also have to allow egress. Say I want to communicate with the backend pod.
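The DNS policy described above might look like this (the `demo` namespace name is an assumption):

```yaml
# Allow every pod in the namespace to send traffic out on port 53,
# so DNS resolution works, without naming the DNS pods themselves
# (they live in another namespace, e.g. kube-system).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

Listing both UDP and TCP covers the usual case where large DNS responses fall back to TCP.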
You first have to allow me to send traffic to the backend pod, and you also have to set the backend to allow traffic to come from me. So this is where we allow traffic to go to the backend pod: we are allowing any pod that has this label to send traffic to the backend pod. And in this policy, what we are saying is: we target the backend pod, and it should allow traffic from any pod that has this label set. By doing this, we have already limited who can communicate with the backend: the only pod that can communicate with the backend is a pod that has this label, networking/allow-backend-access. You can now set this label on your front-end pod. For example, if we go down to the front-end real quick, this is the front-end manifest, and you will see that it defines this particular allow-backend-access label. So this front-end pod should be able to send traffic to the backend pod. And that is a very basic example of building up on policies. You do the same thing for your front-end application and for your back-end application. What happens then is, when an attacker comes into your system via the front-end pod, he can't access the database directly. And even if he compromises your backend pod, the backend pod can't send egress to any other services except the ones you've whitelisted the backend to communicate with. So you apply these granular traffic permissions to reduce which pods can communicate with each other. You don't want everything to be visible to every other pod on the cluster; that is very risky in terms of security.
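The pair of policies walked through in the demo might look roughly like this; the label keys, namespace, and names are reconstructed from the description and should be treated as assumptions:

```yaml
# Egress side: pods carrying the allow-backend-access label may
# send traffic to pods labeled tier=backend.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      networking/allow-backend-access: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: backend
---
# Ingress side: backend pods accept traffic only from pods
# carrying that same label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-labeled-pods
  namespace: demo
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              networking/allow-backend-access: "true"
```

Both halves are needed because, under a default deny of ingress and egress, the sender must be allowed out and the receiver must be allowed in.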
So yeah, I'm just going to go back to my slides and run through the remaining part of the presentation. Okay, cool. That was a quick rundown of basic policies and how you can build on them. Very important: start with default deny, allow DNS, and then build on that for each individual service. If you're running critical systems, it's very important to ensure you don't allow access to every pod, or to every resource on your infrastructure. Network policies are very powerful in this respect, but they have a couple of limitations: they don't enforce TLS, they don't log requests, and they don't enforce layer 7 rules. So you can look at this and build on it; there are solutions here. You can use Linkerd as a service mesh to enforce TLS, and the Elastic Stack to log requests around your network policies. And as I said earlier, Kubernetes doesn't come pre-installed with security measures, so there are additional security measures, which we're not really going to go into because this talk is mostly on network policies, but you can check the link in the slide and you will find the security policies you can apply in your Kubernetes cluster. Very importantly, you want to ensure that you've integrated security into your CI pipelines: scan your images for vulnerabilities, scan your Kubernetes manifests to check they adhere to best practices, and occasionally scan your cluster as a whole to ensure everything is fine and secured. And as a final note, please remember that security is only as strong as the person implementing it.
So if you, the person implementing it, leave your credentials open to everyone, you are definitely not going to be able to secure the infrastructure; you are basically handing over the key to attack your infrastructure for free. And if you need help, because security is a very wide field and a very important part of your infrastructure, you can always contact Demos. We have SecOps offerings: we run audits on your cluster and infrastructure, and then we check where we can help you secure your infrastructure better. So yeah, thank you for listening. I think that's the end of the presentation, so this is time for questions.

Wow, thank you so much, Madhu. Personally, I got a lot from your presentation. Security is a huge part of infrastructure, and as a DevOps engineer it's very important that you pay close attention to the security aspect. So thanks a lot for that presentation. And yeah, this is question and answer time: if you have any questions for Madhu, drop them in the chat section of the platform, and if you are streaming live, feel free to also drop your questions on YouTube. Let's wait an extra minute, and if we don't get any questions we'll move on to the next speaker. I don't have a question myself, but I'll ask: would you be willing to make your slides available to the attendees? The resources and information you shared are highly valuable, and it would be nice for attendees to be able to go back to those slides anytime they want.

Yeah, sure, I'll be making these slides available; I'll send them to you as well.

Awesome. Oh, nice, we just got a question here: is there a link on how to get started with DevOps?
Sorry, there is no link in the slide on how to get started with DevOps. I think you could check on GitHub; I like using the awesome lists. You can search for "awesome DevOps" on GitHub and get started from there. Yes, let me help out and share the awesome-devops list; that's the name of the repository. From there you can see helpful links to some of the tools and technologies you need to know.

Yeah, thanks a lot, Madhu. I really enjoyed your session; it was really insightful. And I know for sure our attendees are also excited and got a lot from your session as well. So thanks a lot, Madhu, and we'll see you some other time.

Thank you very much for having me.