My name is Neependra Khare, and this session is about security considerations while deploying containerized applications. I'm assuming you're familiar with containers, at least a bit, and that you're using Docker. How many of you are using Kubernetes? Most of you have at least heard of it, some of you are using it, and the rest of you will eventually get there, because that's the trend we're seeing now: sooner or later you'll be using Kubernetes, knowingly or even unknowingly. A quick intro about myself: I run a company called CloudYuga, and we provide consulting and training around containers, Kubernetes, and so on. I'm also a CNCF ambassador; CNCF is the Cloud Native Computing Foundation, which currently owns and hosts the Kubernetes project. I have a course on Kubernetes on edX; it's free, you can take it. Before starting CloudYuga three and a half years back, I worked at Red Hat for around six years, and before that about five years at a startup. I've also run the Docker meetup group here in Bangalore for more than five years. Just before we start: how many of you are developers? Admins? No managers? OK, managers are there too. Fine. So we'll quickly look at what a container image is made of, then try to relate things, and then I'll walk you through the considerations you might want to take up. Any container image, say Ubuntu, Fedora, or any other distribution, is built on a base image of that distribution. When you say you're going to use an Ubuntu container, you're really using only the user-space components of that distribution, not the kernel. In containers, you don't deploy the distribution's kernel along with the container.
Your container contains only the user-space components of that distribution. On top of that, we put our application along with its dependencies, its runtime, whatever libraries it requires, and from that we build a container image. That image we can then run on a container host. For example, I could have an image for my Python or Java application and run it on any host that has a container runtime. That runtime can be Docker or anything else, because there is now a standard for container runtimes called the Open Container Initiative (OCI). It defines specifications, a set of rules, for the container image format and for how a runtime executes it. So I could build an image with Docker and then run that image on some other container runtime, for example rkt or runC. The host itself can be Windows, Linux, or Mac. Because the image packages everything the application needs, you can deploy one or more containers of the same type on any host that has a container runtime. Eventually you'll use orchestration, where you deploy these applications on a cluster. Here you see three different hosts bound together with software called Kubernetes, which is an orchestrator; there are others as well, like Docker Swarm or Marathon. The point is: you take an image, build a container, deploy it on a host, and eventually you have a cluster in which you've put multiple nodes together. These nodes can be VMs or bare metal.
Once you have a cluster, you can deploy containers and they can communicate with each other. That's a very quick intro to the container world. Any questions so far? Now let's talk about what we can do when we deploy applications on a cluster, and what kind of security decisions we can take. First of all, the image you're building needs to be scanned regularly. Today you might build an image with some software, say a particular version of bash, and tomorrow a vulnerability is reported in it. You need to make sure that when the image is used a few days later, it has been scanned for the vulnerabilities detected in the meantime. So scan your images at regular intervals. I assume you know about image registries like Docker Hub or Artifactory, wherever you want to put your Docker images. Those registries often have a built-in scanning mechanism, so you can either use that, or use software like Twistlock or Sysdig, which can scan your images and report back which vulnerabilities are there. Next, you need to make sure that the image you're using has been signed, that it was built by the right person. You can sign images, so each image carries a signature saying "this image was really built by someone I trust," and only then execute it. And again, software like Sysdig and Twistlock can give you a compliance and audit report for an image, how good it is and where it's being used in your environment, so you can fix issues.
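The talk names registry-integrated scanners and commercial tools like Twistlock and Sysdig; as a hedged sketch of the same two practices, here is what scanning with an open-source alternative (Trivy, not mentioned in the talk) plus enabling Docker's image-signing enforcement might look like. The image name is hypothetical, and both tools have to be installed and configured:

```shell
# Scan a (hypothetical) image for known CVEs with Trivy, an open-source scanner
trivy image myorg/myapp:1.0

# Enable Docker Content Trust so docker pull/push refuses unsigned tags
export DOCKER_CONTENT_TRUST=1
docker pull myorg/myapp:1.0   # rejected unless this tag carries a trusted signature
```

Running the scan in CI and again on a schedule covers the "vulnerability reported after the build" case described above.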
Well, this next one is not security per se, but you need to make sure the image size you're going to use is minimal. Here's an example: we have a Dockerfile we're building images from. The slash in the first FROM line isn't real syntax; it's showing two choices, either node:10 as the base image or node:10-alpine, so effectively two different Dockerfiles. As you can see, with node:10-alpine the overall image size is just 70 MB, while with node:10 it's 673 MB. So while building your container images, make sure you know what base image you're using; don't just pick any image and start building on it, as that can create problems. The smaller the surface you have, the better for security, compared to having a big image where you have to scan everything; your transfer time and your scanning time all come down too. That's something you need to take care of. Similarly, when you're running containers, with Docker or any runtime in general, note that by default the user inside the container is root. Let me quickly show you: I'm on a machine with Docker installed, and if I run a container here, you'll see my ID is root; I'm logged into the container as the root user. Hopefully the font is okay at the back; can you see it? You can see the UID and the GID here. What I did in the previous step is start a container with just this command, and now inside the container, if I look at the ID, it's the root user.
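The two Dockerfile variants described above might look like this; the app details are assumptions on my part, the image sizes are the ones quoted in the talk:

```dockerfile
# Variant 1: full Debian-based base image -> roughly 673 MB overall
# FROM node:10

# Variant 2: Alpine-based base image -> roughly 70 MB overall
FROM node:10-alpine

WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "index.js"]
```

The only line that differs is the FROM line; everything built on top is identical, which is why the base-image choice dominates the final size.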
So even though being inside the container doesn't directly expose me to the outside world, I'm still the root user inside it. It's good practice, when building the image, to fix this. What I can do is add a user while building my Docker image, and then, because of the USER apprunner instruction, the eventual Python program runs as apprunner instead of the root user. That way, even if somebody gets into your container, they won't have root access. There's still the layer of the host system underneath, but it's good practice to use a non-root user to run your application. Hopefully that makes sense. Any questions? OK, so that's at image-build time: if you bake that USER option into your image, the container won't run as root. But there's also an option on docker container run itself; if you look at the help, there's a --user option that takes a UID[:GID] string. So instead of modifying the image, I could pass --user 1000, and if I look at the ID now, it's changed from root to 1000; the user I'm running as is no longer root. Similarly for Kubernetes: when you deploy your application on Kubernetes, this is the manifest of a pod definition. I'm assuming you know at least a bit of Kubernetes, so that's not a problem: a pod is a collection of one or more containers, and you can think of it as just a container for now.
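Before moving on to the Kubernetes side, the Dockerfile USER pattern just described might look like this minimal sketch; the user name "apprunner" and the Python app are assumptions, not the exact lab files from the talk:

```dockerfile
FROM python:3.9-alpine

# Create an unprivileged user (name is hypothetical)
RUN adduser -D apprunner

WORKDIR /app
COPY app.py .

# Everything from here on, including the final CMD, runs as apprunner, not root
USER apprunner
CMD ["python", "app.py"]
```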
If you're deploying on Kubernetes, then just like the Docker example I gave, you can specify runAsUser and runAsGroup, so the program also runs as a non-root user there. [Audience question] No, root on the host system is fine; what I'm talking about is this: what happened just now? What does this command do? It started a container. I am on the system as root; that's the root of my VM. But when I run this command, I'm starting the container as a non-root user, and inside the container my ID is now 1000. So if you're running an application in a container and somebody hacks the application and gets in, that user's UID will be 1000, not root. Sorry? Oh right, I haven't changed the GID yet; I said I would change that too, the GID along with the UID. Sorry? No, it's not like that. Generally, if somebody gets into your application container or VM, they can do whatever the permissions of the user running the program allow. So, for example, I might also want to set the group to 1000; I'm just giving an example here, I'd need to check the exact option, but we can try it in the Kubernetes world and I can show you there. Anyway, I'm going to show you a few demos on our platform, the one we deliver training on, but that's not what I want to focus on. Here is the lab we have for running as a non-root user; it's the same thing I mentioned earlier.
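The runAsUser and runAsGroup settings mentioned above sit in the pod's securityContext; a minimal sketch (the pod name and image are hypothetical) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo            # hypothetical name
spec:
  securityContext:              # applies to every container in the pod
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - name: app
    image: myorg/myapp:1.0      # hypothetical image
```

With this in place, processes inside the container report UID 1000 and GID 1000 even if the image itself defaults to root.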
So here we have the Kubernetes example, and as I execute this, it runs a Kubernetes pod, and I'm going to exec into the pod; it's still creating, I believe, let it create. What we're saying is: once somebody gets inside the container, they won't be able to do any root operations; they can only do what that particular user can do, even if your application is hacked. Makes sense? This is not about the root of the host system; it's about what's inside the container you're running. OK, I'm inside it now, and if I do ps aux, you'll see the processes are running as user 1000. That's not a root user, so they can't cause us much trouble. It's good practice not to run your application as the root user. Another thing you can do is mount your filesystem in read-only mode. When you're deploying your container, you can say that this container's root filesystem should be read-only, so that even if somebody gets in, they're not able to write anything. You can do this at the Docker level, and similarly at the Kubernetes level, by specifying that the root filesystem is read-only, so it doesn't become a security risk. But if you mount a volume inside the container, that volume is still writable, so your application can write content into it; you might have an application sharing a volume, and that volume would be writable while the whole root filesystem stays read-only; you can't write anywhere else. Let's quickly see the demo of that as well.
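In a Kubernetes manifest, the read-only root filesystem plus a writable volume described above might be declared like this (pod name, image, and mount path are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: myorg/myapp:1.0      # hypothetical image
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: scratch             # this mounted volume stays writable
      mountPath: /tmp
  volumes:
  - name: scratch
    emptyDir: {}
```

Writes anywhere except /tmp then fail with "read-only file system", which is exactly what the touch demo below shows.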
So I'm going to mount my filesystem in read-only mode. This is similar for Docker or any other container runtime; there will be an equivalent option. Let's wait for it to start. Meanwhile: how many of you believe that a root user can do everything? If my UID is zero, can I do anything, anywhere on the whole system? A superuser can do everything? How many of you say yes? Good to know; let's see that in action. But first let's finish the previous demo: we're inside the pod's container, and if I try to touch a file, you can see it fails: it's a read-only file system. That's how I made the filesystem read-only. OK, so the assumption is that a root user can do everything. Again, I'm on a system with Docker installed, and I'm going to run a container like last time. But before that, let me show you a few things so it makes sense. I have a host system with some network interface attached to it. Now, what does this ifconfig command do? It brings a particular interface up: it creates an alias network interface for me, eth0:1, with the IP we've given it. So if I do ifconfig, you'll see one more interface on my system, eth0:1, with that IP; it's just an alias I've created. You could think of any similar command here. Now I'm going to start a container using the option --network=host. What does this command do? Anyone?
With this option, the container shares the same network stack as the host. So if I do ifconfig inside the container, you'll see the container has the same interfaces as the host, correct? This looks perfectly fine: I have the eth0:1 interface inside my container. And what is my ID? Root. So I'm a root user, and I have the host's interfaces inside the container. But if I do ifconfig eth0:1 down to bring that interface down, it fails. My ID is root, I have the network stack of the entire host, and still I can't do it. Why is that? In Linux there's something called capabilities. Capabilities are groups of permissions that together make up what a superuser can do. If you look at man capabilities, you'll see the superuser's permissions divided into subgroups, and you can grant or take away those groups when you run a program. These are the different groups available to us. For example, take CAP_CHOWN, which lets you do a chown, change the ownership of a file. Suppose you're running a program as root: you're logged in as the superuser, everything is there. But then you fork a program, meaning you create a new process, and for that new process you can specify which permissions it gets; there you can take away or grant CAP_CHOWN. So even though that program runs as root, it can't change file ownership. Let me show a demo of that; it will make much more sense.
So I'll give an example here: I'm running the container with the option --cap-drop NET_RAW. Now see: I am the root user, I have the entire host network stack, and yet I can't even ping the internet; it asks, are you root? Well, my ID is root. So a root user, when you see UID 0, can't actually do everything: there are capability groups, and unless you have the complete set, you're not really the superuser. This way you can control what containers can do. By default, when you start a container under Docker or Kubernetes, they don't give you the permission to change the host's network interfaces from within the container, which is why that ifconfig eth0:1 down command failed for us earlier. But you can give it all away: if I now run a container with the --privileged option and try the same command, it works, because privileged mode gives everything to that particular container. So you need to be careful. If you just download a YAML file for Kubernetes or Docker from the internet and run it with the privileged flag on by mistake, and that program turns out to be malicious, it can own your complete system. When you're deploying containerized applications, make sure you're not setting this privileged flag, either with Docker or with Kubernetes. That was the Docker example; while deploying with Kubernetes, you can similarly control the privileged option.
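As an illustrative session of the two demos above (it needs a Docker daemon, and exact error messages vary by version), the commands might look like:

```shell
# Even as root inside the container, dropping NET_RAW breaks ping,
# because ping needs a raw socket ("permission denied" or similar)
docker run --rm --cap-drop NET_RAW alpine ping -c 1 8.8.8.8

# --privileged hands the container every capability plus host devices: avoid it.
# Network changes that fail in a default container now succeed.
docker run --rm --privileged --network=host alpine ifconfig eth0:1 down
```

The default Docker capability set already excludes things like NET_ADMIN, which is why interface changes fail even without any --cap-drop; dropping NET_RAW on top removes ping as well.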
In Kubernetes you have options where you can, for example, disable privilege escalation, and so on. With those we can control how the application is deployed and whether it can switch its permissions from one user level to another. I'll give you a few examples you can use to build your own security guidelines. Generally, in your Dev-QA-Ops cycle, Dev can do anything, and QA is also fine, but when an application goes to production, there need to be checks that are performed. These are some of the checks that have to be done. They can be done manually, like we're doing right now, or there are benchmark tools available that can help you perform these kinds of checks, making sure you're not opening things up to the world. Next: when you deploy any cluster or orchestrator, you can create partitions within the same cluster. This example is Kubernetes, but the same idea applies to other orchestrators. You can have different groups or teams working on the same cluster; here we have a QA team and a production team, and each can have its own namespace. A namespace is like a project: project one, project two. Within a namespace, beyond security, you can also control how much of the resources each team can consume: you can set quota limits on each namespace based on object counts, on CPU and memory, and so on. That way you control how much each namespace can consume.
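The per-namespace quota limits just described are set with a ResourceQuota object; a minimal sketch (the namespace name and numbers are hypothetical) might be:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: qa-quota
  namespace: qa                 # hypothetical namespace
spec:
  hard:
    pods: "10"                  # object-count limit
    requests.cpu: "4"           # total CPU the namespace may request
    requests.memory: 8Gi
    limits.memory: 16Gi
```

Once applied, pod creation in that namespace is rejected when any of these totals would be exceeded.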
Going one step further: our containers, or pods, are now in different namespaces. Say I have a cluster of 10 nodes and I've given two nodes to dev, two to QA, and the rest to production; out of 10, that's my division. Now you want to make sure QA doesn't, even by mistake, reach the production machines or production containers. You can control that with network policies, which define how containers, or pods, can communicate with each other. For example, here we have two namespaces: one called default and one called production. In the production namespace we have a pod, call it pod X, with a label app=front. What we're saying is: this particular pod can only be accessed by a pod that has the label app=back and belongs to the default namespace. So I can completely control who can come and communicate with me; that's a network policy. If any other pod tries to connect, it will fail: the network policy says you don't qualify, so you cannot connect to us. So when you deploy an application, you can put a network policy in place and define who is allowed to talk to it. Any questions here? Yeah, I'll talk about that. Any other questions? OK. Next, let's talk about authentication and authorization in the cluster.
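The policy described above might be sketched as follows; the labels are taken from the talk's example, but the namespace label on default is an assumption (and enforcement requires a CNI plugin that supports NetworkPolicy):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-back-from-default
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: front                # the pod being protected
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: default         # assumes the default namespace carries this label
      podSelector:
        matchLabels:
          app: back             # only pods labelled app=back may connect
```

Because the namespaceSelector and podSelector sit in the same `from` entry, both conditions must hold: app=back AND the default namespace.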
Here's how it works. I'm using Kubernetes as the example, but it's applicable to any container orchestrator you use. We have objects in the cluster: your containers or applications, your namespaces, and so on. And then you have a user who accesses those objects, for example by running a kubectl get pods or kubectl get nodes command. If I go to my app here, I run kubectl get pods and list my pods. kubectl is nothing but a user running a command: the user talks to the cluster and tries to access the objects. This interaction between the user and the cluster happens in three stages: authentication, meaning the user can log into the cluster; authorization, meaning what the user can do in my cluster; and then some further rules and validation checks. Only then can a user access my Kubernetes objects. Now, there are two kinds of users. One is normal users like us, trying to run commands. The other type is service accounts, and these users are quite unique: think of them as your programs. For example, take a broader case: you want a UI for Kubernetes, where you tell your users, go to this UI and perform some operations. That UI needs to access the Kubernetes cluster behind the scenes, and for that we create service accounts. These are the users that connect to your cluster behind the scenes and perform operations. When you create these service account users, you need to be very careful about what they can do in the cluster; don't just say OK to everything.
Again, if you download a YAML manifest file from the internet and run it, it might carry permissions that could destroy your cluster. You need to be careful about them. Here I'm giving an example of authorization: authentication means a user can log in, which I'll cover, and authorization means what a user can do in my cluster. Specifically, I'm talking about role-based access control (RBAC) roles. In Kubernetes we have objects, like the pods I mentioned. Is my user able to list pods, delete pods, scale pods? For example, I'm currently the admin user, and I can actually go and delete a pod: kubectl delete pod and the pod name. So I have permission both to create and to delete pods. But what we can control through the orchestrator is what a given user can do in the cluster: can they list pods, create applications, scale applications? All of this is configurable. What you see here is exactly that: I want these resources available to the user, and the user should be able to get, list, watch, create, and update, but not delete; that's the example we're giving. And this particular role is scoped to a namespace; we talked about namespaces earlier, a namespace is a way to partition your cluster, like a project. So we're saying: apply this particular role on the cloudyuga namespace. Once we have the role created, we can give it to a user, and through that we define what the user can do in the cluster.
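The role just described, get/list/watch/create/update but no delete, scoped to the cloudyuga namespace, might be sketched like this (the resource list is an assumption based on the talk's pod/deployment examples):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager
  namespace: cloudyuga
rules:
- apiGroups: ["", "apps"]       # "" = core API group (pods), "apps" = deployments
  resources: ["pods", "deployments"]
  verbs: ["get", "list", "watch", "create", "update"]   # note: no "delete"
```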
Hopefully that makes sense: we're creating a role and then granting it to a user to define what they can do. Any questions? OK, let's go ahead and see the example. I already created a user before starting the labs, so let's look at the RBAC example; I've finished the setup. After creating a role: I'm going to execute this command, which creates the role for me. I already have namespaces here; if I list the namespaces in my cluster, I have projects like cloudyuga and default. So I execute this command and it creates a role. The next step for me is a role binding: with a role binding, I bind the role to a user, a group, or a service account. Here I'm going to bind it to my user nkhare, but this could equally be a service account user, which means one of your programs; a program can be given a role too. The role here is deployment-manager, that's the name I gave it, and I'm giving that deployment-manager role to my user nkhare for my namespace cloudyuga. Let me do that and create the role binding as well. With the next command, I'll deploy my application. To quickly explain: for this particular cluster I have two users, an admin user and the nkhare user. Currently I'm running commands as the root or admin, but with this command I'm changing the context; context means I now want to run the commands as the nkhare user, a different user.
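The role binding step just described might look like this (user and role names are the ones from the talk; the binding name is my assumption):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: cloudyuga
subjects:
- kind: User                    # could also be kind: ServiceAccount
  name: nkhare
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
```

After applying it, commands run under nkhare's context are checked against the deployment-manager role in the cloudyuga namespace only.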
As I run this command, it succeeds, because my user now has the permission to create pods; as you saw, we granted pod creation, and deployment creation too. Let me do the pod creation here. Then I should be able to list the pods, because I'm passing this context at runtime, so I'm running the command as the nkhare user. I can list the pods, but now let's try to remove one. If I do a delete pod now, as we checked in the last step, delete pod and the pod name, you'll see it fails, because as per my role, we haven't granted that. This is what happens in a larger company: you'll have an operations team managing the clusters, and they'll hand out a configuration file per user. As a developer, what you get is a config file that lets you connect to a specific cluster on a specific namespace, where you have specific permissions. Your operations team controls what the overall policies are, and each user can only do certain things, on a specific namespace, with specific operations. Hopefully that makes sense. Any questions? No? OK. Next, specific to Kubernetes, you can do auditing: you can look at what happened, when it happened, who did it, and so on. You can enable auditing in Kubernetes, and similar things are possible in other orchestrators, where you can query audit APIs to answer questions like who logged into the cluster and what they did. That gives you an audit trail to help you audit the system as you go. Then, once you're running your applications: you might, for example, have a front-end application talking to a back-end application.
And that communication might require password authentication. For that you can use an object called a Secret, through which you can pass credentials to your applications: passwords, TLS certificates, or Docker registry credentials. For example, you might have a private registry holding your images, and in your deployment you want to provide the credentials and then pull the image. For all these purposes you can create different kinds of secrets, pass the credentials through them, and manage them better. Then there are benchmark tools available. There's something called the CIS benchmarks; CIS is the Center for Internet Security, and they release benchmarks for different things, for example Amazon Linux, AWS, Tomcat; they release these benchmarks against particular cloud providers or software like CentOS. Similarly, they have CIS benchmarks for Docker and Kubernetes. They're very simple to use: you download the benchmark, run it against your cluster, and it tells you what passes and what doesn't. Yes, there's also a tool called kube-bench from Aqua Security; it's an open-source tool the company built that implements the CIS benchmark for Kubernetes. You can download kube-bench and run it against your cluster; this one is for Kubernetes, but you can get an equivalent for Docker as well.
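Before moving on to the benchmark output, the Secret object described above might be sketched like this; the name and credential values are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-creds                # hypothetical name
type: Opaque
stringData:                     # stored base64-encoded as .data on the server
  username: appuser             # placeholder values
  password: s3cr3t
```

A pod can then consume these keys as environment variables (via secretKeyRef) or as files on a mounted volume, instead of baking credentials into the image.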
So you can execute this command, and it gives you the output I was showing you earlier: pass, fail, pass, fail, and so on. Then you can go and investigate each failure; every failed check has a description, so you can go to the CIS benchmark, find the exact definition of that check, and rectify it on your cluster. You can run these benchmarks at regular intervals and generate a report. Yeah, for both Docker and Kubernetes you have these benchmarks. On the Docker side there is Docker Bench for Security, an open-source script that checks a host against the CIS Docker Benchmark. The CIS benchmark documents themselves mostly read like a problem statement plus the remediation you have to perform. The paid products go further: if you buy a subscription from a vendor, you can download a report with the details spelled out for you. These benchmarks are very stable; they're effectively a standard. You then need to rectify the findings, either by writing a script or by hand. So kube-bench is open source for Kubernetes, Docker Bench for Security covers Docker, and other vendors have their own tools as well. If you look at the Docker benchmarks, they release one per version, for Docker 1.6, 1.11, and so on, so of course each benchmark targets a specific version. That's true.
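As a sketch of what working with that output might look like: the sample lines below imitate kube-bench's pass/fail format from memory, so treat the exact layout and check numbers as assumptions; only the grep at the end actually runs here.

```shell
# Fake kube-bench output for illustration (real runs need kube-bench installed:
#   kube-bench run > /tmp/kube-bench.out
# on a cluster node). One line per CIS check, tagged PASS/FAIL/WARN:
cat > /tmp/kube-bench.out <<'EOF'
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are restrictive
[FAIL] 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set
[WARN] 1.2.33 Ensure that encryption providers are appropriately configured
EOF
# Count the failures, e.g. to track progress between scheduled runs:
grep -c '^\[FAIL\]' /tmp/kube-bench.out
```

Each FAIL line maps back to a numbered check in the CIS benchmark document, which contains the remediation steps mentioned above.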
The Docker benchmark is mostly from the Docker daemon's perspective. The Docker API is common whether you run on Windows, Mac, or wherever, so it's mainly about the daemon per se. I don't know whether they have a Windows-specific one, but these are benchmarks they work on actively, and there's a CIS chapter in Bangalore as well that takes part in this. Okay, any questions? We still have two more minutes. Yeah, so again, there are paid products; I mentioned Twistlock and Aqua Security. These are companies that provide this kind of tooling. So one option, as I mentioned with kube-bench, is an open-source tool that will just execute the checks. The second option is to buy a paid product, which will tell you exactly what problems you have and how to rectify them. And the third option is to not manage the cluster yourself at all: buy a managed service from the likes of AWS or Google Cloud, where they run the whole control plane for you. There's also a company called Nirmata. They don't give you managed Kubernetes exactly; they help you install a Kubernetes cluster on your own cloud provider and then manage that cluster for you, including the security testing. I'll share the link for them if you're interested. And lastly, let me just open up my company website. Any questions? Yeah. Yeah, of course, that's the whole idea. Clair is a similar tool, but it does image scanning: you have a registry, you deploy Clair, and it gives you the vulnerabilities per image. No. Yes, you can do that.
But Clair is a tool for the image only; it doesn't do everything for you. There are multiple parts here, right? Building the container, running a cluster, and so on. For the image part of the application, Clair can help you, but only for that. [Question] Yeah, so CNCF, the foundation that runs Kubernetes now, has standard certifications there: the CKA and the CKAD. [Question] You talked about secrets, right? Where do we store those secret objects? Oh, that's a good question. There are two places you can store them. One, there is a key-value store in Kubernetes called etcd. When you deploy a secret object to the orchestrator, it gets inserted into that key-value store in the Kubernetes cluster. Let me quickly bring up our Kubernetes slide: here is the architecture of Kubernetes, and there's a key-value store which stores all the cluster state. By default, your secrets go there. That's one way of doing it. Or, two, you can integrate with a third-party tool like HashiCorp Vault, where your secrets can be stored, then retrieved back and used in the application. Okay. [Question] And how do we manage load balancing for this? Load balancing of what, the application? Application load balancing, yes, it happens. I'm not going to go deep into Kubernetes now, but it happens. [Question] Is there any tool to handle that load balancing? It's all inbuilt; there's no extra tool required. It happens internally, and we'd deploy a load balancer on top of the cluster to handle it. [Question] Would you recommend any tools?
For what? For the load balancing? No, there's an inbuilt Kubernetes mechanism; the load balancer is available. What happens is... I don't want to take up our time here; let me talk to you offline about it. Yeah. I think that's all the time we have for questions, sorry. Okay. Thank you. Thank you.
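As a postscript to the secrets question from the Q&A: since secrets in etcd are only base64-encoded by default, one hardening step is to enable encryption at rest on the API server. Below is a minimal sketch of an EncryptionConfiguration; the AES key shown is a throwaway example (base64 of a fixed 32-byte string), so never reuse a published key, and treat the whole file as an illustration rather than a production setup.

```shell
# Hypothetical EncryptionConfiguration; the key is a throwaway example
# (base64 of the 32-byte string "0123456789abcdef0123456789abcdef").
# Generate a real one with: head -c 32 /dev/urandom | base64
cat > /tmp/encryption-config.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: MDEyMzQ1Njc4OWFiY2RlZjAxMjM0NTY3ODlhYmNkZWY=
  - identity: {}   # fallback so existing plaintext data can still be read
EOF
# The API server would be started with this flag (cluster-admin operation,
# shown as a comment only):
#   kube-apiserver ... --encryption-provider-config=/tmp/encryption-config.yaml
grep 'kind:' /tmp/encryption-config.yaml
```

With this enabled, new secrets written to etcd are AES-encrypted instead of just base64-encoded; Vault remains the alternative when you want secret storage outside the cluster entirely.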