Hello, everyone. I'm Rahul Jadhav, and I'm here to talk about security. So before you doze off, I hope you had your dose of caffeine. I'm going to talk about securing Kubernetes secrets. One of the things people usually say when it comes to securing Kubernetes secrets is: why don't I simply use a secrets manager and be done with it? What I'm going to cover here is the threat models and the issues that remain even if you end up using a secrets management store: what are the different threat models you might have to consider to further secure your secrets, and how can you prevent secret leakage? That's what I'm going to talk about here. I'm Rahul Jadhav, one of the co-founders and CTO of AccuKnox. We are a security organization, and we are very paranoid about managing secrets, for two reasons. First, as a security organization, it's very important to keep our own secrets secure. Second, we are also managing some secrets on behalf of our customers. So when we started looking at secrets, we asked: what is the worst thing that can happen? The worst thing is that we could get shut down if we don't manage them well. So just using a secrets manager is not good enough. What is the worst-case scenario? What happens if the secrets manager itself gets compromised? What can we do to further protect it? These are the questions we were asking ourselves, and that's where some of this material came from. Briefly about myself: I'm also one of the maintainers of a CNCF project called KubeArmor. We have a booth here, and at 5 PM we have an announcement of our raffle winner, so please be there at 5 PM. On the agenda, first let's look at the threat model for secrets management itself: what can an attacker possibly do, and from which perspectives? There are different attack possibilities.
One is from the client side, where the attacker might get hold of the desktop or the actual user device itself. The second is secret-store-side attack possibilities. What I mean by that is, in a lot of cases you might have on-prem deployments, and you might want to use an on-prem deployment of a secret store. Within our organization, that was the case: we were deploying a secret store on-prem, and we were really worried, paranoid even, about how you harden that particular secret store, and then understanding the risks and the security options. Then there's protecting the secret store's consumption path. A lot of times, the secrets are finally consumed by an application through an environment variable, or through a secret mount point, a Kubernetes volume mount point. How do you ensure that these mount points do not become the source of your secret leakage? That's what I'm going to talk about. So, on the levels of secret protection: a lot of organizations have already started making use of a secrets management store, like HashiCorp Vault, CyberArk Conjur, Confidant, and several others, and that's a good step one. In most cases, step one is to just make use of Kubernetes secrets, and then they move towards something like this, which is fine. Then configuring the RBAC properly, that's another step. Scanning for hard-coded secrets in the target environment, like your GitHub repositories and your container images, is another step that most organizations do today. If you're not doing it, you should definitely consider it, because the risk of a secrets compromise is extremely high. And then you might have to think about advanced protection. When I talk about advanced protection, what I mean is: what happens if you have a privileged attacker compromising your solution or the system itself? Second, how do you prevent secrets leakage?
And third, and this was our primary concern as well: how do you protect against ransomware attacks? Basically, if our secrets manager gets compromised, our complete application would face downtime, which means none of our customers would be able to use the security solutions. So this was a pretty big problem for us. Now, the first threat model that is usually talked about in the context of secrets management is: how do you control user access itself? This is pretty important, because this is where most of the leakages happen. Your Windows desktop machine itself might get compromised through any of several attacks, and then someone can use that computer to actually log into the secrets manager and get the secrets. This is an important point to note, but there are a lot of solutions out there which can tackle it; in particular, having multi-factor authentication and things like that might help in that context. But I'm not here to talk primarily about this, because this is not necessarily a Kubernetes problem. Instead, I'm going to talk about the threat model for secrets management from the server's perspective. What I mean by that is: what happens if an attacker ends up compromising the server, gets full control of the storage, and can corrupt or delete the secrets, encrypt the secrets if it's a ransomware attacker, or manipulate the server logs? That's what I'm going to talk about here. In the sample deployment you see here, the secret store is locally deployed, on-prem. What are the worst-case scenarios that can happen then? That's what I'm going to cover. One key thing I'd like to note in this context are some of the guidelines that the US government has set for secrets management itself.
They say that you should ideally have access control, least-permissive access, to these secrets. I'll talk about what that amounts to. And this is another threat model which a lot of organizations do not consider. What it means is that, finally, the secrets have to be consumed by the applications, and when that consumption happens, when the application tries to use the secrets, there is a possibility of secrets leakage. A lot of attacks start here as well, where you have service account tokens which are mounted directly on the file system. An attacker gets hold of a known vulnerability within your target pods or workloads, gets into your system, and becomes part of the pod. And because the attacker is part of the pod, they have direct access to these access tokens, and from there onwards, lateral movement is much easier. For any attacker, the first step is usually what MITRE calls initial access, wherein they try to get into the system; it could be through remote command injection. The second or third step is usually to look for credentials with which they can go and attack the workloads of interest. Essentially that involves two kinds of things: one is the recon phase, and the other is getting the credentials themselves. Any attacker who does recon is one you have to worry about, because these are not script kiddies. So that is something you should be very focused on. So I'm going to talk about how the secrets get injected into your target workloads, and what the points of compromise are there. On the first point, about mounting secrets in user deployments: a lot of secrets management tools provide a way to mount the target secrets into the application space using one of three mechanisms.
Of course, there's another model as well, where the application itself is changed to directly make an API call to the secrets manager. Typically, that approach is not followed, because organizations do not want to tightly couple with a particular secret store. So they end up using environment variables or a volume mount point, or, in some cases, the secrets management tool injects the secrets as Kubernetes secrets. That's another model. Now, CyberArk has a tool called Summon, which injects secrets from Conjur, KMS, and other providers into services as environment variables. And this is not something very specific to CyberArk; even the HashiCorp Vault injector does something similar for a Kubernetes cluster. So basically, what you see here is the secrets provider, where all the secrets are kept; an intermediate service, which could be Summon or the HashiCorp Vault injector; and then the target workload, which is finally going to consume the secret. The blue box, or the purple box, that you see there is the target secret. Now, one of the things is, it's very simple: if you want to get access to the environment variables of any application, it's very, very easy from an attacker's point of view. For example, here you see that I have a MySQL workload, and MySQL, in some cases, allows you to inject or use secrets from an environment variable. If I'm a privileged user, or if I have privileged access to the target environment, I can go into the /proc filesystem and just dump the environ file. As you can see, in this context, as part of the /proc filesystem, there's an environ file. You just dump that file, and all the environment variables are visible. You see, it is this simple for an attacker to get hold of your secrets.
Imagine what can happen if the attacker gets hold of your MySQL database, or your RDBMS secrets; the possibilities are limitless from the attacker's point of view. And again, this is not something new. There's an organization called MITRE in the security space. This is the organization which tells you, from an attacker's point of view, what tactics, techniques, and procedures an attacker will typically use to gain access to and exploit the target system. They have this attack ID, T1083, which talks about the same thing, where an attacker can, as simply as that, leverage the environ file and get access to the secrets. Now I'm going to talk about remediation, and how to protect against some of these things. This is where the project we have been working on, a CNCF project called KubeArmor, which we've been working on since 2020, comes into play. It's a runtime security engine which does something called inline mitigation, wherein access to a resource is prevented before the access can actually happen. There are multiple other solutions which follow a detect-and-respond kind of model, where you detect a malicious access and, in response, take a remedial action in the form of killing the process, quarantining the node, or deleting the pod itself. But by the time you take a remedial action, the secret is already leaked, or, if it's a ransomware attacker, your volume mount point is already encrypted. So you need a tool with inline mitigation, and this is where KubeArmor is differentiated: KubeArmor is the only project in CNCF which does inline mitigation. On the screen here, you see some of the tech that KubeArmor uses under the hood to achieve this, the primary one being LSMs, or Linux Security Modules. And LSMs have been around for quite some time.
They have been there for the past couple of decades, used in the context of virtual machine and bare-metal hardening. What KubeArmor is trying to do is bring them into the context of containerized and orchestrated workloads. Essentially, it is orchestrating AppArmor, SELinux, and a new LSM on the horizon called BPF LSM, which I'll talk about in a little more detail, to secure some of these things. And you see another box, the eBPF box. eBPF is now becoming fairly popular. Most of the observability and monitoring solutions already leverage it to a large extent to get visibility into what the application behavior is, what the workloads are doing, and what the malicious events are. But that's primarily from an observability perspective. If you want remediation, if you want to stop attacks, if you want to prevent attacks, eBPF alone cannot help, because it cannot prevent. You can observe but not prevent using eBPF at that layer. So you need LSMs. And BPF LSM is one of the new techniques that was introduced in the Linux kernel a few years back and is mainstream today. That is the secret sauce which KubeArmor uses. It's no longer a secret, because here I am talking about it at KubeCon. With BPF LSM, as a user, you can specify a policy which gets converted into BPF bytecode and injected at the LSM hooks in the Linux kernel, which allows it to do inline remediation. That's the differentiation the project brings in. Now, how is this relevant in the context of secrets management, you may ask? That's where I'm going next. With KubeArmor, what you can do, essentially, is multiple things. You can say: only allow a particular system behavior in the target workloads.
What that might mean is that you might say: only allow the actual binary to access its own environment variables, and no one else. There exists something called mandatory access control, or MAC, in Linux; LSMs are a breed of MAC. With mandatory access control, even if you are a root user, you won't be able to access a particular resource unless the LSM allows you to. So even if you are a privileged user, like root, you won't be able to access the environ file. And that's where KubeArmor comes into play. There are a lot of other things KubeArmor does that can be used for this kind of protection. For example, you might say: only allow this set of application binaries to execute in the target pods. Typically, what attackers do is use package management tools to download external or accessory tooling, which can then further help them in recon and things like that. All these tools are by default shipped as part of container images. A lot of organizations and workloads are moving towards scratch images or distroless images, which is a very good practice. That's great. But even if you have that, you will end up having vulnerabilities in the application binaries themselves, which might allow an attacker to do remote command injection, which means the attacker would be able to inject their binaries into the target pod, and those binaries would have execute permissions. And just because the attacker's binary is part of the pod, it has direct access to all the assets within that pod. That's what KubeArmor is trying to leverage in the context of secrets management: with KubeArmor, the user can say that only the target binary is allowed to access its own environment files. That's an example. Next: protecting Kubernetes secrets and volume mount points.
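The two controls just described, "only these binaries may execute" and "only the app binary may read its own environment", can be expressed as a KubeArmor policy. The following is a sketch based on KubeArmor's policy schema; the `app: mysql` label, the policy name, and the binary path are assumptions for illustration:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: mysql-least-permissive   # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      app: mysql                 # assumed workload label
  process:
    matchPaths:
    - path: /usr/sbin/mysqld     # only this binary may execute
  file:
    matchDirectories:
    - dir: /proc/                # environ files live under /proc/<pid>/
      recursive: true
      fromSource:
      - path: /usr/sbin/mysqld   # only the app binary may read them
  action: Allow                  # allowlist: everything else is denied
```

With `action: Allow`, the matched resources become an allowlist, which is the "least permissive posture" model the talk keeps returning to.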
As I mentioned before, there are three things that get mounted, using which the application can access its own secrets. One is Kubernetes secrets mounted within the pod, which means you can use the API server to connect to Kubernetes and then get the secret. The other is the volume mount point, which is a file system path in itself. You can enforce constraints such as: allow access to certain paths for only certain processes. Even if you're a root user, you won't be able to access that path; it will be accessible only by the target binary, so only your application binary would be able to access those things. You can also grant read-only access broadly but allow write access to certain processes only. All these combinations are possible, and that's what can be used for secrets management protection. Here is a real-world example that we did with HashiCorp Vault. Vault keeps all its customers' secrets, or user secrets, within a particular volume mount point, which is /vault. And if you look at the actual execution pattern of Vault, you'll see that there is one particular binary, /bin/vault, which is the only binary that requires access to this sensitive asset containing all the secrets. We have shown that you can attack the target volume mount point and encrypt the contents such that they will no longer be available to the users themselves, which means that as a ransomware attacker, I can get into the pod, encrypt the file system assets, and then no one will have access to those assets. With KubeArmor, what you can do is say: no one is allowed to inject a binary or an executable into the target pod at runtime. If that's not enough, you can say: only allow certain binaries to execute in the target pod, and nothing else.
So even if an attacker's binary somehow gets injected, it won't be allowed to execute. Let's say it somehow executes: it still won't be allowed to access the target secrets volume mount point. And that's what we have done. A sample policy is shown here, which shows how you can specify the rules. Internally, these rules get converted into either AppArmor policies or BPF LSM bytecode and then injected at the LSM hooks. That's the differentiation KubeArmor brings into play. Now, you might ask: it might work for HashiCorp Vault, but what about other secrets managers? Everything works on the same principles. Every sensitive asset has to be mounted as part of a file system, and it needs access from a certain set of processes. Even a MySQL database keeps its data at a particular volume mount point, with mysqld, mysqladmin, and certain other specific binaries having access to those resources. You want to make sure that you harden these workloads that are getting deployed in the target Kubernetes clusters and protect them. That's essentially the proposition here. With KubeArmor, you can achieve that least-permissive security posture, wherein you can say: only allow these specific things to execute within the target workloads, and only that set of executables will have access to certain sensitive assets. You can as well say that only this set of processes can do network communication. What we have generally seen is that in the context of Kubernetes workloads, when you spawn a pod, almost everything within that pod has unrestricted access, and that's where the problem begins. Well, that's primarily the threat model I wanted to make you aware of. So basically, my point is that when you think about securing the secrets themselves, it's not good enough to just have a secrets manager or a secret store in place.
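The Vault hardening case described above can be sketched as a KubeArmor policy along these lines. The /vault mount point and the /bin/vault binary are the ones named in the talk; the policy name and pod label are assumptions, and this is a sketch of the schema rather than the exact policy shown on the slide:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: vault-volume-protect     # hypothetical name
spec:
  selector:
    matchLabels:
      app: vault                 # assumed pod label
  process:
    matchPaths:
    - path: /bin/vault           # only vault itself may execute
  file:
    matchDirectories:
    - dir: /vault/               # the secrets volume mount point
      recursive: true
      fromSource:
      - path: /bin/vault         # only /bin/vault may touch it
  action: Allow                  # everything else is denied
```

An injected ransomware binary then fails twice: it cannot execute (process allowlist), and even if it could, it cannot read or write /vault (file allowlist with fromSource).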
It's a big first step, because everything else follows from it. But the important thing is that you still have to worry about how to harden the target environment itself, so that you go beyond just using a secrets management tool. Secondly, you need to understand the risk associated with a secrets compromise. Our exercise started with exactly that in mind: we have customer secrets to manage, and if any of these gets leaked, we will soon become the headline of some tech journal and be out of business very soon. So it was very important for us to take this seriously, and that's where it all began. Third, hardening the secrets management tool can be an effective strategy for on-prem deployments. On-prem deployments especially have to consider these aspects, because you are the one managing the secret store in the first place. Thank you. That's all from my side. Any questions? Anything? Well, I would just like to remind you that the KubeArmor booth is outside, and if you have scanned for the raffle, please be there at 5 PM; we'll be announcing the winners. Thank you. Oh, wow, that's a good question. So how can confidential computing, or confidential containers, help? Confidential containers go to the next level: you have hardware-based security. Yes, eventually the aim for KubeArmor is to weave together hardware-based security with the KubeArmor policy. That's the eventual goal we are working towards. We are not there as yet, but that is something we have on the roadmap. That's right. So as a user, you can eventually push your secrets, or the secrets management tool itself, into confidential containers. If you can do that, that's even better. Yes, sir. Hello. Yeah? So what is your recommendation for maintaining secrets, like dynamic, password-protected, rotated, and so on? Oh yeah, those are the best practices that every organization needs to follow.
Rotating the tokens, rotating the secrets, those are considered a given. The point that we are trying to make here is that even if you end up rotating the secrets, the secrets are still available directly at file system paths, and someone can make use of them. So rotating the secrets is a best practice that every organization should follow, and that's mandated by certain audit and compliance regulatory requirements anyway. So that's a given in this case. So if we are implementing secret rotation, it also requires syncing between the CyberArk tool and our pods and containers, right? Correct. So yeah, eventually you would require the secrets to be rotated. CyberArk, HashiCorp Vault, and all these tools can be used for secrets rotation itself. But eventually the application pulls the secret, and then the secret is available locally, and that becomes the threat point. That's the point here. Yeah, and the most important thing: if you are using any third-party tool like CyberArk, we will use some service account that communicates with your container. So how can we protect that service account? Because that can also be compromised, right? You mean the service account that has direct access to it? Yes. So that comes under RBAC-based policies. There are multiple levels, multiple threat layers, on which you might want to operate. RBAC is one; that is what I mentioned in the beginning. RBAC is a given: you need to have appropriate RBAC policies in place. What we wanted to bring in here is that there is another threat vector that is typically not taken into consideration. RBAC, certificate rotation, or token rotation is something that is typically taken into account, but our intention was to bring awareness of these kinds of attack possibilities as well. OK, thank you. Got it. Thank you. Hi. Yes.
So does KubeArmor help protect against heap dumps of a container? Let's say someone takes a heap dump, and the secret is part of it because it's in memory. Anything which is a file system object, a process object, or a network object is fundamentally protectable by LSMs, and KubeArmor integrates with LSMs to protect those assets. So since the heap dump is a file system asset, yes, it can be protected using KubeArmor. Hello. So you've talked about KubeArmor being used to secure secrets which are mounted by vendors like Vault, et cetera. Yes. There's another mechanism which I've seen being used a lot: sealed secrets. What is it? Sealed secrets: basically encrypting secrets, storing them in your repositories, and mounting them as environment variables. So how do you handle that? Well, if you look at service account tokens, they are themselves mounted inside /var/run/secrets, right? So that's a file system object. The other case is the secrets which you access through the API server. I'm talking about sealed secrets. So, let's say your credentials, sealed, S-E-A-L-E-D. It's a project by Bitnami. Sorry, what is it? Sealed secrets. Sealed secrets. Yeah, sealed secrets. OK, all right. Look, finally, the point is that if you're inside the target pod, you have the same access as the binary itself, which means you have access to all the primitives that your target application has. So the point we are trying to make here is that you might want to harden your target workloads so that no other binary, apart from your target application binary, is allowed to execute in the target workloads. With that, even the sealed secrets problem is taken into consideration. Thank you. I have a question. Suppose you have a secret that needs to be accessed by multiple binaries, right? Yeah. Maybe I want to exclude some binaries, but it's something like a global secret. Yes.
In such a use case, can KubeArmor also help? Yes, yes. So the point is that you have sensitive assets, and they might not necessarily require access by just one particular process; they might require access by a set of processes. That's completely possible. Basically, in the rule here, let me take this example, where the fromSource is a particular binary: it takes an array of binaries as well. So you can say, these are the set of allowed binaries. In fact, you can say that all the binaries from a particular path have access to that file system asset, and no one else. All those rules are possible. Can I define it the other way around? I want to exclude rather than include; suppose I want to exclude some binaries. Exclude, right. Yes, that is also possible. Those are block-based rules. If you look at this, these are allow-based rules, which means: allow specific things, deny everything else. But if you want to specifically say, I know that this binary is vulnerable and I don't want it to access anything beyond this, those specific rules are also possible, wherein you say the action is Block, this is the source, and this is the target asset. Completely possible, yes. OK, thank you. Thank you.
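A block-based rule of the kind described in that last answer might look like the following sketch, again following KubeArmor's policy schema; the policy name, pod label, excluded binary, and secret path are all hypothetical:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: deny-curl-secret-read       # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-app                   # assumed pod label
  file:
    matchPaths:
    - path: /secrets/global-token   # hypothetical shared secret file
      fromSource:
      - path: /usr/bin/curl         # the known-vulnerable binary to exclude
  action: Block                     # deny this source; other binaries unaffected
```

Unlike the Allow policies earlier, a Block rule denies only the named source and leaves every other process's access to the secret untouched, which fits the "global secret, minus a few binaries" use case in the question.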