Okay, cool. We're recording now, and we're going to have a talk today on understanding the CNCF security landscape. Hi, my name is not Mike Coleman. I'll be delivering on behalf of Mike as he's not feeling well today. My name is Nigel Douglas, but all the content today was put together by Mike Coleman, so I certainly suggest reaching out to him if you have any questions. Just about Mike himself: he's my colleague. We work on the same developer advocacy team here at Sysdig, and we work directly on the Falco project. Mike has a lot of experience in the cloud security landscape and has worked on different projects within the space. You can certainly catch him at the tag @mikegcoleman; it's pretty much the same username on everything. So for the agenda today, we're going to look at what makes cloud native different. Some people are still moving into that cloud native space and just need a rundown on what cloud native means exactly. Once we understand what cloud native is, we want to look at the existing landscape and the CNCF, who are pretty much the overseers of everything cloud native. What I mean by that is all these open source projects that get brought into the Cloud Native Computing Foundation; the foundation certainly does the oversight for them. So nearly all the projects we're going to talk about today are under the oversight of the CNCF, and as such, there's a landscape view of all the projects under that oversight. From there, we'll talk about what CNAPP means. Again, for a lot of people CNAPP is new terminology. When we think about the industry analysts, the likes of Gartner, they define these acronyms: CWPP, CIEM, CSPM, CNAPP. They don't do it deliberately to confuse us; they actually do it to consolidate technologies. What I mean by that is, as software vendors, we could all have different definitions of what you need as an end user or consumer to solve industry problems.
But since they're the analysts, they've gathered all this data, and they collectively say: look, a CNAPP is the better choice because it covers the wide range of issues. It addresses those issues, but it does so in a consolidated form, which should be more affordable since you're buying a single platform. So what we're going to talk about today is whether it's possible to gather the open source technologies that are under the oversight of the CNCF and consolidate them into something that resembles a CNAPP capability or platform. Now, as a disclaimer, for this talk we're not saying we're going to provide a supported CNAPP platform. We're just covering whether the technologies can address the problems or pain points that are defined within that CNAPP category. Now, the technologies themselves are going to fall into different buckets, and that makes sense. As Kubernetes and cloud-native technologies evolve, the analyst categories evolve too: originally you'd have something like CWPP, then we see CDR, and now we're seeing CNAPP. So by definition, some technologies may fall into CWPP and CDR, or they might be CIEM but also possibly CSPM. We'll talk about all of that in a while. It's just important not to hold us directly accountable for where projects may fall into more than one category. But as far as a comprehensive list goes, this gives you an idea of what the technologies are. A lot of the CNAPP platforms, Sysdig included, are going to use an amalgamation of these different technologies. In Sysdig's case, we have technologies like OPA for policy enforcement and Falco for real-time detection. We even have network policy generation; I'm not sure exactly how we achieve it, but presumably we're interacting with the CNI, the container network interface in Kubernetes, to achieve that. So let's get started. What is cloud native?
Now, cloud-native doesn't mean just the cloud platform; it goes beyond the cloud platform. We're really talking about the applications, so basically non-monolithic applications. Let me step back: what is a monolithic application? Monoliths are, generally speaking, the old way of developing applications. You had this rather large application, it resided on a physical server or virtual machine, and it was clunky. If we made updates to the front end, how was that going to interact with our database? That was just the way we used to develop applications. Now applications are a bit more resilient. The idea behind what we call cloud-native is that they're designed to work well in the cloud. They're all containerized, compartmentalized into different sections, where you have your front end and your back end, and they can all be updated seamlessly. Those quick updates are what make cloud-native fantastic. And how are cloud-native applications achievable? Well, we talk about technologies such as Docker for the containers and Kubernetes for orchestrating those containers. All of this is cloud-native. So when we talk about cloud-native, essentially we're not talking about just the cloud platform; we're talking about everything in the delivery of these applications. Basically non-monolith, non-on-prem, the new way of thinking about container delivery, whether that be serverless or Kubernetes orchestration. It's all cloud-native, all designed to deliver in the cloud, with modern tools and techniques to achieve that, and all of the following are cloud-native. So we're not going to talk about perimeter-based firewalls; we're going to talk about network interfaces and network policy as alternative technologies. And the idea is this is all fast, and we can deliver applications quicker without impacting service delivery.
So customers see, essentially, 100% uptime at all times. With that, even the perimeter has changed; I alluded to it a second ago. In the past, when you had a monolithic application, you had a firewall, and that firewall was blocking or allowing specific IP addresses. That was all fine when an application was static; you would say, look, I have an application with a fixed IP, block these IPs to that fixed IP. But containers are ephemeral; they don't last very long. My pod lasts five minutes, then it dies and is recreated, and the orchestrator does all of that. It gets a new pod ID, it gets a new IP address. Nothing is static, nothing lasts very long in cloud native, and that's all part of that whole resilience story: quick delivery, everything updates quickly. So when it comes to detecting intrusions, we can't rely on archaic systems of monitoring based on IP or monitoring based on application name. That state is going to change regularly, so it's more or less stateless. Another thing to point out here is that everything around the perimeter has changed. When we have a cloud-native application, most likely it's being delivered through the cloud, so the cloud provider themselves are the ones managing those external connections. This is different from when it was all running on your own enterprise servers. We're also talking about exposure, which is much larger in cloud native. By default, Kubernetes workloads are on a flat network, which means there are no restrictions between pods by default; they can all freely talk to each other. And if we're using a load balancer, we can expose that to the public internet. That's a scary thought; we don't want to overly expose resources and workloads to the public internet where they can be subject to a threat.
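To make the flat-network point concrete, here's a minimal sketch (the pod names and policy shape are hypothetical, not from the talk) of how a default-deny plus allow-list model changes connectivity decisions between pods:

```python
# Toy model of pod-to-pod connectivity: by default Kubernetes is a flat
# network (everything is allowed); adding a default-deny policy flips the
# model so only explicitly allow-listed flows are permitted.
def is_allowed(src: str, dst: str, default_deny: bool,
               allow_list: set[tuple[str, str]]) -> bool:
    if not default_deny:
        return True  # flat network: every pod can reach every pod
    return (src, dst) in allow_list

# Flat network: the frontend can reach the database directly.
assert is_allowed("frontend", "db", default_deny=False, allow_list=set())

# With default-deny, only declared flows work.
allowed = {("frontend", "api"), ("api", "db")}
assert is_allowed("api", "db", default_deny=True, allow_list=allowed)
assert not is_allowed("frontend", "db", default_deny=True, allow_list=allowed)
```

In real Kubernetes this is expressed as NetworkPolicy objects enforced by the CNI plugin; the sketch only shows the decision logic.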
So we need to be able to control access to services, to workloads, to network connections. All of this needs to be achieved both at the cloud layer and at the workload layer. That's what we're talking about with cloud-native application protection platforms: securing both the cloud services and the underlying workloads. And how do you detect unusual activity? Since cloud-native workloads are so different from traditional technology, we need different monitoring tools to achieve this zero-trust architecture, if you want to call it that. How is that going to be different? I'll cover all of that in a second. So this is the general anatomy of a cloud-native application. You have your VM, which is no real different from a VM you would see in an on-prem environment, except we're talking about an instance running in the cloud. Containers, the workloads we're seeing, are going to run on that instance. Kubernetes at its core is still going to have virtual machines: what we call the control plane, and the worker nodes. Those are still VM instances, and containers run on those VMs. There is the alternative approach of serverless, where we're deploying our workloads through a cloud serverless offering. But in either case, whether we run them on a VM or go serverless without a server, you're still going to have the same controls that need enforcement. You're going to have identity and access management. The cloud provider, as you see at the bottom, whether it be AWS, Microsoft Azure, Google Cloud, or IBM Cloud — there's a whole bunch of cloud providers that will do this — should have all these services out of the box. They'll have an equivalent of an IAM service for defining what users can and cannot do. They'll have their own virtual firewall technology, like VPC controls, governing what connections are allowed in a given environment. They'll have object storage.
They'll let you define where you're going to store the secrets that will be accessed for databases or other resources. There's going to be logging, real-time logging. And when we're talking about monitoring health in a cloud provider or working at the workload level, we're going to have to plug into these different services. We're going to need to access those logs from the cloud provider, the audit logs. Then we're talking about network security: if there is a load balancer configured at the cloud layer, we're going to need to access that audit log service, and we're going to need access to those security groups, so that if there's a change to a security group, or if we need to enforce something in a security group, we have access to those APIs. So cloud-native applications at their core are running on cloud infrastructure. Moving on to the next slide: this is what I was talking about briefly a second ago, the cloud-native landscape. If you were to Google "CNCF landscape" or "project landscape", you would see this; it's much bigger than what you're seeing here. It's all the projects that are in the CNCF. Some of them are sandbox projects, then you have the ones that are incubating; they're going through the internal, gradual process of making sure that they're essentially audited, checked that they comply with a bunch of stringent controls, to make sure that they are the best of the best. As you can see, some projects like OPA, and Falco as of last week, are now graduated projects, which means that they've gone through this rigorous process and met all the conditions to say they meet the highest standards.
So when I look at that top left corner and I see Falco, OPA, Kyverno — we'll talk about these technologies later in the session — it adds to the credibility that we're not just picking a niche technology that no one's heard of, which you might have one minor use case for. Rather, we're picking technologies that are widely adopted, that meet stringent controls, and are most likely going to be integrated into a CNAPP technology. Now, this is as of the third of, sorry, the second of March 2024. This table is going to update on a regular basis, so again, check out the landscape if you want an up-to-date view. With that, what is a CNAPP? I mentioned it briefly a while ago: a CNAPP is a cloud-native application protection platform. It's a bit of a mouthful; there are a lot of words there. But what it means is that it integrates the capabilities of existing acronyms defined by industry analysts: CSPM, which is cloud security posture management; CIEM, which is cloud infrastructure entitlement management; CWPP, cloud workload protection platforms; and CDR, which is cloud detection and response. That will cover anything from identity and access management and network security to the workload, the platform infrastructure, the data, the secrets that you're managing, right down to the logs. Again, we need to look at everything within the platform, and a CNAPP does the job of tying in all these loose ends. You don't have to buy a CSPM, you don't have to buy a CDR technology. You should now be able to, as an organization, vet a CNAPP, understand its capabilities, and then say: okay, credibly, we could consume or build that technology. So what are those building blocks? CSPM is the first thing to cover; since it's posture, we're looking at the health of the environment.
For most people, that's vulnerability management: do we have a lot of vulnerabilities in the environment, and how do we scan for those? We'll use technologies such as Trivy or Clair to check whether there are vulnerabilities in the environment. But posture management also includes general misconfigurations. As much as with a vulnerability we're identifying something that's insecure, a general misconfiguration could be over-privilege. When we're creating our cluster, let's say using infrastructure as code to deploy the environment, we may introduce certain insecure misconfigurations which leave the environment open to attack, and that falls under CSPM. There are also different parts of CSPM, like KSPM, which is Kubernetes-specific posture management, and that may include things like threat detection for compliance. You might ask: why is threat detection under posture? Well, think about regulatory compliance frameworks, the likes of PCI and HIPAA; they define what the insecure controls are. Even the OWASP Top 10 for Kubernetes defines what the insecure controls are. So if you have a threat detection tool, or any kind of introspection or monitoring tool, and it can tell you an insecure configuration was made here, and you're able to detect that in real time, that certainly plays into KSPM. While I'm not advocating that you have to use a real-time detection tool specifically for CSPM, they do have a use case there. Then there's CIEM, cloud infrastructure entitlement management. That's easier to understand: it's specifically about identities and their entitlements. An identity could be at the cloud layer, so we're talking about IAM, identity and access management.
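The misconfiguration side of CSPM can be sketched very simply: scan a declarative configuration for known-bad settings and report findings. The field names below are invented for illustration, not any real provider's or scanner's schema:

```python
# Toy CSPM-style posture check: scan a parsed configuration (think of it
# as infrastructure-as-code after parsing) for a few well-known
# misconfigurations. Field names are illustrative only.
def audit_config(cfg: dict) -> list[str]:
    findings = []
    if cfg.get("s3_bucket_public", False):
        findings.append("S3 bucket is publicly readable")
    if cfg.get("anonymous_auth", False):
        findings.append("anonymous auth is enabled")
    if cfg.get("privileged_containers", False):
        findings.append("privileged containers are allowed")
    return findings

report = audit_config({"s3_bucket_public": True, "anonymous_auth": False})
assert report == ["S3 bucket is publicly readable"]
```

Real tools like Trivy or Checkov run hundreds of such checks against actual IaC and cloud APIs; the point here is just that posture management is rule evaluation over configuration state.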
Usually the cloud providers provide their own services for defining identities and the permissions for those identities, those users in the cloud: what they can and can't access. But even at the workload level there are tools like role-based access control, RBAC, which has been around in Linux since long before cloud native, and that defines what CRUD actions you can perform: create, read, update, delete. Again, this is all CIEM. For CWPP, we're talking about workload protection. Now, this is where there's a gray area for a lot of people. Did CDR replace CWPP? Certainly not, but there are overlaps between the two that are harder to define than in the other buckets. So CWPP is workload protection. The last piece, "platform," is a bit of a strange one; really it's more like CWP. What we're talking about here is workload protection: just making sure workloads aren't doing something insecure. This ties back to things like vulnerability management. Certainly we want to thwart threats if they are malicious, so we want to mitigate the risk; that ties into CWPP. But for a lot of people this will be monitoring the container, monitoring Kubernetes. If something malicious is done, or some insecure behavior — let's say someone opens a terminal shell in a container and tries to download a malicious package — that's all CWPP. The point is that it's workload-specific. For workloads, we're focused on the containers, the pods; if we're talking about serverless, again, it's the application you're delivering. So let's focus on security specific to the workload. If it's a workload with a high-risk vulnerability, we want to mitigate that: CWPP. If it's a container, whether serverless or running in Kubernetes, and we want to detect and prevent that threat: CWPP.
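The RBAC idea above — roles mapping to permitted CRUD verbs — can be sketched in a few lines. The role names and verb sets are made up for illustration; Kubernetes RBAC expresses the same idea through Role and RoleBinding objects:

```python
# Minimal RBAC-style check: a role maps to the CRUD verbs it may perform,
# and a request is granted only if its verb is in the role's allow set.
ROLES = {
    "viewer": {"read"},
    "editor": {"create", "read", "update"},
    "admin":  {"create", "read", "update", "delete"},
}

def can(role: str, verb: str) -> bool:
    # Unknown roles get an empty permission set (deny by default).
    return verb in ROLES.get(role, set())

assert can("viewer", "read")
assert not can("viewer", "delete")
assert can("admin", "delete")
```

Note the deny-by-default stance for unknown roles — the same least-privilege posture the talk advocates for CIEM.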
So CWPP is host and container detection and response, whereas CDR is, in some cases, let's say, an extension of CWPP, because we're still monitoring container and Kubernetes runtime activity and doing host threat detection and response, but we're also extending it to cloud detection and response. In the cloud provider, a user might do something malicious: they might be trying to access secrets, delete resources, do things they shouldn't be able to do. Are we able to detect it? Well, unlike workload protection, which fundamentally uses things like system calls in Linux, or connects to the Kubernetes audit logs to see what is happening at the Kubernetes layer — containers, all of that, is done through system calls — for CDR you're going to need to plug into the cloud audit logs. Things like CloudTrail in AWS, or whatever Google's equivalent is called, Google Cloud audit logs, and Azure probably has its own Azure platform logs. If you can plug into the cloud provider's audit logs, that will help us detect and respond to threats in the cloud. So that's the main difference: CDR is sort of an extension into everything cloud. Some people used to call it CNDR, cloud-native detection and response: everything from the cloud down to the workload. But CWPP, before CDR existed, was focused purely on the workload layer. So let's start off with CSPM. As I said a while ago, CSPM is cloud security posture management. As you can see in the graph, it's going to cover everything around the data, the management, even the network. Anything posture-related, anything that's drifting from the original controls. That covers all kinds of assets — in the breaches we'll look at in a second it was things like flight charts and navigation materials — so say you have secrets, or you've configured network controls around your security groups.
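The CDR idea of plugging into cloud audit logs can be sketched as matching event records against a watch list. The event shape loosely mimics a CloudTrail record, but the field names and the rule set are simplified illustrations, not a real detection engine:

```python
# Sketch of a CDR-style detector over cloud audit events. Actions that
# disable logging or read secrets are classic defense-evasion and
# credential-access signals; the list here is illustrative.
SUSPICIOUS_ACTIONS = {"DeleteTrail", "StopLogging", "GetSecretValue"}

def detect(events: list[dict]) -> list[str]:
    alerts = []
    for e in events:
        if e.get("eventName") in SUSPICIOUS_ACTIONS:
            alerts.append(f"{e.get('userIdentity', '?')} performed {e['eventName']}")
    return alerts

events = [
    {"eventName": "DescribeInstances", "userIdentity": "alice"},
    {"eventName": "StopLogging", "userIdentity": "mallory"},
]
assert detect(events) == ["mallory performed StopLogging"]
```

This is the same pattern Falco applies when its plugins consume CloudTrail or Kubernetes audit events, just stripped down to the matching step.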
If someone's making changes to those different records, we want to be able to verify that there's been a change to a static configuration. That's essentially keeping the posture. You'll see in PCI or SOC 2 there will be a bunch of controls saying users who aren't at a certain clearance level shouldn't be able to make changes to certain things. And if they do, you certainly need to have a technology or system in place to either prevent them from doing it or, at minimum, detect via a compliance check that someone's doing something they shouldn't be able to do. That's the posture management part. So here's the piece of data: there were three different breaches — FlexBooker, Pegasus Airlines, and Civicom — and they were all related to misconfigured S3 buckets. In these cases, personally identifiable data, whether photos, signatures, electronic flight bag software source code, plain-text passwords, or secret keys, could all be accessed through these S3 buckets. So the use case is pretty strong for a CSPM technology: to be able to control and prevent how people access resources in the cloud, because by default there are fairly limited controls enforced to stop people from accessing resources they shouldn't. An attacker could, as these breaches proved, access resources they really shouldn't be able to. That's the use case for CSPM: preventing access to sensitive controls, or preventing someone from deleting or modifying controls — say, network policies or security group enforcement — in a way that would let them access sensitive credentials more easily. The next thing is KSPM. KSPM, as you can see by the box, is specific to Kubernetes; we're not talking about the cloud layer anymore, just Kubernetes. Here we could put technologies like Kyverno or OPA; they could sit in either KSPM or CSPM.
And the reason for that is they both carry that whole deploy-time enforcement, whereas OPA Gatekeeper would be purely Kubernetes enforcement. So I could recommend Gatekeeper for Kubernetes controls on what you should and shouldn't be able to do, but Kyverno could certainly serve either a CSPM or a KSPM use case. As I say, there's overlap between both, and in some definitions I've seen, people may not talk about KSPM at all because it gets integrated into the broader CSPM logic. But what are the options? I probably should have opened the slide when I mentioned this a second ago. Technologies like kube-bench certainly qualify under posture management, and the reason is that they show, using the CIS benchmark, what the best practices are for Kubernetes specifically. It gives you a checklist: are you green or red? Let's say anonymous auth: is it set to true? If it's set to false, great, it's disabled, that's a green check; if it's set to true, no, that's an X. So it goes through a checklist of all the insecure configurations, and you generate your report based on CIS, the Center for Internet Security. It's a highly recognized, standardized procedure, and you can go back and start reconfiguring your controls to make sure you get a green list saying: I've applied all my best practices. A technology like Crossplane is also great; it uses infrastructure as code, or IaC, to create specific control planes for all sorts of workload controls. And at the runtime level, as I mentioned, technologies like Kyverno are Kubernetes-specific; they do all those enforcement controls, but they also do certain kinds of cloud controls.
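The checklist logic described above is easy to sketch: each check pairs a setting with its safe value, and the report marks each one pass or fail. The two checks below are loosely modeled on CIS benchmark items but simplified for illustration:

```python
# Toy kube-bench-style checklist: compare observed settings against the
# values the benchmark considers safe and produce a PASS/FAIL report.
def run_checks(observed: dict) -> dict:
    checks = {
        "anonymous-auth": False,          # safe value: disabled
        "authorization-mode-rbac": True,  # safe value: RBAC enabled
    }
    return {
        name: ("PASS" if observed.get(name) == safe else "FAIL")
        for name, safe in checks.items()
    }

result = run_checks({"anonymous-auth": True, "authorization-mode-rbac": True})
assert result == {"anonymous-auth": "FAIL", "authorization-mode-rbac": "PASS"}
```

Real kube-bench reads these settings from the kubelet and API server configuration on the node; the sketch only shows the evaluation step.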
OPA is a cloud-native policy engine; I believe it can tie into the cloud layer, whereas, as I said, Gatekeeper is policy enforcement at the workload level. And then Falco: while it doesn't do the enforcement controls that say what you can and can't do, what it does instead is monitor all insecure activity. By plugging into the cloud audit logs and the Kubernetes audit logs, and monitoring system calls, if we detect that a user has tried accessing secrets or tried to delete resources — whether they were successful or not, it doesn't matter — by monitoring the activity we're able to go back and actually audit it, and it's brilliant for reporting. We can say: look, these users tried doing this, and we took this enforcement action based on that. If you're relying purely on enforcement but have no monitoring at all, it's very hard to produce reports showing what actually happened. So I think all the technologies play a part. Technologies like kube-bench are certainly important for reporting on whether you have insecure configurations in place. Falco is certainly important for monitoring all activity, anomalous or not, to see what is going on. And the enforcement engines are important because they're the last line of defense; they're going to make sure that someone can't make changes they shouldn't be able to make. So that's all CSPM. The next part is CIEM, cloud infrastructure entitlement management. This covers every identity, human or not. You can have identities in the cloud for robot accounts — there are a lot of those — but also for users in the cloud. Say John has access to AWS: what can he do in AWS? Well, you're going to want to enforce IAM controls, identity and access management.
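The monitoring role Falco plays can be sketched as matching a stream of syscall-like events against simple rules. Real Falco rules are written in YAML with a rich condition syntax; the rule names and event fields here are invented to illustrate the idea only:

```python
# Very loose sketch of runtime detection in the spirit of Falco: each rule
# is a name plus a predicate over an event; matching events raise alerts.
RULES = [
    ("shell spawned in container",
     lambda e: e["proc"] in {"bash", "sh"} and e["container"]),
    ("sensitive file read",
     lambda e: e.get("file") == "/etc/shadow"),
]

def monitor(events):
    alerts = []
    for e in events:
        for name, cond in RULES:
            if cond(e):
                alerts.append(name)
    return alerts

events = [
    {"proc": "nginx", "container": True, "file": None},
    {"proc": "bash",  "container": True, "file": None},
    {"proc": "cat",   "container": True, "file": "/etc/shadow"},
]
assert monitor(events) == ["shell spawned in container", "sensitive file read"]
```

Note that the detector only observes and reports; it enforces nothing, which is exactly the division of labor between Falco and the policy engines described above.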
And IAM says: I have access to 50 out of 5,000 services, and in those I have read-only access, something like that. Now, CIEM shouldn't really be limited to the cloud layer. That said, it's very hard for us to list specific open source technologies here, and the reason is that if we're talking about IAM, your cloud provider already does a good job of providing that. So what I would say is: use the existing technologies provided by your cloud provider. It's going to be slightly different for each provider, but the point is they give you the technologies; use them where appropriate. So I wouldn't be recommending open source technologies here. What I would mention is that KSOC, another software vendor, recently released a policy generator for RBAC. That's more at the Kubernetes and Linux host layer, defining the CRUD actions on workloads: what a user or a service account can and cannot do inside Kubernetes. So there are different things that define entitlement management; it's pretty much everything around access and entitlements. What I would say is: if you can enforce zero-trust logic in your environment where possible, brilliant. What you'll notice more and more with platforms like Sysdig, for instance, is that they'll do CIEM policy generation. They'll monitor users and their activities in the cloud, and if certain services are not being accessed over a 24-hour period, or a week, or whatever, they'll use fairly simple logic to say: these services are never accessed by these users, or these services are rarely ever touched by this user — maybe they only ever needed to read what was in there and never actually wrote any changes. Then they'll recommend an appropriate policy that won't break anything but will ensure users don't have overly permissive access.
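The "fairly simple logic" of CIEM policy generation boils down to a set intersection: keep only the permissions actually exercised in the observation window. The permission names and log shape below are made up for illustration:

```python
# Sketch of CIEM-style policy right-sizing: compare the permissions a
# user holds against the actions actually observed in audit logs over
# some window, and recommend keeping only the used ones.
def recommend_policy(granted: set, used: set) -> set:
    # Anything granted but never used is a candidate for removal.
    return granted & used

granted = {"s3:GetObject", "s3:PutObject", "ec2:TerminateInstances"}
used = {"s3:GetObject"}

assert recommend_policy(granted, used) == {"s3:GetObject"}
```

A production system would add safeguards (longer windows, break-glass roles) before actually revoking anything, since a permission unused for a week may still be needed quarterly.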
And that way, even if that real human account is compromised, there's a limited blast radius for the attacker: limited things they can do in the cloud, as far as damage goes. Apologies — slight cough. As far as CIEM goes, when we're talking about recommending OSS technologies, something like Keycloak is great for single sign-on. And even when we talk about Falco: say you have Okta as your identity provider, configured for identity and access management in front of the cloud provider. Falco can monitor the activity on the IdP, the identity provider, and that way, if there is anomalous activity — someone trying to make changes in Okta to get access to the cloud provider — you can monitor that in Falco too. So although I didn't list it there, that's a genuine use case. The same logic would apply to something like Prometheus, Loki, and Grafana: if you can aggregate real-time events from other technologies, including Okta or other IdPs, or from the cloud provider itself, pulling in those CloudTrail logs, you can do a pretty good job of monitoring what users are and aren't doing as far as permissions go, and then take appropriate action. Technologies like Terraform, OpenTofu, and Pulumi are all perfect for defining consistent templates of what users should be able to do. You can create, for instance, complex infrastructure-as-code templates that say: here's our infrastructure, this is what's going to be spun up, these users will have access to it, and they will only be able to perform these actions within the environment — all as a defined template from day one. Rather than treating security as an afterthought, this is the whole DevOps, DevSecOps methodology of tying security into the DevOps workflow. So here are just some more statistics regarding CIEM.
I don't know exactly where these come from; I think they're taken from our own usage research report. 90% of granted permissions are not used. Tying back to what I was saying a second ago: if they're not used, and an attacker comes in and makes changes, are we monitoring any of this? And if we're not monitoring it, then it's really in our best interest to limit what users can and can't do. We should also monitor what they're doing, but either way we should limit the blast radius, so an attacker can't go into a service we're not monitoring and start abusing it for crypto mining, or whatever operation they want to run, or reconnaissance. 58% of identities are non-human roles. As I was saying a second ago, it's not all just Bob, Mary, and Joe; you're also going to have a bunch of accounts like ABC123 used for, say, a CLI accessing the cloud provider — one piece of technology accessing another piece of technology, non-human identities. Are we monitoring non-human activities? How many permissions do we give to non-human identities? If it's a lot, and they're compromised, are you monitoring it? I might notice that my password has been changed, since as a user I log in with that password, but we don't tend to rotate stale credentials on these non-human identities, and that's a serious concern. And of those, 98% of permissions have not been used for at least 90 days. So we're looking at stale accounts. Worst case scenario — and it's a terrible thing to talk about — if there were a layoff at your company and 100 people were removed, do we have a plan of action for removing those old accounts, or do we just have a bunch of IAM profiles sitting around in the cloud? What stops a user who has left the company from still accessing the cloud provider? This is all CIEM: if credentials are getting old, do we force the user to reset them?
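An audit for the "98% unused for 90 days" problem is straightforward to sketch: flag identities whose credentials haven't been exercised within the window. The identity records below are invented for illustration:

```python
# Sketch of a stale-identity audit: flag identities whose credentials
# haven't been used in max_age_days or more.
from datetime import date, timedelta

def stale_identities(identities, today, max_age_days=90):
    cutoff = today - timedelta(days=max_age_days)
    return [i["name"] for i in identities if i["last_used"] < cutoff]

today = date(2024, 3, 2)
identities = [
    {"name": "alice",         "last_used": date(2024, 2, 20)},
    {"name": "ci-bot-legacy", "last_used": date(2023, 9, 1)},
]
assert stale_identities(identities, today) == ["ci-bot-legacy"]
```

In practice the `last_used` data comes from provider features like IAM access-advisor or credential reports; the sketch just shows the cutoff comparison.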
If a user has been removed from the company, do we have a plan of action to remove those old credentials? There are a bunch of ways of looking at it, but please do factor in non-human roles, because they're a massive security gap. So, CWPP. This is the vulnerability management, threat detection, and hardening side of it, and again, it can cover anything from containers to VMs to serverless. This is pretty self-explanatory, I guess; what we're trying to say is: make sure your workloads are secure. If we detect vulnerabilities, what is our plan of action for removing them? As I mentioned earlier, technologies like Trivy are great for detecting vulnerabilities. Then technologies like Falco are great for threat detection. There's Tetragon as well; it has response actions, so you can send a SIGKILL to kill a threat, but that might tie more into CDR, so we'll come back to that in a while. Now, this is probably not coming from the most recent report — probably 2023 — but 87% of containers have high or critical vulnerabilities that are not patched. This is just a reality, unfortunately: there's a huge number of vulnerabilities running in existing workloads, creating surface area for attackers to abuse. When we look at the operating systems — because every container image ships its own operating system userland — we'll notice that most of them are using Red Hat Enterprise Linux. I personally use Ubuntu a lot, but the reality is only 16% are on Alpine. And why is Alpine such an important thing? Well, Alpine images are around 5.7 megabytes, as on screen, whereas the larger alternatives are more like 228 megabytes, so they put a certain load on your environment; there's bloat. People might use the bigger images for two reasons. The first reason is that you have all your dependencies in place, and that's great for your application if it needs them.
But then sometimes it really is just "I deployed this base image before, and I'm just going to keep doing that over and over again." There's very little thought gone into base images. And that's scary when we look back at the previous slide and the number of vulnerabilities associated with those more bloated base images. So yeah, base image OS selection can reduce bloat by 97.5%. And that's probably the biggest piece of advice I would give when we're talking about CWPP: be really serious about which images you choose to use. Because if we can reduce the vulnerabilities and reduce the exposure, we're handling the majority of the CWPP problem. And then, hypothetically, if that same lightweight image, which has fewer vulnerabilities, does get abused, well, we still have our threat detection and response actions in place, but we've reduced the attack surface. And looking at the images that were analyzed, Sysdig did a report a year or two ago that identified over 1,800 malicious container images, and 608 of those were crypto mining. And that seems pretty obvious in hindsight, because you think about what Kubernetes, what cloud native, means fundamentally. We talked about the ideas of resiliency, scalability, short lifespans. These are all great for something like crypto mining, because, you know, you probably have no resource constraints in place by default, and workloads can scale up more and more as needed. So by default they're well suited, in a bad way, to abusing resources. And due to their short lifespan, it's very hard to do incident response and forensics if you don't have the native technologies of a CNAPP in place to monitor whether a crypto miner was used and what actually happened when they were abusing our container for mining operations.
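On the "no resource constraints by default" point: Kubernetes does let you cap what a container can consume, which at least blunts cryptomining abuse. Here's a minimal sketch building such a spec as a plain dict; the field names follow the standard Kubernetes container `resources` schema, but the limit values and the helper itself are arbitrary examples, not a prescribed policy.

```python
# Sketch: a container spec fragment with explicit CPU/memory requests and
# limits, so a hijacked container can't scale its mining workload without
# bound. The specific values here are illustrative examples only.

def limited_container(name, image, cpu="500m", memory="256Mi"):
    """Build a container spec dict with resource requests/limits set."""
    return {
        "name": name,
        "image": image,
        "resources": {
            "requests": {"cpu": "100m", "memory": "128Mi"},
            "limits": {"cpu": cpu, "memory": memory},
        },
    }

spec = limited_container("web", "myapp:1.0")
print(spec["resources"]["limits"])
```

In a real manifest this fragment would sit under `spec.containers` in a pod or deployment; the point is simply that limits are opt-in, so somebody has to set them.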
And if it's in the base images, then they can do it across many different pods and just keep coming back; as long as they're pointing to the same mining pool, they keep collecting rewards. So, as I say, Kubernetes, the fact that it's about delivering applications at scale, seems like an obvious target for an operation like mining: making cryptocurrency off a fairly simple operation. So how do we avoid this? You can use tools like Snyk to scan application dependencies earlier in your development pipeline, to make sure you're not introducing insecure dependencies into your environment. Then technologies such as Harbor provide artifact registry signing and scanning. Again, it's just making sure that nothing has changed: once an image is signed, we know it's the approved, standard one that we wanted, and there's been no drift from the intended design. Tools like Sigstore focus specifically on the supply chain, and that includes the signing and verification technology. And if we want to reduce our vulnerabilities, if we don't have a reason for using images that carry a large list of vulnerabilities, then we can use containers such as Wolfi or Alpine Linux, which are minimal, close to what we'd call distroless. Now, what does that really mean? Alpine Linux is a Linux distribution that was designed to be small, simple and secure. It's built on BusyBox and OpenRC. Basically, instead of using things like glibc, the GNU core utilities, systemd and all of that that's usually in other container images, Alpine takes a different approach, making it one of basically the only container base images that doesn't ship the GNU core utilities. Fewer dependencies, less load is brought in with Alpine Linux, and that's a great one to choose as your base image going forward.
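One low-tech complement to signing and scanning, in the same "make sure nothing has changed" spirit, is pinning images by immutable digest instead of a mutable tag like `:latest`, which anyone with push access can re-point. A small sketch of such a check (the regex and helper are my own illustration, not part of any of the tools mentioned):

```python
# Sketch: check whether an image reference is pinned to an immutable digest
# (repo@sha256:<64 hex chars>) rather than a mutable tag. A tag like
# "alpine:3.19" can be re-pointed at new content; a digest cannot.
import re

DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref):
    """Return True if the image reference ends in a sha256 digest."""
    return bool(DIGEST_RE.search(image_ref))

print(is_digest_pinned("alpine:3.19"))                 # mutable tag
print(is_digest_pinned("alpine@sha256:" + "a" * 64))   # immutable digest
```

A linter like this could run over your manifests in CI, alongside the signature verification that Harbor or Sigstore tooling provides.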
That is, if you don't have a reason for using RHEL or Ubuntu or something else. So CDR, what's that all about? This is detection and response. We talked about detection at the workload level, but there's also the response part. CWPP was never really, you know, big on response capabilities for those detections; it was more about making sure we detect and know what workloads are doing. There was never a specifically defined need to kill a pod or kill a process when it's identified as an issue. So there was a gap in the terminology. CDR is detection and response, similar to what you'd expect from EDR, XDR, any of these detection and response technologies, but for the cloud. Now, as mentioned, by now people have a fairly good idea of what cloud native is and how it's different, so now we're talking about cloud native detection and response. We can detect malicious activity in your IAM, in your security groups, or even just changes to buckets or anything else in the cloud provider, as well as at the workload level; CDR is focused on that. So, can we detect it? Yes. With a technology like Falco we can monitor CloudTrail audit logs, or any other audit logs from the providers, and then, with an automation action through Falcosidekick, we can mitigate the risk in real time, or close to real time, using serverless approaches if you want to go down that route. And technologies such as Tetragon can do something like send a SIGKILL at the workload level to kill a process we identify in the container, which is really useful. All of this ties into CDR, but CDR is so much more than just workloads; it extends to the cloud provider as well. Now, here we're looking at companies taking about 207 days to identify and 70 days to contain a breach. And more and more we're hearing about the average total cost of a data breach.
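To make that detect-then-respond loop concrete, here's a hedged sketch of what a Falcosidekick webhook target might do: receive a Falco-style alert and decide on a response. The field names (`priority`, `rule`, `output_fields`, `k8s.pod.name`) mirror Falco's alert JSON, but the action policy and the function itself are hypothetical illustrations, not part of Falco or Falcosidekick.

```python
# Sketch: map a Falco-style alert payload to a response action, the way a
# serverless Falcosidekick webhook target might. The policy here (kill the
# pod on critical-or-worse runtime events, otherwise notify) is an example.

def choose_action(alert):
    """Return ("delete_pod", pod) for critical events with a known pod,
    otherwise ("notify", rule_name)."""
    priority = alert.get("priority", "").lower()
    pod = alert.get("output_fields", {}).get("k8s.pod.name")
    if priority in ("emergency", "alert", "critical") and pod:
        return ("delete_pod", pod)  # a real handler would call the K8s API
    return ("notify", alert.get("rule", "unknown rule"))

alert = {
    "priority": "Critical",
    "rule": "Terminal shell in container",
    "output_fields": {"k8s.pod.name": "web-6d4cf56db6-x2x4v"},
}
print(choose_action(alert))
```

The same shape of handler works for cloud-layer events too; the response just changes from killing a pod to, say, revoking a key or reverting a bucket policy.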
It's continuously going up. But again, by monitoring all the anomalous activity at the cloud layer, we're reducing our time to detect and respond from weeks, not even days, down to within an hour or less; you want to be talking about minutes. Because the costs are crazy; we're measuring them here in millions of US dollars. And from 2022 going into 2023, they've comfortably exceeded four million per breach. And this number can only go up, I'd say, especially as adoption of cloud continues to rise. So briefly, I just mentioned technologies like Falco and Tetragon. The reason I didn't add more technologies to the CDR part is that it really is just detection and response; it's more aligned with traditional EDR, but for the problem points of cloud native and the cloud provider. Tetragon does a great job of runtime observability and enforcement for containers and Kubernetes. Falco does the same, but then also provides comprehensive monitoring across Okta, GitHub, and the cloud providers as well. So, key takeaways of today's session. Cloud native applications have a large attack surface. I'm sorry to say it, but it's just true. We talked about flat network design, so no network enforcement by default. No resource limits, with regard to the abuse done by mining activity. There are often no RBAC controls really in place for create, read, update, delete actions. And at the cloud layer there's not a whole lot of difference either; in the cloud provider you're also expected to enforce those controls yourself. You should adopt these technologies gradually. This is a journey when it comes to moving from a monolith to cloud native, to a microservices architecture, and it's something we need to really look into: how it's going to affect your business, how it's going to benefit your business.
There is an increased need for least privilege and zero trust. What you should take from this session, from the data and research reports we talked about, is that most organizations just aren't enforcing those kinds of controls, and it's hard, right? We've got sporadic technologies that aren't well linked together. A CNAPP does a good job of tying up the loose ends to give a comprehensive view of controls. And it's also a single solution that can be implemented and managed by different teams, whether it be DevOps or security, in a collaborative manner. So again, this will address business issues such as that large attack surface, but it is a collaborative approach. And that's why we're proposing a CNAPP, and we hopefully have given you an idea of the different technologies that come together to make that CNAPP. So with that, I hope today's session was useful, and we're looking forward to seeing you again soon. Bye for now.