Welcome, everybody, and thank you for joining today's webinar for the Cloud Native Computing Foundation. My name is Alessandro Vozza; I'm a Principal Engineer at Microsoft and a CNCF Ambassador, and I'm your host for today. We're going to have an interesting webinar presented by Gadi Naor, CTO and co-founder of Alcide. Just a few housekeeping items before we get started. First of all, you cannot talk during the webinar, so please just listen in. There's a Q&A box at the bottom of your screen; if you have any questions for Gadi, please put them there and I will read them out at the end of the webinar, so you'll have to hold on until then. We're going to leave 15 minutes at the end to answer your questions. Very important: this is an official CNCF webinar, so we have to adhere to the CNCF code of conduct. Please refrain from using inappropriate language, and please respect everybody, the participants and the presenter. That's it for housekeeping; now I'll give the floor to Gadi, if you're ready to present.

I am ready, thank you. Hello everyone, and thank you for joining me. This is my first webinar of the year, so I'm very excited to do this. In this webinar we are going to cover the Kubernetes audit log and dive into why it is a gold mine for security. To briefly introduce myself: my other obsession, besides Kubernetes and security, is skateboarding.
I was born and raised in Tel Aviv, Israel, and I spent most of my career as a kernel developer at companies like Check Point, building firewalls, VPNs, and all those things that feel a lot less cloud native. I then moved to a startup that built distributed firewalls in the VMware kernel, and spent a few more years at Juniper Networks doing cloud security. At some point I got upgraded to doing cloud native work, with Kubernetes at the center of it, and presently I'm the CTO and one of the founders of Alcide, a company that is purely focused on end-to-end Kubernetes security as a whole.

Before I dive into the Kubernetes audit log and why it matters, I want to set the stage: where does this piece of the puzzle fall within the overall security of our Kubernetes clusters? In a typical constellation we have our CI and CD pipelines, and these are two completely different beasts. In CI we normally do source scanning and then build the code into containers. If we did a good job, image scanning won't find many issues; if we did a less good job, we find ourselves sifting through a lot of vulnerabilities that come from the operating system and from a lot of stuff that doesn't need to be in a container in the first place. Once we bake those artifacts, we wrap them together with Kubernetes configuration files, normally using subsystems like Helm or Terraform or some combination of scripting and templating, and at the end of the day we deploy those assets into our target cluster. At this point, and I will touch on this again later, the resources go through what in Kubernetes language is called the admission phase.
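As a hedged illustration of that admission phase, this is roughly what registering a validating admission webhook looks like; the names, namespace, and service here are made up for the example:

```yaml
# Hypothetical example: the webhook name, service, and namespace are invented.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-hygiene-check
webhooks:
- name: pod-hygiene.example.com
  rules:                          # which API operations get intercepted
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
  clientConfig:
    service:                      # in-cluster service serving the webhook
      name: hygiene-webhook
      namespace: security
      path: /validate
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail             # reject pods if the webhook is unreachable
```

A mutating webhook (the kind that injects sidecars or secrets) is registered the same way with a `MutatingWebhookConfiguration`.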
Admission allows you to inspect resources before they are introduced into the cluster. This is a very important point in time where you can apply security policies on top of the ones you enforce during CD, and all of this relates to the hygiene of the Kubernetes cluster. As you move into the workloads themselves and the underlying infrastructure, from a runtime perspective we want to employ network security and process-level security on whatever is running inside the cluster. When I say whatever is running inside the cluster, the first thing we want to take care of is our applications, but just as important is the pod as a whole, which is the vehicle for our application containers, and we don't want to forget the host itself. The third and last component we want to make sure we cover is the audit log, which captures all the API interactions, or invocations, made by users or by automated services running inside our cluster. We can take very specific actions and learn a lot from the audit stream, and this is going to be the focus of this session.
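To make that concrete, here is roughly what a single captured audit event looks like when shipped as JSON (the shape follows the `audit.k8s.io/v1` Event schema; the user, IP, and user agent are made-up illustrative values):

```json
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "stage": "ResponseComplete",
  "verb": "list",
  "requestURI": "/api/v1/namespaces/default/pods",
  "user": {
    "username": "jane@example.com",
    "groups": ["system:authenticated", "dev-team"]
  },
  "sourceIPs": ["203.0.113.7"],
  "userAgent": "kubectl/v1.17.0 (linux/amd64)",
  "objectRef": {
    "resource": "pods",
    "namespace": "default",
    "apiVersion": "v1"
  },
  "responseStatus": {
    "code": 200
  }
}
```

We'll walk through the important fields in an entry like this later in the session.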
So let's take a deeper step into what the Kubernetes API server is. Essentially, the API server is the brain, or at least a very important piece of the brain, of the cluster. In the Kubernetes control plane, every request submitted to the API server goes through four stages. The first is authentication: we establish the identity of the principal that would like to perform an action or request against the cluster. At this point the request processing switches over to the authorization stage, where we determine whether the established identity is allowed to perform this action; this is specifically what we configure with all those RBAC configurations as part of provisioning the cluster resources. If we pass this gate, the resource is submitted to the admission control phase, where two main things happen, because there are two types of admission controllers. Mutating admission controllers can change the resource being admitted into the cluster; one good example is Istio, where the injection of the sidecar is done using a mutating admission controller, and if you think about tooling like secret injection and similar subsystems, you can implement those with mutating admission controllers as well. Once we pass that gate, we go through the validating admission controllers, which can say yea or nay based on compliance checks, hygiene checks, or any policy we would like to apply to the final version of the resource. Once we've done that, the resource goes through a Kubernetes validation stage, where we validate that the fields and everything conform to the expected layout of the resource, and from this point onward it is admitted into the cluster.

Now, the API calls that drive this entire machinery I just described are captured by the API server. So if we ask ourselves who actually accesses the API server: because Kubernetes natively encompasses the notion of operators, controllers, and control loops, we can break it down into three main groups. The first is human operators. These are users that access the cluster, normally either with a specially written Kubernetes client using the client libraries that the Kubernetes SDK offers, or with the kubectl command line, or with Helm deploying into the cluster. The second group is the system components. For example, the kubelets that run on the nodes and represent the Kubernetes agents continuously probe the API server; once a kubelet gets a request to schedule a pod on its node, it will fetch from the API server the config maps and secrets relevant to that pod. So there is ongoing access by the system components to maintain the desired, or expected, state of the resources. Another good example: every deployment we push into the cluster will, in a cascading chain of events, fire up a replica set and its underlying pods, and those system components also access the cluster under a special designated group when we look at it from the API server's perspective. The last group is service accounts. These are service accounts that we provision as part of deploying resources, where we grant certain
workloads certain permissions to perform against the API server. One good example would be operators: if you think about Prometheus, for instance, this is normally a privileged component, and when I say privileged it's with respect to which APIs it is allowed to access on the API server; this is the kind of resource that requires access to the API server.

So if we zoom out a little: we have a growing number of components that access the API server, and every API invocation made by these role players, or principals, is recorded by the API server. Effectively we have a very large stream of data that is policy driven: you can tweak which data flows into those events, and how verbose those logs are. From a security standpoint, we would like to surface signals that are meaningful, actionable, and usable in our day-to-day security operations of the cluster. So one of the main challenges is: how do we crunch this raw data into meaningful insights about the cluster? For example, and we'll dive into this deeper later on: is someone exploiting a vulnerability in my API server? It can be a known vulnerability, because we didn't upgrade the server in time, but it can also be an unknown vulnerability that still leaves signals or traces inside the audit log. It can be a scenario where stolen credentials from users, or stolen tokens from service accounts, are being used or abused inside the cluster. We also have compliance-based checks: if someone execs into a pod, that is something an auditor, from a compliance standpoint, would like to keep track of. And another example: if a component has a misconfiguration in its RBAC, that also leaves traces inside the audit log.

So how do we actually get the audit stream? This is a relatively moving target across the different versions of Kubernetes you may be using. The Kubernetes-native approach, which is still an alpha-level API introduced in version 1.13, is the ability to stream the audit log to audit sinks that reside inside the cluster. It's a new resource that lets you register an audit log target, and the Kubernetes API server will stream the audit log to that audit webhook. Essentially it lets us plug an inspection point for the audit log into the cluster itself, so you can think of it as something you would build on to employ security analysis or misconfiguration analysis as part of your cluster security infrastructure. To take a few more examples from the cloud providers: in GKE there is by default an audit policy in place that streams the audit log to Stackdriver, the built-in log shipping service; in AKS you can leverage Event Hubs to extract the Kubernetes audit log for external processing; and in AWS EKS we can enable EKS to ship the audit logs to CloudWatch, and then, by streaming them to Kinesis, fetch those logs in order to analyze them.

So what do those audit logs actually look like in practice? This is a trimmed-down version; I didn't put in all the fields that are part of the audit log. As I mentioned earlier, Kubernetes has an audit policy that controls the verbosity of what exactly goes into each and every one of the log entries, so you can control, for example, whether the resource itself is placed inline as part of the audit log.
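As an illustrative sketch (not a production recommendation), a policy along these lines keeps secrets at metadata-only verbosity so their payload never lands in the log, drops noisy watch traffic, and records everything else at request level; on a self-managed control plane it would be passed to the API server with `--audit-policy-file`, alongside `--audit-log-path` or an audit webhook config:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Don't log high-volume, low-value watch traffic from kube-proxy.
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
# Never inline the object body for secrets or configmaps: metadata only.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Everything else: log request metadata plus the request body.
- level: Request
```

Rules are matched in order, so the catch-all `Request` level only applies to events not caught by the earlier, more specific rules.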
One bad example would be to configure the audit policy to place the objects themselves, for resources of type secret, into the log: that would reveal your Kubernetes secret resources inside the audit log, and if you ship it to external systems you create a situation where you leak secrets, for much the same reason you don't want to leak secrets into the console logs of your container or pod.

So what exactly are the components we have in an audit log entry? First of all, we have the resource type that a principal or service account is trying to access, and then, assuming this is a namespaced resource, we have the namespace of the resource. We also get to see the verb, an HTTP verb for that matter, which represents whether we are reading, listing, deleting, or updating the resource. On top of that we have a few pieces of information that are extremely valuable. The first is the user: the identity established as part of the authentication process. Essentially we can see the username and the groups the user is part of, and when we think about establishing profiles and understanding the nature of the API operations performed against the cluster, these are very important features for tracking who is doing what and figuring out the entire picture. We have a couple more fields in the audit log. The user agent represents the client, something that identifies the client version connecting to the Kubernetes API server; for example, if I'm accessing my Kubernetes cluster from, let's say, the Google dashboard, we will see a user agent embedded by the Google API client identifying Google Container Engine. On top of that we will see network information, the network location from which the API invocation was made. Some of the fields we see here can be spoofed by an attacker; the user agent, for example, is quite easy to fake. For that reason, fields that are not vouched for by something we trust should be treated with the appropriate level of trust, and we should account for that by not attaching too much importance to them. The last portion is the API invocation decision, whether the request was denied by the API server and so on, and in the remaining fields we can see error codes and more specific data from which we can derive insights.

Just to give you a sense of scale: at the recent KubeCon, Datadog had a really nice talk on the Kubernetes audit log and how they monitor it internally, and on a 2,500-node cluster the volume of audit logs was 1,000 events per second. If you think about it, that is a very large amount of data to process and digest, and because very important signals reside in that data, it takes some heavy lifting, or at least some analysis, to surface those insights from the logs.

So let's take a few use cases as examples. From a troubleshooting perspective, we can detect system failures based on error codes, for example 401s coming from nodes. Specifically, if the kubelet on one of the nodes cannot authenticate to the API server and we start to see a drift in the number of error codes coming from principals in the system nodes group, this is something we can fire over to the SRE team so they can take down the nodes, or drain them, and make sure that they
can connect to the API server. Another thing I would recommend from a practical standpoint is to check the responsiveness of the API server by measuring the in-flight time, the mean time to response, of the API server calls we make.

Let's take another example, more from a security standpoint. We think of our production clusters as a closed, well-gated environment, so one of the interception, or compliance, elements we can plug into the audit log is the ability to trigger alerts when sensitive pods, or in general all pods in certain environments, are accessed. One useful point here: for PCI-compliant environments we need to keep an audit trail of who accessed the cardholder data environment, and the audit log can be an excellent source of data for keeping track of that, as long as we surface those events at the right time and set them aside for an auditor to inspect. Another example: if we want internal security guardrails built into our environments, where in production clusters nobody execs or proxies into pods, or dumps logs that may contain sensitive data, this again is something the audit log can help us with, and it's very useful because it covers a lot of ground from that perspective.

A deeper example would be detecting misconfigurations in our environment. Any unauthorized access to the cluster, represented for example by a permission-denied return code from the API server, is something we can track, and based on that we can understand whether we have errors or misconfigurations in our environment, or whether it represents malicious activities against our API server. There are, in that respect, anomalies that can be associated with misconfiguration, but on the other hand this can also be a threat actor who is inside our cluster, or who breached one of our DevOps people, attempting operations that are outside the permissions the breached component is allowed. Those signals can be interpreted in multiple ways, which is why we first want to look at the signals and then gather enough data to understand whether this is a misconfiguration or a human mistake, ruling the findings in or out based on more forensic data.

Another example relates more to the human link in this role playing. Think about our human operators, whether DevOps engineers, contractors, or whoever else has access to the clusters. If someone loses their credentials to the environment for whatever reason, whether through social engineering or any of those exotic methods of extracting credentials from human users, we can, for example, detect the cluster being accessed by the same principal from different countries, or different ASNs, within a relatively short period of time. That is an unusual way to work with your cluster. The idea is that we can detect those events by analyzing the audit log, looking at the source IP addresses it records, and by tracking those we can understand whether something falls into this category.

So we've talked a lot about what we can do with the audit log; now, what are the things the audit log doesn't give us? The first is checking the hygiene of resources: for example, making sure we don't have pods running as privileged components, or that our config maps don't contain any API access keys or tokens or
passwords or anything of that sort. In my opinion, the wrong way to approach that is to configure your API server to dump the resources themselves into the audit log and run those inspections through it. There are much simpler mechanisms: admission controllers would be one way, or even just calling the API server, reading the resources or dumping them all, and running the scan against them. The main reason is that as the number of nodes and controllers in your cluster grows, all those automated components read and write against the API server, and shipping all the resources themselves means moving a lot of data from the Kubernetes API server to the audit target. Another example, which I highlighted earlier, is performance monitoring: we want to use the metrics exported by the API server itself through its metrics endpoint, and leverage Prometheus and Alertmanager, or whatever monitoring solution you have, to understand whether there are performance-related issues in the environment. You can use the audit log for second-tier troubleshooting of such issues, just like the case we went over with system nodes being unable to connect to the API server. Another thing the audit log doesn't cover is anything related to workload-level protection: events like network access to the application workloads or pods are not tracked by the audit log; that's a story the pods are not telling the API server. For example, a pod restart is not tracked by the API server audit log; it is covered by the event subsystem, which captures lifecycle events.

So let's try to take a look. I'm going to be brave here and see if we can connect to a real system that monitors one of our publicly facing clusters, and naturally it starts with a snag, so I hope the demo gods will be with me. All right, what I would like to show you is a real-life example. The timeline we see here captures who accessed the cluster, at what point in time, and which principal accessed the environment. We can see that some of the principals are captured as IPs, meaning they were not really authenticated against the API server, while some are denoted by actual usernames in our system. The interesting part is that some of the IPs here are not IPs we use regularly, and one of the nice things built into the system is the ability to crunch all the audit log data into features, signals, and counters across all the different dimensions we have here. So I will switch over to a very easy, go-to dimension for looking at the audit log: from which countries is our Kubernetes cluster being accessed, or at least being probed? If you have any Kubernetes API server facing the public internet, and if you're using the defaults on AKS, GKE, or EKS that is the reality, you should know that your API servers are constantly being probed by internet scanners, some of them actually tailor-made for Kubernetes. So let's see if we have an example here. As much as I can share, we don't have any operations or DevOps activity in France, and yet we can see that the cluster was accessed from France recently; if I scroll down, we see a nice visualization that captures which IPs, from which countries, accessed the API server. Now, the nice thing about
this is that I can pivot, or focus, the system on the individual access attempts made from France. Once I've filtered the entire audit log down to the dimension of an IP coming from France, we can see that this very same IP was accessing the API server, specifically last night, and the API server responded with permission denied. I can also figure out that the origin of this access attempt was trying to hit the API endpoint that reads pods, and the action itself is captured by the verb we saw earlier in the audit log. So crunching the audit log stream, breaking it down into different dimensions and enriching them, enables us to rule things out and understand whether something wrong is going on in our cluster, or whether everything is in line with certain assertions or expectations about our environment. Just to give you another example: based on the audit policy, we can see who performed exec operations into the cluster and track them down. You can see that some of our automation systems access the cluster for regular operations, but because we captured those rules in our policy, we can fire alerts or events to a Slack channel, or to whatever security tooling we have, because this is something that, as an internal practice, we make sure to track and respond to.

With that in mind, let me conclude what we covered here. Kubernetes audit logs are incredibly valuable for both ops and security, and we can take a lot of advantage of them, but it does require some effort. We saw that we can stream the audit log into an analysis pipeline; streaming, or acquiring, the data source is, at this point in time, not straightforward across all the managed Kubernetes services out there, but once the APIs mature and get to v1, this should become relatively easy to handle. I will say that crunching the data and performing the analysis requires tailor-made tooling that understands static, rule-based exceptions to whatever is flowing in the cluster; detecting anomalies and understanding deviations from baseline profiles, on the other hand, requires some AI to be built and baked into the analysis stream. The other element is that we want to make sure our audit policy balances the verbosity of whatever is happening inside the cluster: the audit log is an extremely robust channel, or data source, and we want to tune the policy to make sure we cover all the use cases. The use cases the audit log can address, from both an ops and a security perspective, are extremely wide: it can serve regulated environments, like PCI or HIPAA or similar external regulations, but you can just as well create internal policies and implement and follow them using the audit log. You can detect known and unknown exploits against the API server by monitoring and applying higher-level analysis to the logs themselves, so this is a very valuable source of information for detecting such breach attempts, or even successful breaches, and all in all we highly recommend employing this data source as part of your security tool stack. I think that pretty much wraps up what I wanted to share on the Kubernetes audit log. You can register for early access to Alcide kAudit, which will give you some of the
dashboards we saw earlier, or you can try our cloud service, which tackles some of the other Kubernetes security challenges out there. And at this point I think we can switch over to questions from the audience. Thank you.

Thank you, Gadi; the webinar was illuminating, and I also learned a few new things about audit logs I didn't know. Never stop learning. There are a couple of questions that came in through the chat. One is from Maurizio, who is asking whether this panel is native to Kubernetes: are the dashboards you showed native to Kubernetes, or do you have to install something?

No, the dashboard I showed is actually an early version of Alcide kAudit, so if you want to get that, you should hit the first link. I think there are also some open source tools out there, even a CNCF one, Falco, that will let you plug into the audit stream and run some basic rules, but for the more advanced stuff, the AI stuff, you need a specialized tool, and kAudit would be one option to go with.

Okay, thank you. A bunch of questions are popping up now, so let's start with Fabrice. Fabrice is asking: say a Java microservice with a reverse shell vulnerability was deployed on a node; will this tool detect when an intruder gets unauthorized access to the node? So, you got the question: let's say you involuntarily deploy an insecure application, and then...

Yes, this is a great question, and there are actually two parts to the answer, because it really depends on what the attacker actually did. Let's say I have a pod that was breached and someone created a reverse shell. If the attacker tries lateral movement inside the cluster, toward where my data assets are, through attempts to call the API server, then kAudit will detect that, because that would be captured by the AI and the anomaly detection. However, and this is why my very first slide in this webinar mentioned it, the runtime defenses require a different type of security tooling: network security is not covered by the audit log. So essentially, if you want to track down, or hunt down, a sophisticated attacker who will use multiple channels for lateral movement and data exfiltration, you will need more than one tool in your cluster.

That's clear, yeah. So kAudit integrates with other tools; I mean, you need more than one tool, of course, for this type of attack?

Yes, for this type of attack you will need more than one tool to detect a threat actor in the cluster.

Another question, from Mykola: what is an idiomatic way to apply changes to a Kubernetes audit policy so as to avoid cluster disruption?

Can you repeat that? Like, what's the idiomatic way to...

I suppose the question is about whether changes applied to the Kubernetes audit policy cause disruption in the operation of the cluster.

I would guess that, and this is something that happens a lot, we change the cluster all the time; the cluster is pretty much a living creature. If you deploy a CRD, for example, and it creates a chain of events, then the best way to monitor, or at least detect, disruptions associated with misconfiguration is to track the permission-denied, 403, error codes from the API server. For example, if your steady state is zero 403s in your cluster, and you did a new deployment and now the 403s spiked, that flags you to look at what happened in the last deployment and whether there's a configuration error. You can go to the audit log itself and look at the specific
audit entries that has the 403 error code and then look at their actual resources that fire them so that would be kind of one motion of leveraging the the audit log for analysis but it's not part of so so the mechanics or the machinery around that is I would say it's it's pretty much you need to tailor use cases to kind of if you want to do the static analysis if you employ employ kind of AI into that a good AI will detect those anomalies with relatively low low kind of a false positive rate okay makes sense the question was also focused a bit more so the question was more about if updates to the kubei out policy needs restarting the kubei kubei api server no so absolutely not as long as so maybe there's one so it really depends on which kubernetes service you use if you use the manage kind of gke eks and aks and friends absolutely not you just need to kind of turn this feature on and the logs will start to stream to the analyzer yeah so I think there is also a question does this service run as a side car to every pod but no absolutely not you need to deploy one cluster analyzer per cluster okay great moving on to uh to other questions so is there an integration with prometheus yeah so we we have some security metrics that are exposed from kaudits it just you need to kind of enable them by default we don't enable the security metrics pretty much because this closing kind of security I'd say it's pretty much discloses kind of aspects of the security posture of your environment so we've kind of sensitive or paranoid about that that makes sense but the thought answer is yes okay okay yeah it's a it's a standard for for for monitoring the another question is what kind of database or storage is used to persist audit logs so we don't actually persist the entire audit log luckily for us we just store a digested version of the audit log so the entire raw audit log is actually stored in your normally that would be your cloud provider so stack driver cloud watch to s3 or aks 
to an Azure Blob, whichever mechanism you use. The crunched data, however, is something we persist on the node itself using a StatefulSet, or, if you use network-attached storage, effectively outside the cluster.

OK, that's clear. Coming to the end of the questions: what is the advantage of using this over more traditional logging systems like Splunk or Elasticsearch?

Great question. One of the main differences kAudit brings to the table is that it was built as tailor-made analysis of the Kubernetes audit log. It understands some core concepts of Kubernetes itself: what the nodes in the cluster are, what service accounts and users are, how the APIs are structured, what a resource is. The way the analysis is done is that we build profiles for each and every user in the cluster, for each and every resource, and for the entire cluster as a whole. Generic tools like Elastic or Splunk would apply general machine-learning analysis, which normally yields more false positives, if it detects anything at all, versus something that is tailor-made for Kubernetes.

Maybe just to add to that: you can feed external tools like Elasticsearch and Splunk, pretty much feed them with detections or policy exceptions, so things like Splunk or Elasticsearch can act as long-term storage for the audit logs, right?

Right. We have an integration with Datadog, for example, and we ship all the findings from our system to your Datadog dashboard, so you can consume kAudit findings in your Datadog dashboard.

That's good to know. And yes, you can find a copy of the presentation deck; I think it's customary to attach it to the YouTube recording, but you will get a copy of this presentation, or, if you can share it, we can take care of that.

OK, a few more; sorry, I missed a few questions that were in the regular chat. Does this work in a private cluster? Yes, it does. It works in private settings; you definitely don't need any internet connection, and even an air-gapped environment is good enough.

Is it easy to migrate with tools like Velero when we have to migrate the cluster? As I interpret it, this is about whether you can use backup tools like Velero together with kAudit. I don't see any reason why not: think of us as a security infrastructure app inside your cluster, and if you can migrate any other application, you can migrate this one; Velero will take care of exporting your data volumes and your resources, so it should be straightforward.

Yes, that's what I supposed as well. Well, this pretty much exhausts the questions, so if there are any last-minute questions, please type them now; otherwise I will thank our speaker for today. Very good job; it was great to learn a few more things about the audit logs, and I signed up for the preview, so I hope to kick the tires of this pretty soon. Thank you, everybody.

Yes, thank you for attending, and I will see you soon.

Looking forward to seeing you at the next CNCF webinar, which I think is next week. Thank you, and have a great day, everybody. Thank you, everyone.
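The 403-spike check discussed in the Q&A can be sketched in a few lines of Python. This is only an illustration, not part of kAudit: `count_403s` is a hypothetical helper, and it assumes audit events arrive as JSON, one per line, with the standard `audit.k8s.io` event fields (`responseStatus.code`, `user.username`, `objectRef.resource`).

```python
import json
from collections import Counter

def count_403s(audit_log_lines):
    """Count 403 (permission denied) audit events per (user, resource).

    Expects Kubernetes audit events as JSON lines following the
    audit.k8s.io schema; a spike in any counter after a deployment
    points at a likely RBAC/configuration error.
    """
    by_actor = Counter()
    for line in audit_log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or garbled lines
        status = event.get("responseStatus") or {}
        if status.get("code") != 403:
            continue  # only permission-denied responses matter here
        user = (event.get("user") or {}).get("username", "<unknown>")
        resource = (event.get("objectRef") or {}).get("resource", "<cluster-scope>")
        by_actor[(user, resource)] += 1
    return by_actor

# Hypothetical sample: two denied "secrets" requests from one service account.
sample = [
    json.dumps({"responseStatus": {"code": 403},
                "user": {"username": "system:serviceaccount:default:app"},
                "objectRef": {"resource": "secrets"}}),
    json.dumps({"responseStatus": {"code": 403},
                "user": {"username": "system:serviceaccount:default:app"},
                "objectRef": {"resource": "secrets"}}),
    json.dumps({"responseStatus": {"code": 200},
                "user": {"username": "admin"},
                "objectRef": {"resource": "pods"}}),
]
print(count_403s(sample))
```

In practice you would feed this the stream your cloud provider exports (Stackdriver, CloudWatch/S3, or Azure Blob, as mentioned above) and alert when a counter departs from its steady state of zero.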