Good morning, everybody. Thanks for joining so early. Let's see if we double in number over the course of this talk.

I want to talk a bit about auditing and, in general, how to figure out security events, because oftentimes security looks like this — or at least you have the feeling it does: the vendor, or whoever you're talking to, is sitting in the middle, and while the data is being exposed they're still sitting there saying everything is fine. We want to avoid that.

Generally, I'd rank how bad things can get in stages. The worst case is probably that you learn something bad happened from your users or from the press; you're not going to have a good day when that happens. The next stage is that somebody holds your data to ransom. That happens quite frequently, unfortunately: somebody takes your data and asks for a ransom to get it back — or maybe you don't get it back. Or you just see it on your cloud provider's bill, because somebody got hold of some of your credentials or access keys, for example your AWS keys, and started mining bitcoins; then it's just getting very expensive. Maybe a bit embarrassing, but mainly expensive, so that's okay. Ideally, you learn about it yourself after the fact — and even better, you learn that something has happened and you can even prove what actually happened. So the idea is: how can I prove what actually happened on my system, what is going on? We basically want to avoid the bottom half, where you figure out everything is on fire and everything is terrible. That's the general idea.

So where do we start? Obviously there are no silver bullets, and everybody promising you a silver bullet is probably wrong in one way or another. So, no silver bullets, but I want to start with auditd. Is anybody using auditd? Okay, a few. The description from the man page is that auditd is the userspace component of the Linux auditing subsystem; it basically just writes out the events the kernel has captured.

Let's start with the features. You can see file and network accesses, you can capture system calls depending on how you configure it, you can see which commands have been run by a user, and you generally just get security events. The flow looks like this: your application makes calls into the kernel, and based on three filters — per user, per task, or at syscall exit — plus an exclude list, each event is checked: does it pass through my exclude list? If it does, the event is recorded by the audit daemon. So if it passes your exclude rules, you're recording the event and collecting what somebody is doing.

To quickly show what this might look like, let's start with aureport. This has been running very briefly here, just to give you an idea — I set this up yesterday evening, so it had only been running for about one and a half minutes. But you can already see configuration changes, the number of login attempts, how many groups, users, and roles there are. You just get an overview of what has been going on on your system.
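If you want to follow along on your own box, these are the kinds of invocations I'm using here — a quick sketch; both tools ship with the audit package:

```sh
# Summary of everything auditd has recorded so far:
# config changes, logins, users, groups, roles, ...
aureport

# Narrower reports also exist, e.g. just the login attempts:
aureport --login
```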
That's aureport. You can also use its sibling, ausearch. For example, to get just the raw events you pass `--raw`, and then you see something like this. Every event has a type: here you can see a daemon ended — that was when I actually stopped auditd. There's a message, and there's the audit event: the first part is a timestamp, and after the colon is an ID. You can have the same timestamp and the same ID on several records if multiple things were captured from one process, for example. Then you have key-value pairs describing what actually happened: you can see the result was a success, the process ID was 1, the audit UID was 0, and you extract those. Looking at the event before this one, a user started something — no, that's a different timestamp and a different ID, so it's a totally different event — and you can see sudo opened a session on one specific hostname, and it was a success.

You can also apply additional filters. For example, you could filter on success — with "no", we don't have any failed events in our audit log; and if you say you only want the events that ended in success — yes. Yeah, that's true, that's why it wasn't matching. So: if I type "no" there are no matches, and with "yes" you see just the successful events, like the one we just had.

One thing you can already see is that these messages are quite varied. This one is relatively simple; this one is much more complex, with that specific timestamp format in there as well. So parsing them can be a pain. If you want to figure out what the specific types are, what's available, and what's going on, you can find all of that in the documentation. For example, the Red Hat documentation has various examples that walk you through the things that can happen and the types that occur, and explains them.

Sometimes you also want to set up more specific rules of your own. In the GitHub repository of auditd there's a bunch of rules that may or may not make sense for you. For example — we'll come back to this one — there's a rule for "power abuse". Let me make it slightly larger. What's happening here is that a privileged user is looking into the home directory of an unprivileged user — the user ID is greater than 1000 for the unprivileged user. Basically, we're checking whether somebody is using, or abusing, their sudo privileges to look into somebody else's home directory. We'll actually try that out later. The repository gives you a couple of examples of things you might want to try out.
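From memory, the rule in the audit repository looks roughly like this — treat the exact fields as an approximation of what ships there:

```
## Root (uid=0) reading inside /home, triggered by a real logged-in
## user (auid >= 1000) who isn't accessing their own files
-a always,exit -F dir=/home -F uid=0 -F auid>=1000 -F auid!=unset -C auid!=obj_uid -k power-abuse
```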
If you're trying to write your own rules, that repository is a good starting point to see which rules are available and what people actually use.

Okay. We've seen the logs and we've seen some rules. Something that's still a work in progress — there has been some progress, but it's a large issue — is, for example, Docker support, or namespace support in general: figuring out that some event happened in this specific namespace, or under this specific user in that namespace. It's a fairly complex issue that links to various other sub-issues, and it's still in progress.

Now, the idea is that you don't have just one system: you have many systems, and you want to collect from all of them and see what's going on across all of them. Which leads to the problem: how do you centralize this? And this is where I come in. I work for Elastic, the company behind Elasticsearch, Logstash, Kibana, and Beats, and we've tried to tackle this. I generally build hello-world programs, and this is the hello world of auditd, to see where we can get with it.

Is anybody not familiar with our stack? Most of you are roughly familiar. We started off with the famous ELK stack — Elasticsearch, Logstash, Kibana — and then we morphed it a bit further. We added the so-called Beats, which are lightweight agents or forwarders. Then we tried to add the B to ELK and came up with BELK, or ELKB, which might look something like this: you can see it's a bee with elk horns. It has everything. But we care a bit about scaling, and this is not very scalable, because what happens when we add another open-source product? We get another letter, and whatever that letter is, it will be very hard to make up another animal. Also we'd have to redo the entire branding. So we got rid of BELK — we do still have stickers with it sometimes — and now we just call it the Elastic Stack, because that's super scalable: whatever component we add, we just stick it in there. Maybe we redraw the colors and add one component somewhere, but otherwise the name stays the same. That's a bit easier for us.

What this looks like — and we'll only use three of the components here — is: you have Beats, the lightweight agent or shipper. It's written in Go, you get native binaries, and it's as small as possible, just forwarding events. For log files we have Filebeat, which I always describe as a bit like `tail -f` over the network: it just forwards log events. Then we have Logstash, to parse the data and, for example, enrich it. What could enrichment be? If you have an IP address and you want the geo point for it, that would be an enrichment step. We generally do that at ingest time, when we store the event, so that search and retrieval afterwards is much faster. Also, IP ranges can change hands over time, so it's a good idea to capture the owner at the point in time you ingest the event; six months later somebody else might own that IP address. That's what Logstash does. Elasticsearch just stores everything, and then we have Kibana to visualize it. Today I'll focus on Beats, Elasticsearch, and Kibana, and how to use those to collect things.
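Even though I won't use Logstash today, to make the enrichment idea concrete: a GeoIP lookup in Logstash is a one-stanza filter. A minimal sketch, assuming the event carries a `client_ip` field (that field name is mine, not from the talk):

```
filter {
  geoip {
    # Look up the IP in the bundled GeoIP database and add fields
    # like geoip.country_name and geoip.location to the event
    source => "client_ip"
  }
}
```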
And all of that is Apache 2.0 licensed, so you can just take it and go wild with it.

So, the first thing we did: I showed you the raw event logs from `ausearch --raw`, and they're also in a file on the file system. The natural thing is to point Filebeat at that file and say: okay, collect this file. And then we tried to parse it. By the way, we've added something called Filebeat modules, because we figured out there are things people need to do very frequently — collect nginx logs, or MySQL logs, things like that — and we have modules for those. Basically you tell Filebeat "collect nginx", or in our case auditd, and it automatically knows that on your operating system, say Ubuntu, the log file lives under /var/log, that the default format for your OS is such-and-such, and how to parse it. You don't need to set up the same rules for every install — not everybody needs to write the same nginx rules. Previously, with Logstash, everybody added their own custom rules to say "this is nginx, this is MySQL, that is Apache httpd", and it was kind of boring that everybody had to do the same thing, so we got cleverer and do that automatically now.

So what do we have here? This is where the Filebeat configuration lives, and to collect the auditd events, what I'm doing is: I enable the Filebeat auditd module; I add some tags — here it's the name of my instance, I just give it some tags because we can; I enrich with cloud metadata and host metadata; and then I ship everything to my Elasticsearch instance, with username and all. That's all you need to do to collect the auditd events.

What that looks like: if I head over — we ship dashboards for these modules, to visualize what has happened. If I search for auditd, you see the Filebeat module dashboard; I hope that's large enough for everybody to see. Since I only had auditd running for a short time last night, let's change the time frame to the last 24 hours. You can see these were the events in the audit log. This is a bit like the aureport view, where you get the overview of things that happened, but here you can also drill in. You could filter down to one event type — if you click the little plus magnifying glass it adds a filter — and then you see all of those create events were done by root. Then you throw the filter away again. If you scroll down, you can see we didn't have any failures over time, but here, when I started auditd, we had a burst of events in a short window. You can also see where things have been happening — and well, this was me when I set it up, since the dashboard does a reverse GeoIP lookup on the data.
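Stepping back to the configuration I flipped past: a sketch of it would look roughly like this. The module and processor names are real Filebeat options; the endpoint, tags, and credentials are placeholders, not the ones from my instance:

```yaml
filebeat.modules:
  - module: auditd
    log:
      enabled: true        # read /var/log/audit/audit.log in the known format

processors:
  - add_host_metadata: ~   # OS, hostname, ...
  - add_cloud_metadata: ~  # provider, region, availability zone, instance id

output.elasticsearch:
  hosts: ["https://my-cluster.example.com:9200"]
  username: "elastic"
  password: "changeme"
```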
And you can see the raw events down at the bottom. So this is basically just parsing that file out.

However, at some point we figured out that parsing that file is a major pain, because every line looks different, and you write regular expressions to parse it. Does anybody like writing regular expressions? Every time somebody says "yes, I like writing regular expressions", I'd say it's Stockholm syndrome: you got so used to it that you started liking it. I personally don't, and writing regular expressions can be such a pain. We also had the dogfooding situation — or maybe you don't want to say dogfooding; we might prefer "drink your own champagne", because it sounds much fancier. We have our own cloud service, and they had that need: we have all this technology, we want to be sure nobody broke into our instances and to monitor what people have been doing, but we don't want to rely on parsing those files, because, well, it's a pain.

So what did we do? We created something called Auditbeat. Basically, it uses the auditd rule syntax — you use the same configuration — but it does the collection for you and you don't need to parse anything anymore. It can correlate events, it automatically resolves user IDs to user names, and it can forward the data directly to Elasticsearch. You don't take a structured format, write it out to a file, and then parse it back; since the binary has the event in a structured format already, it just forwards it in a structured way, straight to Elasticsearch.

Sometimes people ask: why not eBPF, which is probably more powerful? The downside of eBPF — the extended Berkeley Packet Filter — is that it depends on newer kernel versions, and we have a lot of customers on very old kernels. It's another way to filter for security events, another way to get at similar kinds of events; it's not just networking — it ties into the network, but you can get all kinds of events out of eBPF. But it needs a new kernel, and the feature set depends on the version: features were added over time, so on very old kernels you have a very limited subset, while newer kernels have more. We didn't want to rely on a new kernel; we wanted something that works everywhere, and auditd has been around for a long time, so it's generally available. We also think it's slightly easier to configure, and Auditbeat has Docker metadata enrichment built in: we look up, against the Docker daemon, which namespace something is running in, so it ties properly into Docker or any container namespacing you have.

So what do we have here? We have another binary now, called Auditbeat, and it looks like this. We have an auditd module here, with some configuration options. Most of these exist in auditd as well — for example the backlog limit, i.e. how many pending events the kernel will queue up; that's something you can configure in auditd too, or that comes with a default. We've added some other things; for example, yes, we want to resolve IDs automatically.
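That module section, roughly sketched — the setting names are real Auditbeat options, the values are just the ones from my demo, so check the reference config that ships with Auditbeat:

```yaml
auditbeat.modules:
  - module: auditd
    resolve_ids: true          # turn uid/gid numbers into names
    backlog_limit: 8192        # kernel queue for pending audit events
    rate_limit: 0              # 0 = no kernel-side rate limiting
    include_raw_message: false # drop the original audit text
    audit_rules: |
      ## Plain auditd rule syntax goes here, after the YAML pipe
      -w /etc/hosts -p wa -k hosts-change
```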
We don't rate limit it — you might want to, if you have a lot of events, so that collecting and forwarding all of them doesn't overload your system. And I could include the raw events, but, well, I didn't, because it might just be too many.

Then, thanks to the awesomeness of YAML, after the pipe you can just paste in your good old auditd rules — you're reusing the same rules. Because I always forget the syntax, I wrote a comment for myself, and you can see: watching a file starts with `-w`; for a syscall rule you have `-a` with an action and the filter you want to use. The action can be either `always` or `never`, and the filter specifies which kernel rule list you want to target: `task`, `exit`, or `user` — and you can also exclude things with `exclude`. Then you can add a key with `-k`, which we'll use afterwards to identify our events and what happened, and you can group several syscalls into one rule with `-S`.

It looks something like this. Here, for example, any file access to /etc/group, /etc/passwd, /etc/gshadow, /etc/shadow and so on is logged and gets the key "identity". So afterwards I can filter down to the key "identity" to see who tried to access those files. And obviously, whatever files are sensitive for you, you'd add those, so that as soon as somebody accesses them an event is created. Here, for example, I picked the user ID of one specific user — we'll use that user later — and say: if that user tries to access /etc/passwd, I want to be notified, and we add the tag "developer-passwd-read".

Yes — no, no, no, this is not from... so the question was why this specific rule, why this set of rules. No, this is not what our cloud team is using, and I probably shouldn't even share what the cloud team is using: they're trying to find the bad guys, and if I showed what they search on, it would hand a stupid advantage to the attackers. So I'm not sharing the actual rules; these are just the hello-world rules you could do with Auditbeat. Since this is the general auditd syntax, I just picked some examples — yes, we're reusing the auditd syntax one-to-one, so you could use exactly this in auditd.
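Written out, the two examples I just described look roughly like this — a sketch; the UID 1001 stands in for my demo user:

```
## Any access to the identity files gets the key "identity"
-w /etc/group   -p rwa -k identity
-w /etc/passwd  -p rwa -k identity
-w /etc/gshadow -p rwa -k identity
-w /etc/shadow  -p rwa -k identity

## Flag one specific user reading /etc/passwd
-a always,exit -F path=/etc/passwd -F perm=r -F auid=1001 -k developer-passwd-read
```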
The syntax is maybe weird — I'm not super used to it either, or I'm like, okay, maybe this makes sense — but this is what auditd uses, and to make the switch easier we didn't want to create another proprietary system, as in "we have our own configuration language, now go learn our configuration language". We just reuse what auditd provides and tie into that. These are just some sample rules, to show you the kind of thing that's going on.

So this is what we're collecting with those, and it might look something like this. If I head over to the Auditbeat overview dashboard, you can see this has been running for the last 24 hours. A lot of our events are user logins or system services, and you get a breakdown into user logins, authentications, and logged-in users; those events all come from some audit rules we set up, and most of them here are "some program was executed". We have some more here, and you can see over time how many events happened. Down here you can see which user performed which action, and you can even unfold one of them and see the raw event — whatever it was; here you can see something with sshd being run.

Two nice things we're doing here, by the way. First, we enrich with host information, so you can see which operating system this was running on — you can see I'm using the latest Ubuntu; sorry, old habits die hard. Second, we enrich with cloud metadata: since this runs on AWS, you can see it's running in Ireland, in availability zone eu-west-1a, with that instance ID, on that instance type, and so on. So you could filter down: I'm only interested in one availability zone; or I know one of my instances has been compromised and I only want events for that specific instance; or there was a security issue with one specific operating system in one specific version, and you filter down to just those instances to see whether anything specific happened on them.

Yes? The question was whether we implemented parsing for this. No — what I showed you first, the Filebeat module, used parsing rules; it took what `ausearch --raw` or the log file would give you and parsed it. Here we run the binary, and the binary already has the information in a structured format, so we don't write it out to a file just to parse it back. The Auditbeat binary subscribes to the kernel's audit events itself, has the information in a structured form, and sends it off to Elasticsearch directly. We're not taking the indirection through a file.

Yes — I mean, we do process it; the question was whether we process it. Yes: for example, the enrichment happens in the binary already, not after the fact. For the host metadata we just use whatever the system gives us — maybe LSB — to get the operating system out. So it's enriched when you collect it, not after the fact.

I haven't seen it — it's called aushape? Okay, aushape, interesting. That sounds very similar, since in the end we also generate JSON — Elasticsearch uses JSON, so it's basically generating JSON and forwarding JSON. Interesting; then we have the same goal, approached in two different ways. Okay, I'll look into that.
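Since we're talking about generating JSON: for a feel of what lands in Elasticsearch, an enriched event has roughly this shape. This is illustrative only — the exact field names vary by Beats version, so treat them as approximations:

```json
{
  "event":   { "module": "auditd", "action": "executed" },
  "process": { "exe": "/usr/sbin/sshd" },
  "user":    { "name": "root" },
  "host":    { "os": { "family": "ubuntu" } },
  "cloud":   {
    "provider": "aws",
    "region": "eu-west-1",
    "availability_zone": "eu-west-1a",
    "instance": { "id": "i-0123456789abcdef0" }
  }
}
```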
I look into that Wait, I've never seen our chef before to be honest Cool Okay, so We have that we have by the way We have both this auditing event But we also have some other information and this is just from the plain log files And then you can collect or connect that for example You could see for example, which users have I created in my system? For example here, you can see this is when I created my different users in my system Or you could for example also see like when we're pseudo commands being run and you can see All search success. Yes has been run by the Ubuntu user four times And you can collect that kind of information and just see how those work. Okay So let's try something else So I want to SSH into my box. Let's say we use the elastic user And well, let's enter a wrong password for a change Where will I get this event from? Where will fail login attempts by SSH? Yeah, I think in Ubuntu it's Valor log off log but same thing We do collect that as well by the way So we can both get the auditing event or we can just parse the file This is now the information passed from the file So you can see in the last let's say four hours because people seem to have woken up now You can see these were all the failed login attempts to my instance, which is probably not a surprise because Default SSH board people just tried to log in This was me when I either use my public key or a password Yes, don't use passwords, but for the demo I do And then you can see which users had failed login attempts and where where they're coming from so you can see somebody from China Normally it's either Russia or China But somebody from China seems to brute force or tried to brute force into my instance So they have like 700 something plus 200 plus another 200 or so login attempts from this area And this here was probably me So if I check like where was the elastic user coming from you can see it was only coming from me here By the way, I would be curious That's the wrong way how good my geo information is today because it always depends a bit on GUIP data is sometimes very good and sometimes not so good, but today it seems to be pretty decent Seems to be very good. We are more here further up north, right? Okay, but it's still you you get you get the city you get we can get a pretty good impression of where we are Since I'm probably on the university network I would be curious why this is off Maybe they have another main headquarters here or whatever, but that's what you get from GUIP lookups and you can see this was my user from this IP address trying to log in and it failed Anyway, let's say we want to do the same thing again and this one this time it will succeed If I would take it a bit correctly Okay, let's say Service engine X My user is not in the pseudos group, so it will need to do something and let's say we want to use this admin user here Luckily, I know their password and then I just restarted my engine X Then you could find those in the executions, for example, so if you look for all the T If you go to the executions You can see all the executions that have been run by different users Let's just see what has the elastic user been up to so I filter down on those This is the general a overview and if you then look here This was the command that I have just ran so you can see I use that to restart engine X And if you look into the details You will also see okay. 
Now let's do something else: let's log in with the admin user. By the way, something I didn't mention: auditd only collects information, it doesn't block anything. It's not like SELinux, which can block bad calls; auditd is purely passive, it just records what you're up to.

Earlier I showed that power-abuse rule, so let's try the power abuse, where a user with admin privileges looks into the home directory of a regular user. We have the home directories — okay, that looks okay. There's an elastic user; that might be interesting, let's look into the elastic user. And we see there's a secret.txt file. Little surprise. Let's check what that user's secret is — will this work? No, there's no sudo. Let's make sure we run it with sudo — and there you go: "it's my secret". Okay. It's not super surprising that we could read it, but let's see how to figure out what has happened here.

I mentioned those tags before — we had them in the configuration — and if we scroll further down, there we have the power-abuse one. This is the exact rule for it: when somebody in the sudo group accesses the home directory of a user with an ID greater than 1000, report the power abuse. So we can just filter down to these events, which looks something like this: I'm in the Auditbeat data, and I say we have a tag — you have the common tags people use — and here we have power-abuse, so I filter down to those. You can see we had a couple; let's see what the latest one was. Hopefully — no, it's not the .viminfo one that we want, but yes, vi is always creating multiple files here. Let me find the one I mean.
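By the way, in the Kibana search bar that tag filtering is a one-liner; a sketch, assuming the tags from the rules above (either query on its own):

```
tags:"power-abuse"
tags:"developer-passwd-read"
```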
Here it is, the one I wanted. You can see we ran `cat` on this file, you can see which user was doing that, and we reported the power abuse: that audit UID, with root privileges, on that file. So we're collecting those. And if I log out of that user again and log in with my regular user, and say I want to look at /etc/passwd — we had a rule for that as well. You can see it show up, even though /etc/passwd is world-readable and generally not super critical. If you filter down — I think we called it developer-passwd-read, so I had a very specific tag to filter these events — you can see it happened just now: this user, the elastic user, ran /bin/cat on /etc/passwd. Whatever your critical files are, you can monitor them like this and figure out who is up to what.

Okay, moving on; we've seen that. The next thing we can do — and we've added this because the auditd stuff only works on Linux, since only the Linux kernel has the audit framework — is something else in Auditbeat: file integrity monitoring, and that works on all the major operating systems. You watch a file or a directory, we hash all the files in it, and every time you change something we detect it — how we detect changes depends on your operating system — and we can tell you: okay, somebody put new content into your web server directory.

For example, let's say this is your website. Not very spectacular, but somebody was able to break into your server and change it. I'll probably need a different user with permissions to do that, so let's switch to somebody with root privileges, just in case. So we have /var/www/html, and there should be an index file — you see, this is my welcome message — and let's say we want to change it. So we've changed the file, and suddenly your professional website doesn't look so professional anymore, because there's an emoji in there now. And you want to figure out where that came from, and when it even changed.

For that, let's quickly look at the rules. If you scroll further down the config file, you see the so-called file_integrity module. This is the path we're monitoring, and we just set some limits: don't scan more than 50 megabytes per second, and don't hash files larger than 10 megabytes, because otherwise it might be too much for your CPU. You can also change the hash type; we're sticking with the default, SHA-1, but we have a couple of others — I'll get back to those in the slides.

With that, let's see if anything changed on our file system. I have a dashboard for that too — by the way, all of these dashboards are pre-built; I didn't build them myself, I'm lazy, I just use what we have. So — sorry, this one is called differently — Auditbeat file integrity, this is the one I want. You can see that in the last four hours no files had been changed on my website, and now, just now, something has happened: three files were created, three updated, and one moved, by a user in the root group.
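For reference, the file_integrity section I scrolled through looks roughly like this — a sketch with my demo's path and the limits I mentioned; the option names are real Auditbeat settings:

```yaml
auditbeat.modules:
  - module: file_integrity
    paths:
      - /var/www/html
    scan_rate_per_sec: 50 MiB   # throttle the scan of watched paths
    max_file_size: 10 MiB       # skip hashing anything larger
    hash_types: [sha1]          # the default; several others are available
```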
Why three files? That might be slightly confusing, but, you know, vi opens a swap file in the background, which is kind of hard to see: we have a .swp file and a .swx file, and then the actual file. Those were edited, and you can see the most recently updated one was the swap file. These were all the files that changed in here; on my host, this was the event, and you can see that when I closed vi the swap file was deleted — all those temporary files were deleted — and this file was moved from the old state to the new state. These are all the events that happened in that folder; we're just keeping track of any changes in the directory. If you uploaded via FTP, you would only see the replace operation.

These are all the hashes that we support, and the very last one is the fastest. So if you're concerned about performance — how much time will this hashing take — the last one, xxHash64, is the most performant hashing algorithm we have in there.

Okay, we've seen that. Sometimes you get stuck with "we have too many events, we need to figure out what's going on", and we have another thing for that, which automatically learns what's normal. For example, this could be how many users are logged into your system: during the day it's a lot of people, but over the weekend not that many, and at night not that many either. This drop here might otherwise be very hard to find, but here you see it's flagged as an anomaly. These are anomalies over time series: the blue band is the range of expected values, and this is where you have an anomaly. It would just tell you, okay, here you have too few logged-in users — maybe your network was down — or, if you have too many, maybe somebody was trying to brute-force their way into your system, or whatever. It's often very hard to find the right alerting thresholds by hand, so you can just let this do it automatically.

Okay, to wrap up: I always compare the stack a bit to Lego, because you have all these building blocks, but you need to put them together the right way. You need the right auditd rules, and then you need to look for the right stuff. It's not an out-of-the-box solution that does pixie magic and runs automatically and does everything for you. You will need to know what you're looking for and what your threat model is — for example, which files are sensitive if somebody accesses them, and which folders shouldn't change on their own, like configuration settings.
We provide the building blocks, but you have to do the job of putting them together the right way yourself. Generally, auditd is great; it's just very hard to parse and work with its output format. That's why we added Auditbeat, to have the data in a more structured format and to enrich it with some more information. And then you can combine that information with more logs and dashboards — for example your auth.log, or the general logs from your applications — and combine those to see what is really happening on your system.

If you want to try this out yourself, you can: I'm giving you a regular user, not the root user, but you can SSH into that instance, and if you want to try the dashboards, you'll be automatically logged in if you head to them; you can just play with the dashboards.

Since we still have three minutes left, we can do one more small demo that I kind of forgot — let's say forgot. Let's say we have netcat listening on port 1025: `nc -l 1025`. How do we talk to it now? Anybody with their laptop out? It should be something like `nc <my hostname> 1025` — and you can see that whatever I type here appears there, so if anybody feels inclined to send a message as well, go ahead.

Now, how would I find out that somebody has opened a specific port? I'll keep it pretty simple: I head to the raw events and just say I'm interested in anything mentioning port 1025 — I use the full-text search over everything in my system. Okay, we have one event, and if you open it you can actually see that somebody ran netcat, with the command and the arguments: a netcat listening on port 1025. So you can figure out that somebody has done it. It's a very basic chat server, but this is one way to figure out that somebody has opened a port and has been up to something — good or no good, depending on where you stand.

Okay. If you want to try out the code: it's mostly Ansible and a bit of Terraform to set it up. You saw that I only started this at midnight or so: I basically ran Terraform to create one AWS instance, then ran a playbook and threw in the configuration files. That's all I did; all the sample code is there. That's probably the most relevant part.

Any questions? I think we have five minutes left for questions, which should be perfect — six, even. Yes, please: can you enrich events with Kubernetes? Yes — that's actually a very good point. Let me kill this one. Here we have the so-called processors, and the processors include the cloud information and the host information. We do have one for Docker: what you basically do is give it access to the Docker daemon — we need the socket — and then we look up things like the base image and the tags of the Docker image. We have the same thing for Kubernetes, where we go to the API for the namespace and the pod, and we enrich with all of that; then you can say "I'm only interested in this one namespace or pod, or whatever" and filter down to it.
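Those processors are a couple of lines in any Beat's configuration; a minimal sketch — both processor names are real, the defaults shown are the simplest possible setup:

```yaml
processors:
  - add_docker_metadata: ~      # talks to /var/run/docker.sock; adds image, labels, ...
  - add_kubernetes_metadata: ~  # talks to the API server; adds namespace, pod, ...
```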
And we very recently added one more — give me a second. So, just to — okay, we have the metadata here: the host metadata, Docker, Kubernetes... I thought we had one more. Sorry — yes, exactly: we're in Filebeat here, but they all have the same processors; you can add all of these in all the Beats. I thought we had one more — was it OpenShift? I know we have an integration for one more, but I would need to look it up; that one is very new and I haven't tried it out myself. Kubernetes and Docker are the very common and useful ones, as are the host and cloud metadata. For the cloud metadata we support at least AWS and GCP — maybe Azure as well, I would need to check — but those are the ones these processors support.

It's also a pretty cheap lookup, because basically we cache the result and then enrich every event with it. For the Docker socket, we just cache the answer and reuse it. To get the AWS information, there's the special metadata IP address on AWS, 169.254.169.254: if you query it, you get information about the instance itself back, and we cache that and enrich every event with the information from that API. That's how we get the metadata. So yes, good point: Docker and Kubernetes are supported.

Any other questions? Otherwise, I still have a couple of stickers over there; since you're not that many, they'll probably last for everybody.

So, the question was about the performance of the file integrity module. We've done benchmarks of which hashing algorithm is the fastest, and we support a bunch of them. You can also say "don't hash files over size X" and "don't hash more than X megabytes per second", so you can totally limit how many resources it uses — nobody can basically DoS your instance by changing too many files until you exhaust your CPU. I would assume hashing is generally pretty cheap, but of course it depends on how many files change. And don't just point it at / for the entire file system; put it on the folders that contain the sensitive information — that makes sense, and otherwise you'll also get a lot of garbage events. I'm not sure we've done too many benchmarks at super large scale, but we have tried out the various hashing algorithms, and you can limit it to what you want, so it shouldn't kill your instance. It's not that bad.

Any final questions? No? Okay, thanks so much.