Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I am Annie Talvasto, I am a CNCF ambassador, I lead marketing at VSHN, and I will be your host tonight. Every week, we bring a new set of presenters to showcase how to work with cloud-native technologies. They will build things, they will break things, and they will answer all of your questions. You can join us every Wednesday to watch live. This week, we have Jesús and Vicente here with us to talk about Prometheus plus Falco, the Swiss Army knife for SREs. Excited for this. As always, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. So, please do not add anything in the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants as well as the presenters. With that done, I'll hand it over to our speakers to kick off today's presentation.

Well, hi everyone. Thanks for joining us today. In this session, we are going to show you how we like to bring Falco into Prometheus, to have Falco insights in our Prometheus metrics. And we are going to start almost from scratch, so this session will be easy for almost everyone. We've already been introduced, so let's get ready for the agenda. Vicente, can you start by showing us what Falco is? Why are we here?

Yeah, we are going to talk about Falco. Falco is a threat detection engine. What it does is listen to all the activity that happens in your system, and it has a set of rules enabled. Those rules are basically sets of conditions that are going to be matched, and when something triggers anomalous behavior, Falco is going to send out an alert. That's what Falco does in a nutshell. It does a lot more, but we are going to go into more detail in a second. So, can you move to the next tab? Sure. Thank you. So, what does Falco do?
That's exactly how it works, right? Falco basically listens to system calls. System calls are necessary for processes to talk to each other; everything has to go through the kernel. Then it takes all the information that comes from those system calls and compares it against a set of rules. The moment a rule is violated — let's say someone is trying to open a file that shouldn't be accessed — it triggers an alert. We are going to see the process throughout this session, and we are going to see how it integrates with Prometheus. So, Falco is able to check for privilege escalation, for namespace changes, for reads or writes to files or directories that shouldn't be touched, and so on and so forth. You can see a list of all those conditions on the website you have right in front of you. You can access it through falco.org; that's the documentation website. And we are going to see now what those rules look like. Very, very simple, very easy. Can you go to the next tab? Sure. I see you are sharing the link. That's great. Thank you.

So, the basic elements of a Falco rule are an identifier — that's the rule name — a description, just to tell the maintainer what the rule is doing, and the important bit, the condition. In this case, we have a system call called execve, which basically starts a new process. We check whether the event comes from the entry or from the exit of the call, then we check that it happened in a container. Falco can work on traditional servers, but it has very good features when it comes to containers, so we can tell you that in this node, in this container, something anomalous happened. You don't need that if you don't have containers, but if you have them, it's absolutely amazing. And finally, we check what kind of process was started. So, in this specific rule, what we did is observe that a shell was started in a container. Something that, well, could happen.
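As a concrete illustration, a rule like the one being described could be written roughly like this in Falco's YAML rule syntax. This is a flattened sketch: the upstream "Terminal shell in container" rule uses macros and a longer list of shells, so treat the field values here as assumptions for illustration.

```yaml
- rule: Terminal shell in container
  desc: A shell was started inside a container
  # evt.dir = < means the exit of the execve call, i.e. a process was actually started;
  # container.id != host restricts the rule to events happening inside containers
  condition: >
    evt.type = execve and evt.dir = < and container.id != host
    and proc.name in (bash, sh, zsh)
  output: "Shell spawned in a container (user=%user.name container=%container.id image=%container.image.repository)"
  priority: NOTICE
  tags: [container, shell]
```

The `%field` placeholders in the output are expanded by Falco with the values of the matching event, which is what enriches the alert with container and image information.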
We could filter those events out, but usually we want to see them. That's a very, very typical rule that we monitor using Falco. There are many more, and we are going to see a couple more in a few minutes.

What happens when a rule is triggered? Well, we need to notify someone. We need to tell someone, hey, this rule has been triggered, and we need to decide how we want to do that. Falco has different channels for it. If we go to Falco Alerts, we can see that we can send alerts to the standard output. That's good for debugging, but not very useful if we are doing automated monitoring. We could send them to a file, which is like, yeah, I'll keep them here and later I can look at them. But the interesting bit is when we send them somewhere else. We can send them to an endpoint like Falco Sidekick — it's a project, we are going to talk about it in a moment — or we could send them to another program that does something else. That's up to you; you can choose what you want to do with those alerts. I'm going to show you very, very simply how to configure the HTTP endpoint. If we go to Alert Channels, here you can see the standard output I was mentioning before. We scroll down, we have the file output. If we keep scrolling — Jesús, can you do that for me? Sure. Thank you. — the syslog output is also a traditional way to send those alerts. Keep scrolling, please; I just want to reach the HTTP endpoint, to show how we are going to configure it, in this case, to use Falco Sidekick. There we are. Very, very simple. We just indicate that we want JSON output, because that's going to be enriched, and that we want to send it to an HTTP endpoint. This endpoint is going to be, in our case, a Falco Sidekick binary that's going to be deployed with the same instructions that deploy our Falco infrastructure. And it's that simple: we only enable that, and we set the URL to the endpoint. Very, very easy to follow. Just a bit more information.
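Put in configuration terms, the HTTP output just described is two small fragments of `falco.yaml`. The service name and port below are assumptions based on the Sidekick chart defaults; use whatever your deployment exposes.

```yaml
# falco.yaml (fragment): enable enriched JSON alerts and forward them
# to an HTTP endpoint -- here, the Falco Sidekick service deployed by the chart
json_output: true
http_output:
  enabled: true
  url: "http://falco-falcosidekick:2801/"   # placeholder: Sidekick's default service/port
```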
If we open the falcosecurity organization — yeah, that's the Falco repository — we have access to all the repos here. You can see the contributors on the right. There are many, many projects; we are going to talk about a couple of them, the most important for this session. In the next one, we can see we have the charts, which are used for deployment. And the last one — we are going to skip the middle one for a moment — yeah, that's going to be Falco Sidekick. That's going to be automatically deployed through the charts as well, right? So, the last tab I wanted to show, the Falco rules, contains all the rules that we can monitor with Falco by default. That doesn't mean we are limited to them. It means that we already offer around 80 rules, and you can customize them — we are going to do that during the session. You can extend them, you can delete them, you can disable them. There is a lot of flexibility here. Rules are basically macros, lists, and conditions; we put them together and try to make it as easy as possible. And we are going to have a few interesting cases here. For the moment, we are going to stop here. Jesús is going to talk about Prometheus, and then we will talk about how to integrate them together.

Okay. Well, I'm sure most of you already know Prometheus. Prometheus is an open source project that allows us to monitor whatever we want to monitor, actually. It's the de facto standard for monitoring Kubernetes, but you can use it for monitoring anything you want. Today we are going to use it to monitor Kubernetes. There is a very basic thing that everyone should know about Prometheus: instead of sending metrics to Prometheus from our applications, like we did with traditional strategies, what we do is tell Prometheus where our metrics are, and it goes to those endpoints to scrape those metrics. That's the main difference.
So, the first thing we need to know when we start using Prometheus is where, or how, we are going to expose those metrics. So, Prometheus has... I'm sorry. I didn't... Sure, this... Okay. Thanks. Thanks, Vicente. As a quick note, I see a few people asking about the links in the chat. I unfortunately can't send them to the LinkedIn side, but they're on the YouTube side, and that's why you can see them on the screen occasionally. And then someone is asking, how can we get the recording? You can find the recording for this session on the CNCF YouTube channel, immediately after the session, from the live stream tab. So, no worries, you can get every second of this goodness later on as well. Tune in there.

Okay. So, as I was saying, applications need to be properly instrumented to expose those metrics. The Prometheus community has SDKs so you can instrument your application in different languages, and it's... not easy, but it's not an impossible task to address. The thing is, a lot of traditional, very mature applications aren't instrumented yet. So, in the meantime, the community has created what's called an exporter. An exporter is an application that runs alongside the application you want to instrument and exposes its metrics. For example, NGINX has an exporter, Redis has an exporter, etc. So, the first thing you need to decide when you start monitoring with Prometheus is where you're going to get those metrics from. At Sysdig, we have this open source project that you can visit, promcat.io, where you can search for the application you want to monitor, and it will tell you whether the application is already instrumented or whether you need an exporter. It will give you the Helm chart or the repo to download the exporter, the configurations, and also the setup guide. In this case, for example, the application is already instrumented, and there are some Grafana dashboards and also some alerts.
So, this could be the easy way to get started with Prometheus. Also, in this presentation, we are visualizing all the metrics, and the dashboards and the panels are in Grafana; we will see that in a minute. So, yes, we've set the basis, right, Vicente? With this QR code, you can go to the GitHub repository where we created the demo. I don't know if I can share — sorry, yes. As Vicente was pointing out, everything there is open source, so anyone can use these exporters. They are completely open source, and it doesn't matter if you are not a Sysdig customer; you can use them right away in your open source Prometheus infrastructure. So, don't worry about that.

Okay. So, the demo has, I think, two main parts. We are going to install and configure Falco, Prometheus, Grafana, exporters — everything we need in order to do the troubleshooting use case — and then we are going to have a scenario to troubleshoot, to learn how we can use Falco and Prometheus together. Okay. I'm sorry. Okay, I hope the font size is big enough. It looks good. Okay. So, we are going to use Helmfile, which is a way of orchestrating Helm commands, to have everything in one file. We can start as easily as we are doing here. We are going to use two chart repositories, which are the falcosecurity and the prometheus-community charts. So, Vicente, do you want to explain? Sure. Okay. As Jesús said, we are using Helmfile because when we have to deploy several applications at once, that's the easiest way of doing it; we don't have to run one helm install after another. So, in this Falco release, what we are doing is using the standard falcosecurity Falco chart. This is supported by the community, and we are passing basically two values. One is the TTY; that means we want the alerts as soon as they happen — otherwise, they might get buffered.
People usually complain, well, I don't get the alert the moment it happens — yes, because the output gets buffered. And Falco Sidekick here is the one that forwards those alerts to Prometheus. So, we are going to use two parameters: one to enable it, and the other one to access the web UI. Falco Sidekick is based on two projects: one is the forwarder, and the other is a very nice UI, Falco Sidekick UI, that shows which rules were recognized. And you can filter by tags, you can filter by labels, you can filter by when they happened, and so on.

Great. There's an audience question: suppose that I want to monitor my pods on a Kubernetes cluster — is there no exporter or pod exporter directly usable? Yeah, that's going to be for Jesús. Yes, well, it depends. If you install Prometheus with the kube-prometheus-stack, it's already configured with the node exporter and kube-state-metrics, so you're good to go. In other scenarios, you might have to add the jobs independently. But if you use this chart, you just need to add the Falco endpoint. Just adding to the question: in the Falco chart, we are deploying Falco Sidekick, which also offers the exporters. So, there is a possibility to monitor Falco with exporters directly, but if you use Falco Sidekick, they are included. It's just a matter of how you want to deploy the tool and what you feel more comfortable using. Falco Sidekick offers more possibilities than just a simple exporter. And Jesús, I'll let you describe... Yes. Well, this is very easy. You just need to use this chart — this is the Helm chart whose target is a Kubernetes cluster. It deploys the Prometheus Operator, so everything will be running automatically. You will have the node exporter, KSM (kube-state-metrics), and also Grafana and all the tools we are going to use. We created a vanilla Kubernetes cluster, and we deployed this chart.
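Put together, the Helmfile described above might look something like the following sketch. The release names and namespaces are our own choices for illustration; the value keys (`tty`, `falcosidekick.enabled`, `falcosidekick.webui.enabled`) follow the falcosecurity Falco chart.

```yaml
# helmfile.yaml (sketch): both stacks deployed with a single `helmfile sync`
repositories:
  - name: falcosecurity
    url: https://falcosecurity.github.io/charts
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: falco
    namespace: falco
    chart: falcosecurity/falco
    values:
      - tty: true                 # emit alerts unbuffered, as soon as they happen
        falcosidekick:
          enabled: true           # deploy Falco Sidekick (the forwarder)
          webui:
            enabled: true         # and its web UI
  - name: prometheus
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
```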
So, you are going to see exactly what we are describing here. In this case, the only thing we need to add to the chart is an additional scrape config. We just need a name, the scrape interval, the scrape timeout, the metrics path, and the endpoint, which is the service and the port that Vicente just showed us. Yeah. So, basically, what it does is connect to the service that we have already deployed with the Falco chart, and it accesses the metrics route. It's going to do that every 30 seconds, and then you will see all that information in our lovely Prometheus. So, we can take a look now. As I was saying — sorry, that wasn't a spoiler. So, now we have Prometheus, and we have metrics from Falco. And also, as I was saying, you can see that we have the kube-state-metrics and the node exporter, et cetera. So, right now, we have this, plus Falco Sidekick UI, which gives you a nice overview of the events triggered by Falco, right, Vicente? Do you want to walk everyone through this?

Yeah. The thing is, Falco Sidekick is not only a forwarder; it also feeds the information shown in Falco Sidekick UI. And here, we can filter by the origin of those triggered rules; by the priorities, which are the classical severities from the syslog scheme; by the host names, in case the events come from a node or from different pods; and by the rules that were triggered — those rules were little tests that we were doing. We also have tags, so if your rule is tagged in the MITRE scheme, then we can filter by those. That's basically what you obtain from this dashboard. But if you go to events — I think we shouldn't show much here, otherwise it's going to spoil the demo, right? Okay, but if you go to events, you'll get a list of events triggered by Falco, with all the information we saw. We can filter there as well. We don't see a graphic; it's just a list of those events.
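The additional scrape config Jesús walked through can be sketched as a value passed to the kube-prometheus-stack chart. The service DNS name below is an assumption; point it at whatever service the Falco chart created in your cluster.

```yaml
# kube-prometheus-stack values (fragment): an extra scrape job for Falco Sidekick
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: falcosidekick
        scrape_interval: 30s        # pull Falco metrics every 30 seconds
        scrape_timeout: 10s
        metrics_path: /metrics
        static_configs:
          - targets:
              - falco-falcosidekick.falco.svc.cluster.local:2801   # placeholder service:port
```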
Okay. So, let's go forward. In the first version of this Helmfile, we saw that we didn't add any custom rule for Falco, but that will change in a second. So, let's see, live, a rule triggered by Falco. Okay. So, we can go to one of the pods. If I open a shell here, I can go to the rules, and I can go here, maybe to the events. It's just loading; maybe we have a lot of things, but... Oh, maybe we just lost the port forward. Give it a second. Okay. Oh, okay. So, Terminal shell in container — that's the alert I want. Okay. Right now, we had the terminal in the container that I just opened in k9s. So, that rule comes out of the box, right? It's activated by default. That's correct, isn't it? Yeah. If you look at the time, you can see it's been triggered a few seconds ago. And the source basically says syscalls. Falco is able to detect not only syscalls, but also events from other sources like audit logs, CloudTrail, or CloudWatch. In this case, we are keeping it simple; we are monitoring syscalls only. The host name is... I can't see the font. Can you just move the mouse over it? It's because of the color; I think the contrast is... Yeah, well, kind of. It says Falco anyway. So, that is the container where this happened. And if you look at the output, well, that's basically what we had at the end of the rule. We can use parameters here, we can add text, we can add as much information as we want. In this case, it says that the command sh was executed. The container ID is in there. The Kubernetes pod is called nginx-exporter; that's the one Jesús connected to. And it even gives the PID on the host, in the system — not the PID inside the container, which wouldn't be that useful, probably. We could add that information if we wanted. And finally — this is also interesting — the image the container is based on.
Because then, when we set rules and exceptions, we could say: I trust those images, I trust those binaries, I trust those hosts. We can combine them as we want. So, that would be the output of an automatically triggered rule like the one I was showing in the documentation.

Okay, now we are going to show something similar, but not exactly the same. So, let me... Okay. What we are going to do right now is start a BusyBox pod, and then we are going to access it. But instead of running just sh, we are going to run busybox sh. Okay, now we are inside the BusyBox container. And the thing is — you need to trust us on this for now — this won't trigger any alert. Why is that? Well, it won't trigger any alert by default, and the reason is that the shell comparison uses a list of common shells like ksh, csh, bash, the classic sh — but it doesn't compare against busybox. So, what we have done is extend the previous Helmfile that you saw before, and we have added a new rule. That's the only difference: we created a new rule, we triggered a synchronization of the Helmfile, and this is what the rule looks like. If we look at the condition, it basically uses a couple of macros that say a process has started up. Yep. Maybe we should... So, what we did to create this new rule was copy and paste an existing rule from the Falco GitHub that we showed earlier, and then tweak it to fit our needs, right? Okay. Yeah. If you want to open the rules repository, you can find the rule very easily; just look for the word terminal. I think that will bring us there in a moment. In that rule, we compare against that list of shells. Since we wanted to show this specific case, we have changed that part. So, here, again, it checks that a new process has been started and that it happened inside a container. And this is where it changes: we check that the process name is busybox.
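A sketch of what such a custom rule could look like follows. The macro names `spawned_process` and `container` come from the default ruleset; the rule name and output text here are our own, so treat the details as illustrative.

```yaml
- rule: Terminal BusyBox instance in container
  desc: Detect an interactive BusyBox shell started inside a container
  # spawned_process and container are macros from the default Falco ruleset;
  # proc.tty != 0 means the process was started interactively (with a terminal)
  condition: >
    spawned_process and container
    and proc.name = busybox and proc.tty != 0
  output: "BusyBox instance was spawned in a container (user=%user.name container=%container.id image=%container.image.repository)"
  priority: NOTICE
  tags: [container, shell]
```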
One thing to take into account is that it's not going to compare against busybox sh; it's going to compare against busybox. So, it matches every command that comes through busybox. And for those that don't know BusyBox yet: BusyBox is an actual Swiss Army knife. It contains a lot of commands, and it's like a wrapper — you call busybox sh, busybox ps, busybox ls, and it will execute that section of its code. So, the rule checks that the process name is busybox and that we are doing something interactively. This is important, because if the shell is started non-interactively, it will have a different configuration. And, well, in this case, a few more parameters. At the end, we have an output that we have customized as well: a BusyBox instance was spawned.

So, how do we add this rule? Basically — can you just scroll a little bit up? Yes. This font is huge; sorry about that. No, it's perfect. So, customRules is a value that we pass to the Helm chart, and we create a small file called custom-busybox-rule.yaml. That file will be created in a ConfigMap, mounted on the Falco pod, and read by Falco. Yes. In the GitHub repository that we shared earlier, you have all the steps that we are going to show — I mean, I'm opening the same repository code, so you can go there to get the configurations. All right. Once we are done with the configuration, we synchronize the Helmfile, and it will apply automatically. Yes. So, now we should see it here. Yes: Terminal BusyBox instance, and that happened a few minutes ago — three minutes ago — and this is exactly the new rule that we created. In that rule, you can change almost everything it says here.

Okay. So, we created a new rule. Now, we had this annoying event that we didn't want to appear in the events list — something related to Prometheus. So, what we did was create an exception, right? That's the next thing we created. So, we can go to step three. Right. So, we have another file. Yes.
Another file with an exception, which basically reuses the rule name we had before: Contact K8S API Server From Container. That rule already existed. What we are going to do is extend the conditions under which the alert is triggered, because we don't want it to trigger when we use a specific image — this is an image we trust. So, you can see that exceptions is a list. The name is the identifier of this first exception, and we are comparing three fields: one is the namespace where the event happens, the second one is the pod name, and the third one is the image we are using. One thing to remember when we write rules is that if we are very generic, we are going to have a lot of noise, so we have to be specific. But if we are too specific, the rule is not going to be triggered, and we might have false negatives. So, here we have the three fields, and then we have the comparators. The first one has to match the name of the namespace exactly. The second one is going to match the beginning of the pod name — prometheus-grafana, I don't know the rest — and for that one we use the startswith comparator, right? Exactly: startswith means prometheus-grafana is a prefix. And for the third one, the container image repository, we use the in operator, which looks for instances in a list; this is why we created a list inside that list. And the image we are going to filter out is kiwigrid/k8s-sidecar. So, the moment this event happens with those specific conditions, Falco is going to ignore it; it's not going to trigger the rule. With this relatively small content, we are adding extra functionality to our rule — either adding exceptions or extending the conditions.

Okay. So, now let's go to the next part, which is a more or less real use case. Imagine we have this cluster where we are running things, and we see that something's happening.
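The exception file walked through above might look roughly like this. The exception name and the namespace value are illustrative assumptions; `append: true` tells Falco to extend the existing rule rather than replace it.

```yaml
- rule: Contact K8S API Server From Container
  exceptions:
    - name: trusted_monitoring_sidecar
      fields: [k8s.ns.name, k8s.pod.name, container.image.repository]
      comps: [=, startswith, in]
      values:
        # namespace must match exactly; pod name is matched by prefix;
        # the image must be in the (single-element) inner list
        - [monitoring, prometheus-grafana, [kiwigrid/k8s-sidecar]]
  append: true
```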
If you take a look here, you can see that the CPU utilization has increased dramatically in the last hours. We don't know what's happening, so we can do some troubleshooting. Well, it says that something's happening in the default namespace. What we could do here is take the traditional approach; that would be something like going through the namespaces. So, we have these five pods, and something's happening with them: they are making requests to some awkward endpoints and calculating stuff. We could try to troubleshoot this situation using KSM, going through all this information, to end up learning that this is because of a crypto miner. Okay: someone started five crypto miner pods. What we could have done instead is use Falco to detect these kinds of threats. How? With a rule that detects crypto miners. And how can we do this? Actually, the rule already exists in the default rule set. The thing is, the rule is disabled by default. That's right — because, well, maybe people don't want that enabled by default. And if we look at the rules — I know, you're looking for it. Yes, I'm on it. A bit lower, scroll down. Keep going, keep going. It's not very long, because we use macros. I think it's a bit higher. Yeah: Detect outbound connections to common miner pool ports. The first one. So, if you look at the enabled field, you are going to see that the rule is disabled. What this rule does is compare the connections we are making against a specific list of domains and a set of ports, and it basically ignores a set of images, because those images might be requesting something legitimate from those domains, right? So, the only thing we need to do to enable it is reuse the rule name and change the enabled field to true. If we look now at our latest modification of the Helmfile, you can see the rule name and enabled set to true.
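Enabling the disabled rule, as described, takes only a redeclaration with the `enabled` field, passed through the chart's `customRules` value. The file name below is our own choice for illustration.

```yaml
customRules:
  enable-miner-rule.yaml: |-
    # re-declare the existing rule by name and flip the enabled flag
    - rule: Detect outbound connections to common miner pool ports
      enabled: true
```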
That would also be a way of extending the rule's functionality, but we don't use append like before; we use only the enabled field, because we are not adding conditions or exceptions. So, those three lines are the ones that are going to turn on our rule. And because of how the crypto miner works, it's going to start triggering a lot of alerts. Okay, so now if we go to the Falco Sidekick UI, which is here... okay. So, we have a lot of events detecting this crypto miner.

So, now that we have that alert, how can we see it in Prometheus? How can we integrate this knowledge, this metric, this information, into our troubleshooting dashboards and flow? The first thing we can do is, okay, let's have a look at the events of our cluster. We created this dashboard, which is called Falco Events. In this dashboard, you can see that something's happening. We already know the namespace that is having this CPU utilization increase, so we can filter here and say, okay, this is in default. What we see here is that we are triggering 1.5 Falco events per second, and there's only one rule firing, which is this one: Detect outbound connections to common miner pool ports. So, in one simple step, we detected it, just by adding Falco metrics into Prometheus and creating this Grafana dashboard, which is very simple. You can see that these events have increased. If you select all the namespaces, and maybe the last 12 hours instead of five minutes... you can see that the only detection we have right now — I don't know why — is this rule, but we should have more; I mean, there would be one line per detection. And we also have this table that gives us a workload overview. So, if we go back to the default namespace, we can see right from this dashboard which pods are related to this event.
So, we can see right away that we don't recognize these pod names, this workload, and that it would be the cause of our CPU increase. So, apart from being reactive, we can also be proactive and configure some alerts, right, Vicente? Let's see how we can do it with Falco first, and then with Prometheus. You mean configure alerts with Falco Sidekick? Yes. All right. So, we can go here. Yeah, I would need the deployment for Falco Sidekick. Okay. So, Falco Sidekick has a very wide set of options, of possibilities. Can you open the Falco Sidekick repository for a moment? Oh, sure. Yes. So, if you scroll down, you are going to see that Falco Sidekick supports different chat applications like Slack or Discord — Telegram is supported only in the development branch, but it's going to come out in a few days. It also supports Prometheus; that's the one we are using at the moment. It also supports Alertmanager and PagerDuty, or you could send alerts to different log aggregators like Elasticsearch, Splunk, and so on and so forth. To configure any of those, you basically give it some parameters. If you keep scrolling, you are going to see — a bit lower, more, more... yeah, a bit more. So, in the Falco configuration, this would be the falco.yaml configuration file: how to tell Falco that we are sending those events to Falco Sidekick. This is the information I was showing before. And if you keep scrolling down — I just want to get to the environment variables that we use. Keep going. Oh, this is the values YAML file, what we would pass to the Helm chart. It's very, very, very large, I can tell you. Or we use the environment variables; that's what I use for this example. So, yes, when this file ends — because it's really, really big — here we have environment variables like the Slack webhook URL. This URL is basically a webhook configured for Slack.
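As a sketch, the Sidekick Slack settings being described come down to a couple of environment variables on the Sidekick Deployment. The webhook URL below is a placeholder, and the variable names assume Falco Sidekick's environment-variable configuration scheme.

```yaml
# fragment of the falcosidekick Deployment spec
env:
  - name: SLACK_WEBHOOKURL
    value: "https://hooks.slack.com/services/XXX/YYY/ZZZ"   # placeholder webhook
  - name: SLACK_MINIMUMPRIORITY
    value: "alert"   # only forward events at priority "alert" or higher
```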
And every time there's a new alert with a severity higher than the one we define, it's going to send a message to the Slack channel. So, if we go now to the deployment, you will see that the configuration for that is actually very, very simple. We set here the environment variable with the Slack webhook URL — the webhook points to a Slack workspace that I just created anonymously — and I set the Slack minimum priority to alert, because if you set it to something lower now, you are going to get, I don't know, 500 alerts from this crypto miner. Yeah, it's going to flood your channel. I can show that, actually. We almost DoSed Slack earlier. That was fun, too. Yeah, it was fun. So, that would be a way of configuring Falco Sidekick to send alerts. As I said, you can use different methods. And what Jesús is going to show is how we use Grafana and...

Yeah, in case you already use Alertmanager, Grafana, and Prometheus, you can also use Alertmanager to do this. You can see it in the repo. Okay, we have an Alertmanager config here, so you can deploy Alertmanager with this configuration. Basically, you just need to set the group_by and then configure the receiver. This is a disposable email that we created here. And then the alert itself. This is the alert that we want to use. We are going to use this metric, falco_events, and I want all the events that have the label priority set to critical, and then I want this information for each of the events. So, if for one minute I keep getting these events from Falco, then I will be notified through Alertmanager. And the summary says that a critical rule triggered, and we can use the labels as variables for the message. So, if everything is okay now... okay, we can go here. Okay, I just deleted... no, I can go to alerts and then go to firing, and I can see that there is a critical rule triggered.
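The Prometheus alerting rule just described could be sketched like this. The metric name `falco_events` and the `priority` label follow what was shown on screen; the exact names (and the label value's capitalization) depend on your Sidekick version, so treat them as assumptions.

```yaml
groups:
  - name: falco
    rules:
      - alert: FalcoCriticalRuleTriggered
        # fire if Falco keeps reporting critical-priority events for a full minute
        expr: rate(falco_events{priority="Critical"}[5m]) > 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: 'Critical rule {{ $labels.rule }} triggered'
```

Alertmanager then routes this alert to the configured receiver (the disposable email in the demo), grouping by whatever labels the `group_by` setting lists.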
And if we configure our alert group to send to Slack, Telegram, email, whatever, you'll see that this was sent to the group, along with the value — the same value we saw in Grafana: one and a half events per second being triggered for the critical priority. So, this was everything we wanted to show you. This was how we integrated the Falco metrics into Prometheus and how we used them in a pretty common use case. Perfect. So, I guess it's time for Q&A now. Good.

All right. So, I saw a question: Falco has a rule engine — how do the rules work? Well, basically, what Falco does is capture the information from either source — from the syscalls, from the audit logs, from whatever — and it creates a structure, right? That structure is something we can parse, something we can use to filter. And those rules basically indicate which fields we want to filter on. So, in the case of a syscall, we could say: a file has been opened, the name of the file was such-and-such, the process that opened the file was so-and-so. The moment all of that matches, then we have a trigger, right? And we have to be careful how we write the rules, because the moment a rule is triggered, Falco is not going to continue looking; otherwise, we could have like 20 rules triggered by one event. Basically, it goes from more specific to more generic. And that applies to anything we want. So, if we have an application that generates logs, we could create a plugin that translates those strings into fields that we can filter on, and we could write rules that don't exist yet for those events, for those logs. This is what we call Falco plugins. It's a relatively new technology; it's been with Falco for a few releases. And we invite everyone to see how it works, bring their ideas, bring their plugins, and, well, bring that value to the community. I think the same person has a second question, a bit more interesting.
How do we address the never-ending process of writing rules? That's a difficult question, because we have an endless set of sources and applications that we could be filtering. And, of course, the kernel keeps adding syscalls, and the Kubernetes audit log adds more information. So, that's something that's never going to end. We have to keep watching our rules, we have to remove false positives, and we have to be careful with false negatives. The latest release of Falco has added a new functionality, which is a way of distributing the rules as an OCI artifact. What is that? Well, an OCI artifact could be a container image. We use the same technology: we put the rules files inside something that looks like a container image, and we store them in a container registry. falcoctl is a binary that comes with Falco and is able to observe when there is a new version of the rule set and download the rules automatically. Falco is going to notice that, and it's going to reload those rules. We expect that more people are going to start distributing rules this way, and we hope that it helps to have a more varied set of rules. We don't have a specific case at the moment, but that's going to open a new world of possibilities. Great. There were a few other questions as well. I think Internet Guidance asks: do you have multi-window, multi-burn rate alerts like Sloth provides? Jesus, do you know anything about it? No, I don't know about that. I guess you are referring to having multiple views, different troubleshooting views in a dashboard. I think Grafana doesn't allow that, as far as I know. What you can do instead is create panel links to go from one panel to another. The other thing we did here is, as SREs usually are going to need some overview of the workloads, I created this panel over here. This dashboard is in the GitHub repo; just for your information, here you have the JSON that you can import into your Grafana installation.
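The OCI-distribution workflow just described looks roughly like this with falcoctl. A sketch only: the artifact reference `falco-rules:latest` is illustrative, so substitute the registry and artifact name you actually publish to:

```shell
# Sketch of consuming rules distributed as OCI artifacts with falcoctl.

# One-off: pull a published rules artifact into Falco's rules directory.
falcoctl artifact install falco-rules:latest

# Continuous: watch the registry and download new versions automatically,
# so a running Falco can hot-reload the updated rules.
falcoctl artifact follow falco-rules:latest
```

In the Kubernetes deployment, the `follow` mode is what runs as a sidecar so rules stay current without redeploying Falco.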
I wrote this query to have all the information I needed: the pods, the workloads, and the namespaces. With this variable over here, with just one click, I can change the scope of the dashboard. But as far as I know, you can't have multiple dashboards in one view. Yeah. Then Ricardo asked: is there a base set of rules out of the box to use for determining the security posture of a Kubernetes cluster? Very interesting question. There is a set of default rules, but we have to take into account that Falco does runtime detection. It's not going to give you the security posture of your cluster; that's not something you find out while you execute your workloads. What Falco does comes one step later: after you have already scanned your images and hardened your system, everything is running, but something could still happen. Your workloads could be attacked, or your image could have been compromised and the scanner didn't catch it. In runtime security, what we do is monitor what happens during execution. This is why we wouldn't use Falco for security posture. The set of default rules, as Boniface mentioned, is there. We receive a lot of new rules every year; the community is contributing, and it takes time to vet them. If you want to contribute as well, feel free to join and review the rules with us. We are really, really welcoming people to do that. Great. Then there was a question from Boniface, who asked: what is the advantage of using Falco instead of Elasticsearch? Elasticsearch is a log aggregator. It gathers everything that is sent to it, and then we have the possibility of searching. The difference compared to Falco's technology is that you have to gather the data first, and then you can start looking for patterns. It's very good for keeping track and looking for what happened, after it happened. Falco, however, as I said, is runtime security: it detects the action when it happens.
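A dashboard query of the kind described could look like the following PromQL. This is a sketch under assumptions: the metric name `falco_events` and the labels `k8s_ns_name`, `k8s_pod_name`, `rule`, and `priority` are the ones falco-exporter typically attaches, and `$namespace` stands in for a Grafana template variable; adjust all of these to your setup:

```promql
# Per-rule rate of Falco events, scoped by a Grafana namespace variable.
sum by (rule, priority, k8s_pod_name) (
  rate(falco_events{k8s_ns_name=~"$namespace"}[5m])
)
```

Binding the label matcher to a dashboard variable is what gives the one-click scope change mentioned above.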
If you want to keep that information and send it to Elastic, perfect, that's a great idea. But what we do is detect as soon as it happens, and we call that stream detection. The moment it happens, you get an alert, and maybe you want to trigger a remediation action, like killing the pod. This is something you wouldn't be able to do with Elastic. In Elastic, you have a record of the events, of the logs, but with Falco, you have instant action. Great. I don't see any other questions so far. We have a few minutes left, so if anyone is typing away and trying to get a question in, do so fast. This is the last call for questions. While we see if anyone is typing, do you have any final words, anything you want to add? I think I already said that we are a CNCF project, and like most CNCF projects, we are constantly looking for contributors. It's a project for everyone; I think everyone can use it in a very useful way. If you like this technology or find it useful, just come to the Slack channel, or attend a community call, or ping us. Just go to falco.org/community, and you can find a lot of ways to connect with us. We are always welcoming people, whether they want to be code or documentation contributors, or they just want to know more, or they just need support, whatever. They are always welcome. Perfect. I think those were really, really important calls to action. Everyone should go ahead and join in. While that important reminder was happening, Internet Guidance asked, I think, pretty much the final question at this time: does it work with eBPF? Very good question. We haven't mentioned anything about the driver technology. I said we capture system calls. Originally, we were using a kernel module, but we also support eBPF. The problem with those two technologies is that you have to compile them for each kernel version. So, first of all, yes, we support eBPF.
And second, with the newest Falco release, we are going to ship something we call the modern eBPF driver, which is basically a CO-RE eBPF driver: compile once, run everywhere. That's going to make it much easier to deploy Falco, and we hope it increases adoption. Perfect. Great note to end today's stream on. So, oh, a final question! It always pops in at the last minute, and we have time for it. Does Falco also work on the Windows kernel, asks Krishna? No, not at the moment. What we could eventually see is a plugin to translate Windows events into Falco events. As I said, we have those Falco plugins that could be used for that, but I don't know of any Windows Falco plugin at the moment. Yeah, great. And no worries, Krishna, you said sorry in the chat; this is exactly why we're here, to answer questions, so thanks for asking one. But that's it for today. Thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about Prometheus plus Falco, the Swiss Army knife for SREs. And we really love the audience interaction and questions; always happy to see those. We bring you the latest Cloud Native code every Wednesday, and in the coming weeks we have more great sessions coming up. So, thank you for joining us today, and we'll see you next week. Thank you for having us. Thank you.