Welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm your host today. My name is Whitney Lee, and I'm a CNCF Ambassador and a Developer Advocate at VMware Tanzu. Every week, we bring new presenters to showcase how to work with Cloud Native technologies. We will build things, we will break things, and we will answer your questions. Today, we have Melissa Kilby and Pablo Musa here with us to deliver a presentation called "Falco: a peek into the people and the latest features." Now, this is an official live stream of the CNCF, and as such, it is subject to the CNCF code of conduct. Please do not add anything to the chat that would be in violation of that code of conduct. So basically, be respectful of your fellow chatters, of the presenters, and please respect me too; I'm last on that list. Friends who are joining us live, please say hello in the chat and tell us where you're tuning in from. If you have questions during the presentation, please do post them in the chat. So with that, I'll hand it over to Melissa and Pablo to introduce themselves. Hi, Melissa and Pablo. Welcome! I'm so excited you're here. I'm Melissa Kilby and I work in the Apple Services Engineering security team. Amazing. Hi, everyone. My name is Pablo. I'm from Brazil, but I've actually lived in Amsterdam for a while now. I'm a developer advocate at Sysdig and a Falco contributor for the last year and a half, and I'm delighted to be here. Thank you very much for having us. Impressive. What cool guests! I'm so excited for today's show. It's going to be a blast. One thing that I'm super excited about is, we're kicking this show off a little differently. We're going to do an interview with Melissa because Melissa is super cool, and let's find out more about that. Melissa, can you tell us about yourself and the beginning of your tech career? Yeah, big surprise. My university studies did not initially involve a tech-related field.
Instead, I focused on sports science and had a strong passion for gymnastics. Later on, during my PhD, I discovered my new passion for computer programming, and my entry into cybersecurity began with my initial role supporting cyber research projects for the US government. I also taught applied data science at Placat, and around five years ago, I joined Apple, where I initially applied AI, ML, and big data to threat detection. Later on, I transitioned into a new role that involved developing low-level Linux kernel monitoring tools and performing threat detection at scale using Falco. Lastly, a fun fact about me is that I also had the opportunity to intern at NASA and contribute to their space-suit engineering program. Wow. How cool is that? How did you start then with open source in particular? Last year, I made my first open-source contribution ever, to the Falco project. It's so exciting. I started participating in the project by upstreaming patches, and despite having no prior experience in C, C++, or eBPF, I gained proficiency through the project as I contributed a lot, and I also had patience along the way. That's amazing. Tell us about your process. You're a full-on maintainer. Tell us about your process for becoming a maintainer. I definitely knew that to become more involved in a project, I needed to learn more about the project, make myself a known entity, and contribute. To do that, I started with easier patches. This approach allowed me to build trust with the maintainers and gradually familiarize myself with the codebase by tackling simpler tasks initially. I also gained confidence and a deep understanding of the project's intricacies. I also help review PRs and triage issues and provide guidance and expertise to the community. Additionally, I assisted in creating patches that others needed to effectively utilize Falco. In summary, I just aim to be helpful and contribute positively to the community in various ways.
One important thing is being comfortable with feeling uncomfortable and not knowing all the answers right away. All of us, all the maintainers, are constantly learning and evolving, so embracing this mindset is very crucial. Being comfortable with being uncomfortable is amazing. I want to take a second and say, we have so much great chat right now, and I'm excited to dig into some of your questions, especially about Falco, but we're going to do that a little later alongside the demo. Then we do have a lovely comment, that these are amazing achievements, Melissa, with which I totally agree, and being comfortable with being uncomfortable is a huge part of that, I think. So you're a maintainer of Falco. How does being a maintainer help in your career generally? That's a really great question. In addition to gaining more technical skills, you also gain technical leadership skills that you cannot gain in an equivalent way through more traditional training, I would say. Engaging with individuals beyond my immediate internal network, I can help harden the project through a diverse user base and their experiences with the project. That gives us all, all of us maintainers, the opportunity to look ahead at the technology landscape and better prepare the project for what is to come. I also learned that acknowledging diversity and other styles makes me a stronger, more well-rounded, and transparent engineer. I can focus on the actual goal that we're trying to accomplish, and often, or maybe even all the time, it's a combination of everyone's input that creates our success. Furthermore, having joined the project with years of experience working in a large company, I've also noticed similarities. Working with open source requires basically the same skills as working in a large organization, where you collaborate with various teams and departments, and you need to understand their needs and concerns; developing effective communication skills is of utmost importance, I would say.
In summary, by being available to offer support and assistance and becoming a dependable expert, you not only gain personal and career value, but also gain insight into the challenges faced by other companies. Cool. Why is it important then to be involved in upstream Falco? We at Apple definitely stand out as one of the distinct adopters due to the scale we operate at. The scale allows us to bring valuable insights and assert that certain approaches may not be suitable for large-scale production. We can identify more efficient alternatives that can benefit not only ourselves, but also other adopters like us that are facing similar challenges. Our objective is to enhance the robustness and strength of Falco. We prioritize stability, safety, and integrity alongside the addition of features. Lastly, it is genuinely exciting to not only announce the availability of a particular feature, but also to proudly declare that we contributed to building it. Amazing. Then one more question. How has your perspective changed since you became a maintainer on the project? That's a really great question. I recall that initially I wondered, how come some PRs are open for such a long time? Now I understand the reasons behind it. I definitely gained a deeper understanding of the complexities involved in managing and contributing to the entire Falco project. I've now also personally experienced the challenges and overhead of context switching, especially when it comes to providing high-quality code reviews and contributions across different aspects of Falco. These areas include Falco rules, documentation, user experience, as well as, obviously, low-level kernel and user-space coding, along with troubleshooting assistance. Navigating and excelling in all these areas simultaneously can be challenging. I think, Pablo, now it's time to talk about some new and cool Falco features. Yeah. Are you ready for me to share your screen, Pablo? Yeah, absolutely. Let's do it. If we can find it.
Yeah. Great. Before we talk about the 0.35 release, which we did about a month ago, just an intro to Falco for people that don't know the project that well. Falco is an open-source runtime security solution for threat detection across Kubernetes, containers, hosts, and the Cloud. It's a CNCF incubation-level project, and we applied for graduation back in November 2022, and hopefully we get it graduated soon. It's getting more and more traction over time, which is great, and here's an overview of how Falco works. The whole idea of Falco is being this single agent that's collecting data from multiple inputs and sending alerts if something suspicious happens. For the different inputs we can have, we can talk about system calls, where we can look with either the kernel module or using eBPF probes, or we are talking about plugins, like collecting data from GitHub, CloudTrail, or Kubernetes audit logs. Then we have a set of rules which we run those events against, and if there is a match, it means this is weird behavior, this is suspicious, and then we send an alert to whatever output you configure. A little bit more on eBPF: I guess most of you have heard of eBPF before; it's been a buzz for a few years now. It stands for extended Berkeley Packet Filter, but the name is basically history, so don't worry too much about it. In a nutshell, it extends the kernel capabilities safely and efficiently without changing kernel source code or loading kernel modules. The way I really like to think about it is basically kernel instrumentation made simple. So that's my intro to Falco, and now we're going to talk about a very cool feature, the modern eBPF probe, and I think Melissa is going to be able to tell you a lot about it. Thanks Pablo.
In the following slides, I will present the modern eBPF implementation of Falco, as Pablo already said, focusing on the concepts of kernel buffers, the eBPF ring buffer, and the compile once, run everywhere (CO-RE) feature with BTF (BPF Type Format) tracing programs. Next slide please. The modern BPF driver feels good for adopters, as it eliminates the need to worry about many underlying complexities. For example, if you are a Go or Java developer, you are accustomed to easily compiling and running applications on various operating systems, such as Linux, macOS, and Windows, as well as different architectures like x86 or ARM64. This ease of portability is possible because many of the underlying considerations are abstracted away from you. And everyone loves easy DevOps, easy testing, and better performance. Next slide please. In a nutshell, what our kernel drivers do, all of them, is primarily read kernel data structure fields. As shown on the left side of the slide, the old BPF instrumentation requires performing successive kernel reads when traversing through kernel structs. This is due to the fact that memory is read directly to the stack. And to successfully navigate these structures, you have to know the exact substructs and field names. This process is not only tedious, but also fragile, because the Linux kernel does not provide a guarantee of backwards compatibility or stable APIs across different releases. Now let's shift our focus to the right side of the slide, where we explore a better approach known as the CO-RE way, which we adopt in our modern BPF driver. Here's how it works. First, with the CO-RE approach, you can read the desired fields in a single operation, regardless of the number of structs you need to traverse. You only require one bpf_core_read. Furthermore, even if there are slight variations in the kernel data structures, the new bpf_core_read helper can adapt without requiring modifications to the read operation.
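The difference between the two read styles can be sketched roughly like this (kernel-side eBPF C, not a standalone compilable program; it assumes vmlinux.h and libbpf's helpers, and the thread_pid/level field chain is just an illustrative example, not Falco's actual code):

```c
/* Old way: one bpf_probe_read() per hop through the structs, with
 * field offsets baked in at compile time against one exact kernel. */
struct task_struct *task = (struct task_struct *)bpf_get_current_task();
struct pid *thread_pid;
unsigned int level;
bpf_probe_read(&thread_pid, sizeof(thread_pid), &task->thread_pid);
bpf_probe_read(&level, sizeof(level), &thread_pid->level);

/* CO-RE way: one helper traverses the whole chain, and BTF
 * relocations fix up the field offsets at load time. */
unsigned int lvl = BPF_CORE_READ(task, thread_pid, level);
```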
That's pretty cool. And this is achieved through the use of kernel debug info, the BTF I mentioned earlier; BTF automatically identifies the new location of kernel structure fields. That sounds amazing. Unfortunately, nothing is ever perfect, especially not in the Linux kernel. This approach will not work in cases where there are significant structural changes in common kernel structures, or, let's say, when the meaning of a field undergoes a complete radical transformation. Next slide, please. In addition, with the old eBPF driver, we required the exact kernel header files to compile the BPF object code. With the modern BPF approach, where the driver is compiled generically and not for a specific kernel release, we faced the question of where to obtain kernel data structure definitions from. And I must admit that there is no black magic involved in this process. To address this, you need to maintain a vmlinux.h header file in your project, which contains all the necessary kernel data structure definitions. Additionally, if your program relies on macros or functions typically found in system header files, you will also need to redefine them. Returning to the scenario I just described, when encountering incompatible types between different distributions or kernel releases, you introduce flavor header files. In the example on this slide, you can observe that we defined an audit_task_info struct flavor specifically for CentOS. Okay, lastly, because of the CO-RE approach in Falco's modern BPF driver, we can now bundle the driver with the final user-space Falco binary during the linking process. And as a result, you have a single binary to run, providing a seamless experience and the perception of a kernel-driverless solution that feels good. We have a quick comment from the chat about enabling debug on a production kernel; this is about a kernel setting, and we'll come back to it. Next slide, Pablo.
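As a rough sketch of what that looks like in driver source (illustrative only, not Falco's actual code; the `___` flavor-suffix convention is real libbpf behavior, but the struct name variant and its contents here are assumptions):

```c
#include "vmlinux.h"   /* all kernel type definitions, dumped from BTF,
                        * so no kernel headers are needed at build time */

/* A "flavor" for a distro-specific layout: libbpf strips everything
 * from the ___ suffix onward when matching the type name against the
 * running kernel's BTF, so both layouts can coexist in one object. */
struct audit_task_info___centos {
    /* ...fields as laid out on that distro's kernel... */
};
```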
Okay, switching gears a little bit, let's dive deeper into the concept of kernel buffers. Initially, when eBPF was introduced, there was only a perf buffer available; see the diagram on the left. And it was necessary to allocate one buffer for each CPU. For instance, on a server with 96 CPUs, you would need 96 buffers, with each buffer typically having a size of eight or 16 megabytes. Let's consider a real-world example with the older perf buffer. Imagine you have very busy servers where kernel-side drops are occurring. You keep increasing the size of the buffer, but your problems do not go away. This is because the challenges often revolve around bursts of events. So what now? The eBPF community has introduced a promising new type of buffer called the ring buffer. In our modern BPF kernel driver, we utilize the new BPF ring buffer. These buffers, however, have fundamental design differences that we had to learn through experience. The new ring buffer maps memory twice, contiguously back-to-back in the virtual memory, to make working with records that wrap around simple and efficient. While it turns out the new ring buffer implementation does not actually duplicate the memory, it was confirmed by a kernel memory management expert that currently there is no way to avoid the double accounting of memory that is reserved, but not used, by the BPF ring buffers. So please keep this in mind when using the new modern BPF probe. To account for the different memory footprint and to handle event bursts better, the best approach is to leverage the capability of the ring buffer to be utilized across multiple CPUs, which is what you see on the right side of the slide. This not only helps in managing the memory effectively, but also has another beneficial side effect with the larger shared buffer: you may experience fewer or no event drops, as the buffer can better handle temporary spikes in the volume of events. And Pablo, I think you now have a demo for us around the CO-RE feature.
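To make the memory trade-off concrete, here is a tiny back-of-the-envelope helper (plain C, using the hypothetical numbers from the talk: 96 CPUs, 8 MiB per-CPU perf buffers, versus one 16 MiB ring buffer shared by every 2 CPUs):

```c
#include <assert.h>
#include <stdio.h>

/* Total allocation with the old design: one perf buffer per CPU. */
long perf_total_mib(long cpus, long per_cpu_mib) {
    return cpus * per_cpu_mib;
}

/* Total allocation with the new design: one ring buffer shared by
 * every cpus_per_buf CPUs. */
long ring_total_mib(long cpus, long cpus_per_buf, long buf_mib) {
    return (cpus / cpus_per_buf) * buf_mib;
}
```

With these numbers the totals come out the same (96 × 8 MiB = 48 × 16 MiB = 768 MiB), but the burst headroom changes: a spike on one CPU can now spill into the whole 16 MiB shared buffer (if its neighbours are quiet) instead of being capped at its private 8 MiB, which is the "fewer drops" side effect mentioned above.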
Yes, absolutely. So one of the cool things about open source is, every time we work on something new, we like to write about it and put a blog out there, right? And then you have other people from the community talking to each other and putting things together. So here's the modern BPF blog post. You can read about everything Melissa said and more. You're probably gonna see that some of the diagrams actually come from here. And at the end, you're actually gonna see a "try it out" with the link. This is the link that I'm gonna use to basically show you a quick demo of the modern BPF. So what I'm gonna do here is basically compare the classic BPF probe that we used to have with the modern one, with all the features that Melissa talked about, but mainly focusing on the CO-RE, like compile once, run everywhere, to show how easy it is for you to just update or upgrade your kernel without having to think about Falco or anything that's actually using eBPF. So getting started into one of the comments, enabling debug on a production kernel: it's not really debugging, it's more about increasing our visibility into what is happening, right? That's what we want to achieve with Falco when using eBPF. So what I'm gonna do first is, I have this Linux machine, just a simple configuration, and I have Falco installed. And I have the... Sorry to interrupt, Pablo. Will you make it bigger, please? Oh, absolutely. Yeah, thank you. I completely forgot. Yeah. So this is the environment that we use and where we like to put labs like this. I have Falco installed here, and I'm just gonna run it with the BPF probe. And you can basically see here that, yeah, it's running. It's using the BPF probe, and this is where the BPF probe is configured. Again, this is the classic BPF that we used to have before.
And just to show how Falco works, I'm gonna do something that could be considered suspicious, which is basically trying to find some keys, and Falco is gonna alert, basically saying: oh, warning, there is a "grep private keys or passwords" activity found, and it's gonna give you a lot of information about it. Because we're just running it on the host, not in a container or in a Kubernetes cluster, we don't have container information or the image, but otherwise we would. So this is the probe running. I'm now gonna do the same thing but using the modern BPF. So notice that to use the modern BPF with a plain Falco installation, you just do dash-dash-modern-bpf. And what you're gonna see here is, yeah, Falco started. We are looking into syscalls using the modern BPF. And more than that, we have one ring buffer for every two CPUs, which is what Melissa just described. I'm gonna do the same thing, suspicious activity, and hopefully Falco is just gonna let me know: hey, folks, this doesn't look really good. You might wanna take a look. I'm using standard output here, but we could easily set up Falco to send this to a SIEM in your backend, right? Or even a Slack notification if something is actually critical. Good, so that's Falco just running and working. What I'm gonna do next is go to the next challenge. I'm going to update the Linux kernel, which is something that you don't do every day, but it needs to be done from time to time. So here we have the 1030 and I'm just gonna update it. This might take some time. I think it's gonna take less than a minute, and if there is any question... Yeah, there's a question. I'm happy to take it. Yeah: what's the prerequisite for a person to know to understand all the technical nuances for this session? What's some background knowledge that would be helpful? That's a good question. I think understanding a little bit of how processes relate to the kernel and basically what syscalls mean is an important aspect.
So every process that's running on a computer, whenever it needs resources like memory, CPU, or files, it needs to go through the kernel, right? And it does that through system calls. So the whole idea is that if you have visibility and you instrument the kernel, you have visibility into all the processes that are running. So it doesn't matter if they are containers, it doesn't matter if they are within a Kubernetes cluster, within pods; everything goes, at the end of the day, through the kernel. And because we're instrumenting the kernel itself, we have full visibility there. So I guess that's kind of the gist. To add to that: the question about the background for the session today could be considered the question, what is the background I need to get started with Falco? And it really depends what level you want to get into, because Falco cuts across so many different domains, including kernel programming, red teaming, offensive security, data science, big data, and data pipelines. Today we dived more into the drivers. So the background you need is a traditional kernel programming understanding; as I mentioned before, for traditional kernel modules or the old eBPF driver, you need to have the kernel header files for the exact kernel you want to deploy your tool to, in order to compile the eBPF bytecode. And then for eBPF as a newer technology, there are so many great tutorials, and I would just start reading and maybe then rewatch what we said today, just as a suggestion. That's great. So I updated the kernel from 1030 to 1034. What I'm going to do now is just... oops, not this one. I'm just going to reboot the machine so we can start with the new kernel. Yeah, this is going to be inactive for a few seconds, and I'm just going to wait. So if there are any questions in the meantime, I can also try to answer those. There's a comment that maybe you can speak to.
That's actually CONFIG_TRACEPOINTS, not debug, but I don't... it's outside of my knowledge area. Yeah, it goes into the "enabling debug on production kernel" comment: you're not enabling debug, we're just hooking into tracepoints to get better visibility. And one important thing about Falco is we only do detection. So we're literally only looking into things. We're not actually changing the behavior of anything. So the kernel is updated. There it is. And now I'm going to go for the last part of this specific demo. We have another one later. I'm running again the same thing that I just did: I'm going to run the classic BPF probe, and then I'm going to run the modern BPF, and we're just going to see what happens. So trying to run the classic one, it's trying to use the BPF and basically says: an error occurred, forcing termination. And if you look into the error, it's like: the BPF probe is compiled for a different kernel, but you're running this one. So we can't do anything. You need to recompile your classic BPF probe so it can actually work. On the other side, if we just go for the modern BPF, voila, Falco is running. Why? Because of the compile once, run everywhere that's embedded into the modern BPF. So Falco itself already has everything compiled. And if we just move ahead and run the same suspicious command, you're going to get the same warning here as before. Looking here, it seems like, okay, but it's easy to compile Falco in this scenario here. Yes, it is. But if you're thinking about a hundred thousand nodes running it, and now you need to think about Falco and everything else, that's one less thing to worry about, plus all the performance improvements that we added into it, like ring buffers and others. So yeah, I'll just click next. Again, this is available: if you just look for the Falco modern BPF blog, you're going to have the blog, and you can try it by yourself and just play with Falco.
Yeah, so moving on, I think we can now talk about another amazing feature in 0.35, which is adaptive syscall selection. And I'm sure this is close to Melissa's heart, since she really worked a lot on this feature, and I really love this feature, by the way. So go for it. Thanks Pablo. Yes, if you have been using Falco and believe that it only monitored the system calls defined in your rules, I unfortunately must inform you that this was not the case prior to Falco 0.35. However, I have good news: starting from this Falco release, 0.35, the statement holds true, albeit with some caveats that I will explain in detail now. We modernized Falco from the ground up, really from the ground up, and introduced this new feature called adaptive syscalls monitoring. It empowers end users to tell Falco which system calls to monitor. Also, previously Falco was limited to monitoring a narrow set of syscalls, which was a drawback, since its underlying libraries and kernel drivers were capable of monitoring a wider range of syscalls. So we also addressed this gap, and this milestone, allowing access to a notable range of syscalls, not all syscalls, but I think over 350, represents another significant advancement in threat detection. In summary, as an end user, the benefits you gain from this new release and these updates include full control over the selection of system calls to monitor, and this flexibility allows you to adjust your monitoring approach over time based on your cost budget and threat model. You have the freedom to tailor your monitoring strategy according to your specific needs and make adjustments as necessary, and I think this is pretty cool. Next slide please. Now, let's just guess the reasons why adaptive syscall selection was not available earlier. One of the primary reasons is that certain events involve multiple syscalls. For instance, spawning a new process typically involves a combination of syscalls like fork followed by execve.
Additionally, in certain scenarios, such as establishing a network connection, monitoring system calls like socket or bind is also necessary, along with the specific syscalls of interest, such as connect or accept. To add to the complexity, Falco maintains a process cache table that stores state information, and this state allows for real-time traversal of parent process lineages, enabling features like parent-child relationship tracking that everyone truly loves about Falco. The combination of these factors made it challenging to provide adaptive syscall selection in earlier versions of Falco. However, with the recent advancements and modernization efforts, we have successfully overcome all these complexities to offer this valuable feature. I have to say, this advancement brings really great excitement to a variety of end users. Researchers can now monitor all system calls, while adopters in production settings can customize their monitoring scope to align with their cost budget and, again, their specific requirements. This flexibility enables efficient resource allocation and effective threat detection. Okay, in the upcoming slides, I will provide some explanations, not all of the explanations, about the inner workings of Falco. Next slide, please. The key takeaway from this slide is that in order to effectively monitor system calls, it is essential to know their system call IDs. System calls are defined in the Linux headers. Here we go again with the header files. And each syscall is associated with a specific number. To support multiple architectures, Falco internally employs a mapping mechanism using a custom enumeration. Again, this mapping is necessary because the number associated with a system call can vary across different architectures, such as x86 or arm64. By utilizing this mapping mechanism, Falco's libraries can uniquely identify and handle each supported syscall in a consistent and uniform manner.
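A toy version of that mapping might look like this (plain C; the enum and function names are hypothetical, loosely modeled on the idea described above, but the raw syscall numbers for the two architectures are the real ones):

```c
#include <assert.h>
#include <string.h>

/* Internal, architecture-independent event IDs (hypothetical names). */
enum internal_syscall { SC_EXECVE, SC_OPENAT, SC_MAX };

struct arch_map { const char *arch; int nr[SC_MAX]; };

/* Raw syscall numbers really do differ per architecture: */
static const struct arch_map maps[] = {
    { "x86_64",  { 59, 257 } },   /* execve = 59,  openat = 257 */
    { "aarch64", { 221, 56 } },   /* execve = 221, openat = 56  */
};

/* Translate a raw, per-arch syscall number to the internal enum;
 * returns -1 if the syscall is unknown on that architecture. */
int to_internal(const char *arch, int nr) {
    for (unsigned i = 0; i < sizeof maps / sizeof maps[0]; i++) {
        if (strcmp(maps[i].arch, arch) != 0)
            continue;
        for (int e = 0; e < SC_MAX; e++)
            if (maps[i].nr[e] == nr)
                return e;
    }
    return -1;
}
```

With one table per architecture, everything downstream (rules, parsers, state engine) can reason purely in internal IDs.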
Another key point to highlight is that syscalls have both an enter event and an exit event. To facilitate a structured approach in the parsing process, Falco introduces an additional mapping or enumeration, shown on the right of the slide. This mapping is essential for organizing the parsing and handling of events, as Falco not only deals with syscall events, but also incorporates non-syscall events, such as container events. Next slide, please. The last two slides provide an overview of the adaptive syscall selection flow. We begin with the Falco rules. Falco traverses the abstract syntax tree of each rule's filter and extracts the syscall strings. It then maps the strings to the corresponding syscall IDs and the internal event IDs within Falco. This process involves not only the syscalls defined in the Falco rules, but also the syscalls required for Falco's internal state engine that we already discussed. Next, to transfer this information to the kernel, we employ a dedicated eBPF map in the case of the BPF drivers, or an internal bitmask using the ioctl API in the case of the kernel module. This allows us to inject the relevant information into the sys_enter and sys_exit tracepoints within the driver. Next slide, please. Important to understand is that, due to the triggering of the sys_enter and sys_exit kernel tracepoints for literally every syscall, our pushdown filter is designed to efficiently exclude unnecessary syscalls that we're not interested in before any data field extraction occurs in our kernel drivers. This filter optimizes the monitoring process by discarding irrelevant syscalls early on, the earliest possible, basically. Once again, Falco operates as a passive monitor of syscalls and does not exert any influence on or modify the behavior of the syscalls being monitored. Additionally, the purpose of kernel-side filtering is to minimize the number of events that must be transferred from the kernel to user space through the buffer we already talked about as well.
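The pushdown filter can be sketched as a simple interest table (plain C standing in for the eBPF map / ioctl bitmask described above; all names here are illustrative, not Falco's actual symbols):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_SYSCALLS 512

/* User space populates an "interesting syscalls" table; in the real
 * drivers this lives in an eBPF map (BPF probes) or a bitmask pushed
 * down via ioctl (kernel module). */
static uint8_t interesting[MAX_SYSCALLS];

void mark_interesting(int nr) {
    if (nr >= 0 && nr < MAX_SYSCALLS)
        interesting[nr] = 1;
}

/* Sketch of the sys_enter/sys_exit pushdown filter: bail out before
 * any field extraction if nobody asked for this syscall. */
int should_process(int nr) {
    if (nr < 0 || nr >= MAX_SYSCALLS)
        return 0;
    return interesting[nr];
}
```

The point is that the check happens at the very top of the tracepoint handler, so uninteresting syscalls cost almost nothing and never reach the buffer.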
In addition, the goal is to reduce the number of events processed and evaluated against Falco rules in user space. By implementing this modernized filtering mechanism, we can achieve these efficiencies without compromising visibility at all. This is because the ignored syscalls are not utilized in Falco rules and also not utilized for the state, ensuring that only the events necessary for Falco's state and rules are served up to user space. In summary, the kernel-side filtering approach allows us to optimize event handling while still providing all of the relevant information required by Falco. I think we're ready for the next demo, Pablo. Absolutely, thank you very much. Yeah, so again, for the adaptive syscalls it's the same thing. We sat down together, we wrote a blog. Thank you, Melissa, Roberto, and Federico for that. And we tried to explain here what's going on, what's the new feature, and there is way more information than what Melissa described. And at some part here, which I'm failing to find, there is also a link for you to try another lab if you want. So the idea of this lab... okay. Sorry for that, I was expecting it to just run. I'll have to talk for a minute while it's loading. Yeah, I have a question for you. There's one in the chat from earlier: how can I start contributing to a project like Falco? My experience is that there are various ways. First of all, it doesn't have to be just code. You can also help on Slack answering questions about Falco, or help triaging issues, and just be helpful in general. If you want to contribute code, I would recommend starting with smaller, easier patches to build trust with us over time. That's the recommendation I would give you. Do you have a follow-up question?
Yeah, I think even in the different events that I've been joining, people have been talking more and more about contributing without having to contribute code; there are many ways to contribute to a project. You can just help people. You can look into the documentation. You can help with blogs. You can connect folks to talk about the same thing, or host an event, or something like that. So if you're scared about all the technical parts that Melissa talked about, which I don't understand everything of, to be honest, it's really complicated to me as well, but I do my best, there are many different ways that you can still contribute to the project. Yep, so going back to my demo, what I'm gonna try to do here is basically show you adaptive syscall selection. So the idea is that Falco has a defined set of system calls that should be monitored, so you basically can guarantee that we have good visibility into what's going on and important data is not missing, right? So we will start by basically showing you: if we run Falco, there are a few settings that you can add just to see some logs, like log level debug. Make the text bigger please. That's great, thank you. And then you have the standard error here set to true, and you have the dry run. That's not gonna really collect events, but just do a dry run of Falco. And with that, we can clearly see the syscalls here that were added because they are in the rules: 31 system calls. And then you can see the other 43 system calls that were basically added to make sure that the state engine of Falco has everything it needs to have all the data, which is a total of 74 system calls that it is monitoring, right? So it's not all of them; I don't know how many there are, like more than 300, but we are just looking at those 74. So if I go to the next part, what I'm gonna do now is basically have one simple rule.
So for us to really play, I'm gonna narrow the scope, and then we're gonna see what Falco is actually doing. For those of you not familiar with Falco, basically what Falco does is look at the system calls that are happening, the ones that it decides to monitor, and it compares those events against rules, right? There's a default set of rules that for system calls has more than 80 by default. I'm just gonna say: forget about all of those, just use this single rule here, which is a very simple and dummy rule, to be honest. I'm just asking, okay, is there an uncommon process execution? And uncommon here means any process that's not in a short allow-list like bash, sh, or ls. So I'm basically monitoring anything that's an execve or execveat in an exit event. Good, now I'm gonna run Falco. I'm gonna use the same log level and standard error settings, and I'm just gonna load this single rule file. And what we can see now is there were only two rules found in... sorry, only two system calls found in the rules, execve and execveat. And because of that, to keep consistency, Falco had to add another 56 syscalls that are gonna be monitored as well, for a total of 58. So that's less than we had before, which was 74. And in this case, we actually have more added for state: before we had 43, now we have 56. So because we have fewer syscalls in the rules, Falco had to balance to make sure that it keeps its state consistent, and it increased the number of system calls that are actually monitored. This is just an Ubuntu license check happening here. Great, so now I'm gonna run some commands with the script and we're gonna see what Falco gets. So there are a few warnings in here. Basically, I'm looking at what the process was. So we have the script itself. Oh, sorry, let me... I hope it doesn't get in the way again. There was a touch, there was a mkdir, another touch. So just a few Linux commands that I executed.
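A rule along the lines of the demo's "uncommon process execution" check might look like this. This is an illustrative sketch, not the exact rule used in the demo; the allow-list of process names is an assumption:

```yaml
# Illustrative Falco rule: alert on any exec that is not in a small allow-list.
- rule: Uncommon Process Execution
  desc: Detect execution of a process outside a short allow-list (demo-style example)
  condition: >
    evt.type in (execve, execveat) and evt.dir = < and
    not proc.name in (bash, sh, ls)
  output: >
    Uncommon process executed
    (proc=%proc.name parent=%proc.pname cmdline=%proc.cmdline cwd=%proc.cwd)
  priority: NOTICE
```

The `evt.dir = <` part restricts matching to the exit side of the syscall, which is what the demo describes; `%proc.cwd` in the output is the path field whose disappearance the next part of the demo demonstrates.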
The important part here is the path, because that's the example that I'm gonna explore. You can see that, okay, this is the root slash folder. That's exactly where the process touch was executed from, in this case with everything set up correctly. So I'm gonna go to the next one, and what I'm gonna do now is actually use what Melissa described: I'm gonna select the specific system calls that Falco should monitor. Not more, not less. I'm gonna add just two of them, and that's gonna be it. And from there, we're gonna see that information is actually gonna be missing, because we don't have enough context to do the right thing. And I'm gonna edit Falco. Feel free to stop me if there is anything in chat. I can see it right now. There is a question: are these labs you're doing publicly available? I do believe that they are. I saw you navigate them. Absolutely, yeah. If you go to the blog on adaptive syscall selection, it's just in the blog. I can share the links later as well. There is a slide with the references, and the plan is to share the slides later on. Excellent. So, oh, oh no. So, I'm basically editing the configuration file to do exactly that. And I'm saying, okay, instead of using the default base set of Falco that just makes sure we have everything, I'm forcing Falco to look at just those two system calls, nothing else, right? So I'm gonna save. I'm gonna run Falco again. And now we can see we got two system calls from the rules. We got an extra two that basically came from the base syscalls setting, and this is an override because I'm adding the same two that I have in the rules and nothing else. And then you're gonna see that the total is actually gonna be three. There is execve, execveat, and procexit. procexit is a safeguard that was added and hard-coded in there just to make sure things don't derail. Good, so I have it running right now. I'm gonna run the same commands again.
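The override described above maps to the `base_syscalls` section of falco.yaml. A sketch based on the configuration introduced around Falco 0.35; check the falco.yaml shipped with your version for the exact shape:

```yaml
# falco.yaml — replace Falco's automatically computed base syscall set
# with an explicit, minimal list. procexit is always kept as a safeguard.
base_syscalls:
  custom_set: [execve, execveat]  # monitor only these two syscalls
  repair: false                   # no automatic state repair (shown later)
```

As the demo shows, this kind of aggressive narrowing can starve Falco's state engine of context, which is why fields like the process path stop appearing in the output.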
And what I want to show you is that right now, basically, the path is not there anymore. Why is the path not there anymore? Because we don't have the system calls that we need to actually have this information. So be very careful when playing with this. Don't do that in production. And if you actually look at the blog, there are some best practices with regard to, I think it's over here, the types of events that you have, the rules that you have, and which system calls you must include in there. And just to finalize my demo, what I'm gonna do now is use the last setting. Go on, sorry. There's a question in chat about the ring buffer: are you using a ring buffer across CPUs? Does that mean there's a lock between CPUs on system call execution, and a performance cost around it? Okay. Can we take this question after the demo? Because it's unrelated to the adaptive system call selection. Perfect. Absolutely. Yeah. So what I'm gonna do now is just set repair to true. And this setting is amazing, I love it. It's basically saying, okay, this is what I'm interested in, but you know what, I'm not sure what I'm doing and I might be breaking stuff, so please just repair stuff for me. And now when I run, I still have the two syscalls from the rules, and the two syscalls that are overriding via the base syscalls setting that I had before. But now Falco is adding 16 repaired system calls. That's what Falco believes is the real minimal set of syscalls we can have to keep state consistent and still give you the output. Feel free to correct me here, Melissa, if I'm saying something that I shouldn't. No. But just to finalize, now I'm running it again, and you're gonna see that the path is here. And that's probably because chdir was just added there, which is me changing directories, and that's what I'm printing here.
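The repair mode shown in the demo is the same `base_syscalls` section with `repair` switched on. Again a sketch, with option names taken from the recent falco.yaml:

```yaml
# falco.yaml — keep the narrow override, but let Falco add back the
# minimal set of syscalls needed to keep its internal state consistent
# for the loaded rules (e.g. chdir so the current working directory
# field can be populated in rule outputs).
base_syscalls:
  custom_set: [execve, execveat]  # the same two syscalls as before
  repair: true                    # Falco adds the "repaired" syscalls it needs
```

The design choice here is the trade-off the speakers describe: `repair: true` analyzes the rules you actually loaded and activates only what those rules need, instead of enabling every syscall that could possibly modify state.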
So that was a quick explanation of the new adaptive system call selection feature. Sorry, demo, not necessarily an explanation. And you can just try it yourself, just break it, use the same example or go with your own example. And it's just super cool that you can edit there, break things, try again, and have an environment to just do that. Let me share the links with you for the labs. The second one I was able to find and put on the screen. So we just need the first one. Yeah, the first one. The first one is here, the modern eBPF one, if you want it. All right, great. Thank you, I'll put that up now. And Pablo, I can probably add one point: the difference from the default state enforcement is that by default, Falco turns on any system calls that could potentially modify any state, whereas with the repair option, Falco carefully analyzes the actual rules you provide and then activates only the necessary syscalls to avoid breaking Falco's functionality. That's where the benefits come in. That's really cool. It looks like you got it from 74 system calls down to 18, is that what I saw? Yeah, pretty cool. Good, so shall we take the question now, or shall we stay with the roadmap and then go back to the question? I can take the ring buffer question. In the old eBPF driver, as I mentioned, there was a per-CPU buffer, and you had to allocate one of these buffers for each CPU. Only with the new ring buffer, which we use in the modern eBPF driver, can you basically take four, six, eight CPUs and allocate one buffer for all of those CPUs, and increase the size of the buffer. And we had a conversation on the kernel mailing list because of the different memory footprint, and we also found out that there should not be any contention. And again, our monitoring process is passive, so you do not influence or modify syscalls or lock anything, for that matter.
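That per-buffer CPU grouping in the modern eBPF driver is exposed as a configuration knob. A sketch assuming the option present in recent falco.yaml files; verify the name and default against your Falco version:

```yaml
# falco.yaml — modern eBPF driver: share one ring buffer among a group
# of CPUs instead of allocating one buffer per CPU, trading buffer count
# for a larger, shared buffer with a different memory footprint.
modern_bpf:
  cpus_for_each_syscall_buffer: 2   # e.g. one shared buffer for every 2 CPUs
```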
Do we have more questions, or is that our roadmap slide, basically? There is another question: how do you recommend that folks pitch Falco to their organizations? What benefits, and how? That's the elevator pitch question, and I think if you had to choose just one threat detection tool, doing operating-system-level kernel inspection probably covers most of the attacks. So that's basically where I would recommend people get the most out of their investment, if you can only deploy one tool. One downside is that you have to deploy this tool across your entire environment. It could be different backend servers, like Kubernetes, your worker nodes, then you have your control nodes, maybe you have bare metal hosts, so you would need to deploy Falco everywhere. And there's a small subset of attacks that Falco cannot detect. If you think about higher up in the application stack, for example authentication bypasses, you would still need application logs. And there's also the Kubernetes control plane and the Kubernetes audit logs; Falco has a plugin for those, but that's not syscall data. Pablo, you probably have more to add to this. Yeah, I think security must be seen as layers, right? It's a fact. And if you look into the cloud native environment, you have many different acronyms. We're talking about CIEM, we're talking about CWPP and also CSPM. So you basically need to make sure that you understand what Falco's proposal is, which is the last line of defense. So after you have all those layers in place, there's still the zero-day vulnerability, right? Log4Shell was discovered almost nine years after it was introduced. What was happening throughout those nine years, and how could you have visibility into that if no one actually knew what was going on? And after people discovered it, until we had a patch, what happens in this gap?
What happens in this gap? If you have a tool that's the last line of defense and is monitoring at runtime what's happening, and that's what Falco is, you can be alerted in real time if something fishy or suspicious is happening. So I would say the way to convince people is to make sure they understand the problem: it doesn't matter how many layers of security you have, you're still vulnerable out there. And this is the extra layer, and I would say the last line of defense, as we usually like to call it. Yes, and really thinking about your cost budget is very important. For example, perhaps your servers are really, really busy and you have a very low cost budget. Now with these new features we presented today, you can do very targeted monitoring. Maybe you just wanna look at newly spawned processes because that's all you can afford to monitor, but it's still better than nothing. Yeah. Shall we move on to the roadmap and the future? Yes, please. Yes, so there are exciting prospects for the next Falco release, 0.36. Most notably, we are developing a rules maturity framework. This framework aims to facilitate the onboarding process for new adopters of Falco, and maybe also help with the sales pitch, and guide adopters in implementing threat detection effectively. We will identify approximately 20 to 35 rules that are tagged as stable and highly relevant for addressing the top cyber threats. These rules will serve as a starting point for adopters to implement monitoring and alerting in their environments. And once these initial rules have been successfully implemented with an acceptable noise level, because every environment is different, you have to deploy and see how they work, adopters can then progress to exploring rules tagged with a lower maturity level, and so on. I think this will really help drive adoption of Falco even further. Then we're also continuously striving to enhance the efficiency and robustness of the Falco source.
The upcoming developments in threat detection capabilities include, for example, symlink resolution of executable files as well as a redesigned DNS resolution mechanism. Additionally, there will be a wealth of new guides available to further improve the adoption process for users. Furthermore, we believe there is never a wrong time to embark on ambitious endeavors. As such, we're currently in the design phase for providing anomaly detection capabilities directly on the host. We're actively working on this exciting moonshot project. I think that's all we have, so as not to steal all the thunder from future talks. Amazing. I think, is that a wrap for today? I think we've got all the questions covered in chat. Thank you so much for all the wonderful questions today, chat. And is there anything else... I will share this. Okay, I'm gonna share a link to the slides right now, so you all have that. And I'm also gonna put it on the screen for a little bit. In the slides, you basically have the references, so you can see the two blogs that I talked about. One other thing: if you liked this, we have been doing a lot of different events. So if you just go to our events page, we have been going around with three workshops that are two hours long or even four hours long. And you can just check if there is anything close to your city, or a virtual one that you wanna join, to learn more about Falco. And my last one is, oops, wrong click. There is also the book. I read the book twice. It's actually a really nice book to understand a little bit more about cloud native and security. And it's free. You can just download it and have fun. This is also in the slide deck. So if you wanna stop my sharing now, people can just get the link. They would be happy. And we do have another question for you all: do you consider Falco a DevSecOps tool? It's a runtime threat detection tool for Linux operating systems. That's what the primary use case is. Yeah, I would expand on that.
And I would say yes. I've seen DevSecOps people using it. I've seen infra people using it. So you have a lot of different teams: you might have a single team that provides the infrastructure to the rest of the company, and people are just using the Kubernetes cluster and whatnot. And Falco is a way of actually getting visibility into what people are doing within their clusters, right? Are people opening shells into production containers? That's not what you want people to be doing. So it's also a way for you to understand how people are using the infrastructure that your team provides, for example. So yeah, I've seen DevSecOps people interested in this as well. And compliance people. Aligning with the rules maturity framework, we will also tag rules with their specific compliance use case, if applicable. That's also very exciting. Amazing. Well, I think that's a wrap for today. Do you have any closing statements before I do my closing statements? Go for it. All right. Thank you, everyone. Thanks so much for joining today's episode of Cloud Native Live. It was great to have Melissa Kilby and Pablo Musa here teaching us about Falco. We learned about the latest features, and we learned about the community and how to get involved. That was super cool. Chat, as always, you were amazing. Thank you for the interaction and the questions. And here at Cloud Native Live, we bring you the latest cloud native code on Tuesdays and Wednesdays now. So we have another episode tomorrow at noon Eastern. Thank you for joining us today, thanks to those who watched the recording, and I'll see you again tomorrow. So have a wonderful, wonderful day, everyone. Goodbye.