Hello everyone and thanks for joining us today. Today we're going to talk about the Falco Playground, which is a new project we recently launched. It involves Falco, an amazing open source project here at the CNCF, and WebAssembly, which we successfully employed in our community over the past few months. First, let's say a few words about ourselves. Hey everyone, my name is Rohit and I'm a 2023 Google Summer of Code contributor at the Cloud Native Computing Foundation. During this period, I was responsible for building Falco Playground with Jason as my mentor. After the program ended, I maintained Falco Playground and contributed to Falco repositories. Now, as a member of the falcosecurity organization, I am excited to share the work I did during this period. Hi, I am Jason. I work at Sysdig as a senior open source engineer and I'm a full-time core maintainer of Falco, having been in the community for about two years. Now, a quick disclaimer. Before starting the WebAssembly project we are about to present, we had zero knowledge about WebAssembly. The only thing we knew was some high-level information you can read online about the technology and how amazing it's becoming. But we also think this is the beauty of the session. We want to share our story as beginners in the field and how WebAssembly is helping us fill a historical gap in the Falco project, with the hope of inspiring others like us to do the same in the future. If we made it, you can do it too. So let's start. First, let me give you some context about Falco. Falco is a cloud native runtime security tool incubated by the CNCF, and we are currently attempting to achieve graduation level, so stay tuned and wish us luck. Falco monitors everything happening in your system and sends you an alert whenever something suspicious is detected. It is a powerful, efficient and expressive rule engine for runtime security rules. 
The main use case of Falco is to observe high-volume streams of security-relevant events by collecting them very close to the edge, with resource usage kept to the bare minimum. Think of it like a security camera, but for cloud native environments and applications. Traditionally, this happens by instrumenting the Linux kernel and collecting events generated by your system. Intercepting the system calls of the system gives you major visibility over every single process, file, and container, and how they behave over time. In Falco, kernel instrumentation happens with either a kernel module or eBPF, of which Falco is a great adopter. A nice feature of Falco is that the Falco libraries also allow recording streams of security events into capture files for later inspection and forensics. This feature will become relevant later in the presentation. And although the main focus of the Falco tool remains container and endpoint security, Falco is also dramatically expanding its use cases in the space of cloud security. For example, Falco can now also collect cloud logs from AWS and GCP and let you write runtime security policies on top of them too. Falco is also the most widely adopted runtime security project for Kubernetes, which is of course a first-class integration that we offer. So yeah, this is just some high-level information about the project, and of course I don't expect any of you to have everything figured out from the beginning; this is not really relevant to the session anyway. But I firmly believe, after working on the project for about two years, that the best way to understand how Falco works is to take a look at a real-world example, to get a better feel for what the experience of working with the tool is like. So let's take a look, and here is one. What you're looking at is a real-world example of a Falco rule set. 
So Falco rule sets are the basic way by which you configure Falco to work at runtime. They simply define a group of security rules that are evaluated one by one for each event of the data streams that Falco observes. A Falco rule has a name, a description, and an output that is formatted with information and metadata about the event whenever a security alert is triggered. Every rule has a severity level, and you can configure a minimum level for Falco to trigger on, so that you can make it exactly as noisy as you need in your system. You can have tags and any sort of meta information to better manage your rule sets. And then there's the most important part, which is the triggering condition of the security rule. This is evaluated for every single event that Falco monitors at runtime, and for every security rule in the rule set. It is a very simple Boolean formula in which you can use a rich set of data fields describing information about the event, plus some more information which is remembered by Falco in a stateful engine, given all the events it saw in the data stream in the past. So the context of every event is much richer than what the event contains by itself. For example, if you read from a file that you opened before, Falco will remember right away what the name of the file was. This is very powerful. There are also more features, like the possibility of grouping Boolean conditions with macros or lists and having them expanded whenever you load the rule set. So as you can see, the biggest challenge of using Falco is actually writing the proper set of security rules for it to evaluate at runtime. We as a community and maintainers provide a rich default set that is useful and acceptable for most environments. But of course, users usually write their own rules, and that requires some level of experience. Amazing. So you've got Falco, you've got your own set of security rules. What do you do with it? 
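To make the structure just described concrete, here is a small, hypothetical rule set in the YAML format Falco accepts. The list, the macro, the rule name, and the condition are made up for illustration; they are a simplified sketch, not the default rules shipped with Falco:

```yaml
# A hypothetical rule set showing the fields discussed above:
# a list, a macro, and a rule with condition, output, priority, and tags.
- list: sensitive_dirs
  items: [/etc, /root]

- macro: open_write
  condition: evt.type = open and evt.arg.flags contains O_WRONLY

- rule: Write below sensitive directory
  desc: Detect a file opened for writing under a sensitive directory
  condition: open_write and fd.directory in (sensitive_dirs)
  output: "Sensitive write (user=%user.name file=%fd.name command=%proc.cmdline)"
  priority: WARNING
  tags: [filesystem, example]
```

The macro and list are expanded into the rule's condition when the rule set is loaded, exactly as described above.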
The first thing people usually do is test that the rules actually work. You can do that by running Falco as a process on your local Linux system, trying to reproduce some suspicious activity, and checking if the rules actually trigger. Once you're sure, you can deploy Falco in your environment, in your cluster. You can do that by installing it manually on every single host, you can run Falco as a container, or you can just use Kubernetes like most people do. We as a community provide pre-built, ready-to-use Helm charts and deployment manifests. So let's say you've got Falco running on every node of your cluster and you open a shell. You do that and then you start typing something to perform some potentially suspicious activity. If you've got a rule configured in Falco for detecting that attack pattern, Falco will let you know instantly, because it sees the event happening in real time, by looking at the system right at the moment you do it. Falco also supports a rich output framework through which you can pipe the security alerts to whatever channel suits you best. And remember the possibility of capturing the stream of events into capture files: well, Falco is also capable of reading those instead of doing live captures of the system. This is tremendously useful for testing that the security rules work as expected on a stream of events that you recorded. And with that we just got closer to the problem we're trying to solve with the Falco Playground. One of the first things you want to do when playing with Falco is writing rules and testing them. And how do you do that? Well, you define your rule set in the YAML format that Falco supports and use the Falco tool itself to validate that the rule set is okay. If it's not, Falco will give you a rich output of everything that doesn't work, with errors and warnings, either in plain text or in a machine-readable format. 
And once the rule set is accepted by the tool, you run Falco, configure it with your rules, and start doing activity manually, trying to see if the rules you wrote actually match the attack pattern you want to detect. So it's a bit of a trial-and-error process. You have to go through the whole loop, and it's not very handy. It requires a tiny bit of experience. And we didn't want that bar to be high. We actually wanted our contributors, practitioners, and users to have a much easier life in testing that their rules work and that everything behaves as expected in runtime detection. Okay, okay. Wait a second. Hold on for a minute. Everything is interesting so far. Falco is of course a fantastic project. But isn't this a WebAssembly conference? What am I doing here? And what even is the Falco Playground? Okay. Assume that I want to solve the gap I just talked about, and I want to create an easier solution for people to write their own Falco rules and test them before deploying Falco to staging or production. Right? Other projects have something similar. Take for example the Go Playground, which allows you to write and run Go code online and share examples with your friends. Can we do something similar with Falco? The answer is yes. Falco is the tool responsible for both validating rule sets, giving you indications in case something is wrong, and then configuring those rule sets to be matched against security events at runtime, testing that everything works as expected. We would probably need a backend server to run Falco on, and then some sort of web platform for our users in order to play with Falco. Right? That is a solution, but it's also costly to develop, maintain, and deploy, because we would need servers in our open infrastructure. Can we have an alternative? And then it hit me. WebAssembly lets you compile programs in most languages into something that runs natively in your browser, client side. 
C++, in which Falco is written, is not an exception. So that's what we did with the Falco Playground. This is the idea that I proposed as a Falco contributor and maintainer to Google Summer of Code, and we got selected. Here is where the collaboration between me and Rohit started. The Falco Playground has been his project for Google Summer of Code, and it's the new platform we just bootstrapped to let people play with and learn about Falco rules and test them. This is all client side, without the need for any backend, thanks to the power of WebAssembly. I will let Rohit speak about the project from now on. Thank you, Jason, for the detailed explanation. Now allow me to introduce Falco Playground. Simply put, Falco Playground is a web application used to validate Falco rules. A good comparison might be any online code editor available on the web, similar to the Go Playground or an online Java compiler. The key difference is that while an online editor or compiler uses a backend server to compile its code, Falco Playground uses WebAssembly to validate its rules. How are they validated? Well, the rules written in the editor are sent to a Wasm module for validation. In the upcoming slides, we'll explore how Falco Playground functions in more detail. With Falco Playground, you can edit, create, import and update your Falco rules. It provides a quick, seamless way to validate Falco rules without the overhead of installing Falco itself. Moving forward, the workings of Falco Playground can be simplified into three components: the editor component, the memory component, and the Wasm component. The editor is the client-facing application where you write all your Falco rules. The rules are written in the form of a YAML file and stored in memory. The file is then fed to the Wasm module that is loaded into the browser. And finally, the Wasm module validates the rule file and provides the output. Now let's dive deep into each of these components and see what is going on behind the scenes. 
By now, we know that Emscripten emits a JavaScript file and a Wasm file after a successful compilation of C++ code into WebAssembly. The JavaScript file contains information about the Wasm file, like its location and exported functions. It also contains information on how to instantiate the Wasm module in the browser. During this process, the exported functions in the Wasm module are mapped to individual JavaScript functions that can be referenced by Falco Playground whenever they are required. In almost all cases, a built-in function, namely instantiateStreaming, can be used to fetch the Wasm module. The next essential step is to store all the Falco rules within a single YAML file. Typically, this file is named rule.yaml, which is then supplied to Falco for validation. However, when using a web browser, things can get tricky, because browsers restrict direct file writing, updating, or storage. To work around this, Emscripten offers a file system API that allows us to temporarily store files in memory. These in-memory files can then be seamlessly provided to Falco for further validation and processing. Here, we use MEMFS, the in-memory file system that comes packaged with Emscripten. Now that the rule files are stored in memory, we have to supply these rule files to Falco for validation, and after supplying the files, we have to extract the information Falco provides after validating them. Falco being a command-line tool, it outputs information in the terminal as standard output. However, in our setup, this information gets displayed in the browser's console, which is not really helpful. To solve this issue, we take a different approach. We override the print functions provided by Emscripten and, instead of directly printing the information to the browser's console, we store it in a local variable. This allows us to capture and manipulate Falco's output within our web application, making it more useful and accessible for further processing and display. 
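The flow just described can be sketched in a few lines of JavaScript. The `Module` object here is a stand-in for the configuration object Emscripten's generated loader consumes, and the `FS.writeFile`/`callMain` calls shown in the comments (including the flags) are assumptions about how the real glue would be driven; only the output-capture pattern is simulated below:

```javascript
// Sketch of the glue between the editor and the Falco Wasm module
// (names are assumptions; in the real playground, FS and callMain come
// from the JavaScript file that Emscripten generates alongside the Wasm).
const capturedOutput = [];

const Module = {
  // Emscripten invokes these hooks for every line the program writes
  // to stdout and stderr; we collect the lines in a local variable
  // instead of letting them end up in the browser console.
  print: (line) => capturedOutput.push(line),
  printErr: (line) => capturedOutput.push(line),
};

// With the real Emscripten module, validation would look roughly like:
//   Module.FS.writeFile("/rule.yaml", editorContents); // store in MEMFS
//   Module.callMain([...validationArgs]);              // run Falco's main()
// Here we only simulate Falco emitting one line of output:
Module.print("rule.yaml: Ok");

console.log(capturedOutput.join("\n"));
```

Once the output lives in `capturedOutput`, the playground is free to parse it, color it, and render it in its own terminal widget.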
In essence, we are just redirecting Falco's output from the console to a place where we can work with it effectively. Now, let's set our focus on the error handling part of Falco Playground. Errors are displayed in multiple locations: in the editor they appear as squiggly lines, and in the terminal as plain text. The errors in the terminal are also color-coded for visual cues. To illustrate this further, let's look at two examples, one where the rule validation is successful and one where it is not. In the successful case, the console message in the right-hand corner of the screen displays output in a reassuring green color. This signifies that the validation process was successful. On the other hand, when there's an issue with the rule syntax or configuration, you'll immediately notice a different set of indicators. To gain more insight, let's look at a quick demo of Falco Playground. Upon loading, Falco Playground shows an editor, a terminal, and a group of utility buttons. When you load the website for the first time, you can see an example automatically loaded into the editor. Now, let's try to create our own short and simple rule. This rule alerts whenever a file is opened or modified in a certain directory. First, we use fields such as the rule name and description to tell Falco about the rule. Then, we use the condition to write the logic, the output to display when the rule is triggered, with some additional metadata, and last but not least, the priority. As you can see, the editor behaves similarly to the autosave and auto-compile features of a code editor. Autosave? Yes, all updates to the rule file are stored in a localStorage object. Okay, what about auto-compile? Well, Falco Playground also listens for changes in the rule file and automatically validates it. Moving on, let's now talk about the terminal. The terminal is essentially the standard output of Falco. It contains messages emitted by Falco after validating the rule file. 
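The autosave and auto-compile behavior described above can be sketched as a small debounced change handler. The function name, the store interface, and the delay are assumptions for illustration, not the playground's actual code:

```javascript
// Hypothetical sketch of the autosave + auto-validate loop: every editor
// change is persisted immediately, while validation is debounced so Falco
// is not re-run on every single keystroke.
function makeAutoValidator(store, validate, delayMs = 500) {
  let timer = null;
  return function onEditorChange(ruleText) {
    store.set("rule.yaml", ruleText);   // autosave (localStorage in a browser)
    clearTimeout(timer);                // restart the debounce window
    timer = setTimeout(() => validate(ruleText), delayMs);
  };
}
```

In the real application, the `validate` callback would hand the text to the Wasm module as shown earlier, and `store` would wrap `localStorage`.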
The validation can either be a hit or a miss, which is indicated using colors. In addition to the squiggly lines inside the editor, the terminal provides additional context about the error. Falco also provides output in JSON format, which contains the location of a specific error in terms of rows and columns. This information is then provided to the editor, and the editor displays the squiggly lines. Now, let us go over some of the utilities Falco Playground has to offer. We know that Falco Playground automatically validates the rules whenever there is a change in the rule file. In case we need to run it manually, the run button can be used. We can also import a YAML file containing the rules. This, of course, is detected as a change in the rule file, and it is validated automatically after importing. We can also easily download the rules written in the editor by using the download button. The file is then downloaded and named rule.yaml. If you are testing a short rule, it can also easily be copied using the copy button. Falco Playground also has an option to load multiple examples. Each of these examples provides a base and a starting point to write your own rules. How do you check if a rule works? Well, Falco Playground provides functionality where you can validate rules using a SCAP file. This SCAP, or syscall capture file, contains kernel events that occurred over a certain period of time. This helps us emulate kernel events without the need for an actual kernel. We get a response whenever a rule is triggered against an event in the SCAP file, indicating that the rule is working as intended. What if you want to share your work? Well, Falco Playground also allows the user to share the rule file. This is done by encoding the entire rule file and creating a link. The link can then be shared, and the rules originally written can be retrieved from it. Last but not least, we also have the option to test our rules against our own SCAP file. 
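The sharing mechanism just mentioned can be sketched as a simple round-trip through the URL fragment. The `#rules=` prefix and the base64url scheme below are assumptions for illustration, not necessarily the exact encoding the playground uses:

```javascript
// Hypothetical sketch of link sharing: the whole rule file is encoded
// into the URL fragment, so the link itself carries the rules and no
// backend storage is needed.
function encodeRulesToHash(ruleText) {
  // UTF-8 -> base64, then made URL-safe for use inside a fragment
  const b64 = btoa(unescape(encodeURIComponent(ruleText)));
  return "#rules=" + b64.replace(/\+/g, "-").replace(/\//g, "_");
}

function decodeRulesFromHash(hash) {
  const b64 = hash.replace(/^#rules=/, "").replace(/-/g, "+").replace(/_/g, "/");
  return decodeURIComponent(escape(atob(b64)));
}
```

On page load, the receiving side would read `location.hash`, decode it, and drop the recovered text straight into the editor.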
In this example, the SCAP file contains kernel events emulating a socket event. When we write a rule to trigger on a socket event, we get a response containing important information about that event. While Falco Playground's final form might look simple, it was not all sunshine and rainbows during development. To talk about this, I'll let Jason take over from here. Over to you, Jason. Thanks, Rohit. I guess it's time for the closing remarks. Let's go through some of the challenges we found along the way. First, much of the code of Falco was still tied to the kernel-side system monitoring use cases. We use CMake as a project manager, given that Falco is a C++ project, and everything was a bit tied together, so we needed to modernize it in order to compile just what we need and exclude all the parts of the code and the dependencies that were not compatible, either with WebAssembly or with the Playground project. Second, a big chunk of Falco's features is developed using multi-threading to achieve optimal performance, and we had to exclude that because we couldn't make it work with WebAssembly so far. Maybe it's a limitation on our side, or a limitation of the technology; we want to find out in the future. Another big gap is that Falco has introduced over the past two years a very rich feature, namely the plugin system, which allows users to load and run Falco extensions at runtime and make the tool more powerful. We couldn't replicate that in the Falco Playground with WebAssembly yet, so that's a point of improvement we want to work on in the future. Speaking of the future, what are the next steps? First, we want to start distributing official web packages of Falco built for the WebAssembly target, so that people can use them in whatever projects they want, alongside the JavaScript loader. 
Second, we want to allow people to record their own capture files of data streams and then upload them to the Playground, so that people can actually test rules not just against some of our example data streams but also against their own. Speaking of examples, we plan to provide plenty more and to improve the ones we have with better documentation and comments. Last but not least, we want to integrate all these examples and snippets into our website and tutorials, so that people can share pieces of Falco rules and play with them interactively while learning about the project. And then we look forward to having new contributors. So, if you're a web developer, you like frontend, you're interested in the WebAssembly space, or you want to contribute to a CNCF project, please reach out to us. We will be super happy to have your help. And thanks a lot for listening so far. Last, don't forget to come join our community and say hi. We meet every Wednesday in the weekly community call. You can find us on the Falco website, on all the public channels and socials, or on the official Slack channel where we chat. We are super happy to make our first steps in the WebAssembly community, and we look forward to having you as a new community member and contributor. Let's talk soon. All right. Thank you, Jason. So, this talk was a little different than some of the others, more about a high-profile use case of a CNCF project adopting WebAssembly and using it to improve their developer experience. Anyone have questions for Jason? I've got one, Jason. What about integrating these tools and this virtual platform layering into CI/CD for organizations? Is that a valid use case for this? So, there has been a discussion about trying to install Falco inside GitHub Actions, to create an action that could potentially protect those pipelines. 
That is not strictly related to WebAssembly, but we actually are exploring the possibility of using the fact that Falco now compiles to a native WebAssembly package, basically, to implement that as well. Since the point at which we recorded this, we now have a pipeline for actually publishing those packages. So, we don't have official ones, meaning they don't follow our official release schedule, but we have those packages for whenever we make changes, basically. So, that is fully integrated into the release pipeline of Falco, like the other regular executables we provide to people. Those are picked up by the CI of the playground page, and, yeah, that's pretty much it; it is self-sustaining from this point on. Any other questions? Oh, in the front, John. Hey, so, great talk. I think, were there any lessons learned that stand out to you in the process of porting, you know, a large native C++ code base to WebAssembly? I know, you know, kind of taking what originally was a native app and getting it to work, and work well, and have feature parity with Emscripten is something that a lot of people do, but I think a lot of people also learn a lot of hard lessons along the way. That's a good question. Thank you. Lessons learned. This is actually what we wanted to bring to the table. This presentation is maybe less technical than others, but we wanted to share our experience actually using the technology first-hand. Falco is a pretty large code base in its entirety. It's more than, I think, 300K lines of code. And it was a challenge we wanted to take on. Falco, inside the playground, truly believes it is running on a Linux system entirely. So it just does everything it usually does. We needed to strip off, of course, the connection to the kernel shared buffers, so system call collection doesn't really happen, but most of the features, actually more than 80% of the features, are still in place. 
For example, if the capture files contain container executions and such, that is actually available in the Falco Playground. The one thing we needed to strip off is the Kubernetes metadata support, mostly because it was very hard to isolate, but most of our code base was able to compile without too many issues with Emscripten. With the exception, of course, of the multi-threading parts, which are there in some places in the Falco code base. So that was the biggest challenge, probably. Are you running just single-threaded, or were you able to use one of the various threading solutions for WebAssembly today? It is mostly single-threaded. I mean, the event processing model of Falco is single-threaded, luckily for us. So that was not a big issue. So it mostly works that way. Another point where we really struggled is the plugin system, which in Falco natively runs as a shared library. And that would have enabled cool use cases, mostly because if Falco runs in WebAssembly, then you could potentially create a plugin, like a Chrome extension, that actually watches activity as you're using your tabs. That was a cool idea we had, but we haven't explored it yet. So the option we have right now is to statically compile plugins inside the tool. But that's not super easy, because the plugin system is multi-language, and some of the plugins are written in Go. So then you have a totally different challenge. Exactly. It just adds up. But yeah, we explored this as well. Thank you. A lot of great lessons learned. Any last questions for Jason? Thank you, Jason. Thank you.