Welcome to this CNCF online program about Falco. Today we'll see together how to extend Falco with plugins and trigger alerts from almost any kind of stream of events you may have. Let me show you my face, yes. So first, who am I? I'm Thomas Labarussias, a French engineer. I'm currently working at Sysdig as an open source ecosystem advocate. Before that, I was an SRE for more than eight years, especially for pure players, media, press, online businesses, and my last job was SRE for an online bank in France. I'm a contributor to Falco. I'm not a developer of Falco because I don't know C++, but I write blog posts, I give talks, and I try to help people on the Slack channel. I'm also the creator of Falcosidekick, a daemon to connect Falco with your ecosystem. So, a little reminder: what is Falco? Falco is currently a CNCF incubation-level project. Basically, it's a cloud native runtime security project. It means it's, de facto, right now, the Kubernetes threat detection engine: you can install it in your clusters to detect threats and bad behaviors that may happen. If you want to know more about Falco, what it is, how it works, what we can do with it, you can watch the CNCF online program by Loris Degioanni, CTO and founder of Sysdig, the company which created Falco at the beginning. So, it's still an incubation-level project, but with the plugins I will present in this talk, we can think of Falco as more than the Kubernetes-oriented threat detection engine; it's now a general-purpose threat detection engine, because you can trigger alerts from any source of events. It means you can detect bad behaviors and strange patterns in your cloud infrastructure, on your local host, or from any other stream of events you may have. And for some statistics: Falco has currently reached almost 5,000 stars on GitHub, an increase of 30% in a year, and we have more than 45 million pulls from Docker Hub. So the growth has been quite impressive this year. Thank you.
So, a little reminder about the architecture of Falco. At the lowest level, we have what we call the drivers; they run in kernel space. A driver is either a kernel module or an eBPF probe. I will not describe them much more, but they are responsible for collecting the syscalls. Above the drivers, we have libscap and libsinsp: libscap is dedicated to the collection of the syscalls from the drivers, and libsinsp imports and enriches them. And above that, for the matching of rules, we have the rule engine and the outputs. Falco is able to send the alerts, the events, to a program, to standard output, or to an HTTP endpoint, and this is how it works with Falcosidekick, for example. We also have a gRPC output now. So, libscap, aka library for system capture: it runs in user space, it's a user space library. The drivers are in kernel space, but these libraries are in user space. It communicates with the drivers, it reads the syscalls from the ring buffer, which is where the drivers pass the events, and then it forwards them to libsinsp. libsinsp, aka library for system inspection, is also a user space library. It receives events from libscap and enriches all these events with machine state. This is where we can add some details, like metadata from Kubernetes, from Docker, from the containerd daemon, etc. And it's also able to perform some filtering. If we take a look at the evolution of Falco from the beginning to now: at its creation, it was only for monitoring syscall events at the kernel level, so we had the drivers, libscap, libsinsp, and the rule engine. A few months or years after, we got the ability to receive Kubernetes audit logs from the control plane. The idea was to run a web server inside Falco, and this web server is the endpoint for the control plane to send the audit logs to. So Falco was able to collect syscall events and audit log events: we had two sources of events.
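To give an idea of what the rule engine actually matches against those enriched events, here is a rule in the style of the default ruleset. This is a simplified sketch from memory: the real "Terminal shell in container" rule in falco_rules.yaml uses macros and a longer condition, so treat the condition below as illustrative only.

```yaml
- rule: Terminal shell in container
  desc: A shell was spawned in a container with an attached terminal (simplified sketch)
  condition: >
    evt.type = execve and container.id != host
    and proc.name in (bash, sh) and proc.tty != 0
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name image=%container.image.repository)
  priority: NOTICE
```

The condition side uses fields extracted by libsinsp (evt.*, proc.*, container.*), and the output string is what gets sent to standard output, a program, an HTTP endpoint, or Falcosidekick.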
It's also able to collect details about pod names, etc. But we had some drawbacks with this implementation. The API for the Kubernetes audit events was not really stable, and it was hard to extend. It also means Falco needs to expose a web server, so we may have some flaws, some less secure patterns, than with just a closed daemon. To work with the control plane, we have to manage the TLS certificates between the control plane and Falco, and it can be a little bit complicated. And the big, big drawback with this implementation is that it can't work with managed Kubernetes clusters: GKE, AKS, EKS. Why? Because when you are running a managed cluster, you don't have access to the audit logs directly; they are gathered and stored in the logging system of your cloud provider. So we have to find a way to get them back and inject them into Falco. Some people in the community, thanks to them, created kinds of daemons to collect the audit logs from the logging system and re-inject them into Falco, but it's not really convenient. So this is why, in the last release of Falco, from January 2022, we released the API for plugins. It means we can add any kind of plugin we need to Falco; Falco is now extensible with any kind of input. So we have a standard API with clear definitions, and we can easily expose resources from cloud providers: GCP, Azure, AWS. I will describe everything. So basically, plugins are dynamic shared libraries which allow Falco to collect events and extract fields from streams of events. We can basically have events like CloudTrail from AWS, CloudWatch Logs from AWS, Azure Log Analytics, Cloud Logging from GCP, or events directly from the Kubernetes control plane. We can also think about events from the containerd or Docker daemons. And if we want something which is outside the world of running web applications, we can think about one of the biggest streams of events in the world: Twitter. I will show you a demo of that.
More technical details about plugins. As plugins work as dynamic libraries, they can be .so or .dll: .so for Unix systems and .dll for Windows. It means we can technically run Falco with plugins in a Windows environment. Just to be clear: right now, Falco is not able to collect syscalls from the Windows kernel, it only does that on Unix systems, but technically we can run Falco with plugins on Windows. I don't know if it works and I haven't tried it myself. The API at the Falco level is quite simple. We don't have a lot of functions, or methods; we call them C symbols. You have to know that not all of them are mandatory to create a plugin, only a small subset of them, but they are there, and the documentation, the developer guide, is complete: you have every detail about all the methods and so on. Technically, we can create the .so libraries from basically any language. Right now, we only have two implementations, in C++ and Go, and we'll see in the next slides why we chose to offer an SDK for Go, for example. It's only available from Falco 0.31, from January 2022. We have two flavors of plugins. The first ones are what we call source plugins. They run with libscap. They are responsible for opening and closing event sources. They generate batches of events and then send them back to Falco. They must have a unique ID, I'll explain that after, and they can also directly extract data from the events; they can extract data into fields, by the way. And we also have extractor plugins. They run more at the libsinsp level. They are responsible for extracting fields from events. For example, the main extractor plugin we have currently is the JSON plugin: with any source plugin which collects JSON events, we can use the JSON extractor plugin afterwards to get the values out of them. An extractor can be generic or tied to a specific data source; it depends what you want.
For example, JSON is quite generic, but we are creating an extractor plugin for Kubernetes audit logs, because whatever the source is, GCP, Azure, AWS, the format of the JSON is always the same. So, if we take a look at the sequence diagram for a source plugin: the libscap library, so Falco, asks the source plugin to open, and it retrieves batches of events, with a high level of iteration, really performant. And after all the events have been collected, it can close it if we want. For the extractor plugins, it's quite the same idea, except the libsinsp library will call the get_fields method of the plugin to get the list of all fields which are available for the plugin, and then after, it will call extract_fields to extract the field values. How to enable a plugin in Falco? It's quite easy. You may already be familiar with the falco.yaml configuration file, which is the main configuration file for Falco. We now have a plugins section and a load_plugins list. load_plugins is just the list of plugins we want to enable; in this example, we just load one plugin. Then we have the name, important: this is the exact name you will see in the plugin's own configuration, I will show you after. library_path is where the plugin is stored on your system; it can be relative or absolute, I'd like to note. And you have init_config: this is where you will set all the parameters to run your plugin, either in YAML or in JSON, both work; this is why I'm showing you both methods. You have to know one important thing: when you load a plugin in Falco, currently, and this is just a current technical drawback, you automatically disable syscall collection. So you can't run one Falco instance for both syscall collection and a plugin. This will change in the future, of course, but right now, this is the situation.
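The configuration on the slide looks roughly like this. This is a minimal sketch of a Falco 0.31 falco.yaml fragment, assuming the CloudTrail and JSON plugins that ship with the Falco image; the library paths and the S3 bucket in open_params are illustrative placeholders, not real values from the talk.

```yaml
plugins:
  - name: cloudtrail
    library_path: libcloudtrail.so
    # init_config accepts YAML or JSON; both work
    init_config: ""
    # hypothetical bucket, just to show the shape of open_params
    open_params: "s3://my-bucket/AWSLogs/"
  - name: json
    library_path: libjson.so
    init_config: ""

# Only the plugins listed here are actually loaded
load_plugins: [cloudtrail, json]
```

Note that the name key must match exactly the name the plugin declares about itself, which is how Falco ties this configuration to the loaded .so.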
Now, some technical caveats. For developers, you have to know the event flows are simplified compared to syscalls. For end users it doesn't really matter; this is just for people who want to write plugins or develop Falco, you have to know that. The memory allocations must be owned and managed by the plugins. This is why it's useful to use Go, for example, because we have a garbage collector; but if we think about Rust, which has a powerful way to manage memory, it would be nice to have plugins in Rust too. You can load only one source plugin per Falco instance, it disables syscall collection, and you have to take care to not overlap IDs when you create a plugin. Why? Because the source plugins may be used by Falco and by sysdig open source, and you can record captures with sysdig, and for the plugins we have created, the events inside the capture carry the plugin's ID. It's a number, and this is that number. So, if we don't want to create conflicts between captures and plugins, we must not have overlaps. You also have to take care with the extracted fields, to not have the same field names for different plugins. It's not a big deal, but you have to know it. But a big, big challenge right now is that the official Falco Helm chart, to install and manage Falco, is not yet ready for plugins. Why? Because we have a big challenge currently: how to manage different flavors, different kinds of instances, of Falco. Let me explain. Right now, for most of the plugins, you just have to run one Falco with the plugin, and that's all. If you take the example of CloudWatch Logs: if you run several instances of Falco, each with the CloudWatch Logs plugin enabled, you will collect the same event several times, and you will have duplicated alerts. On the contrary, if we want to run Falco with the seccomp-agent plugin, this plugin must run at the host level, so we need one Falco per node in your cluster.
So in one case we just need one Falco; in the other, we need a Falco on each node, like a DaemonSet, for example. So we have to deal with that. To enhance the experience for people who want to write plugins, we created an SDK in Go. Go is quite easy to write, it's the most popular language in the cloud native and open source communities, and I'm also a Go developer, so I can't say the opposite. It's quite easy to interface Go with C, and remember, Falco is written in C++, because we have cgo and all that stuff. We don't have to manage the memory directly, because we have the garbage collector. So, to allow a good interface between C, which has one way of managing memory, and Go, which has another way of managing its memory, it's quite nice to have a kind of framework, an SDK, so that people don't have to be aware of that and can focus only on their logic, not on the low-level questions. So we created an SDK in Go, and it's quite easy to use. Let me show you. First we have the falco.yaml file to enable the plugin: it will be loaded by Falco, Falco will use it to know what it has to do, and it will use the plugin in the form of the .so we created. In this example, it's the dummy example; we have written it to show how to import the SDK, how to create the structures, et cetera, and you build it against the same version of the plugin API as the Falco that will run it. For each plugin in Go with the SDK, we have the imports, so we need to import all the packages from the SDK, all the base structures we need. Let me show you. We have to create two structures: one for the plugin itself, with its configuration, in which we have to embed plugins.BasePlugin; it will automatically add all the mandatory fields to your structure. Same for the instance: what we call an instance is an open stream. For example, when you create a client to a stream API, this is an instance.
You have the plugin, which is created by Falco with the details you put in your falco.yaml, and the moment it opens a stream, that's an instance. So, the first thing to do with your plugin is to register it to Falco. It's done in the init function: you have to register the source, and the extractor if you want one, if your plugin is also an extractor. Then we have Init, with a capital I. This is responsible for mapping the configuration between your falco.yaml and your structure; this is where you can set some default values. And then we have the Info method. Really important, because this is where you put the ID of your plugin and the name, and remember, the name is what you have to set in your falco.yaml file. You also have the event source. This one is also really important because, for example, if we want to create rules for Kubernetes audit logs, we may have GCP, Azure, AWS, or directly the control plane as sources for these events. So we may have three, four, five, whatever, different plugins, each with its own name, but in the end they will have the same format for the events, so they will use the same event source. And the event source is what we use in the rules files to enable the rules for the plugin, as I will show you after. Then we have the Open method. This is where you connect to the external API, or you connect to, I don't know, Kafka, whatever. This is where you create the instance. And we have Fields and Extract: if you remember, when I showed the extractor sequence diagram, this is how Falco is able to get the list of all fields available for your plugin. And these fields will be available in your rules, of course. And Extract, when your plugin is also an extractor, is where you have to map the field keys with the field values. NextBatch is the last method used by the Falco plugin framework, to collect batches of events.
You just have to know that you write each event with an event writer; you have a maximum size for your batches, and the number of events your batch contains has to be returned by the method. To offer a way for the community to propose their own plugins, and let the rest of the world know they have contributed and have already written a plugin for some tool, we also have a registry. It contains metadata and information about every known plugin. When you have written a plugin, you can propose it, with a different ID than all the others, of course, to this registry. This is where you can check which IDs are available, to avoid conflicts. And it's also, right now, where we store the plugins managed and maintained by the Falco maintainers. For example, we have CloudTrail, JSON, and dummy. Dummy is just an example, in C++ and Go, to offer you a way to dig in and understand how it works. In the future, we'll also create shared libraries, for example for authentication to the cloud providers, to get logs from CloudWatch Logs, for instance, because we always use the same model. So what we want is: when you want to create a plugin for a new service from GCP, Azure, or AWS, you will just have to import those shared libraries, those shared modules, and the authentication and the creation of the clients will already be there. You just have to focus on which fields you want to extract, which configuration you need, et cetera: it lets you focus on your logic and not on all the plumbing. So, in the registry README, you will find this list, for example. You have the ID, the name, the name used in your falco.yaml to enable the plugin, and the name of the event source. This is why, for example, we have k8s_audit. In the future, we'll have the same with _eks, _aks, _gke variants, et cetera, and the event source will always be the same. And you will also find the names of the authors, et cetera.
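Putting the SDK lifecycle together (register, Init, Info, Open, NextBatch, Extract), here is a self-contained Go sketch of the flow. To keep it runnable on its own, it does not use the real plugin-sdk-go API: all types and signatures below (Info, DummyPlugin, DummyInstance, NextBatch returning a slice, and so on) are simplified stand-ins I made up, since the real types live in github.com/falcosecurity/plugin-sdk-go and a real plugin is built as a shared library, not a program.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
)

// Info mirrors the kind of metadata the real Info method exposes:
// the unique plugin ID (the one registered in the plugin registry),
// the name used in falco.yaml, and the event source used in rules.
type Info struct {
	ID          uint32
	Name        string
	EventSource string
}

// DummyPlugin stands in for a source+extractor plugin struct
// (a real one would embed plugins.BasePlugin from the SDK).
type DummyPlugin struct {
	jitter int // example init_config parameter
}

// DummyInstance stands in for an open stream (an "instance").
type DummyInstance struct {
	counter int
}

// Init maps the init_config string from falco.yaml onto the struct.
func (p *DummyPlugin) Init(config string) error {
	var cfg struct {
		Jitter int `json:"jitter"`
	}
	if err := json.Unmarshal([]byte(config), &cfg); err != nil {
		return err
	}
	p.jitter = cfg.Jitter
	return nil
}

func (p *DummyPlugin) Info() Info {
	return Info{ID: 3, Name: "dummy", EventSource: "dummy"}
}

// Open creates the instance; a real plugin would connect to an
// external API (CloudTrail, Kafka, ...) here.
func (p *DummyPlugin) Open(params string) (*DummyInstance, error) {
	return &DummyInstance{}, nil
}

// NextBatch writes up to max events and returns the ones written;
// io.EOF signals that the stream is exhausted.
func (i *DummyInstance) NextBatch(max int) ([][]byte, error) {
	if i.counter >= 6 { // pretend the stream ends after 6 events
		return nil, io.EOF
	}
	var evts [][]byte
	for n := 0; n < max && i.counter < 6; n++ {
		i.counter++
		evts = append(evts, []byte(fmt.Sprintf(`{"count":%d}`, i.counter)))
	}
	return evts, nil
}

// Extract maps a field key to a value for one event.
func (p *DummyPlugin) Extract(field string, evt []byte) (string, error) {
	var m map[string]int
	if err := json.Unmarshal(evt, &m); err != nil {
		return "", err
	}
	if field == "dummy.count" {
		return fmt.Sprint(m["count"]), nil
	}
	return "", fmt.Errorf("unknown field %q", field)
}

func main() {
	p := &DummyPlugin{}
	_ = p.Init(`{"jitter":10}`) // init_config from falco.yaml
	inst, _ := p.Open("")       // open the stream
	for {
		batch, err := inst.NextBatch(4) // Falco pulls batches
		if err == io.EOF {
			break
		}
		for _, evt := range batch {
			v, _ := p.Extract("dummy.count", evt)
			fmt.Printf("source=%s dummy.count=%s\n", p.Info().EventSource, v)
		}
	}
}
```

Running it prints six lines, dummy.count=1 through 6, pulled in two batches of 4 and 2: the same pull-based loop Falco drives against a real source plugin.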
For example, seccompagent is the first plugin created by someone from the community. Thank you, Alban. And you have both the source plugin and the extractor plugin columns. So, an example of a plugin we propose, with the ID number 2: the AWS CloudTrail plugin. It extends Falco to your cloud infrastructure by collecting events from CloudTrail. For CloudTrail, you have to know we have three sources for the events: we can collect them from S3, from an SQS queue, or directly from a local filesystem in JSON format. Of course, this plugin is also an extractor, so we have new fields: ct.name, ct.user, ct.info, et cetera. All the details are in the README; you can find it with the link here. And we have a new event source for your rules, called aws_cloudtrail. So with this new plugin, we can create this kind of rule: for example, this one will detect any login to your AWS console by a user without MFA. As you can see, we have the source, aws_cloudtrail, and we have the new fields, ct.name, ct.error, et cetera. And for this one we also use the JSON extractor we mentioned earlier in the slides. So it's really powerful, because you can now detect events which may happen in your cluster, but also at the infrastructure level, with this kind of plugin. For the JSON plugin, like I just told you, we have some new fields which work with any events in JSON format. What is useful with this plugin is that you don't need any configuration; it works out of the box, you just have to enable it. The JSON plugin and the CloudTrail plugin are managed and created by the Falco maintainers, and they are already embedded in the Falco Docker image. So, the demo. Let me remove that. In this one, I will show you the Docker plugin. So I'm running Falco with the Docker plugin loaded; it will connect to the Docker daemon and extract all the fields in real time.
And I just created dummy rules to print out any action of type container. We have the source, docker, et cetera. Let's create a container, a first one. And we see the basic workflow of the container creation: create, attach, start. We have the image and the name of the container; it's exactly what I used to run my command. And if I exec something inside, we also detect the exec command inside my container: we get the whole command, the image, and the name of the container. So it works, basically. This plugin is just there to demonstrate how it works. It's quite simple, so you can read the sources and understand much better how plugins work with Falco. This next one is for EKS. So this is just a demo; it's a work in progress. The idea, like I explained before, is to offer shared libraries, shared modules, to developers, and this one is a test to see how it works and whether it will fit all the needs. So this is just a demo, a POC, a proof of concept. I just created two rules: the first one is for successful actions, and the other one is for actions in error. You can see the source is different; that's just because it's running as a POC, this is why for now it's different. And we can extract the same fields we had with the current internal implementation of audit logs in Falco. Let's see that. If I run the plugin, you can see we have a lot of events. I could write a rule with the user name, like ka.user.name contains something, et cetera, but just to show you this one: let me get the list of pods in the default namespace. Just wait a few seconds. Here we go. We have my user name, the target, the verb, list, the URI which has been called, and the response, 200; it worked. I can also describe a pod, for example this one. We wait a few seconds, and we have a get, on which pod, which target it is. So we can see the user, thomas.labarussias, gets this pod, and it received a 200 HTTP response code; it means I was able to retrieve the data for this pod.
Right now I have some issues with the creation and deletion of pods, so I will not show you that right now, but you get an idea of what we could do in the future with this kind of plugin. And the last one is more for fun: this is a Twitter plugin. So we are able to connect Falco to Twitter and stream tweets in real time. In this example, you have to know the syntax used for the plugin is quite the same as the Twitter stream API, so it doesn't work really well with Falco rules, because we have rules on top of rules, but this is just for a demo once again. Just to show you: we have a new source, and we can get information in real time, like all the tweets which refer to cats or dogs with an image. So we can see in real time whether people prefer to share images of their cats or of their dogs. This is just a demo, for example. Here we go. As you can see, people prefer to share pictures of their cats, because we have more new cat images. So we have the name, RT when it's a retweet, the content, and the link to the image. It's just an example. But you can imagine, with this kind of plugin, we could also monitor important tweets in real time, like hurricane alerts. I don't know, if you are a photographer who wants to take pictures of thunderstorms, you could have, in real time, Slack or WhatsApp or Pushbullet or whatever alerts, thanks to Falco, about tweets which mention lightning or storms in your country or in your region, and be able to run out with your equipment to take a picture, for example. We can imagine things like that. So, like I mentioned, for the shared libraries and shared modules, let me show you my face again. We want to offer people shared modules for authentication, for service A, service B, etc., and allow them to create plugins based on these shared modules.
For example, for the Kubernetes audit logs, we can have one module for authentication to AWS, the same for Google Cloud Platform, another one for Azure, a module for the connections to the different log services, and an extractor for the fields of the Kubernetes audit logs, the JSON events. So we would have three different source plugins, but they will share the same event source, in the end. This is the basic idea. If you want more fun with plugins: this one, I just find it awesome. A team from Sysdig created a plugin for pet surveillance. The idea is to get the stream of images from a video camera, run image recognition with OpenCV over these images, then create events for Falco with a plugin, and be able to alert when you see a cat in the video stream. It means you can be alerted when your cat is destroying your living room while you are outside. I like this idea. And another great idea: someone on the Falco community Slack channel, I don't remember his name, sorry, mentioned he would like to create a plugin to connect his car, I think a Tesla, with Falco, to be alerted on some events: for example, someone opening a door, or, I don't know, an issue with the car engine. I like the idea. So, you can find some useful links here: for example, the documentation about what plugins are and how they work. The developer guide, really useful. You also have some details in the plugin registry; you can find the dummy examples, some demos, et cetera. Don't hesitate to dig into the registry to get more READMEs and more info. We also have two useful blog posts. The first one is the plugin announcement: the reasons behind it, what we want to do, what will be the future of plugins. And if you feel confident enough to write your first plugin, I also wrote a getting started blog post about how to start with the Go SDK to write your first plugin.
It details all the functions and methods I just showed in my slides, Open, Fields, NextBatch, how you can use them, et cetera. Last words: if you want more details about Falco itself, of course, you have falco.org, the main website. You can check out the project on GitHub, and you're always welcome to discuss with the maintainers and the other members of the community in our main Slack channel, #falco on kubernetes.slack.com. You can follow Falco on Twitter. And if you want to discuss some interesting topics, we have a community call each Wednesday at 4 p.m. UTC. Just come and discuss; we are more and more each week, and it warms my heart. Thank you, everybody. I hope it has been clear for everybody, and welcome to the community. Merci.