Hello everyone. Great to see everyone in person again, and thanks for joining us today. This is our Falco project update talk, so we are going to present what has happened in the project since the last KubeCon.

First, a very quick introduction about us. I'm Jason, and this is my co-speaker, Luca. We are both maintainers of Falco, we contribute to the project with code and non-code work, and we have also written a couple of blog posts about it. You can find us at these handles, so if you see us around, say hi.

First things first: what is Falco? Falco is a runtime security tool that watches everything happening in your system and sends you an alert when something suspicious happens. Falco started as an open source project and was eventually contributed to the CNCF. Today Falco is well known in the open-source community and recognized as a de facto standard for runtime protection. It is an incubation-level project in the CNCF, and we are currently in the process of applying for graduation.

Many security tools collect security events, store them, and use them for querying and later inspection. Falco takes a different approach: it runs at runtime and tries to detect security threats at the exact moment they happen, so that you can react as fast as possible. Then, when you receive a notification from Falco, you can forward it to a bunch of different outputs, so Falco can be integrated with other applications in your system, or you can simply store the alerts and watch them.

Falco was designed around collecting events coming from the kernel, because that is the one vantage point in the system from which you can observe containers and all the processes and applications that run. This instrumentation is done with either the kernel module or the eBPF probe that Falco ships.

More recently, about one year ago, we introduced a plugin system that brings the power of the Falco engine to cloud security in a broader sense and basically lets Falco digest security events of different kinds. For example, as you can see, we are able to connect to the AWS CloudTrail stream, pull all the events from that platform to see everything happening in your infrastructure, and write security rules on them as well, in the same rule language. The same goes for Kubernetes audit logs and Okta, and there are many more on the list. This is the plugin system we introduced in version 0.31, which is now the standard way to create and develop Falco extensions.
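To give a flavor of that rule language, here is a minimal sketch of a syscall rule in Falco's YAML rules format. The condition and output fields (evt.type, container.id, proc.name, %proc.cmdline, and so on) are real Falco fields, but the rule itself is illustrative and is not one of the shipped defaults:

```yaml
# Minimal illustrative Falco rule (not one of the shipped defaults):
# alert whenever an interactive shell is spawned inside a container.
- rule: Shell spawned in a container
  desc: Detect a shell being started inside any container
  condition: >
    evt.type in (execve, execveat) and evt.dir = <
    and container.id != host
    and proc.name in (bash, sh, zsh)
  output: >
    Shell started in a container
    (container=%container.name command=%proc.cmdline user=%user.name)
  priority: WARNING
```

The same syntax is used for cloud events brought in by plugins; only the available fields change with the event source.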
The plugin system has been greatly appreciated, but of course we never forgot about syscall security; there is active development on that side as well, and the gVisor integration is a big novelty we have been working on over the past few months, which we will cover later. So, let's start talking about what we are all here for, which is the news in the project since the last KubeCon. There has been plenty of exciting stuff, so let's go.

The first one: Falco now supports ARM64. Many of you will probably say that this is not a big deal in 2022, and you would be right to be a little underwhelmed, but I can assure you that when there are kernel drivers involved this is very, very much real work, and we had to coordinate our efforts to achieve this goal. But now we are all set: from now on, whenever a new Falco version gets released, we provide official packages, container images, and all the plugin artifacts for both AMD64 and ARM64. Many of you probably already know that we also provide a big repository of pre-built kernel modules and eBPF probes that our users can download, instead of attempting a local compilation on their systems for each different kernel version, and now we support ARM64 for those as well, which greatly enlarges the set of systems Falco can run on.

Another major change landed recently in the way Falco consumes streams of events. The best way to explain this is by taking a step back and looking at what Falco was before the plugin system. Back in the day, Falco supported mostly syscall events, but it also had support for Kubernetes audit logs hardcoded in the same C++ codebase that provides the security rule engine. The two would run together, and users had the choice to turn each of them on and off depending on the use case. The way this was implemented was quite hacky: it was not standardized, and there were no real safety measures, neither from a performance perspective nor for your deployments. For example, this caused many users to run both syscall collection and Kubernetes audit log consumption in the same instance even when it was not necessary.

Then, when the plugin system was introduced in version 0.31, we finally had a standardized way of defining event sources, where each plugin can implement a new one. A great example, again, is the AWS CloudTrail plugin that I mentioned before. So what we did in version 0.32 was move the Kubernetes audit log support into its own plugin, entirely rewritten, with feature parity with the one we had before. Which is great: it is much more maintainable and easier to read and evolve. But there were some differences from before, and the biggest one was that the plugin system was initially able to run only one event source per instance. So, for example, if you wanted to do both syscall collection and AWS CloudTrail, you had to deploy two different instances of Falco, which was not very handy, and migrations could not be followed smoothly either, because it was different from what we had in the past for the Kubernetes audit logs.

So in version 0.33, which was released literally a couple of weeks ago, we introduced support for multiple event sources running simultaneously in the same Falco instance. The events and security detections of each source live in an isolated and safe area of the same process, with very little overhead: instead of deploying many instances, each configured with one event source, you have one single Falco running all those event sources at once. This opens up a big new opportunity for security use cases and new deployment practices.

One thing that we are proud to say is that we designed and developed this feature totally in the open, with great participation and discussion from the community. If you are interested in seeing the design goals and the discussion, you can check out the issue at the link.
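As a rough sketch of what this looks like in practice, a single falco.yaml can now load a plugin event source next to the syscall one. This is only a sketch: the plugin keys are the documented ones, the library paths are the usual install locations, and the S3 bucket is a hypothetical placeholder:

```yaml
# Sketch of a falco.yaml fragment running two event sources in one instance.
plugins:
  - name: cloudtrail
    library_path: /usr/share/falco/plugins/libcloudtrail.so
    open_params: "s3://my-cloudtrail-bucket/AWSLogs/"  # hypothetical bucket
  - name: json                                         # field-extraction helper
    library_path: /usr/share/falco/plugins/libjson.so

# Load the CloudTrail source plus its json helper; since Falco 0.33 the
# syscall source keeps running in the same instance alongside them.
load_plugins: [cloudtrail, json]
```

With a configuration like this, one Falco process evaluates syscall rules and CloudTrail rules at the same time, each source isolated from the others.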
Now we can go on with the latest updates. Thank you, Jason. Another feature that we recently introduced is support for gVisor, and you might be asking what gVisor is if you have never seen this technology.

As a security person myself, I am really excited about it, because it is a sandboxing technology. It is developed by the open source team at Google, it is used in Google's own cloud products, it is completely open source, and it can be deployed on a variety of systems, including self-managed Kubernetes clusters.

So, what does this have to do with Falco? gVisor is a sandbox, which means it is going to make our containers even more contained. Containers of course already have security properties, but gVisor sandboxes them even more: it limits the system calls that they can make, it reimplements parts of the Linux kernel in user space in order to perform actions more securely, and it adds an extra layer of filesystem isolation plus seccomp filters. This means that a lot of privilege escalations simply cannot happen when running workloads on gVisor. This is great for high-security workloads and use cases, but what if we also want to observe those workloads? We do not want to only contain them: we want to be able to take a look and see when something suspicious is happening within our containers, for example in compromised or potentially malicious workloads.

We could try to plug Falco in as-is, but it is not really going to tell us a lot, because Falco hooks into the actual kernel, and since gVisor runs its own kernel in user space, whatever happens within that gVisor kernel stays there and does not reach the regular system kernel directly. So what can we do? Well, thanks to a collaboration with the open source team that maintains gVisor, we were able to create a new way to stream events into Falco, directly into libscap, so that not only Falco is compatible with it, but so is any open source product that uses the same libraries code base that we have. In this way we can have one instance of Falco running on a node, collecting information from all the gVisor sandboxes and feeding it into the Falco engine, so we can use the same Falco rules. If you think about an attack on such a workload, it may be both blocked because of the sandbox and watched because of Falco. If you want to learn more, there are tutorials on our blog; it is quite a deep topic.

Speaking as a security person again, I really like seeing new security rules arrive in the default rule set. I love the one that triggers when a process opens a path with too many dots and slashes to be legitimate, and the one that triggers when a process tries to take a look at other processes' environment variables, which may contain secrets. I am very grateful to the community for these rules, and we are all very grateful for a cleanup of the rule set that brought much better performance in the latest versions for use cases that run with some rules enabled and some disabled.
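As an example of the second one, here is a simplified sketch of a rule in that spirit. The fields (evt.type, evt.is_open_read, fd.name) are real Falco fields, but the actual default rule shipped with Falco is more refined than this:

```yaml
# Simplified sketch of a "process snooping on other processes' environment"
# rule; the real default rule shipped with Falco is more refined.
- macro: open_read
  condition: (evt.type in (open, openat) and evt.is_open_read=true)

- rule: Read environment of another process
  desc: Detect an attempt to read a process's environment variables via /proc
  condition: open_read and fd.name glob '/proc/*/environ'
  output: >
    Process environment file opened for reading
    (file=%fd.name process=%proc.name user=%user.name)
  priority: WARNING
```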
Now I would like to talk to you a little bit more about the libraries, libscap and libsinsp, plus the kernel module and the eBPF probe, because those are components that historically were not tested much on their own, and we have been improving that a lot in the Falco community recently. It is actually the most exciting time ever to work on the libraries, because we are seeing the highest amount of contributions to them ever, even more than before the libraries themselves were contributed to the CNCF. We now meet a lot of people from different organizations who contribute issues, features, and pull requests, and many of them become maintainers of the libraries. This is very exciting, and it is really a pleasure to work on these libraries now.

But it comes with a little bit of a price, which I want to illustrate with a quote from one of the contributors who consume the libraries downstream. They were saying: every time you folks move forward, we get some quote-unquote "surprises". I am sure they were pleasantly surprised by the new syscalls, by the performance, by everything, but I think some of those quote-unquote surprises were not completely pleasant and not completely joyful. As part of the community we understand that: there are now more consumers, more people who want to use the libraries, so we had to do some cleanups in the libraries themselves.

So we now have versions for everything. We have user space and kernel space components, and they both get their own versions, and we are not just adding version numbers: we have versions for the driver interface as well, so when you load an incompatible driver you get a clean error instead of a crash.

We also added more testing to the libraries themselves. Historically, the consumers like Falco and other downstream projects had tests; the libraries, not so much. So what should we test first, and what is the most critical part? Well, the libraries are the base of the whole system, so everything is critical: we have performance-critical paths where we pull a lot of events, we have kernel code, we have eBPF. Where should we start? Let's start with all of it. We now have an end-to-end testing framework that allows us to run some programs and check the output events, and this is great because we are testing the same pipeline that Falco uses; with the kernel module in the loop, this can really catch a lot of errors. We also added unit tests for the parsers in the libraries, especially the part that processes the raw events and creates the fields that we all use in the output, so we can check that for a certain sequence of events we get the right fields and the right data.

We also shipped a feature that was highly requested by our users: if users know what they are doing, they can decide exactly which syscalls they want to collect. This does not come for free, because you really need to know what you are doing, otherwise you get inconsistent results; but the users in our community are super smart, so they figure it out easily.

Speaking about the community, we have a great ecosystem: there are always products and open source projects that use Falco and integrate with Falco. Our favorite, Falcosidekick, can now send events to even more integrations: among others, there is support for Policy Reports in the Kubernetes policy working group format, and several more outputs contributed by the community. Even the UI that you might know, Falcosidekick UI, is growing a lot, because it has been completely rewritten. It is as good as ever; I have been using it, and I have to say it no longer has the limitation that the old one had on the number of events it could handle.

Then we have new charts. Of course Helm is used by a lot of us to deploy Falco, and by "new" I mean someone went and rewrote the Falco chart, which is great: it is more modern and should make it easier to deploy Falco in our Kubernetes infrastructures.

And speaking of ease of deployment, I believe Jason has some updates for us about the very hard task Falco has of supporting so many different kernels and so many different distributions. Yes, thanks; it is time to talk about kernel-crawler.
Kernel-crawler is a new tool that we added to the ecosystem very recently, and it basically crawls for all the new kernel versions shipped by the supported distributions. The whole point for us is that it can generate the build grid configuration, the one we use to pre-build the kernel modules and eBPF probes that we provide, so users can install Falco without needing build toolchains they may not have on their machines. Before we had it, we needed to enlarge the build grid manually every time a new kernel was released or every time a user asked for one. Now the kernel crawler does this for us automatically, and the build grid always covers the most up-to-date kernel versions. This makes Falco much easier to install and much more accessible, and we provide many more pre-built drivers. As a matter of fact, we provide so many that during the last few months we actually broke our CI, so we have more work to do there; but yes, there really are a lot of them.

Bonus point here: we store all the information that we crawl in our database, which is the one you see at the link, and it is basically a source of truth about the latest kernel versions for each of the distributions that we monitor. We think it is great, because we could not find this kind of information in a single place on the internet when we were really looking for it; so if you are working in a similar area, please check it out, it may be useful to you.

This screenshot was taken from one of our websites and basically gives you a hint about how big our build grid is. As you can see, we now have multiple architectures, and probably more will come, and those are all the distributions that we cover: we build drivers for every one of them. All of this information is made available to the Falco installer, so every time you start Falco it will look for these pre-built drivers first, and fall back to building the driver locally, which is much better for the many environments in which Falco is installed.

Then there are major updates in the plugin ecosystem as well. We introduced the plugin system back in January, so during the last KubeCon in Valencia it had been around for only a few months; it has had time to mature since then, and there has been major growth in this space. Let me mention some of the plugins that are currently officially supported. The first one is k8saudit, which is the successor of the old built-in Kubernetes audit log support, now fully reworked as a plugin. There is also a new notion in the plugin ecosystem, namely that we can have different flavors of the same integration. This is the case, for example, for the EKS flavor, which is in development: it reuses about 90% of the code of the k8saudit plugin, but sources the events entirely from the AWS infrastructure.

In the same spirit, there is also a new plugin for GitHub, which is basically able to watch over a GitHub organization: you attach the plugin to your organization, it receives events, and it detects security-relevant stuff, for example someone committing a secret into your code repository, which is a good thing to know about.
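Rules for plugin event sources look just like syscall rules, except that they declare which source they apply to and use that plugin's fields. Here is an illustrative sketch for the k8saudit source (the ka.* fields are real plugin fields, but the rule itself is not a shipped default):

```yaml
# Illustrative rule on the k8saudit event source (not a shipped default):
# alert when a cluster-wide role binding is created.
- rule: ClusterRoleBinding created
  desc: Detect the creation of a cluster-wide role binding
  condition: ka.verb = create and ka.target.resource = clusterrolebindings
  output: >
    ClusterRoleBinding created
    (user=%ka.user.name binding=%ka.target.name)
  priority: NOTICE
  source: k8s_audit
```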
We also kept improving the plugins we had developed first, such as the CloudTrail one. To give you an example of the power of this approach: think about some of the cloud breaches that made the news recently. With the CloudTrail plugin we of course could not have prevented them, but we would have been able to detect them in a very timely manner, which would probably have been better for everyone involved. The Kubernetes audit plugin has been improving a lot too, with better performance and new features as well, and there is a C++ SDK for plugin authors in the making, which is almost ready to ship.

The last thing: we want to take a few moments on the updates in governance, from a more community-oriented perspective, which we think is just as important as the rest of the updates. On our path towards applying for graduation, there has been a big community effort in improving and strengthening the governance of the project. We tried to listen to everyone's voice here and to improve the governance documents to be more formal, precise, and exhaustive, and we are very proud of everyone who helped us along the way. As part of these updates, we did a lot of work to clear up all the roles in the community: we now have clear definitions for repository maintainers, reviewers, and core maintainers, and it is much clearer how decision making works with respect to the roles we had before. We also wrote down the paths between these roles, so it is clear when and how people can step up in the process. Last but not least, we introduced the emeritus maintainer role, because we are very grateful to the maintainers who poured their experience into the project in the past but no longer have the time to keep contributing.

Now, for the sake of time, we cannot go deep over all the remaining updates, so let me only mention the major ones. The first one, I think the most exciting, is the awesome work that has been happening on the eBPF side, driven by some of our own community members. A proposal was merged in the libs repository in January of this year to develop from scratch a new eBPF probe, with the goal of using all the modern features of the technology without having to compromise. The new features include things like the BPF ring buffer for faster event transfer, and CO-RE, compile once, run everywhere. So yes, the future will be one in which you do not have to build an eBPF probe for every kernel, because the same probe binary will run on any recent kernel.
This has been a very active effort, and at this point we are almost ready to ship it. Most importantly, this came together with a testing framework for the probe, so we can ensure that the solution works and performs as expected as we go ahead with further changes. The early measurements also show that this new eBPF probe is somewhat more performant than the old one, so I am really looking forward to seeing it released. It did not make it into 0.33, but we are fairly confident it will land in one of the next releases; 0.34 is the target. Thank you.

The other most exciting thing in the near future is a tool for us to manage Falco artifacts. Before, we only had the rules file to deploy on the host, and each of us could manage it in whatever way we wanted; sure, that could be enough for a single file. But now things are getting much more complicated, because we have plugins, which are binary, multi-platform artifacts; we have rules that may depend on specific plugin versions; and we might have multiple custom sets of rules in different repositories. Of course everyone could write their own tooling to manage all that, but as a community we wanted to have something that comes directly from the community and tries to achieve what most people actually want.

So we decided to build on a tool that already existed, called falcoctl, which was already used for some administrative tasks for Falco, such as managing TLS certificates, and we are thinking of more features. In particular, falcoctl now acts as a little package manager: it is able to push and pull the packages that contain rules and plugins. When we decided on the format these packages should have, we wanted an OCI format, because this way it is just like pulling a container image: OCI artifacts are supported directly by the registries that you might already have, so you do not need to deploy anything new if your registry has support for generic OCI artifacts. In fact, we have already published all our plugins as multi-platform OCI artifacts right from the GitHub organization of the project. OCI also gives us even more features to use in the future, such as support for signing: people want to sign artifacts, and we are looking at the same techniques used for software supply chain security in the cloud native world. All of this gives a much better experience when distributing rules and plugins to Falco instances.

And you might be asking when you will be able to try this. Well, you can try it today, because we have a preview of this tool, which we have developed over the last several months. You can use it directly to pull the artifacts we publish, because it comes with some built-in indexes, and it will download the right plugin for your architecture. So if you are interested, please go check it out, play with it, and tell us what you think; if you want more, this is where you can go. The same if you want to contribute: if you get familiar with the tool and want to add features, or just say "hey, I would like this or that feature", we are always listening.
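To give a taste of the preview, the basic flow with falcoctl looks roughly like the following. The index URL is the official falcosecurity one; exact command names and flags may still change while the tool is in preview:

```sh
# Rough sketch of the falcoctl preview workflow; flags may differ while
# the tool is still in preview.
falcoctl index add falcosecurity https://falcosecurity.github.io/falcoctl/index.yaml
falcoctl artifact search cloudtrail          # look up rules/plugins in the index
falcoctl artifact install cloudtrail:latest  # pulls the right OCI artifact for your platform
```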
So, I think we told you a lot of things about Falco, and there are a lot of exciting things happening in the community right now. I would like to remind you that you can join the Falco community in the Kubernetes Slack channel and in the community calls on Wednesdays. And I would really like to stress that the community is open to all and everyone; there is only one requirement, which is that you are nice and respectful, so that it stays a nice place to work in. That sounds like you, so we hope to see you there. Thank you.

[Audience question, paraphrased: you added gVisor support; do you plan to support Kata Containers and similar sandboxes as well?]

That is a great point, and we had a great discussion about it. Like in the gVisor case, the issue with those sandboxes is that the syscalls do not happen in the same kernel, in the same environment, so we are kind of blind there. For Kata, I know some people have done work to run instrumentation within the sandbox; if there were some kind of channel, something that could be placed inside the Kata VM or the Kata kernel, any extension point that would give us the event stream, then we could build a new integration and have support right on top of the libraries, like we are doing for gVisor. And I hope that happens, basically, because Kata is really cool. That would be it.

Thank you everyone again. If you are interested in getting involved in the project, come and talk to us; there is a lot to share. Thank you all for coming, and see you next time.