 So, welcome to Cloud Native Live, where we dive into the code behind cloud native. I am Mohamed Shahryar, and I am a CNCF ambassador, so I will be your host tonight. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session, I'm stoked to introduce Mohamed, a security engineer at Isovalent, who will be presenting "Introduction to Tetragon: eBPF-based Security Observability and Runtime Enforcement". This is an official livestream of the CNCF, and as such is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that would be in violation of the code of conduct; basically, please be respectful to all of your fellow participants and presenters. With that, I will hand it over to Mohamed to kick off today's presentation. Okay, let's add Mohamed to the stream. So hey, Mohamed, how are you? Hello, thank you for the introduction. Yeah, so as was introduced, I'm working at Isovalent, I'm part of the Tetragon team. And today we are going to speak about Tetragon in general and do a little introduction to Tetragon, with some nice demos that I hope will work. So maybe we can start the session by sharing my screen and showing around the project. So tell me, is it fine? Yeah, just let me know when I should add it to the stream. Oh, you can share that, I think it's fine, that's good. Yeah, you can just start your session. Thanks. So yeah, as it was introduced, Tetragon is an eBPF-based security observability and runtime enforcement software. I think the entry point for the project is the Tetragon repository. So this is the repository, you have plenty of information here, and we have this big README; we also worked very recently on this new website. So if you want to learn about Tetragon in general, please stay for this session. 
But you can also go to the website, where you have some information about who is using Tetragon, what it does, how it works, and more videos for in-depth learning about Tetragon. So in general, let's see an overview of Tetragon. Tetragon has an agent, which is the user-space side: the Tetragon agent runs on each node of the Kubernetes cluster when you deploy Tetragon on a Kubernetes cluster. And we have a part of Tetragon that is running inside the Linux kernel, which is the BPF programs. With those programs, you can basically hook into pretty much anything in the Linux kernel, and you can observe process execution, syscall activity, and all the examples that are written on this page. So the good thing about Tetragon is that it's Kubernetes-aware. The idea is to deploy Tetragon on your Kubernetes cluster, which is what we are going to do in the demonstration, and then you can observe what's happening inside your applications and your pods, so you can get all this information. So I think to kick-start, maybe we should begin with the first demonstration, if that's fine. Yeah, so I have prepared this little setup. Here I have a Kubernetes cluster running; it's actually deployed on AWS, but never mind. I just installed Cilium on it, but that was just because I wanted to: Tetragon has no dependency on Cilium, you don't need to run Cilium to run Tetragon, they are completely independent. So the first thing we are going to do today is install Tetragon. You can use the Tetragon Helm chart and install Tetragon with the basic defaults, just like that. So if you install Tetragon, you will get a few pods. Maybe I should zoom in a little bit. So yeah, you'll get the Tetragon pod, and it's up and running. So I will get the logs from Tetragon here. Let's do some logging here for Tetragon, maybe it could be a bit big like that, okay. 
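For reference, the default Helm install he runs is roughly the following (chart name and repository taken from the Tetragon documentation; exact chart versions at the time of the stream may differ):

```shell
# Add the Cilium Helm repository (the Tetragon chart lives there)
helm repo add cilium https://helm.cilium.io
helm repo update

# Install Tetragon with the defaults into kube-system
helm install tetragon cilium/tetragon -n kube-system

# Wait for the Tetragon daemonset to be ready on every node
kubectl rollout status -n kube-system ds/tetragon -w
```

These commands require a running Kubernetes cluster and a configured kubectl context.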
So yeah, we just have the startup logs of Tetragon. The first thing I want to show you is that, by default, Tetragon can observe the process execution of what's happening inside your cluster. This is the first use case in the documentation. And the idea is to get events from Tetragon as a way to observe when processes are starting and when processes are exiting. So I will try to show you that. So here Tetragon started and, by default, it loaded a few BPF programs to do this lifecycle observability thing. You can see. Can you zoom in a bit? Yeah. Oh, no. Okay. Okay. I hope it's readable because it's pretty big now. Yeah, I think so. All right. So for example, you can see on the lines loading BPF programs that you have the BPF exit and BPF fork programs. We already have a few BPF programs and some BPF maps, and Tetragon is already able to listen and create events around process execution. So let's do that. For example, let's create a pod in the default namespace, a pod that does basically nothing: we'll just create a pod that sleeps, with the Ubuntu image. To see the Tetragon events — here you are seeing the logs of the Tetragon agent, the daemon — but if you want to see the events, you can basically do it just like that. And it defaults to export-stdout, which is a container inside the deployment that basically just tails the output file for the events. So here, if we run this pod in the default namespace, just like that... so it's creating. And we should see that something happened here. So we got our first event: Tetragon detected an execution. And the reason why you don't see a lot of the execution going on inside the cluster right now is that Tetragon is filtering the pods, sorry. 
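The two steps in this demo look roughly like this (the pod name sleepy matches what appears later in the session; the exact flags used on screen are not visible, so this is a sketch):

```shell
# A pod that does nothing but sleep, in the default namespace
kubectl run sleepy --image=ubuntu --restart=Never -- sleep infinity

# Follow the JSON events emitted by the export-stdout container
kubectl logs -n kube-system ds/tetragon -c export-stdout -f
```

Both commands assume a cluster with Tetragon already installed in kube-system.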
So yeah, you only see the events for the execution of pods in the default namespace, because Tetragon is filtering on the namespace, and you don't see any activity for the kube-system namespace, which is the one in which Tetragon is installed and in which these pods are installed. But if we retrieve the pod from default, you can see the pod named sleepy here. And we have the process. So let's dive into this event a little bit more and see what Tetragon told us. What we end up with is an event called process_exec. So we see that some process started in some pod. And we can retrieve most of the needed Linux information: the PID, the UID that started this process, the current working directory, the binary, all this stuff, the arguments. But what's interesting here is that we also have the Kubernetes-related information. So we can see the namespace of the pod, the name of the pod. We can also know the exact container in that pod, when the pod was started (and not just the execution), the labels it has, this kind of stuff. We can even retrieve the parent of this execution, so we can see that the thing that started the execution of the infinite sleep was containerd. So we can see the container runtime that actually started the process. So with this in mind, it shows that Tetragon is already able, by default, to see what's executing. So let's look a little bit at what you can do with Tetragon here. You have plenty of information already, all these details, but we also have tetra, which is the Tetragon CLI that you can use. And you can pipe this big JSON event into it and use its compact output format, so that you just retrieve that a process was executed, the namespace, the name of the pod, the binary name of the process and the arguments. So here, let's exec into the pod I just created in the default namespace, called sleepy. 
With bash, we will basically see that bash was executed and that bash executed a few things to start my interactive session. What's interesting here is that we discovered a new event: we have an exit event. So what it shows is that a process was started and just after, it exited with an exit code of zero, and the same for the others as well. But we can see that the bash session that just started has not exited yet; we just have the creation process. So with that in mind, if you execute anything in the pod, you can see the execution. Here you can see the execution of ls, and we can see the exit of ls. So already, with that part of Tetragon, which again is shipped by default, you can get a lot of your cluster activity, and you can gather that information and maybe put it into something like Splunk, in which you can process this information: what happened at this specific moment in time, this kind of stuff. But now, let's go a little bit more into detail with a different use case. So what we saw here is the process lifecycle. This is the default thing, but maybe what you want to do is a bit more advanced. And what I wanted to show you is that Tetragon has this thing called tracing policies. There is a documentation page about them. A tracing policy is a Kubernetes custom resource, and the idea is that you can use this sort of configuration file, policy files, to extend what Tetragon is able to observe. So basically they take this kind of form. It's really nice because, in the end, what it means is that you can basically write some YAML files that will describe what you want to observe, and what you want to do with it; we will see a bit of enforcement after that. 
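The piping into the tetra CLI described above can be sketched like this (assuming the tetra binary is installed on the machine running kubectl):

```shell
# Stream the raw JSON events and render them in compact form
kubectl logs -n kube-system ds/tetragon -c export-stdout -f \
  | tetra getevents -o compact
```

The compact form collapses each JSON event into one line: event type, pod, binary and arguments.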
And the idea is that Tetragon will read these YAML files and transform them into BPF programs that will be loaded into the kernels of all your nodes, so that you can extend and observe exactly what you want inside the Linux kernel. So let's see this use case in action. So what we want to do now — let's keep that up above, let's quit the execution. So we have our Kubernetes cluster. Here we have the Tetragon pod, and we have the new pod I just created. Just before that, let's install curl inside this pod, because I know it's not installed by default. So here you can see all the activity going on while I install it in the container. So this is just to have curl available. So now what we want to do is load this tracing policy I was speaking about. So here we have this TCP connect tracing policy. It's just another Kubernetes custom resource. So there is this API version, cilium.io/v1alpha1, the kind TracingPolicy, it has a name like every Kubernetes resource, and a spec. The spec is the custom part of it. You have this documentation that tries to explain how to build those. What we are going to use first is something called kprobes. Kprobes are not something from Tetragon; they are something from the Linux kernel. Kprobes are basically a mechanism in the Linux kernel to put a breakpoint anywhere in the kernel and observe something. So with Tetragon you can use kprobes to hook into symbols in the kernel. So here, what we want to observe is tcp_connect, tcp_close and tcp_sendmsg. These events will be associated with network activity. And the idea is to hook the tcp_connect call, tcp_close and tcp_sendmsg. So here we can see that we have this thing here, syscall: false. It means that the specific kprobe we are going to hook into is a regular kernel function; it's not a syscall. And we have some argument descriptions; we'll talk about those a little bit later. 
But the big idea is that we have these three kprobes: connect, close and send message. So what I will do next is load this tracing policy into the Kubernetes cluster with kubectl apply, and Tetragon will actually pick it up. So we'll see some activity here: it loads the new BPF programs generated from this specific tracing policy in order to gain some observability. So I load this thing, and we see some activity up there. It just added new kprobes, it just added a few maps. So let's see that in action. So now it's listening for events. Let's exec again into our pod that is observed by Tetragon. So again, we have bash. If I execute anything, we'll see the execution. But now, what's interesting is that if we curl kubernetes.io, there is some interesting stuff happening here: we have three new events. These events were basically created thanks to the tracing policy we just wrote. And we can see the connect event, the send message and the close. This is the compact form, but basically we extracted some arguments from these events, and we can retrieve the IP address that was actually contacted to make that network connection. So this little demonstration was to show you that Tetragon can do some stuff by default, but with some knowledge of what you want to observe and how you want to observe it in the Linux kernel, you can write these policies and extend Tetragon's capabilities. So that's what I wanted to show on observability. Is everything good on your side? Everything is fine? Yes, everything is fine. Yeah. Yep. Okay, okay. No questions. So this was the part about observability, but I want now to show you maybe a little bit more about enforcement, because Tetragon can of course observe, but you can also do enforcement with those tracing policies, and it's super efficient. So let's extend Tetragon a little bit more. 
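The policy applied in this demo looks roughly like the tcp-connect example from the Tetragon documentation; this is a sketch of its shape, not necessarily character-for-character what was on screen:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "connect"
spec:
  kprobes:
  # Regular kernel functions, not syscalls, hence syscall: false
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock"   # extract the socket so the IP/port can be reported
  - call: "tcp_close"
    syscall: false
    args:
    - index: 0
      type: "sock"
  - call: "tcp_sendmsg"
    syscall: false
    args:
    - index: 0
      type: "sock"
```

Loading it is a plain `kubectl apply -f` of this file.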
So we can remove the tracing policy; I didn't unload the previous one, so let's remove that one. Now what we want to do, sorry — we have our pods, and what we want to do is some enforcement with Tetragon. So we can look at this tracing policy that I prepared. Maybe this one, sorry, oh no, it's this one; as a first step, maybe remove that. So yeah, we have another tracing policy. This one is a little bit different. We still use kprobes, which are this kernel thing, but this time we hook into a syscall, which is called symlinkat, a syscall that you can use to create symlinks on your Linux machine. Now we write syscall: true; it will actually be useful for Tetragon to retrieve the arguments, we'll talk about that a little bit later. But what I wanted to show you is that you can add selectors. Selectors are used to filter what you want to observe or enforce on, but here we'll just use the selector called matchActions, which allows you to add an action when you witness an event. So the idea behind this tracing policy is to say: if someone tries to call this syscall called symlinkat... So maybe I didn't introduce that, but syscalls are basically the interface between user space and kernel space in the Linux kernel. The Linux kernel exposes some calls for programs to use, and symlinkat is one of those; it's the one you will use when you create a symlink with, for example, ln -s something to some file. So yeah, we want to hook into that syscall. And the idea is to use the action called Override, which will override the return value of the syscall with an error. Here it's minus one, but you can put another value. So let's see what it does. Right now, if we exec into our pod, we can create symlinks. A way to create a symlink, as I said before, is to write something like that: for example, I want to create a symlink to /etc/passwd here. 
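As a plain-Linux illustration of what ln -s does before any policy is loaded (no Tetragon involved here):

```shell
# Work in a scratch directory, then create a symlink named "here"
# pointing at /etc/passwd and verify where it points.
cd "$(mktemp -d)"
ln -s /etc/passwd here
readlink here    # prints: /etc/passwd
```

This is exactly the call that internally goes through the symlinkat syscall.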
So if I do that, I will just end up with a file called here that points to /etc/passwd. So it works. So now let's apply the tracing policy that I just talked about. So we apply it. We see some logs on the top side here, so now it should be loaded. And now, if we exec again into this pod and we try to create a symlink to, for example, /etc/passwd here, we get "failed to create symbolic link: operation not permitted". Because basically what happened is that the system call that ln -s was calling returned minus one. Here on the event side, we can see that this syscall happened. You can see that it's actually prefixed by the architecture of the nodes that are running the cluster; I chose ARM64 nodes because why not. And here you can see: the syscall was observed, the return value was overridden, and the operation was blocked, essentially. So we can basically not create any symlink on this Kubernetes cluster. So let's look a little bit more at this tracing policy. So let's remove that one. And let's see that. So yeah, symlinkat is a syscall; you can find its documentation by looking at the Linux manual. If we look at that, we can see that symlinkat has three arguments. And when you observe events, you might want to know what arguments were used in the call that you are witnessing at the moment. So what you can do is define the arguments used by the kprobe, the call that you want to observe. So here we say the first argument is a string — it's a const char pointer, which is a string in C. The second one is an integer. And the third one is also a string. So what it will give us is that, using Tetragon, we will now be able to extract the arguments that were used when someone or some pod called the syscall. But I have a question. Yeah, sure. So yeah, the question is basically: does this block all links created on the system, or only the ones created from inside containers? 
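The enforcement policy discussed here can be sketched as follows (on ARM64 nodes the hooked symbol is architecture-prefixed, as mentioned in the demo; the short name is used here for readability):

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "deny-symlinks"
spec:
  kprobes:
  - call: "sys_symlinkat"
    syscall: true        # a syscall, so Tetragon can resolve its arguments
    args:
    - index: 0
      type: "string"     # const char *target
    - index: 1
      type: "int"        # int newdirfd
    - index: 2
      type: "string"     # const char *linkpath
    selectors:
    - matchActions:
      - action: Override # force the syscall to return an error
        argError: -1     # -EPERM: "operation not permitted"
```

With no other selector present, this overrides every symlinkat call seen on the node.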
So no, we will see that a little bit later. But at this stage, it blocks the syscall everywhere in the whole Kubernetes cluster. Can you just repeat the end of the question? So the question is: does this block all links created on the system, or only the ones created from inside containers? Ah, no. Yeah, no, no. In this situation, it will block everything, basically. It will block everything because we did not put any filtering. So Tetragon will retrieve all the events from the host, and basically it will retrieve everything and override the return value on everything. So this is not really something you would like to deploy in your production cluster. We'll see how we can perform more fine-grained filtering later on, based on Kubernetes namespaces and pod labels; but that will be for a little bit later. So what I wanted to show you right now is that you can retrieve the arguments from the call, and you can write a selector that is a bit more sophisticated. And what you can do here is match on an argument. So you could say: if the first argument matches this specific value, /etc/passwd — meaning the person wants to create a symlink to this specific file — do this action. So here I will deploy this new policy. We can see that it was just loaded; here we have some logs that appeared. And now, if I go inside the pod again and try to create a symlink — so if I make a symlink to /etc/passwd here again — it's just blocked by Tetragon, and we can see the event here. But now what's interesting is that if we try to create a symlink to /etc from here, it's all right, we have the right to do it. So here points to /etc; it was created. But the whole idea behind this demonstration is to show that Tetragon is able to do in-kernel filtering based on the values of the arguments of the calls that you are looking into. So here we refined exactly what we wanted to catch. Of course, this policy is not that great. 
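The refined selector that only blocks symlinks pointing at /etc/passwd can be sketched as a selectors section like this (operator names per the Tetragon docs; this shows the shape, not a verbatim copy of the demo file):

```yaml
    selectors:
    - matchArgs:
      - index: 0            # first argument of symlinkat: the target path
        operator: "Equal"
        values:
        - "/etc/passwd"
      matchActions:
      - action: Override    # only fires when the matchArgs filter matched
        argError: -1
```

The argument comparison itself runs in the BPF program, kernel-side, so non-matching calls proceed untouched.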
It's just an example with symlinks, but you can imagine something more sophisticated for file filtering or network filtering. As long as you have actions and selectors to match on your arguments, you can write pretty specialized tracing policies to do what you want to do. So now, to answer the earlier question a little bit more, I just want to show you a feature that got into Tetragon pretty recently. So let's remove the tracing policy we loaded. Make sure we don't have any policy. It's still in beta, so we have to enable it with a configuration option. So yeah, we'll just run this Helm command: it upgrades the Tetragon Helm chart and sets tetragon.enablePolicyFilter to true. So we'll do that and we will restart Tetragon. It was restarted. So now we still have the Tetragon pod, it's running. So let's get the logs again. Tetragon is listening for events, and we just enabled the policy filter. So let's create some new pods. The idea here is that we will be able to use Kubernetes metadata — the namespace, the labels — to apply policies on some pods and not apply them on other pods. So let's create a new namespace. At the moment, we mostly have things running in kube-system and default. So let's create a namespace called, for example, livestream, and let's create a pod in that livestream namespace. And it's called ubuntu, why not, and it sleeps as well. So let's do that here, and here let's retrieve the events from Tetragon: logs, piped into tetra to get the compact form. So let's create the new pod. So here we see that in the livestream namespace, we have a new pod called ubuntu that is just sleeping, right? So we have two pods sleeping: one in the default namespace and one in the livestream namespace. So the idea here would be to apply a tracing policy — 
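The upgrade step can be sketched like this (the value name tetragon.enablePolicyFilter reflects the beta flag as of the time of the recording; check the current chart values before relying on it):

```shell
# Enable the beta policy-filter feature via the Helm chart...
helm upgrade tetragon cilium/tetragon -n kube-system \
  --set tetragon.enablePolicyFilter=true

# ...and restart the agent so it picks up the new configuration
kubectl rollout restart -n kube-system ds/tetragon
```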
— apply some enforcement or observability, but in only one Kubernetes namespace. So let's take the same example as before. You can see that the kind changed a little bit: it's now TracingPolicyNamespaced. The resource I showed you just before, the TracingPolicy, is a cluster-wide resource, but this one is actually tied to one specific namespace. And this one just denies all symlink creation. So it's very similar to what we did in the first example, but we are going to apply it in just one namespace. So if we load that — oh no, I just loaded it into the kube-system namespace, I think. What did I write in that one? Oh no, sorry. Yeah, I loaded it here, I think. Yes. So never mind, let's just delete that one and create it in the correct namespace. So, apply with a namespace, and let's deploy it, for example, into livestream. So here we can see some activity from Tetragon: it picked up the newly created tracing policy that is namespaced. And if we get the namespaced tracing policies from livestream, we should see deny-all-symlink-creation in that specific namespace. So here the idea is that if we go into the pod in the default Kubernetes namespace, we should be fairly fine creating symlinks: do I have any symlink here? Creating a new symlink here, again to /etc/passwd — it's all fine, we can create those. But if we exec into ubuntu in the livestream Kubernetes namespace, here we can see the activity, and if we go here and try to create that same symlink, it's just not permitted. So what you just witnessed is that Tetragon is able to perform in-kernel filtering with actual Kubernetes metadata information. So you can filter your observability and your enforcement based on the Kubernetes namespace. Is that good for everyone? The next step I wanted to show you is how you can use labels to do very similar stuff. 
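The namespaced variant can be sketched like this; apart from the kind and the metadata.namespace field, the spec mirrors the earlier cluster-wide symlink policy (names here follow the demo, not a verbatim copy):

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced   # scoped to one namespace, unlike TracingPolicy
metadata:
  name: "deny-all-symlink-creation"
  namespace: "livestream"       # only pods in this namespace are affected
spec:
  kprobes:
  - call: "sys_symlinkat"
    syscall: true
    selectors:
    - matchActions:
      - action: Override
        argError: -1
```

This is the resource that `kubectl get tracingpoliciesnamespaced -n livestream` would then list.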
So it's very similar to what we just saw; it's just not using a namespace as the filtering mechanism, but the pod's labels. So let's unload that one in the livestream namespace. So it was unloaded. And what we want to apply now is the last policy I crafted for today, which is the uname one. I just changed the syscall we are going to hook: we are going to hook into sys_newuname. uname — you may or may not know it — is the syscall used when you try to retrieve information about your Linux host. So if you want to retrieve information about your kernel version, the hostname, this kind of stuff, the uname utility uses that syscall to perform that action. So it's a pretty inoffensive syscall, but it's just for the sake of the example in this demonstration. So here we have sys_newuname, and again we are using the Override action on it. So the idea is to prevent the utility from using this syscall. And what's new here is that we are using a cluster-wide tracing policy, but with a pod selector. I think those are similar to Cilium network policy pod selectors. And the idea is that you can use matchLabels on that, and if any pod has a label called app with the value sleeper, this policy will apply. So let's load that policy. That one is cluster-wide; we don't really care about where we put it, we don't need to specify the namespace. So it's loading that policy. So if we go into our ubuntu or sleepy pod in the default namespace, we can pretty much use uname, right? We can retrieve all the information about the kernel. But behind the scenes, I think we could even see that with strace if I install it. I think the package is just called strace. So what I'm showing you right now is just strace, which is a way to see what syscalls your binary is actually calling under the hood. 
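The label-based policy described here can be sketched as a cluster-wide TracingPolicy with a podSelector (the app=sleeper label matches the demo; the rest is a sketch of the shape):

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy            # cluster-wide, but narrowed by the podSelector
metadata:
  name: "deny-uname"
spec:
  podSelector:
    matchLabels:
      app: "sleeper"           # only pods carrying this label are affected
  kprobes:
  - call: "sys_newuname"       # the syscall behind the uname utility
    syscall: true
    selectors:
    - matchActions:
      - action: Override
        argError: -1
```

Adding or removing the app=sleeper label on a pod then toggles the enforcement for that pod.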
So if we grep for uname... strace uname -a, and we redirect the output. We can see that the uname syscall is actually called three times. So, yeah. And it returns zero, which means success in kernel syscall language. So with that in mind, let's see what happens. There's a question, basically asking: are these logs from Tetragon stored somewhere, or destroyed? So the logs that we can... the events, you mean, okay, right? Yeah. So the ones we are seeing at the moment are stored. If we go into the Tetragon pod — yeah, the tetragon container in the Tetragon pod — you can see that in /var/run, in cilium/tetragon, you have the Tetragon log files. So in this file, if we look into it, you have all the events that are written to this specific file. That's the case because we just set a specific flag in the Helm chart. So in the Helm chart — you can find some documentation about it here — we just have this thing called export directory, and we say: put the event logs in this folder. So all the events we are seeing are stored in this file. And the idea is that you can use something like, for example, Fluentd to fetch these files and put them in some database. But here in my example, I'm mostly reading straight from the kubectl logs output. So if we look into the Tetragon pod in a little more detail... Well, we have another question regarding that. The question is: does Tetragon have something similar to Falcosidekick, where log entries are visualized? So there is no Tetragon sidekick, but you can actually use something called Hubble UI to visualize some parts of Tetragon, I think. Hubble UI is a project that was originally made for Cilium to visualize flows, an observability thing on top of Cilium. I think some part of it — I'm not 100% sure — you can use with Tetragon to visualize process execution and this kind of stuff. 
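The strace experiment can be reproduced inside the pod roughly like this (package name strace in the Ubuntu repositories; flags are standard strace options, not something specific to this demo):

```shell
# Install strace in the Ubuntu pod
apt-get update && apt-get install -y strace

# Show only the uname syscalls made by the uname utility;
# strace prints its trace on stderr, so the program output is discarded
strace -e trace=uname uname -a >/dev/null
```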
But mostly, as of now, it's writing events to this file, and you have to process them. So one way of thinking about it is to fetch these files, retrieve those events, put them in some database and perform some queries on them to actually see what's happening inside the cluster. But this is out of the scope of Tetragon, just as Falcosidekick is out of the scope of Falco, because it's its own separate thing; Tetragon only handles the export to a file. And yeah, I just wanted to show you that the reason why we are seeing the events in the container named export-stdout is that, if we just look at the deployment, we can see that we have three containers, and one of them is export-stdout, which is a container that just tails this file to stdout, so that we can retrieve it using kubectl logs. So I hope I answered the question. So, if I get back to my little demonstration about labels: here we are in a situation where we can use the syscall, no worries, no problem. Tetragon does not emit an event and Tetragon does not override the return value of the syscall. But the idea is that we can label this pod — the sleepy pod in the default namespace — with the label app with the value sleeper, and if we do that, we end up in a situation where, oh sorry, if we exec again into sleepy and we do uname -a, we cannot: we get "operation not permitted". And what happened in the background, if we strace the execution, just like that... or apparently I cannot even strace it. Okay, so we can't see it with strace, I don't exactly know why. But anyway, what we see is that the syscall is blocked. We have an event here, and we can't use it anymore. And actually, if we just unlabel that pod again with label app minus, I think it's unlabeled. So the label was removed from the pod. 
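The labeling and unlabeling steps are plain kubectl operations, roughly:

```shell
# Add the label the policy's podSelector matches on
kubectl label pod sleepy app=sleeper

# Remove it again: a trailing dash after the key removes a label
kubectl label pod sleepy app-
```

Because the podSelector is evaluated against live pod metadata, no policy change is needed to toggle the enforcement.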
We can use the syscall again: it's not blocked anymore by Tetragon, and it does not emit an event. So I hope this showed the ways you can use tracing policies: you can apply a tracing policy at the cluster-wide level, basically applying that policy on everything, you can apply it by namespace, and you can apply it very specifically using Kubernetes labels. So with that in mind, I'm pretty much done with my demonstration. I'll just stop screen sharing, and maybe we can discuss and talk about all that stuff, and do some other demonstration, I don't know. Yeah, sure. Okay, so if you have any other things to present, you can do that as well. Yeah, we have time. But there are some questions from my end as well, like: how is Tetragon's filtering different from other projects? If you would like to add something. Yeah, so on the filtering side — I don't know if I should show something, maybe let's just discuss like that. So the thing with Tetragon is that all the filtering happens kernel-side, with the BPF programs. The main difference with a lot of other projects is that they create some events from the kernel, because they have to hook into some part of the kernel via kprobes, via tracepoints and things like that. So they create some events, they push these events to user space, where an agent can handle them, and then they do the filtering in user space. So they push everything, all the activity, out of the kernel space, do the filtering in user space, and maybe, optionally, they react to those events. With Tetragon, it's a bit different: because it's using BPF, it has the ability to hook the events, of course, like the other solutions do, but the filtering and everything happens on the kernel side. So the event is never emitted as-is from kernel space to user space; it's filtered straight on the kernel side. 
So if I show you, for example, the demonstration with this one, I think — yeah, this one had some filtering enabled: hooking this syscall, parsing the arguments (the string, the integer and the other string) and doing some filtering on top of the first argument. So what happens is that this comparison on the prefix of this argument happens on the BPF side, in the kernel. So it's pretty nice, because you will get less overhead than with something that exports everything and does the filtering in user space: fewer events are emitted, and it's more efficient to do everything straight from the kernel. And on top of that, if you are doing some filtering and you want to do some enforcement — like I showed you with Override, and you can also do SIGKILLs and this kind of stuff — it's way more valuable to have in-kernel filtering when you want to do enforcement. Because if you want to do enforcement and you are going from kernel space, emitting an event, doing the filtering, taking the decision to perform an action, and going back to the kernel side to actually perform that action, it's asynchronous, and there will be some time during which the application is able to perform these actions before you actually do the enforcement. So with Tetragon, it's different, because everything happens on the kernel side. We can take this action immediately, synchronously, before it actually happens. So what I showed you here is that I wanted to block this specific syscall. What happened is that Tetragon hooked at the very beginning of the call of this kernel function, which happens to be a syscall, and the override actually replaced the function's execution with just a return of minus one. So the actual function, sys_symlinkat, was never executed in the end. So you have very effective enforcement in this form. I hope that's clear, but that's a lot of information. Yeah, so yeah. Okay, so here's another question. 
Can Tetragon emit these audit logs as Kubernetes events? Oh yeah. So I don't think so at the moment, I'm not aware of that, so I would say no. They are exclusively these JSON events in the file; those are not Kubernetes events. And I don't know if that would be a good idea and what the use case for that would be, but if the person wants to discuss that in further detail, you can of course ask the question. By the way, I did not speak about that: if you have any question, during this session or after this session, you have the repository where you can open issues. On top of that, you have the Cilium Slack, in which we have a Tetragon-specific channel where you can ask any question you want. So if you want to start using Tetragon, it's a good way of getting started. You can check the documentation, and if you have some troubles, you can ask your question on the Slack. And later on, if you want to write some tracing policy and something is not working, or you don't understand why your tracing policy is not behaving as expected, of course, go on the Slack and interact with us to see the best way of doing that kind of stuff. Okay. So I think another question: can this be used for a cluster-wide check for indications of compromise, given we have some SHA, or add some custom tracing policy to check for some inbound or outbound traffic and deny such connectivity over any protocol? Yeah, so I'm not sure I got the whole question, but just for the beginning of the question: can this be used as a way of detecting malicious activity, was that correct? And cluster-wide. So let me repeat it again. Okay. So can this be used for a cluster-wide check for indications of compromise? Yeah, something like that, given we have some SHA. Yeah, so the answer is yes.
Then the only question is: what's your indication of compromise? If, in your case, your indication of compromise is, I don't know, triggering some kind of function or syscall with a specific argument, there's a way of writing a tracing policy that will catch that, so you can retrieve the events. Off the top of my head, this is very specific, but for example, you could try to catch the syscall that allows you to create a user namespace, if someone is exploiting user namespaces to do some exploit afterwards, this kind of stuff. But there is something else that comes to mind: you can also retrieve all the logs of process execution and try to filter on top of them to find some indication of compromise, whatever it is, to see if some bad stuff was executed inside of your cluster generally. This comes without any tracing policy, just the process execution, and maybe it's already enough for you to see what happened cluster-wide, and when, and how. So the idea is that you will get process events, like I showed at the very beginning, process_exec and process_exit, and you will get a lot of metadata to evaluate whether it's a compromise or just normal activity. So I would say yes, you have multiple ways of doing that: monitoring the execution, or writing a tracing policy that is very specific to what you want to observe. So I guess we have another question as well. The question is: could you please sum up the actual requirements, with Azure AKS in mind? The requirements, you said? I guess there's some mistake in the writing, but it says "requirement". Oh, the requirements. So basically, I don't have the exact answer to that, but the thing Tetragon needs is BTF support. So BTF is like this, maybe I don't need to show my screen.
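Sketching the user-namespace example above: a policy could hook the `unshare` syscall and match on the `CLONE_NEWUSER` flag. This is an illustration, not an official policy; the `Mask` operator is assumed here for bitwise flag matching, and the name is made up:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "detect-user-namespace"  # hypothetical name
spec:
  kprobes:
  - call: "sys_unshare"
    syscall: true
    args:
    - index: 0
      type: "int"                # the clone flags argument
    selectors:
    - matchArgs:
      - index: 0
        operator: "Mask"         # bitwise match, since flags can be combined
        values:
        - "268435456"            # CLONE_NEWUSER (0x10000000) in decimal
```

Every matching call then shows up as a JSON event with the full process metadata, which can feed whatever detection pipeline you use.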
So yeah, BTF is this file in the kernel that describes all the structures of the kernel, and Tetragon needs it to load its BPF programs. So you need that. What is nice is that most recent kernels and most mainstream distributions have BTF enabled by default now. So I guess this is one of the requirements. If you don't have BTF enabled on your distribution for whatever reason, you can try to build the BTF file yourself and provide it to Tetragon afterwards. So it's not a real blocker; it's just that you have to have it in order to load the BPF programs. Otherwise, if you want to deploy that on an AKS cluster, I don't see any particular requirement. You just basically deploy the Helm chart, it will deploy the DaemonSet, and then you can configure it and see what you want to do. But I would say that's pretty much enough. Then, it depends on your cluster, but the more recent the kernel version you deploy Tetragon on, the more BPF features you will be able to use and the more features you will have access to. But I guess if you just deploy a new cluster on AKS, you will have a pretty recent kernel and access to all the features. So if you have a very specific kernel version in mind, it might come up as an issue, but most of the time it's pretty fine. Here's another question: "when you have two blocks to wipe, the same name is the problem for the cell door" — is there something that goes with this question? So I'm assuming there is a typo, but what could it be? Yeah, okay. No, I think the question is interesting. I think the question is about the behavior of the tracing policy when you are writing ambiguous selectors, maybe selectors with contradictory conditions, this kind of stuff. So this is a pretty good question. You have this part in the tracing policy documentation that is about the selector semantics.
So it tries to explain how selectors are combined. So in this example — I mean, you don't see my screen, sorry, let me share. So in the tracing policy documentation, you have this section called selector semantics; it's at the end. And it shows how selectors are combined with each other: is the relationship AND or OR, what happens if I put multiple values, is it OR, this kind of stuff. So you can get more information there, and for very complex and very advanced use cases, I guess the idea is to try and see what happens. But for most cases, after reading this piece of documentation you should end up with a pretty good idea of how selectors will be combined, let's say. Yeah. Okay. So there was another query, but you actually answered that question as well; I also want to mention Max's question, basically: would you mind showing the YAML file where you were able to block on specific arguments? I think you were able to block creating a symlink on /etc/passwd, but other symlinks were allowed. And also, when you were answering some queries, you actually shared the answer as well. So I just mentioned the question and also the sample answer. Yeah. So, I just want to say something. If you want to see more tracing policies, because the ones I showed were very basic, you have a few use cases here: this is the network thing I showed you, you also have file access, and more interestingly, process credentials, things related to credentials. But in the Tetragon repository, you have this examples folder with tracing policies, and you have a bunch of them there; I think maybe there is something about symlinks here. So you can find more sophisticated ones. This one is pretty similar to the one I showed you, so you can find it there.
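A small sketch of those combination rules, as I understand them from the selector semantics documentation (the paths and binary name are made up for illustration). This is the `selectors` section of a single kprobe spec:

```yaml
selectors:
# The two selectors below are OR'ed with each other.
- matchArgs:                 # within one selector, matchArgs AND
  - index: 0                 # matchBinaries must both hold
    operator: "Equal"
    values:                  # multiple values in one matcher are OR'ed
    - "/etc/passwd"
    - "/etc/shadow"
  matchBinaries:
  - operator: "In"
    values:
    - "/usr/bin/cat"
- matchArgs:
  - index: 0
    operator: "Prefix"
    values:
    - "/tmp/"
```

So this would roughly read: "(arg0 is /etc/passwd or /etc/shadow, and the binary is /usr/bin/cat) or (arg0 starts with /tmp/)".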
But you have more sophisticated ones, like maybe, let's say, this one, with a more complex specification, more selectors, different actions and this kind of stuff. So it can be nice to try those out to get familiar with Tetragon. And you don't need a Kubernetes cluster to run Tetragon: you can run Tetragon on plain Linux. There is a guide in the documentation. So if you want to try these things out, it's pretty handy. Okay. So we have another question as well. There are a lot of questions coming up, that's awesome. So Tetragon doesn't require Cilium to be used or deployed in the cluster, right? No, it does not. At the moment, some of the metadata is retrieved using Cilium. But we actually have, I think, a CNCF intern working on a project about how we could completely remove the dependency on Cilium. At the moment, you will just get a little bit less information in some metadata; I don't know exactly which. But the idea is that Tetragon is a standalone project: you can completely run Tetragon without Cilium, and you can run Cilium without Tetragon. There is no dependency relationship between the two of them. Okay, that was really awesome, actually answering a lot of questions. Okay, so I think we have wrapped up and there are no questions left, right? Okay, so if you have anything that you would like to add related to Tetragon — we have already mentioned how to contact the team, and if there are any queries, how to raise them — if there is anything you would like to add, you can just add that right now. Yeah, like how someone can contribute as well, or something like that. Yeah, sure. So, I already said that, but of course, I think the two spots that you can go to are the repository and the website. On the website, you will find links to the Cilium Slack, which is a really nice entry point to ask the team about stuff and to talk with community people using Tetragon.
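As an illustration of the "different actions" mentioned above: besides `Override`, a selector can react with other actions such as `Sigkill`. A rough sketch, assuming the `Sigkill` action; the hook point and policy name here are made up for illustration:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "kill-on-ptrace"       # hypothetical name
spec:
  kprobes:
  - call: "sys_ptrace"         # hypothetical hook point
    syscall: true
    selectors:
    - matchActions:
      - action: Sigkill        # kill the offending process from the kernel
                               # side instead of overriding the return value
```

`Override` fails the single call while the process keeps running; `Sigkill` terminates the process outright, so the choice depends on how aggressive the enforcement should be.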
Some people are actually helping other people in there, which is pretty nice, and I'm pretty often answering questions in there. And if you want to contribute to Tetragon, we have a bunch of good first issues. So you can basically go to the repository and filter by "good first issue", and you will find some stuff that you can do. I put some there about documentation, because it's a nice way to get familiar with Tetragon, and we need more and better documentation. So if you want to help the project, it's a really nice way to start. But yeah, I think that's pretty much it on how to interact with the project. Okay, awesome. But I think there's one more question: is there anything for the user to know beforehand, before using Tetragon? So if you want to use Tetragon, not really, no, because you can just start Tetragon and start to see the process executions, which are automatically enabled and everything. So you don't need prerequisites; the execution tracing was already crafted by the people writing Tetragon. As a user, if you want to start writing your own tracing policies, you might need some kernel knowledge, because you will need to understand what a kprobe is, what a tracepoint is, what a syscall is, this kind of kernel stuff, and where to actually hook inside of your kernel, because all hook points are not equal, some are better than others, and doing observability and security right might be a bit complex. So it's better to have some kernel knowledge for that. We are actually working in the project on an interface that will be simpler than the tracing policies, because tracing policies are already pretty low-level. They are pretty nice, because you can write YAML that translates into BPF programs, but they are still low-level and you need some knowledge. So that was as a user.
If you want to participate in the project as a developer, a contributor, I guess if you just want to make some PRs to get familiar with the project, you don't need plenty of knowledge. For example, in the good first issues I showed, there is one about documenting how to read Tetragon metrics, for example, the Prometheus metrics. So if you want to write such a guide, it would be very helpful. We have some documentation that you can write, and with very basic knowledge you will just get to learn Tetragon along the way. And then, if you want to start contributing code, it might require some knowledge on the user-space side, maybe less so than the BPF part, but the BPF part is the BPF part: you'll have to know about BPF a little bit to contribute to that part, for sure. Yes, so with this, I guess there are not a lot of questions left. Thank you so much, Mohamed, for the great session, and thank you so much, everyone, for being responsive and asking so many questions. It was really awesome. So I guess we can now end the session. Thanks everyone. Okay, so thanks everyone for joining the latest episode of Cloud Native Live. We enjoyed the interaction and questions from the audience. So thanks for joining us today, and we hope to see you again soon.