All right, I think it's a good time to start this webinar, this introduction to Cilium Tetragon. Welcome to all of you who have joined. Once more, let's go over housekeeping and logistics. All of you who have joined have been automatically muted to keep the session as free of interruptions as possible. If you have questions, feel free to ask them in the Zoom chat as a message to everybody, and we will either answer them on air as we have time or answer them in the chat directly. This session is recorded, and we will make the recording available afterwards. Presenting today will be myself, Thomas Graf, co-founder and CTO of Isovalent, as well as John Fastabend, who originally created the Tetragon project, has been a longtime Cilium maintainer, and is also a senior staff software engineer at Isovalent.

So let's jump in. This session will be introducing Tetragon: eBPF-based security observability and runtime enforcement. What we'll cover today is why Tetragon, a look into security observability, runtime enforcement, a lot of examples of what can be done with Tetragon, and then a Q&A section.

So let's jump right in and get a first overview of Tetragon. What is Tetragon? Tetragon is essentially an agent that can run on any Linux machine. This could be a Kubernetes worker node, it could also be a non-Kubernetes node, essentially any machine, and it will use eBPF to extract security-relevant observability and also provide runtime enforcement. As you can see, there are a lot of different layers at which Tetragon can extract value or do enforcement, starting from the lower levels such as data access, file access, and the network with a variety of protocol parsers; then namespacing technology in the kernel, whether these are network, CPU, or mount namespaces, as well as capabilities or privileged access; the virtual file system in terms of file access; the TCP/IP layers, for example to introspect TCP sequence numbers and identify sequence number attacks; as well as the system call and process execution layer. But it does not only cover the system level, it also covers applications. So we can also, for example, extract function calls or function traces, look at executed code, and so on.

Very importantly, Tetragon is transparent, which means no code changes are required. All of the observability, all of the enforcement capabilities are provided completely transparently. All the observability data and all the policies that come in integrate with other systems, and you can see many of them listed above: metrics, for example with Prometheus and Grafana; a lot of the security-relevant events will typically go into a SIEM, or can be streamed via Fluentd to other systems, as well as, for example, Grafana, Elasticsearch, OpenTelemetry, or the raw JSON output.

Tetragon is part of the Cilium project family, which is automatically part of the Cloud Native Computing Foundation. So it is essentially an independent project from a technical perspective, an independent project under the Cilium umbrella, but it benefits from and is governed by the Cilium open source governance model.

Let's jump into why Tetragon. In terms of runtime security and security observability, what is needed, and why we created Tetragon, is that security has to be done in real time. So when we protect workloads that are running, we need to be able to detect malicious activity in real time.
We need to report when malicious events occur, and even better, prevent them before they perform any damage. So let's look at a variety of examples of how that can be achieved. This slide lists some of the activities that we need to monitor in order to detect and report malicious intent, such as network traffic, file I/O activity, running of executables or process execution, as well as system call activity and changes in privileges and namespace boundaries.

This can be done, or has been done, in a variety of different ways in the past. So we'll cover why we created Tetragon by looking at existing solutions and existing approaches and then comparing them to Tetragon. This includes LD_PRELOAD, ptrace and seccomp, LSM and LSM BPF, as well as other approaches that use eBPF to perform this type of security.

This is probably the oldest, or one of the oldest, approaches: LD_PRELOAD, the ability to load a library into an application without the awareness of, or without changing, that application. With LD_PRELOAD, we can essentially load a library that will inject itself into the application and have all the system calls that the application performs be handled by that library instead of by the kernel itself. This is called system call proxying or LD_PRELOAD proxying. This is great, but it can be bypassed: obviously, if the binary of the application is statically linked, LD_PRELOAD will have no effect and we lose all visibility, and any enforcement that is done there is ineffective. So this approach has essentially been abandoned quickly for that reason.

We can do system call checking when system calls enter the kernel, at the syscall entry. Examples of this are ptrace and seccomp, as well as eBPF kprobes or syscall-entry-based eBPF checks. This is already massively better than LD_PRELOAD, because the application cannot easily bypass the injection, but it is vulnerable to so-called TOCTOU attacks: time of check versus time of use. This means that the hook point where the eBPF program or the solution sees the system call comes before the last moment the application can change the system call arguments, and you can see this in the picture here. Essentially, the hook is at the entry, but the system call handling — the copying of the memory that contains the system call arguments — happens after that entry point. This means the application could create a system call, present arguments such as "I want to open this file", and then, after the hook point runs and validates them, still change which file it wants to open. There are a couple of references here to conference talks that have covered the details of this attack; the best known one is probably the Phantom attack that was presented at DEF CON 29.

So what is important is that for a runtime enforcement check, whatever system call it checks, to be effective, it needs to be done at the right level. Some of you may have heard of LSM, or Linux Security Modules. This is a relatively old API; it allows Linux security checks or additional security enforcement to be done at the right level, and it is a stable interface. It is a very safe place to make checks, but it is very static, and adding new checks essentially requires additional kernel modules or, for most of those decisions, additional LSM probes. Better suited is actually BPF LSM, which allows using eBPF to make LSM dynamic.
This is already a major step forward and actually pretty close to what we want. The problem is that it needs kernel version 5.7, and it is limited to the hook points that LSM itself provides. So if any additional hook points are needed, we again need to change kernel code, and the kernel requirement goes up even further.

This is essentially why we created Cilium Tetragon. We want the same properties in terms of safety, security, and hook points as BPF LSM, but we want to avoid the recent kernel requirement. And we want to add additional hook points that are not found in LSM, as well as have the flexibility to have multiple eBPF programs share state with each other using maps. This is the silo or database icon that you can see on the right here. This allows multiple coordinated eBPF programs to work together, and we will see examples of why that matters later on. It also allows us to do in-kernel event filtering. This is the basis of the high-performance observability that eBPF and Tetragon can provide.

So let's jump right into observability. What is the type of observability that Tetragon can provide? And I see the first question that came in as well: what will be one of the main differences between Tetragon and the Datadog agent running runtime security features? We can actually address that right away as we go through the observability page here.

The basis of Tetragon is the agent, which uses in-kernel, eBPF-based collectors to collect a variety of different observability data types: process execution, system call activity, file access, TCP/IP metadata, namespacing information, capability changes, privilege changes, data access on the storage and file system side, and a lot of network visibility functions, including raw layer 3/layer 4 as well as different protocols. Because of eBPF's smart collector abilities — eBPF has specialized map types and functions such as stack traces, ring buffers, metrics, and hash maps — this can be done very, very efficiently.

So we can combine this deep visibility, as you can see, across the stack: we can extract visibility from the lower levels, network and storage, all the way up into the application. We can combine this deep visibility with transparency: it's agnostic, and no changes to the applications are needed. So far, this is in line with other collectors as well; many of them also have pretty deep visibility. Where it becomes unique and different is the low overhead. You can see this smart collector item in the kernel portion of the Tetragon piece there on the left. All of the filtering and aggregation is done in-kernel, which means we can massively reduce the amount of observability data that is sent from the kernel runtime to the Tetragon agent. This is the arrow between the kernel and the bigger box on top, and that is typically the biggest overhead. If we send a lot of observability data from the kernel into the agent in user space, that will impose a lot of overhead. So the more filtering and aggregation we can do in-kernel, the lower the overhead. To make a concrete example: it is massively more efficient to collect metrics such as a rate or a histogram in-kernel, compared to sending individual events to user space and accounting for the metric there. That's the main difference to existing or other collectors.
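To make the in-kernel filtering point concrete: in a Tetragon tracing policy, the selector is compiled into the eBPF program itself, so events that don't match never leave the kernel. Here is a minimal sketch, modeled on the fd_install example in the upstream Tetragon repository — exact field names can vary between versions, and the policy name and the "/etc/" prefix are illustrative choices, not a verbatim upstream policy:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: etc-file-opens            # illustrative name
spec:
  kprobes:
  - call: "fd_install"            # kernel function run when a new file descriptor is created
    syscall: false
    args:
    - index: 0
      type: "int"                 # the new fd number
    - index: 1
      type: "file"                # Tetragon resolves the struct file to a path
    selectors:
    - matchArgs:
      - index: 1
        operator: "Prefix"
        values:
        - "/etc/"                 # in-kernel filter: only opens under /etc/ produce events
```

Everything filtered out by the selector is dropped in the kernel, so user space only ever sees the matching opens.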
eBPF gives us this foundation, this framework, to provide observability with massively low overhead. This is very similar to how perf — some of you may have heard of the perf performance troubleshooting and tracing utility — uses the same mechanisms to provide high-performance visibility, more on the function call, memory, and CPU usage side.

Lastly, integrations. All of this visibility is useless if we cannot integrate it into existing systems. What we currently support is Prometheus, Grafana, a variety of SIEMs, Fluentd, OpenTelemetry, as well as Elasticsearch. But with the JSON export, and in particular with the Prometheus capabilities, this can, for example, also go into a Datadog dashboard or into a variety of monitoring platforms that cloud providers offer.

If we go into a bit more detail and then into examples: first of all, context is everything in terms of security, right? We need to understand as much context as possible, and we'll see that as we go into the examples next, because the better the context, the easier it will be for security teams to understand log files, and the more accurate the alerts will be. This means that based on logs and alerts, we can identify cause and effect more quickly and easily.

So let's look at a couple of examples. We'll start very, very basic and then go further. Starting with very basic network interface metrics: how much traffic on which network interface. Boring, but yes, let's look at this as well. Let's go further and look at, for example, TCP latency. This is already a lot more interesting: transparently measuring the round-trip time for TCP connections, combined with DNS visibility. So we can see the round-trip time over time to a variety of external endpoints, essentially labeled by the DNS name that was used. So we can see the latency to stats.profile.org, api.twitter.com, a variety of AWS endpoints, and so on. Already pretty interesting. And all of this is done completely transparently. So you can identify which connections and which endpoints are subject to, for example, higher round-trip latency.

But then also traffic accounting. In this example, a dashboard shows which Kubernetes pod is egressing or transmitting how much traffic. In this case it is on a pod level, so just at the pod name level, but this could also be annotated with the label that represents the namespace, the region, or the availability zone. So you can easily measure cross-region or cross-AZ traffic with this as well, in another Prometheus metrics example.

We can look at TLS and SSL. Two examples here: first, matching or extracting the SNI name. What are the different SNI domain names or host names that connections use? So we can easily see which host names our apps are reaching out to. Second, TLS handshakes: understanding which network endpoint is receiving TLS handshakes. We could annotate this further with, for example, the TLS version or cipher; we'll see examples of that next. As I mentioned, all of this observability can go into a SIEM, such as Elasticsearch, Splunk, or something else, and then you can query it. So this is an example query to detect weak or vulnerable TLS versions.
We can see that we are querying all events where we have TLS information that indicates TLS version 1.0 or 1.1, and we also want to show things like the process name, the namespace, the pod name, the SNI, the port, the IPs, the start time, the PID, and so on. So we get rich context while we detect weak or vulnerable use of TLS.

Diving deeper into the networking side, this is an example of the networking-related events when a connection happens. We can observe everything from DNS to HTTP to TCP. If you go from the top to the bottom, you see at the very beginning a process is started: curl, invoked with the argument cilium.io. We can then see the DNS resolution. In this case this is a Kubernetes pod, so it will attempt to resolve a variety of different Kubernetes service names, essentially expanding this into what could be a Kubernetes service name. This all fails, or does not resolve, until we actually go and resolve cilium.io. We see the IP returned. Then we see the connect system call; we see that this is a TCP connection. We see HTTP here: we can see that cilium.io actually returns an HTTP 301 to redirect us to the HTTPS version. And then we see the socket being opened, and we see the amount of traffic carried on that socket, both on receive and transmit. So you see a variety of different observability data here, from process execution to the DNS layer, to the connect system call itself, all the way into HTTP traffic parsing.

But then we can go further into the security side. For example, auditing all the ports on which applications are listening. We can query our entire database: tell me all the pods that are listening on particular ports. You can see the result at the bottom: we see the pods, with all their labels, that are listening on, for example, port 9080 or port 53333. We see the actual binary: in one case this is netcat, essentially listening on port 53333; in another case, this is a Python application. We can also see who invoked it: in one case, it was directly spawned from a shell; in another case, it was containerd-shim.

We can detect DNS bypass attempts. Let's say a pod, instead of talking to the kube-dns DNS service, attempts to talk directly to an external or outside DNS server. We can easily identify such network flows and query them. In this example, we see that there was a workload with a set of labels running in the tenant-jobs namespace that attempted to talk directly to an external DNS server, bypassing or attempting to bypass the kube-dns DNS service.

Let's go further and detect, for example, nmap network scans, in this case by filtering for a specific value in the user agent field of an HTTP scan. We can see not only when that scan occurred and what the user agent was, but also what the process name was, what the HTTP parameters were, what the time was, and so on. So we have full context into when a particular HTTP nmap scan happened.

Of course, moving away a little bit from the networking side, Tetragon can also do raw system call and process execution visibility. In this case, it's showing the raw JSON output. On the right, you can see a tracing policy, and this tracing policy essentially indicates that I want to observe all mount system calls, and it also shows what types of arguments we are interested in.
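The slide itself isn't reproduced here, but a policy of that shape could look roughly like the following sketch, which follows the TracingPolicy CRD structure from the Tetragon repository; the argument indices and types are my illustrative assumptions, not the exact slide contents:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: monitor-sys-mount         # illustrative name
spec:
  kprobes:
  - call: "sys_mount"             # hook the mount system call
    syscall: true
    args:                         # which syscall arguments to capture in the event
    - index: 0
      type: "string"              # source (device or filesystem to mount)
    - index: 1
      type: "string"              # target mount point
    - index: 2
      type: "string"              # filesystem type
```

With no selectors, every mount syscall generates an event, each carrying the arguments captured above plus the full process context described next.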
On the left, you can see a small subset of the full context that we can provide: obviously the process itself, with the binary, the current working directory, the UID, the PID, the start time, the pod information — the pod name, the namespace, the pod labels — but then also all the way into the container image, so the container ID, the SHA of the image, the Docker ID, as well as the entire process ancestry. This is just a very small subset of the full context that we can provide; every event carries a massive amount of context along with it.

Now it gets even more exciting, because we can combine the two together: we can combine this system call observability with network visibility. This is an example of the UI version of this, which shows, in this case, a cluster — a minikube cluster — a namespace, tenant-jobs, and a pod, crawler. And you can see the entire process ancestry tree, not only of the container itself, but also of the Kubernetes control plane, including the kubelet. You can see the process executions and which process makes, attempts, or has established which network connections; the arrows or lines are the network connections. So we can see that there is, in this case, a Node.js app invoking server.js that is reaching out to an external IP, to Elasticsearch, and to api.twitter.com. And we also see that there is a reverse shell that has been invoked via netcat — this is the line at the bottom — which is reaching out to another domain. And we can see which individual process made this request. From a networking perspective alone it would be very hard to spot this reverse shell, and from a system-call-only perspective it would be very hard as well. In this case it may be easy because the attacker made the reverse shell obvious in the naming, to make the demo easier; in reality, it would be very hard to spot this without this combined visibility. This is showing a Kubernetes-specific example, but this functionality is not Kubernetes-specific in any way; this works for any process running on a Linux machine.

In fact, detecting late process execution is actually a very common use case: you will have containers or workloads that run a single binary, and you want to identify containers or workloads that have a process or binary executed some time after the container was started. This can often reveal a compromised pod or container, because this is not what the application was supposed to do. Let's say you have a single statically linked binary running as the application. You can easily rule out that this container will ever start a process or binary, say, ten seconds or one minute after the container has been started. So you can easily say: let me know which containers have had processes or binaries started one minute or thirty seconds after the container itself was started. This very likely reveals a compromised container or pod, or some other malicious intent.

Monitoring file access: moving over to the storage side, this is showing a Splunk integration that shows which pod, which container, which workload is accessing certain files. In this case, we are monitoring a couple of files, such as /etc/passwd, the shell history, and the shadow file, and we can see which pod, but also which process, is accessing which file, and what file operation they are performing. That's just the monitoring side of things.
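As a hedged sketch of what such a file monitoring policy can look like — modeled on the file monitoring examples in the Tetragon repository, with an illustrative file list, and with hook choice and field details varying by version — note that there are no actions here, so this policy only generates events for the export pipeline or SIEM:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: monitor-sensitive-files       # illustrative name
spec:
  kprobes:
  - call: "security_file_permission"  # LSM-level function checked on read/write access
    syscall: false
    args:
    - index: 0
      type: "file"                    # resolved to the accessed path
    - index: 1
      type: "int"                     # access mask (read or write)
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/etc/passwd"
        - "/etc/shadow"
        - "/root/.bash_history"       # illustrative path
      # no matchActions: observe-only, every matching access becomes an event
```

The enforcement variant of this same policy appears a bit later, when we get to preventing writes to sensitive files.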
Then we can go further and look at, for example, network policy compliance, and look at which connections have been subject to which policies. We can look at all the allowed connections and identify which policy was used to allow that traffic, and even more importantly, we can identify what was allowed without any policy at all. So we can clearly validate and audit whether we are achieving what we intended from a policy perspective.

You can observe HTTP and gRPC, and this is showing an example where we detect cross-site scripting attempts in the URI, essentially querying Splunk with a particular search query that will show HTTP flows with the string "script" in the URI. In this case, it surfaced a simple cross-site scripting attempt.

Now, switching gears a bit, let's go into the enforcement side. We've seen the full breadth of observability that we can provide, from network to file to system call. We can do enforcement on the vast majority of this observability. But before we go into concrete examples, a couple of high-level points on how this enforcement works.

First of all, preventive security is the cornerstone, the most important aspect of Tetragon: essentially preventing malicious actions or malicious attempts before they can do damage to the system or to the application. This includes the system, but also the network, the file system, as well as application behavior. It is synchronous — we'll get to that — so it is essentially doing this in-kernel.

In terms of policy, we have a couple of integrations. You can define policies with Kubernetes CRDs; there is a JSON API or JSON configuration method as well; and Open Policy Agent can be used. And we're looking to support converting from existing rule sets, such as Falco rule sets or security policies. So if there are other forms of intent where you have already defined what your application should or should not be able to do, we will look at supporting them.

In terms of preventive actions from user space, this is what we are trying to avoid, or rather, this is what Tetragon is not vulnerable to. Systems that rely on observability with a user space rule engine are essentially vulnerable to the following: the pod, application, or process is compromised or has malicious intent, and performs either an exploit or a malicious attempt in the kernel, changing behavior or attempting something malicious. The observability piece in the kernel — let's say it's kprobe-based or seccomp-based — will export this visibility with an asynchronous notification to the user space agent running there, where you have a rule engine. This rule engine will consume the observability and detect that, oh, this observability indicates that something bad is going on, and will then kill the container or kill the process. This happens asynchronously, essentially after the malicious attempt has already been performed. While this is strictly better than doing nothing, it can often already be too late. In terms of preventive action, what Tetragon does instead is run this filtering and this rule engine in the kernel. So, I think one of the questions was: how does this compare to Falco?
This is one of the big differences: instead of using eBPF primarily for visibility extraction, Tetragon does the filtering and the rule engine part in-kernel, which means that as it processes the observability data in the kernel, it can immediately kill the process and even prevent the activity itself. So let's say we have a system call that should not be allowed: we will not allow that system call to be executed at all. We will not merely report that the system call happened and then kill the process in hindsight.

Looking at a couple of examples here: this is an example of how we can prevent access to a sensitive file, in this case /etc/shadow. The policy shown is not quite matching the example, no worries: the policy is showing how to protect the authorized_keys file for SSH, while the example is showing this for /etc/shadow. It is a very similar use case: we want to prevent write access to a particular file, and Tetragon will immediately kill the process that attempts to write to that file. It will still allow, say, opening the file or reading from it, and so on. So this is an example where we want to allow reading from a file, but immediately prevent any process, or a particular process or particular pods, from writing to a particular set of files.

We can also do things like detecting remounting of the root file system; this is an example of how that can be done using the pivot_root system call.

Let me check the question in the chat: what are the available actions other than SIGKILL, if any? So, obviously there is an action that just provides visibility itself. There is an action to SIGKILL. And at some of the hook points, you can essentially prevent the operation by changing the return code. For example, for a system call, you can have the action say: don't execute the system call, and return with an error instead. So essentially, when you're operating at a hook point where you can change the verdict, then obviously you want to prevent further processing and just return. If you are detecting something at a point where the verdict cannot be changed, then SIGKILL is the best next step. And another option is just to emit an event to user space that a particular behavior has been spotted.

Monitoring and preventing capabilities abuse: this example shows, with monitoring of capabilities enabled, a process execution in a pod, test-pod, that is using nsenter to essentially change into the mount, PID, network, UTS, and IPC namespaces. It can do so because it has CAP_SYS_ADMIN privileges or capabilities, so it is succeeding: we see that the nsenter command is executed, and we can also see that it performs setns calls to change or adjust its namespacing context. We then see a bash being invoked, and we see that it can open and close /etc/passwd. But then, when it tries to write to the passwd file, we still kill the process. This makes the important point that Tetragon is not automatically subject to the assumption that anybody with CAP_SYS_ADMIN can automatically access any file; Tetragon is independent from that perspective. So this shows both the prevention of file access again, but also the ability to monitor capability changes and the capability context of any system or runtime behavior observed.
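As a hedged sketch of what such an enforcement policy can look like, the observe-only sketch from earlier gains a matchActions block. This is again modeled on the file monitoring examples in the Tetragon repository, with version-dependent field names; the access-mask value and the Override variant in the comments are my assumptions, not a verbatim upstream policy:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-etc-shadow-writes     # illustrative name
spec:
  kprobes:
  - call: "security_file_permission"
    syscall: false
    args:
    - index: 0
      type: "file"
    - index: 1
      type: "int"                   # access mask: distinguishes read vs. write
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/etc/shadow"
      - index: 1
        operator: "Equal"
        values:
        - "2"                       # assumed mask value for write access
      matchActions:
      - action: Sigkill             # kill the writer synchronously, before the write lands
      # Where the hook's verdict can be changed, an alternative is to fail the
      # operation instead of killing the process, along the lines of:
      #   - action: Override
      #     argError: -1            # return an error (EPERM-style) to the caller
```

Because the selector and action run in the eBPF program, the kill happens synchronously at the hook point; reads of the same file still match nothing and proceed untouched.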
I think John already answered this question, so we'll move on and actually start summarizing a little bit. We've seen a variety of things at this point. We've seen Tetragon provide observability across the stack: we've seen file access, we've seen data access, we've seen a variety of network behavior, both from a connectivity perspective and protocol parsing — HTTP, DNS, TLS. We have seen capabilities tracing — is it CAP_SYS_ADMIN, is it CAP_NET_ADMIN, is it CAP_BPF — so seeing what the capabilities of a particular system call, process execution, or other kernel activity are, as well as privilege escalation, so being able to understand the privileges that a particular system call is subject to or equipped with. We've seen the TCP/IP visibility, with the round-trip time visibility in one of the initial slides, as well as the raw system call visibility: what are the system calls being made. We've seen process execution, including the process ancestry, so understanding not only what my process is, but who spawned me, and who spawned the process that spawned that process, and so on.

We've seen examples of Prometheus metrics, we've seen examples of Grafana dashboards, we've seen the Splunk integration in particular. We've seen the JSON output that can be fed with Fluentd into any system you want, for example into an Elasticsearch cluster. The traces and the metrics can also be exported using OpenTelemetry if there is a desire. All of Cilium and all of Tetragon is available in the following GitHub repo, and we'll talk in a couple of minutes about what has been released as open source and what is available under Isovalent Tetragon Enterprise.

Before we do that, let's jump back and answer this question: are there any plans for maintaining rule sets, or is this already part of Tetragon? So yes, let's go through this slide, because it mentions this. There's essentially Tetragon as available in the Cilium Tetragon repository. What is part of the open source repository is the following: from a visibility perspective, the process and system call visibility that we've seen, all the layer 3/layer 4 network visibility, and file access monitoring, as well as basic capabilities and namespacing visibility. On the enforcement side, we can do the system-call-based enforcement based on kprobes and tracepoints.

In addition, Isovalent offers a Tetragon Enterprise distribution. First of all, it's a hardened enterprise distribution of Tetragon, so it has, for example, extended end-of-life support, and we of course offer enterprise support for Tetragon as well. But then, in addition to that, it has advanced capabilities, including extended network visibility — this for example includes the round-trip time or latency measurement on the TCP side, the HTTP and HTTPS visibility with kTLS, as well as all of the TLS visibility that we've seen. It features the SIEM integration directly into Splunk, the process ancestry tree views, so understanding the full context of who spawned whom, as well as high-performance protocol parsers and extended aggregation and filtering logic. On the file access side, while the open source version features file access visibility, the enterprise version can also do file integrity monitoring with digests such as SHA-256.
On the runtime or enforcement side, the enterprise version features extended runtime enforcement capabilities that are more automated. While the system-call-based enforcement in the open source version, CRD-based or JSON-based, can enforce rules as written, the advanced enterprise edition has additional automation around Kubernetes, and it has a baseline policy set which can do threat detection for known threats, as well as simplify the installation of enforcement rules.

I already covered a couple of questions, but I see more questions coming in. Can you run Tetragon in a cluster mesh deployment, and would applying Tetragon policy follow the same principle as CNP, where you need to apply it to each cluster manually, versus a single touch point? Yes, you can run Tetragon in a cluster mesh deployment. Tetragon can be deployed independently of Cilium; it does not require Cilium to run. If Cilium is running, Tetragon will extract additional visibility from Cilium itself, so it will benefit from a Cilium installation, but Cilium is not required to be there. The policies work exactly the same as Cilium network policies in a cluster mesh context, so you will have to install or load them into the individual clusters. In the enterprise version, we have tooling to automatically apply policies across multiple clusters.

And I think, John, you want to answer a question here? Yeah, I'll just answer it here. There was a question about whether we use tracepoints versus kprobes, because our examples are kprobes. So, the Tetragon base also knows how to do tracepoints; if you want, you can use tracepoints. The limitation of tracepoints is that they need to be at the syscall level or in specific spots already defined in the kernel. So we use them in some places, and Tetragon will try to smartly use them in the right places and use kprobes where it can. Our policies might say kprobes, but under the covers the Tetragon agent is trying to find the best mechanism that your kernel can support to do the filtering. So on newer kernels you'll even get some of the fancier hook points that are more efficient. So I think that should cover that question.

Great. Before we go to the Q&A: if you want to learn more, Tetragon is covered in the Security Observability with eBPF booklet, a report we have done with O'Reilly. There's actually a bigger book coming on eBPF, but for now you can freely download Security Observability with eBPF, which gives an introduction to Tetragon and some of the background on why we created Tetragon. Tetragon is also featured in the enterprise hands-on labs, which give you a way to try out Tetragon and, with Instruqt, actually get your hands dirty with Tetragon without having to install it yourself. It essentially sets up a sandbox environment for you, so you can try out Tetragon and play around with it. As well, you can join the virtual summer school that starts July 19. It's an entire day focused on Tetragon, service mesh, and a variety of other topics. The link is in the slides, and we'll make it available to all attendees afterwards as well; if you're interested, you can sign up.

And I see a couple of questions are already coming in. If anybody has questions, feel free to ask them in the chat and we'll be happy to answer. I see Cornelia is also posting all the links, that's great. I see a comment from Matthias: a pack of easily installable security rules is missing, like in Falco.
In fact, we don't necessarily want to recreate everything. So, as mentioned, we are currently implementing a Falco rule set translator, so you can actually bring your Falco rule set — whether this is the existing rule set that is in the Falco repository or your own — and enforce it with Tetragon. The benefit there is that you will get the real-time, in-kernel enforcement behavior of Tetragon instead of going to user space.

Let me check if there are other questions. If you have more questions, feel free to post them in the chat. Also check the Q&A section if somebody asked anything there. Also, John, if you want to add anything to any of the points, feel free to do so as well.

Yeah, I think, just to extend the low-level details of the tracepoint versus kprobe question: I'd say that some folks here at Isovalent are working on further reducing the overhead of some of these hooks. But for Tetragon, most of the Tetragon hooks are out of the hot path, which is sort of the advantage of doing these networking hooks versus inline methods — think of sFlow or NetFlow, where you grab every packet and then try to analyze the data. Tetragon works at the socket layer in the kernel, so a lot of these things are not hot path items, and the overhead is minimized: it's not a per-packet cost, it's a per-connection cost.

Yeah, I also see another question that came in that's not strictly Tetragon-related, but we can of course answer that as well: someone is interested in a reaction to the blog post from Buoyant on sidecar proxies in service meshes. For those without the context: we released service mesh functionality as part of Cilium at a beta level last December, and we are marking the Cilium service mesh GA with 1.12, coming out in about two to three weeks. The big difference of the Cilium service mesh is that, in addition to the existing Istio integration, it offers a sidecar-free version of a service mesh, which allows some of the service mesh functionality to be done entirely in eBPF with a per-node proxy, without a sidecar, or allows the proxy to run at a different granularity, for example per namespace or per service account. And there is debate going on about whether this is the right model, or what the better model is. This blog post pointed out several questions or aspects. I think there is a lot of good content in that blog post, some of which I don't necessarily agree with. From a multi-tenancy perspective, Cilium has been running in a per-node proxy configuration for years, very successfully, in very large deployments. I think the claim that this is a lot of hard work and impossible is a bit weird, because we have been running in that configuration successfully for years. And I think there's another angle which is very, very interesting: the claim that a per-node proxy is dangerous because a single proxy shares multiple secrets. That is actually something we agree with. What we have found, what we believe is a better solution, is to extract the mTLS portion out of the data path proxy entirely and make it separate.
There is a blog post on this that has been released, and we can link to it, which essentially shows a model where the mutual authentication is done with a separate user space agent that could be per node, or could also be a very minimalistic sidecar per pod. And it means that the secret itself — the keys, or whatever way the authentication is performed — is not in the data path at all, which means that even if you run, for example, a WebAssembly extension, or you are running Envoy, which has complex HTTP processing, even if that proxy gets compromised, it does not compromise your secret. So we see that as a very ideal solution in terms of security from a mutual authentication perspective. So I think that's also not necessarily a valid argument against the sidecar-free model.

That said, we are not in a position where we're saying nobody should be running sidecar proxies at all. In fact, we did the Istio integration first and have been running that with users for years, so that's still kind of the first implementation we have done. And then, based on feedback from a lot of users who asked whether we could find a way to provide service mesh functionality without a sidecar, we implemented this additional way of running a service mesh. I hope this was a sufficient answer. I don't think there is necessarily a right or wrong; we are trying to operate as much as possible on user feedback and implement and provide what our users are asking for. We're not preventing or trying to stop anybody from running a sidecar if that's the model they would like to run.

Another question: is Prometheus exporting supported in the open source flavor? Yes, it is. The metrics and Prometheus export are supported. The enterprise version does have additional visibility, as laid out here, in terms of DNS, HTTP, HTTPS, TLS, the process ancestry, as well as some of the high-performance protocol parsers, and some of the network visibility is extended, but the metric export itself is all in open source.

Next question: I don't know if this is the best place to ask, but I would like to know where Tetragon writes its events to; I'm using export-filename myfile and then reading it with tail --follow; this works, but I'm sure there is a better way; I'm running Tetragon natively on Kubernetes. Maybe, John, can you answer that question briefly? Yeah, yeah. Many of our users will use Fluentd and then export this into their SIEM, whatever that happens to be. You can also use Fluentd just to aggregate the logs and dump them somewhere else, which is what I do a lot of times in development: if you have a lot of nodes, you want to see all the logs aggregated. So those are the common use cases in production. There's also a gRPC endpoint you can attach to. We use it mostly for testing at this point, but you're welcome to hook into it and stream the events out as well, if that's interesting.

Awesome. Another question: is there any sort of admin UI, maybe integrated with Hubble? There is integration with Hubble UI. We have not released this yet; we will release it soon, in a version where the visibility from a runtime perspective will be essentially visualized as part of the existing Hubble UI. All of these events can also be fed into Timescape, which is part of our enterprise offering.
That's a time series database where you can essentially collect all of this observability, store it persistently, and then query it, and again run Hubble UI on top of it. The time series database actually offers the plain Hubble API, so you can run the Hubble observe CLI, the API, the Hubble UI, and all the Hubble tooling on top of the Timescape time series database. Again, we will be looking into runtime policy management as well, from a centralized perspective. Right now we have automation with a variety of automation tools like CFEngine, Puppet, and so on, but we will be looking at providing something similar to what we have done with the network policy editor for the runtime side as well.

Now I see two questions in the Q&A section. What's the expected resource usage of Tetragon per node? It runs as a DaemonSet, right? Yes, it runs as a DaemonSet, so there's an agent running on each node. The overhead will very much depend on the tracing policies that you load and the aggregation that you configure. Going back to the earlier point: because of the flexibility of eBPF, we can do a lot of aggregation in the kernel. So depending on whether you want to see every single system call that is being made, or whether you want to see, for example, only namespace changes or only access to certain files, the overhead will differ. It can be anywhere from 1% to 25%, I would say. It really depends on how much you want to see and at what granularity: whether you want to aggregate and see certain sensitive events, or whether you want a full system call log.

What's the pricing and licensing model of Tetragon Enterprise? Tetragon Enterprise is part of Cilium Enterprise from Isovalent, and we embed it into that price. It's very similar to Cilium Enterprise: it's a per-node subscription at the base, with a scale discount as your infrastructure grows. Of course, as I mentioned, you can run Tetragon completely independently of Cilium, so you can also purchase Tetragon Enterprise separately. If you run both, we will of course give you a discount.

Let me see if there are any more questions. We have a couple of minutes left, so if you have more questions, feel free to post them. But I think we have covered all the questions that were posed. So let me maybe repeat the follow-ups here again: the eBPF report, the booklet, a great way to read more about Tetragon; the hands-on labs with Instruqt, a great way to get, within minutes, a sandbox environment with Tetragon installed where you can try out Tetragon, but also other aspects of Cilium Enterprise; and then the virtual summer school day on July 19, where we will host Tetragon as well as service mesh.

On top of that, there is a Tetragon Slack channel on the Cilium Slack. If you go to cilium.io, you will find a button for the Slack, and you can join the Slack server. There's a Tetragon channel with all the Tetragon developers on it. And most importantly, if you want to get involved beyond, or in addition to, using Tetragon, feel free to contribute. You can use the Tetragon open source repository, github.com/cilium/tetragon. We very much encourage contributions in all forms. It doesn't have to be code contributions; also let us know what features you would like to see. We already got some feedback today: rule sets.
We would love to have a discussion about what types of rule sets and what integrations you want us to implement — for example, automatically supporting Pod Security Policies, which are being deprecated, or something other than the Falco rule set, and so on. With that, I would like to thank everybody for attending this webinar. If you have more questions that you forgot to ask, feel free to ask on Slack, or feel free to reach out on Twitter to me or John; we're happy to answer there as well. Thanks a lot, thanks a lot everybody.