Hello, everybody, and welcome to Enhancing Security with Mutual Authentication in Cilium Service Mesh. A little bit about me: I'm the Field CTO at Isovalent, and I'm focused on helping enterprises find success with Cilium and modern-day security tooling. I've been at this for about 20 years, and I remember a lot of it. I'm a student of perspective, which means that the way I learn is by teaching others: I need to understand the way that you see things to better understand the way things are generally seen. I can be found everywhere as MauiLion, so you can find me on LinkedIn, the Kubernetes Slack, the CNCF Slack, X, whatever that is, all at MauiLion. So, our agenda today: we're going to jump in with a brief introduction to the importance of security, talk about what mutual authentication is, and cover a bit of service mesh architecture. Then I'll go into a demonstration where I show you one of the labs we've developed at Isovalent that takes you through this capability. Then we'll talk a little bit about how we leverage network policy to enforce things like mutual authentication, and then we'll open up to questions. At a high level, I wanted to introduce this idea of eBPF pretty early in the talk. As was mentioned earlier, if you have questions about these things, feel free to ask them in the Q&A, and I'll do my best to get to them at the end of the presentation. Alternatively, you can always join the Cilium Slack at slack.cilium.io, where you'll find a very good community of people answering questions about eBPF and Cilium every day. So definitely come join us there. eBPF, in short, is a way to make the Linux kernel programmable in a secure and efficient way. Let me break that down a little bit.
So it's secure because an eBPF program is statically analyzed before it is loaded into the Linux kernel, and it's efficient because it runs at native kernel speed. What eBPF allows us to do, if you think about the Linux kernel as an event-driven API, is this: when I open a file, open a new network socket, or make a connection from some application leveraging curl or what have you, all of these things represent API calls into the Linux kernel. eBPF allows us to see those calls, gather context about them, whether they were successful or not and what the metadata around that particular call was, and also gives us the ability to manipulate that call. In our case, for example, we have the ability to transparently proxy traffic coming from an application, based on the destination of that traffic, through another process, so that we can better secure or manipulate traffic from one application to another, say in a Kubernetes or container orchestration environment. So eBPF is incredibly powerful, and everything in the Linux kernel is available to eBPF as a surface that can be instrumented, whether that means a file open or a new network socket; we can apply eBPF to all of those components. In this example, I'm showing you the ability to hook things like a file descriptor or a block device, and in another example, opening sockets, a TCP/IP connection, a network device. All of these are areas where we can attach eBPF programs. So what is Cilium? Cilium is a networking, security, and observability product.
And it fundamentally changes the way we think about networking and observability: instead of thinking about networking as a component that sits below your application layer, we're thinking about networking, security, and observability at the application layer. Just like in the eBPF case I mentioned, when we implement Cilium, we're implementing it in eBPF. So when connections are moving back and forth between pods in a Kubernetes cluster, or when we're determining whether to allow or deny traffic, we make that determination at the Linux kernel layer, leveraging an eBPF program, instead of waiting for that traffic to go all the way down to the host layer and leveraging things like iptables to enforce it. We can do quite a lot in eBPF, natively in the Linux kernel, before the traffic ever has to be affected by lower-level components like iptables or the Linux routing table. And from the security perspective, it's the same thing: eBPF gives us the ability to make decisions based on network policy, whether to allow or deny traffic, even before that traffic egresses the container, because we're making that decision at the Linux kernel layer, not all the way down at iptables. And observability, which I mention last but which in my opinion is probably the most important piece: everywhere Cilium or an eBPF program touches a packet, we can export metadata and context about what we did and what we saw at that layer. So we can show you, for a given application, what it tried to connect to and what we allowed or denied.
And so, as part of the Cilium product and also the Tetragon product, we expose an event stream that shows you exactly what's happening in your infrastructure, both at the networking layer and at the application layer, at all times. It's pretty powerful stuff. To get into the service mesh piece of it, I wanted to talk a little bit about this map, because this is how we think about how all of this works. If you're evaluating service meshes today, you might consider a service mesh to be a collection of use cases. Those use cases might be ingress, the ability to authenticate between workloads, or the ability to manage traffic in some unique or interesting way, like canary-based or header-based routing for traffic coming into your cluster, so you can do things like A/B testing or blue-green testing, however you want to think about it. We believe these use cases are very important to solving the networking, security, and observability challenges in the market today. So instead of thinking of this as a separate product, we've started treating these use cases as features of the Cilium components themselves. Cilium Service Mesh is really just us taking the use cases we see in other service mesh products and applying them to our existing product set. An example of this is Layer 7 network policy: the ability to define that one application can access another application, but only on specific API paths and only with specific HTTP verbs. So I'm going to allow this application to communicate with another application; it can do a GET, it can do a PUT, but it can't do a DELETE, and it can only do those things on specific API paths.
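As a sketch, a Layer 7 rule like the one just described might look something like this in a CiliumNetworkPolicy. The application names, labels, port, and paths here are hypothetical, invented purely for illustration:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-put-only    # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: orders             # the protected application (assumed label)
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: storefront       # the calling application (assumed label)
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:                 # L7 rules: verbs/paths not listed (e.g. DELETE) are denied
        - method: "GET"
          path: "/v1/.*"
        - method: "PUT"
          path: "/v1/.*"
```

Anything not matched by an `http` rule, such as a DELETE, or a GET outside `/v1/`, is rejected at Layer 7 even though the L3/L4 connection is allowed.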
So in that case, that's something we've had in Cilium network policy for quite a long time, even before we started getting into the service mesh space; Layer 7 network policy has been implemented in eBPF for quite a while. And from the authentication perspective, we've also had identity since the inception of the Cilium project: we give every workload an identity that is derived from the union of labels associated with that workload. That identity is how we go about enforcing whether we allow or deny ingress or egress traffic in these environments. So Cilium is absolutely an identity-based solution. And when we talk about how we leverage this in mutual authentication, you're going to see that we couple that underlying construct of identity with SPIFFE to better enable mutual authentication between applications. Let's get into it a bit more. I'm going to move through this phase quickly, because I don't think it's critical to our understanding, but I wanted to lay it down. Thinking about service mesh and its origins: you have multiple applications, and you want the ability to do things like enforce traffic between them. Service mesh with sidecars is probably the most common model today: we have some application and some other application, we put a sidecar into the same pod, and we use that sidecar to handle things like mutual TLS between the applications. That way the applications themselves don't have to be aware of the encryption between them; we move all of that work to the sidecar proxy.
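As a toy illustration of that identity idea, you can think of an identity as a stable value shared by every workload with the same label set. This is only a conceptual sketch; Cilium itself allocates numeric security identities centrally through its operator and datastore, not with a local table like this:

```python
# Toy model: workloads with the same label set share one identity.
# Illustration only; Cilium allocates identities centrally, not locally.

def identity_key(labels: dict) -> tuple:
    """Canonical, order-independent representation of a label set."""
    return tuple(sorted(labels.items()))

class IdentityAllocator:
    def __init__(self):
        self._by_key = {}
        self._next_id = 1

    def identity_for(self, labels: dict) -> int:
        """Return the existing identity for this label set, or mint a new one."""
        key = identity_key(labels)
        if key not in self._by_key:
            self._by_key[key] = self._next_id
            self._next_id += 1
        return self._by_key[key]

alloc = IdentityAllocator()
a = alloc.identity_for({"org": "empire", "class": "deathstar"})
b = alloc.identity_for({"class": "deathstar", "org": "empire"})  # same labels
c = alloc.identity_for({"org": "alliance", "class": "xwing"})    # different labels
print(a == b, a == c)
```

Two pods with identical labels resolve to the same identity, which is exactly why policy can be expressed against labels rather than IP addresses.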
So the evolution we're pushing for takes things from the shared library model, where the service mesh library is compiled into your application to handle things like encryption between applications, with each application managing its own certificate; to the sidecar model represented by a lot of the tools in the market today; to the kernel model, where instead of creating a sidecar per pod, we understand at the kernel layer that we want to encrypt traffic between these applications, use eBPF to identify that traffic, and handle it at the kernel layer. We're going to talk a bit more about how all of that works as we proceed. So, Cilium Service Mesh: what's different about it? Because we're thinking of service mesh as a set of useful features of our existing product set, we've reduced the operational complexity: instead of a separate product that you have to install in addition to everything else you've deployed into your Kubernetes or container orchestration environment, we're simply extending the feature set of Cilium to include those capabilities. You just deploy Cilium, and as part of deploying Cilium, you're able to leverage the use cases we identified before. And because of that, there's reduced resource usage: you're not paying a per-pod sidecar cost. That's one of the hidden costs of a lot of the sidecar models: the compute and memory for all of those sidecars. If you're paying it once per pod, that can get pretty expensive.
We can ensure better performance because we're doing these things at the kernel layer. And last, we can avoid the sidecar startup and shutdown race conditions that we see in other environments. This slide highlights that efficiency point: if you had 30 pods, you'd have 30 sidecar proxies per node, and you're paying that CPU and memory cost. In most cloud environments you pay by the second, so every one of those proxies costs money. In a scenario where instead we understand that applications want to encrypt this traffic, or want service mesh use cases between them, and we implement that in eBPF, we can be a lot more efficient about how that's done. I'm now going to show you some latency numbers, and then I'll jump into the demonstration and show you in a more hands-on way how all of this works. If we look at these latency numbers for us versus Istio, one of the sidecar solutions, you can see that a sidecar proxy does incur some latency, which is expected. Because we manipulate that traffic transparently in the Linux kernel, we can be a lot more efficient in requests per second. Max throughput works out to be pretty much the same. Pod-ready performance takes quite a bit less time, because we're only waiting for the pod itself to start up; we're not adding another sidecar. Now, let's get into the features of Cilium Service Mesh. Actually, at this point I think I'm going to jump into the demonstration, but before I do, I want to talk about this slide and the one right after it. Mutual authentication: this is the topic we're here to talk about.
This is how we think about handling encryption and authentication between workloads. There are a couple of important things to understand here. The first is that for these two applications, we have identity A, which may represent multiple pods, and identity B, which also may represent multiple pods, and we are going to decouple identity from encryption. We already have an identity associated with every one of these workloads, and we already have the ability to handle encryption back and forth, transparently, leveraging IPsec or WireGuard. In most service meshes out there, these things are tightly coupled: the sidecar represents both the identity and the mechanism by which you encrypt traffic between applications. But in our model, this is decoupled. We give every application its own identity, and then we extend that identity into a certificate allocated to that workload through SPIFFE. When SPIFFE allocates a certificate to that identity, the certificate can be leveraged to handle mutual authentication between workloads. So when we require mutual authentication between these two workloads, what we're satisfying is the authentication challenge between them: this identity can prove itself to the other workload. And once that authentication has happened, we can move to the encryption part, ensuring the traffic is tamper-free by leveraging IPsec or WireGuard to encrypt it between the two workloads.
But fundamentally, we are going to decouple identity from encryption. We leverage identity and authentication to prove, on either side, that those applications are who they say they are, and we leverage encryption to ensure that the communication between them is tamper-free. This is an example of what the mutual authentication policy would look like. If you want to require authentication for connections to backends, this is a Cilium network policy that enables it. When you set authentication mode to required, you're saying that all workloads matching role=backend, when handling ingress traffic from workloads matching role=frontend, must authenticate that traffic before it is allowed. Now let's jump into the demonstration, where I can show you how all of this works. Let me jump over here. Control-three, there we go. Let's restart this lab. This lab is available for anybody to use: if you go to isovalent.com/labs, we have a number of labs available to help you learn more about all the different features and capabilities of Cilium, including our capabilities around Gateway API, which handles the more advanced API model for ingress. We also have labs that cover the Cilium ingress controller, Cilium egress, and TLS visibility. There's a ton of capability in here. Today what I'm going to be focusing on is... nope, where'd it go? Oh, sorry, it's further up. I'm going to be using this one here: Mutual Authentication with Cilium. And if you take this lab, you'll be able to earn a badge associated with it.
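A minimal sketch of the backend policy just described might look like the following. The policy name and label values are assumed for illustration; the `authentication.mode` field is the relevant knob in recent Cilium releases:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-requires-auth   # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      role: backend             # the workloads being protected
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend          # only frontends may connect...
    authentication:
      mode: "required"          # ...and only after mutual authentication succeeds
```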
So if you are interested in understanding more about mutual authentication, this is where you can go to learn how we think about the problem and see this capability in action. But for now, let's go ahead and get started on this lab. We've got about 22 seconds left on the boot. Any questions you have during this presentation, feel free to put them in the Q&A section, and I'll be sure to get to them at the end. Almost there; it takes a little while for things to start. This is leveraging the Instruqt platform, which is a pretty neat platform for this kind of stuff. So Cilium was installed in this environment during lab bootup and was deployed with the following Helm flags. What these flags do is deploy SPIRE inside the cluster, and they also tell Cilium that we want to install the mutual authentication components. To verify that mutual authentication was enabled, we can take a look at the Cilium config, and when we do, we can see that these components are enabled and deployed within our cluster. Let's skip that and keep going. So let's get the demo app deployed. Now, to take you through a story that shares why mutual authentication is so necessary, we've created a fun story using the Cilium Star Wars demonstration app to show you why you'd want to do this type of work. Let's take a look at the network policy we have deployed. In this network policy, we're creating an L3/L4 policy to restrict Death Star access to Empire ships only. The thing we want to protect is the Death Star: we protect it on ingress from endpoints, ensuring that we only allow ingress from endpoints matching the label org=empire, and only to port 80 via TCP.
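Based on that description, the L3/L4 rule is along these lines. This mirrors the well-known Cilium Star Wars demo policy, and the label names are assumed from that demo:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar      # the Death Star service being protected
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire         # only Empire ships may connect
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```

Note there is nothing here yet about authentication or about which HTTP paths are allowed; any Empire-labeled pod can reach any path on port 80.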
Let's take a look and see if the Death Star has been successfully deployed. Looks like it has. And let's see whether TIE fighters, Empire spaceships, are allowed to land on the Death Star. What we've done here is exec into the tiefighter pod and curl the Death Star at a specific path to request a landing. We see "Ship landed", which means we were successful. Now, if we try it from the X-wing, what do you think will happen? In this case, it did in fact time out; we have the connect timeout set to one second, so it timed out pretty quickly. At this point, we're just using an L3/L4 policy to enforce traffic between the different types of ship and the Death Star itself: the X-wing cannot connect to the Death Star. Meanwhile, the Empire security operators are aware that an HTTP call to a particular path might cause the Death Star to explode. Surely no officer would want to cause damage to the Empire. Is the Empire safe? Or is it? In our example, the TIE fighter is able to access the exhaust port and destroy the Death Star. With no means to verify the identity of the TIE fighter pilot, the Imperial fleet was compromised, and the rebels managed to blow up the Death Star. To the Emperor, that would be annoying. So we want to protect the Empire. Emperor Palpatine wants you to implement mTLS-based mutual authentication so that the next Death Star is safe. By using mTLS-based mutual authentication, the Empire can add strong identity authentication leveraging X.509 certificates, and it's time to enforce mutual authentication by updating the network policy. Based on our configuration, we can do that work pretty easily.
So rolling out mutual authentication with Cilium is as simple as adding the following to an existing or new Cilium network policy, given that Cilium was deployed with these features enabled. Let's take a look at that now, at the policy you've defined. We can see that we're enabling the ability to POST against request-landing, and we're also enabling authentication mode required. The differences between the two policies are that we're enabling authentication mode required, and we're being very specific about the path: we're creating a Layer 7 network policy that says you can POST against request-landing if you are authenticated and you're a ship coming from the Empire. So now let's go ahead and apply this policy. You've improved the security in two different ways in this model. One: before I allow anybody to talk to the Death Star, I want them to prove to me via mutual TLS that they are authorized to talk to the Death Star. But I don't necessarily want the Empire's ships to hold a certificate that they present to me directly; I'd rather handle that as an implementation detail. So when I say authentication mode required, I leverage the SPIFFE implementation on the cluster to issue a certificate for that workload, and then, transparently to the workload, leverage that certificate through Cilium and eBPF to perform an authentication challenge and response between the workloads before allowing the communication to proceed. Two: if things are authenticated correctly, we further secure this with a rule that says you are only able to POST against this specific API path. This is usually referred to as a Layer 7 network policy.
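Putting those two changes together, the updated policy looks roughly like this. It is modeled on the mutual authentication lab, with the path and labels assumed from the Star Wars demo:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    authentication:
      mode: "required"          # change 1: mutual authentication must succeed first
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:                   # change 2: L7 rule; only this verb and path
        - method: "POST"
          path: "/v1/request-landing"
```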
And those two things together greatly improve the security of the Empire. So let's take another look at the TIE fighter and see if we can still request a landing. Looks like we can. And let's see if we can access the exhaust port, which is how the Death Star got destroyed before. In this case, it says "Access denied", and that is because of the rule set: even though you're authenticated, we're still not allowing anything other than request-landing. But have we actually mutually authenticated? Can we prove that the authentication piece has happened? Let's take a look. Palpatine is not convinced that the Death Star is secured; he wants to see evidence of the mutual handshake. He needs observability. Hubble is the tool we use to show you everything that's happening inside your environment: all of that event data and context I talked about gathering at the eBPF layer gets pushed into an event stream, and Hubble is how we parse that event stream to understand what's actually happening. We extended that event stream to also describe where we are in the state of mutual authentication. Let's take a look at the Hubble version: we're on version 0.12. If we want to observe mutual authentication with Hubble, let's run that connectivity check again. If we do a hubble observe on that traffic, we can see a policy verdict of dropped: this traffic is dropped by the L3/L4 section of the network policy. Now let's look at the traffic from the TIE fighter to the Death Star. Network policy should have dropped the first flow from the TIE fighter to the Death Star's request-landing endpoint, because the first packet to match a mutual-authentication-based network policy kickstarts the mutual authentication handshake.
So here's where we see that dropped packet. Again, this is expected: the first packet from the TIE fighter to the Death Star is dropped, as this is how Cilium is notified to start the mutual authentication process. You should see similar behavior when looking for flows with the policy verdict filter. There's the deny, and after that, we see quite a bit more traffic flowing: this traffic is now authenticated, leveraging SPIRE. So this shows the progression: the first connection passed the L3/L4 checks, then the packet was denied because the peers weren't authenticated yet, and once the mutual authentication had taken place, we started allowing communication back and forth between the TIE fighter and the Death Star. Any subsequent events we expose will describe that this handshake has occurred. Now, what's interesting is that the authentication piece isn't just the standard mTLS authentication component, where the TIE fighter has a certificate associated with it via SPIFFE and Cilium leverages that certificate to authenticate to the Death Star identity as part of the handshake. As part of that process, there is also a step called attestation. Attestation means that I not only want to verify that this certificate is issued by a trusted CA; I also want to ensure that the workload I'm about to talk to is actually still present on that node. I want a third-party, extended assertion that the certificate presented to me as part of that authentication handshake is still valid, still useful, and should still be trusted in these environments.
Some of the things we do to ensure that continues to be true: before allocating a certificate to a workload, we verify that the workload was actually scheduled on that node, a third-party attestation from that perspective. And when that workload is descheduled from the node, we remove that certificate from use, which means it is no longer valid and can no longer be used to authenticate. So this isn't just a simple certificate check; there are layers of attestation throughout the whole process, which is some of the work behind the SPIFFE and SPIRE projects themselves. Let's go on to our next challenge. I'm going to skip the quiz; if you'd like to take it, that's how you earn the badge. So, the Emperor's paranoia is getting out of control. While you impressed him with your knowledge of cloud native security, he wants to know exactly where the identities of the officers are stored and whether certificates are automatically issued and rotated. It's time for you to explain to the Emperor how Cilium integrates with SPIFFE. Identity management addresses the challenges of identity verification in dynamic and heterogeneous environments: we need a framework for secure identity verification in distributed systems. This is what I was referring to in that last segment. The benefits of SPIFFE start with trustworthy identity issuance: SPIFFE provides a standardized mechanism for issuing and managing identities, and it ensures that each service in a distributed system receives a unique and verifiable identity, even in dynamic environments where services may scale up or down frequently. And then there's identity attestation: SPIFFE allows services to prove their identities through attestation.
It ensures that services can demonstrate their authenticity and integrity by providing verifiable evidence about their identities, such as digital signatures or cryptographic proofs, or even the ability to prove their identity through a set of circumstances that converge on that unique workload, like asking the API server: was this workload scheduled on this node at this time? Combining these things together, we get a very secure mutual authentication mechanism with SPIFFE. SPIFFE provides an API model that allows workloads to request an identity from a central server. In our case, a workload means the same thing that a Cilium security identity does: a set of pods described by a label set. A SPIFFE identity is a special class of URI that looks something like this. The SPIFFE/SPIRE workload workflow is basically this: when a workload wants to get its identity, usually at startup, it connects to the local SPIRE agent on the node using the SPIFFE workload API and describes itself to that agent. The SPIRE agent checks that the workload really is who it says it is, and that it is scheduled and running as claimed, then connects to the SPIRE server and attests that the workload requesting an identity is valid. The SPIRE agent checks a number of things: is the pod actually running on the node the request is coming from, do the labels match, and so on. Once the SPIRE agent has requested an identity from the SPIRE server, it passes it to the workload in the SVID (SPIFFE Verifiable Identity Document) format; this document includes a TLS key pair in the X.509 version. In the usual flow for SPIRE, the workload requests its own identity from the SPIRE server, but in Cilium's support, this is an implementation detail: the Cilium agents get a common SPIFFE identity and can themselves ask for identities on behalf of other workloads. Let's learn more.
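As a small sketch of the ID format: a SPIFFE ID is a URI of the form `spiffe://<trust-domain>/<path>`, and the Cilium-style path shown in this lab embeds the numeric security identity. The trust domain and path layout below are assumptions for illustration, not a definitive Cilium contract:

```python
from urllib.parse import urlparse

# Minimal SPIFFE ID parser: spiffe://<trust-domain>/<workload path>.
# The "spiffe.cilium" trust domain and "/identity/<n>" layout are
# assumed for illustration.

def parse_spiffe_id(spiffe_id: str) -> tuple:
    """Split a SPIFFE ID URI into (trust_domain, workload_path)."""
    u = urlparse(spiffe_id)
    if u.scheme != "spiffe":
        raise ValueError("not a SPIFFE ID: " + spiffe_id)
    if not u.netloc:
        raise ValueError("missing trust domain: " + spiffe_id)
    return u.netloc, u.path

trust_domain, path = parse_spiffe_id("spiffe://spiffe.cilium/identity/20318")
print(trust_domain)  # spiffe.cilium
print(path)          # /identity/20318
```

The trust domain names the issuing authority, and the path identifies the workload within it; an SVID is only valid if signed by an authority in that trust domain.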
So the SPIRE server was automatically deployed when we installed Cilium with the mutual authentication feature enabled, and the SPIRE environment will manage TLS certificates for workloads managed by Cilium. We can take a look at the SPIRE deployment here: the SPIRE server StatefulSet and the SPIRE agent DaemonSet should both be ready. Let's run a health check and see if things are working correctly. Looks like things are healthy. And let's verify the list of SPIRE agents: we can see that we have a couple of different agents; note that there are two agents, one per node. If I do kubectl get nodes, we can see that there are a couple of different worker nodes here. Notice as well that the SPIRE server uses Kubernetes Projected Service Account Tokens (PSATs) to verify the identity of a SPIRE agent running inside the Kubernetes cluster. Now that we know the SPIRE service is healthy, let's verify that the Cilium SPIRE integration has been successful. First, verify the Cilium agent and operator identities on the SPIRE server: we can see the agent, and we can see the operator. Next, let's verify the SPIFFE identity for the Death Star. The identity for the Death Star in Cilium terms is identity 20318; that number uniquely identifies the workload, based on the Cilium identity, and it follows the SPIFFE Cilium identity ID format. To verify that the Death Star pods have a registered SPIFFE identity on the SPIRE server, we look at that same path we saw before: show me the SPIRE server entry for the SPIFFE ID at that URI, the Cilium trust domain plus the identity ID 20318. And we can see that the Cilium operator is listed as the parent ID.
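For reference, enabling all of this at install time looks roughly like the following Helm values for the Cilium chart. These key names follow recent Cilium chart versions; treat the exact names as an assumption to verify against the chart version you deploy:

```yaml
# values.yaml fragment for the cilium Helm chart (assumed key names)
authentication:
  mutual:
    spire:
      enabled: true        # turn on SPIFFE/SPIRE-backed mutual authentication
      install:
        enabled: true      # also deploy the SPIRE server/agents in-cluster
```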
That is because the Cilium operator is responsible for creating SPIRE entries for each Cilium identity that mutual authentication is enforced for. And you can list all the registration entries with this command. There are as many entries as there are identities, and you can verify this list with the kubectl get ciliumidentities command. An SVID is a document with which a workload proves its identity to a resource or caller. An SVID is considered valid if it is signed by an authority within the SPIFFE ID's trust domain. An SVID contains a single SPIFFE ID, which represents the identity of the service presenting it. It encodes the SPIFFE ID in a cryptographically verifiable document, an X.509 certificate. One of the reasons for choosing SPIRE in this particular implementation of Cilium mutual authentication is that it will automatically rekey the SVIDs before their certificates expire, and when this happens, it will notify SVID watchers, which include the Cilium agents. All right, so 10 minutes remaining, and I think I've gotten through most of the content that I wanted to share with you today, describing both why mutual authentication is so necessary and also how, leveraging Cilium, we can enforce it. Let's skip this quiz. Again, if you want to get the badge, you can go through and get that through this mechanism. So as a mutual authentication recap, if you want to go through this, this lab will actually take you back through it and teach you the same things that we showed you before, and you're welcome to continue in this lab to better understand the content that I've shown you already at this point. We have a ton more capability in the Cilium service mesh component, so I'm not going to dig into too much more detail here. We haven't talked too much about observability, but I did show you that Hubble can actually show you when things are authenticated and not authenticated.
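To check that one-entry-per-identity claim yourself, you can list all the SPIRE registration entries and compare them against the Cilium identities in the cluster. As before, this is a sketch assuming a default Cilium-managed SPIRE deployment (`cilium-spire` namespace, `spire-server-0` pod), and it requires a live cluster.

```shell
# List every SPIRE registration entry the Cilium operator has created
kubectl exec -n cilium-spire spire-server-0 -c spire-server -- \
  /opt/spire/bin/spire-server entry show

# Compare against the Cilium identities known to the cluster:
# there should be one SPIRE entry per identity that mutual
# authentication is enforced for
kubectl get ciliumidentities
```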
Hubble can show you a lot more than that, though, right? Hubble can show you things like how many flows were processed, what the L7 flow distribution was, what protocols are in use, and whether workloads are actually being enforced or have network policies that apply to them or not. There's a ton of capability and a ton of functionality that we can expose through Hubble to better understand how your applications are operating in these environments. We can also do things like connectivity troubleshooting, the ability to understand what's actually happening. So a lot of times folks see this stuff and they're completely amazed with how much we can actually expose from the data that we're showing at the underlying network layer. You can always learn more from us by visiting any of these resources. These resources I'm going to leave up as we wrap up the session and move into questions. I haven't seen any questions pop up, but if you do have questions, please let us know. My friend Erica also points out that later this week, on Friday, we're going to start a new session called Elevating Kubernetes Observability with Cilium and Hubble, getting a lot more into the visibility piece of it, or the observability piece of it, and showing you exactly how that works. If you'd like to sign up for that, you can go to the link that's in the chat and come join me. I'll be one of your hosts. We'll also have my good friend Christopher McLean, Luciano, who will also be hosting us. There are going to be three different sessions with a lot of really great content, so please come join us for those things. And lastly, of course, if you're going to be at KubeCon and you see a face that looks like mine, come say hello. I'd love to see you there.
The resources, once again: if you have any questions, please put them in the Q&A and I will get them answered for you. With that, if there are no questions, or if you want to reach out to me personally, you can find me in the Cilium Slack, you can find me on X, you can find me in all kinds of places. I'm at MauiLion, and I'm always happy to engage with folks. So if you have questions, or if you want to hear more or learn more about any of this stuff, please reach out and let me know. All right, thank you, everyone. Have a great week and I'll talk to you next time. Thank you so much, Duffie, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.