Awesome. Thanks for joining, everyone watching live and everyone catching this later as a recording. We'll be talking today about service networking and the security aspects of keeping that communication secure. We'll look at a few different technologies, like WireGuard and TLS, and how they come together to provide security in a way that's more amenable to our cloud native architectures and workloads. My name is Christian Posta. I'm a global field CTO at solo.io. I've been involved in open source for quite a long time. I've been working on Kubernetes since before it was 1.0, and the same with the service mesh ecosystem; I've been very involved since around January 2017. I've written a few books on this topic. Coincidentally, I just got an email from the publisher last night saying that all of Manning's books are on sale today, so maybe go take a look if you're interested. I've worked very closely with a lot of solo customers and open source users looking to modernize their infrastructure. They're typically moving to some public cloud, adopting containers and technology like Kubernetes, and the models and paradigms we used for managing deployments and observability have shifted and been modernized. Now they're getting to the points around security and networking, and that's where we're going to focus our efforts here. For the agenda, we'll talk a little bit about the need for some of these more modern security and networking changes, look at some case studies, look at some technology, and, if I manage the time correctly, look at some demos as well. I work at solo, and the background is that we work on these application networking problems.
We work with enterprises adopting modernization efforts at scale, and we've picked what we believe are best-of-breed open source projects that we then use to solve some of these problems. You'll see some of those open source projects mentioned in the presentation today. We're going to start the conversation with what appears to be a fairly simple diagram, but there's a lot going on under the covers when it comes to networking and security. In this case, we're talking about service A talking to service B. But think about how service A finds service B in the first place, especially in a more dynamic cloud world. How can we observe what is happening? Will we know when calls fail, when services are out of SLA, and so on? More importantly, who is service A? How can we prove who service A is and who service B is? In the past, we would think about who service A is based on where it's deployed. We would write rules and policies about it based on where it's deployed. It would communicate using things like TCP/IP. That's still fundamentally in place when we talk about cloud native networking: even when you're talking about things like REST APIs and gRPC, IP is still a foundational unit. In previous generations of technology, we would use IP and network segmentation, and we would write complex firewall rules based on those units of identity: where things live, IP addresses. We would use firewalls and gateways and routers to implement all of this. But as we move to public cloud, as we move to things like containers and Kubernetes, this world becomes a lot more ephemeral. Workloads can spin up; workloads can become unhealthy, get killed, and move to different hosts; everything's a lot more dynamic. And when you're going from on-premises to public cloud, you don't own that network anymore.
That's AWS's or GCP's network, and you have to live within it and write rules about how your model fits, and sometimes those models don't fit the same way they do on-premises. Part of the reality is that the bad guys are out there. The attackers are out there. They're finding all kinds of ways to get into what you might traditionally have thought of as your network or your corporate network, using all kinds of schemes and attack vectors, a number of which can be used together to breach a particular network. So this idea of thinking about security in terms of "if we just make a boundary and keep the bad actors out, then we'll be safe" doesn't hold, because that simply isn't the case. There are a number of examples of where that isn't the case. I don't mean to pick on this particular one, but here's one that came to mind from back in 2014: the Sony hack, where very sensitive emails were made public, even pre-production films were made public, and the attackers had very deep and widespread access to Sony's networks. This didn't happen because Sony just left an API gateway unsecured or something. A number of things led to it, including phishing and malware. But one big part was that once the attackers got inside a particular boundary, they were able to move around. They were able to learn more, inspect, and move laterally, and that enabled even more of the attack to unfold. So when we start thinking about modern service networking and securing it, there are a few tenets, some context, that we have to have in place that aligns a little more with reality: we should assume there are hostile actors in these networks, whether internal or external, and that even though you believe your network sits within your own boundaries and perimeter, it is susceptible to breach, data exfiltration, service hacks, and so on.
So the way we should be thinking about standing up APIs, microservices, databases, caches, and so on is to authenticate, authorize, and encrypt as much of this traffic as possible, even though it might be within your quote-unquote corporate network. One of the things we need in order to write policies about this is knowing who is whom. We want to be able to constrain which services can call which other services, and under what circumstances. We want to be able to audit and understand what calls are happening in our system. We want to be able to deny calls or apply limits and constraints while keeping the interactions between these services confidential; we don't want bad actors that might have made it inside our boundaries to be able to see what's happening and learn patterns about our network. And lastly, even though a service might have access to another service, we want to limit exactly what it has access to with fine-grained authorization policies. Now, this is not some pie-in-the-sky theoretical aspiration. A number of organizations have implemented a networking and security posture that lines up with these desires, and we're going to look at an example from Google. Google published a paper on their Application Layer Transport Security (ALTS), which they built to address these circumstances and this type of infrastructure at scale. They started developing it back in 2007, around when TLS 1.0 and 1.1 were out. At that point, Google had seen enough attacks on SSL and TLS that they determined the protocol was a little too complex, with too many options, too many moving pieces, and too many attacks against it.
So they decided to go off and rethink how they could build a security layer for their RPC networks that eliminates some of those weak points. What they did, and you can go see this in the paper, is pare the protocol down to what they considered the more secure algorithms. They created an identity model that is layered on top of the workloads and not tied to where a workload is deployed. And they tailored it very specifically to Google-isms: Google uses protocol buffers heavily, so they used protocol buffers to encode the certificates and the on-the-wire protocol, and they improved security, like I said, by limiting themselves to ciphers and protocol schemes that provide forward secrecy, authenticated encryption, safe exchange of session keys, and so on. The design goal was for this to be transparent to their applications, with authentication and authorization rules tied to an identity model that could be interpreted regardless of where a workload ran or what the server or application name was. So they built this identity model, and the way it worked is that the workloads, the RPC clients, when they spun up, would go ask for what they called handshake certificates, which allow the ALTS handshake to occur in a way that can be verified and authenticated. They had the notion of a signing key, and the signing key would sign the handshake certificates. Then, when the RPC clients tried to connect to each other, they would exchange and, like I said, verify these certificates based on the root of trust. The protocol they designed and built for this was based on protocol buffers, but it's fairly straightforward. The client sends its initial parameters and requests to start a connection.
The server then uses some of those parameters along with its own, creates the session keys, and presents its certificate. At that point, the client can start talking in an encrypted way. So you'll see this is a very simple one-round-trip handshake that enables the subsequent authenticated and encrypted communication. So if Google built this, and Google built a lot of technology that predates things like containers and Kubernetes, should we be looking at rebuilding something like ALTS ourselves? The answer is no. We haven't really seen adoption of something like ALTS outside of Google, because there are existing building blocks that can be used to solve the same problems. More specifically, what we're trying to do is, number one, get security, authentication, integrity checking, and so on into our network communications without having to re-engineer and force all the applications to change; we want to do it transparently. We want to focus on simplicity: reduce the number of different ciphers and protocols and the complexity of a handshake that determines what to use. Let's keep that as simple and as focused on safe ciphers as possible. And we want identity to be the focal point, not where things run. So we need some way to assign identity, and there are a number of ways of looking at this. The first one we're going to start off with is WireGuard. You may have heard of WireGuard; it's a big piece of some popular open source projects and even companies built around it, like Tailscale, and you'll see it mentioned in Cilium. What it is, is a way to encrypt packets, datagrams, at layer three to provide confidentiality between services, in a way that's transparent to the applications.
If you look at the Linux implementation, WireGuard lives as a kernel module. It's focused on simplicity versus alternatives like IPsec, which can be very complex, even if you just compare the number of lines of code in the respective projects. It uses a fixed set of ciphers and cryptography; whereas with something like TLS you can negotiate a bunch of options, WireGuard's ciphers are fixed. It doesn't try to get too creative with how it encapsulates the encrypted traffic; it just puts it into UDP packets. That allows the WireGuard components to focus on doing what they do really well, while the configuration pieces, like how you exchange public keys, are expected to be done out of band. It doesn't try to conquer the entire world like some of the other implementations do. If you want to look at an example between two hosts, WireGuard can be set up and configured using tools similar to those you'd use for other network interfaces, things like the `ip` CLI or `ifconfig`. All you need to do is give a WireGuard instance the peers' public keys and IPs, which are assigned in configuration, and WireGuard does the rest. It will automatically encrypt the traffic going over the tunnel, using strong, well-known curves, ciphers, and hashing algorithms that are known to be safe and performant. The idea with WireGuard, however, is that if a vulnerability is found, you don't keep supporting the broken primitive; you upgrade everything and move to something with no known vulnerabilities. From a simplicity standpoint and a security standpoint, that tends to be a good thing. However, there are some downsides: being able to upgrade everything all at once can be difficult.
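To make that two-host example concrete, here's a minimal sketch of what the setup looks like on Linux. The interface name, addresses, port, and peer details below are placeholders for illustration, and the peer's public key is exchanged out of band, as discussed.

```shell
# Generate this host's key pair
wg genkey | tee privatekey | wg pubkey > publickey

# Create the WireGuard interface and give it a tunnel address
ip link add dev wg0 type wireguard
ip address add 10.0.0.1/24 dev wg0

# Load the private key and pick a listen port
wg set wg0 private-key ./privatekey listen-port 51820

# Describe the peer: its public key, its tunnel IP, and where to reach it
# (<PEER_PUBLIC_KEY> and the endpoint are placeholders)
wg set wg0 peer <PEER_PUBLIC_KEY> \
  allowed-ips 10.0.0.2/32 \
  endpoint 192.0.2.20:51820

ip link set up dev wg0

# From here, traffic to 10.0.0.2 is transparently encrypted and carried
# inside UDP packets; `wg show` displays peers and handshake state
wg show wg0
```

Mirror the same steps on the second host with the addresses and keys swapped, and the tunnel comes up with no application changes.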
The implementers of WireGuard have made it very clear that they want to support one set of ciphers and algorithms at a time, maybe with some leeway for upgrades. A bigger challenge is that WireGuard is not FIPS compliant. So if you're in an organization that needs to support FedRAMP-type workloads or work with the US government, which requires FIPS compliance, this could be a hindrance, and it probably won't become FIPS compliant. NIST and the federal government are, let's say, bureaucratic about how they pick which ciphers to standardize on; the process and the roadblocks are quite high, and there doesn't seem to be any interest from the WireGuard community in going through that compliance process. So it's likely WireGuard will not be FIPS compliant going forward. The last piece, and WireGuard is fantastic technology, and I do want to point out that not being FIPS compliant doesn't mean it's insecure in any way, is that it doesn't do things like service-to-service mutual authentication or identity. You have to layer something on top of WireGuard to achieve those capabilities. Out in the wild, we see people doing things like creating JWTs signed by some trusted secure token service: we layer in OAuth and use these tokens to provide authentication while WireGuard provides confidentiality and encryption. Or there are other options: you could create your own custom authentication mechanism and protocol, which the Cilium project has done, and we'll take a look at that in a second. Or you could layer TLS and client certificates on top, which would get you to that next level of authentication.
So in the Cilium project, what ends up happening is that a mutual authentication process is layered on top, and then things proceed under the covers as encrypted traffic using WireGuard. What happens is, in a particular Kubernetes cluster, you have two different nodes on which there might be workloads, service A and service B. When service A tries to talk to service B, on the node where service A is deployed, Cilium checks: hey, are you authenticated to talk to service B? If not, it just drops the packets; it won't allow the connection to proceed. But in the background, it will try to create a mutual TLS connection to the node where service B is (or some other node), just to prove that service A can talk to service B with client certificates and mutual TLS. If that connection succeeds, it tears the connection down and marks in a little map that A can talk to B. Then, when the TCP retry occurs, the packet eventually makes it across because the pair has been marked authenticated. Now, in this separation of mutual authentication and encryption, it's possible that traffic goes across the wire unencrypted, because the mTLS part is not actually tied to the record protocol. But if we enable WireGuard, like we saw in the previous diagrams, then we get confidentiality between services that have been mutually authenticated out of band. So we can achieve a level of security with an approach that builds on top of WireGuard. If we take a look at TLS, Transport Layer Security, we can see an alternative, because TLS implements things not at the IP or layer-three level, but much closer to the application, at the TCP level, where you know about ports and sockets and that type of thing.
If we look at TLS 1.2, which is still very commonly used these days, the handshake protocol involves a series of steps: exchanging protocol information, cipher suites, random data, pre-master secrets, certificates for authentication, and eventually, through this complex series of exchanges, we get to the point where we have session keys, we have authentication, and we can start encrypting data and getting confidentiality. Now, TLS 1.3, which was released in 2018, simplifies this a lot. The handshake in TLS 1.3 looks more like this: the client reaches out and says, hey, I want to talk TLS 1.3, here are the ciphers I want to use, and here are some parameters I want to use for key agreement. The server says, okay, I like that; here's the key agreement we're going to use, and I'm going to start encrypting my response, because by the time you get it, you should be able to decrypt it. At that point in the TLS 1.3 handshake, we are ready to encrypt data. In a mutual TLS scenario, along with the client's Finished message, the client can send its certificate to perform mutual authentication. If you look at the handshake here (we won't go into too much detail on the record protocol), you can see this is a one-round-trip exchange. It looks a lot more similar to what we saw in the ALTS diagrams. For those reasons, TLS 1.3 is faster, because it uses fewer round trips. It's also safer: TLS 1.3 reduced the number of cipher suites that can be used, getting rid of a number that aren't safe anymore. I think the list of supported cipher suites went from thirty-something in TLS 1.2 down to five in 1.3.
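You can watch that negotiation yourself with OpenSSL 1.1.1 or newer; `example.com` below is just a stand-in for any TLS 1.3-capable server.

```shell
# Force a TLS 1.3 handshake and print a brief summary of what was
# negotiated (protocol version, cipher suite, key agreement group)
openssl s_client -connect example.com:443 -tls1_3 -brief </dev/null

# List the TLS 1.3 cipher suites OpenSSL will offer. RFC 8446 defines
# only five in total, versus the dozens of suites in TLS 1.2
openssl ciphers -s -tls1_3
```

The cipher listing makes the simplification visible: a short, fixed menu of AEAD suites, with no room to negotiate your way down to something broken.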
So we've significantly simplified the list of ciphers that can be negotiated in TLS 1.3 and focused on ones known to be secure, so that during session establishment you can't be tricked or downgraded into unsafe protocols. TLS can do authentication along with encryption and integrity checking. TLS does meet FIPS compliance for those who need it. And instead of layering on JWTs and other tokens, which you don't really want to pass around, because if a JWT gets captured somehow it can potentially be replayed and reused, with TLS we don't share the private key material; we only publicize the public keys. The sessions are terminated at the applications, at the ports where the applications are listening. And like I pointed out earlier, TLS 1.3 actually does look like Google's ALTS implementation, albeit arriving a bit later. Now, TLS isn't a panacea on its own. There is no standard way to specify identity. Things like issuing keys, handling revocation, and rotating keys can be complex. Do the applications handle them safely? We hope so, but every library and every framework is a little different in how it does that. So just saying "we're going to use TLS" doesn't solve these problems around identity, transparency, authentication, and authorization on its own. For identity, though, you may have heard of an open source specification called SPIFFE. SPIFFE aims to solve the problem that there's no real standard way of specifying identity in TLS. SPIFFE stands for Secure Production Identity Framework For Everyone. It intends to solve the identity problem independent of what type of application it is, what network it's running on, or what public cloud; containers, VMs, it doesn't matter. And it's fairly simple and straightforward.
Identity is specified with a resource-looking string and is asserted and signed by some sort of authority, which presents it in what's called a verifiable identity document, such as an X.509 certificate or a JWT. The SPIFFE implementations try to prove identity by its context: where it's running and how it's running. We do a lot more checking; we don't just rely on a JWT or a username and password. We actually go look at the environment and say, okay, you say you're service A. Are you really service A? Are you allowed to be service A? We attest that and prove that, and then issue these identity documents, the signed credentials, that service A can use to assert its identity. SPIFFE operates kind of at layer four, kind of at layer seven, depending on the implementation. The verifiable identity documents, like I said, can be X.509 documents or JWTs, in which case the application has to deal with that JWT. So now we can solve this problem of who service A is. Service A comes online, talks to the SPIFFE Workload API, and says, hey, I'm service A, give me an SVID, one of these identity documents. The Workload API then does some research behind the scenes. It doesn't just trust that service A says it's service A. It might look at the host, the process ID, labels, and other context associated with this process to determine whether it really is service A and is allowed to be service A, and not somebody trying to impersonate service A. This process is called attestation, and the implementers of SPIFFE have an engine for doing this; I'll point out a couple of implementations in a second. Once service A has been attested, we can issue an SVID document that can be used to prove its identity. And this, like I said earlier, doesn't matter what host it's running on.
It doesn't matter if it's in containers or VMs or lambdas or anything. This is independent of where things get deployed and can be used across workloads and across platforms. Now, the SVID contains the URI string I mentioned that describes the identity, which specifies the trust domain for this particular identifier, then maybe some hierarchy or structure for our identity names, and eventually the name itself. In this case, it would be service A, which belongs to bar, lives under foo, and is owned by this trust domain. We can also issue JWTs, signed by the signing authority in the SPIFFE implementation, and these JWTs can be presented as proof of identity, with signature verification to confirm they were actually signed by the right authority. Now, if we use the X.509 documents for our SVID, we can plug those into our TLS 1.3 implementation and get authenticated and secure communications between applications, APIs, or microservices, following a model where, like I said, we don't have to go off and reinvent ALTS from scratch; we already sort of have it. SPIRE is an implementation of the SPIFFE spec, so if you're interested in learning more, go to spiffe.io, where there's a lot of information about the spec, but also about SPIRE and actually running and operationalizing it. And I know at Solo, we use SPIRE to implement workload identity across workload types. So now, we've covered encryption and confidentiality, and there are a few different ways to do that, and authentication and identity; how do we bring this together and do it in a way that's transparent to the applications?
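As an aside, the SPIRE registration-and-attestation flow described above looks roughly like the following. This is a sketch, not the demo environment: the `example.org` trust domain, the selectors, and the socket path are all illustrative.

```shell
# On the SPIRE server: register service A, keyed to attestable selectors
# (here, the Kubernetes namespace and service account it must run under)
spire-server entry create \
  -parentID spiffe://example.org/k8s-node \
  -spiffeID spiffe://example.org/foo/bar/service-a \
  -selector k8s:ns:default \
  -selector k8s:sa:service-a

# From the workload's side: ask the Workload API for an X.509 SVID.
# The agent attests the caller against those selectors before handing
# anything back; a process that doesn't match gets nothing.
spire-agent api fetch x509 \
  -socketPath /run/spire/sockets/agent.sock
```

The returned SVID carries the `spiffe://example.org/foo/bar/service-a` URI, which is exactly the identity string format described above.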
Because now we know who service A is, but we need a way to use those identity documents in a TLS 1.3 communication, and to do it in a way that doesn't force all the applications to change. An example of doing this is in the, well, I guess it's not recent anymore, it was a year ago, announcement of the sidecarless version of Istio. Istio has solved this problem for a number of years by injecting a sidecar next to each application. The intent was to be transparent: we don't have to update the applications directly; we just capture all the traffic leaving or entering an app, force it through the sidecar, and handle this there. With Istio ambient, we don't need to do that anymore. We don't need to inject a sidecar; we can handle this within the pod's network namespace, but not in a sidecar. So take a look at ambient for an implementation of that. And in fact, I'm going to go into a demo now that shows some of these concepts in action. First, we'll take a look at our environment. There's a lot going on here, but in the default namespace we have a set of apps: the sleep application and the hello world application. These apps can call each other without any problems. If we come in here and run the call combinations, we can see sleep v1 can call hello world v1, sleep v1 can call hello world v2, and so on. But we want to be able to restrict who can call whom. To do that, we'll take a look at using a CNI like Cilium to do that restriction, and we'll look at mutual authentication and approaches for doing that. However, as I mentioned earlier in the talk, using IP addresses as the unit of policy can break down at scale. Let's take a look.
We'll start off with Cilium. We'll list our services again and enforce a policy that says sleep v1 can only call hello world v1, and sleep v2 can only call hello world v2. So we'll look at our policy document; this happens to be Cilium, but the implementation is not that important. What we're saying is that hello world v1 can only be called by sleep v1; sleep v2 cannot call hello world v1. Let's apply this configuration, and let's also set up a policy for sleep v2 and hello world v2: sleep v2 can call hello world v2, but cannot call hello world v1. Let's apply this as well. Now let's take a look at what's happening under the covers. As I pointed out, Cilium layers a mutual authentication mechanism on top of encryption, on top of WireGuard. So what we're going to see down here is, I hope I didn't mess up the demo script here. I did. Give me one second. Why didn't that work? What are the chances that a live demo fails after I've gotten it working, seriously, twenty times? Let's see if we can do that. All right. So what we're going to do is capture packets in Cilium, and when we make those calls, we'll see the mTLS handshake happen. Remember, that's how we assert identity for mutual authentication in Cilium, but we don't use the encryption part; we delegate that to WireGuard. So if we now make some calls, we should see that some calls succeed and the ones we expect to fail actually do fail. In the bottom pane, we can see that the authentication handshake happened using mTLS, but we can also see that sleep v1 cannot call hello world v2 and, vice versa, sleep v2 cannot call hello world v1, which is what we want. Right, the policy is enforced.
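A Cilium policy along the lines of the one applied here looks roughly like this; the `app`/`version` labels match the demo workloads, but treat the rest as a sketch. The `authentication: mode: required` field is the piece that triggers Cilium's out-of-band mTLS handshake (it requires a Cilium install with the mutual authentication feature enabled).

```shell
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-sleep-v1-to-helloworld-v1
spec:
  # The workload this policy protects
  endpointSelector:
    matchLabels:
      app: helloworld
      version: v1
  ingress:
  # Only sleep v1 may call in; everything else is dropped
  - fromEndpoints:
    - matchLabels:
        app: sleep
        version: v1
    # Require the out-of-band mutual TLS handshake before allowing traffic
    authentication:
      mode: "required"
EOF
```

A mirror-image policy selecting hello world v2 and allowing only sleep v2 completes the pair described above.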
Now, the way this is enforced under the covers is that Cilium has a mechanism for specifying identity. This Cilium identity is not SPIFFE, but we can take a look at what it is. We can see that in the default namespace we have four different identities, which makes sense; we have four different services. If we look at one of those identities, 9058, which represents sleep v1, we can see that it's created from a set of labels. So Cilium looks at some context about what a workload is, and then assigns an integer to it, and that's the identity. If we look inside Cilium's agent, what we can see is that, for Cilium to understand and apply network policy, it has to be looking on the wire and asking: what is this workload? How can we identify it at runtime? And it does that by using IP addresses. It builds up a map of IP addresses to identities on a particular host, and whenever connections are made, it looks and says: okay, you're this IP address, so you're this identity, and I have this policy attached to this identity. Now, what we're going to do here is scale up the number of sleep v1 replicas and take a look at how this identity-to-IP mapping is not as foolproof as we'd want it to be. We'll give it a second for the scale-up to happen. We've scaled up sleep v1, and if we look in this cache now, we can see that this particular identity has a number of IP addresses; we can see the sleep v1 replicas, each with its own IP address, but all mapped to this one identity in Cilium. Now, what we're going to see is that this mapping between identities and IPs is susceptible to some types of failures. So let's simulate a scenario where the node on which some of these workloads run can't, for some reason, communicate with the Kube API.
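The identity inspection I just walked through can be reproduced with commands roughly like these; the identity number 9058 is specific to this demo environment, and `ds/cilium` assumes a default Cilium install in `kube-system`.

```shell
# List the label-derived identities Cilium has minted in the cluster
kubectl get ciliumidentities

# See which labels identity 9058 (sleep v1 in this demo) was derived from
kubectl get ciliumidentity 9058 -o yaml

# Inside the agent: the runtime map of IP addresses to identity numbers,
# which is the cache that policy enforcement consults on each connection
kubectl -n kube-system exec ds/cilium -c cilium-agent -- \
  cilium bpf ipcache list
```

The last command is where you can watch multiple pod IPs collapse onto a single identity number after the scale-up.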
This can happen for a number of reasons: network issues; the Cilium agent is inundated with updates and is slow, or doesn't have enough resources to process them; the Kube API could be slow; maybe your cloud provider is doing something on the backend that's causing the Kube API to be slow. What that means is that the node watching for pod changes and IP address assignments, all the stuff it's trying to map to Cilium identities, could get out of sync. The Kube API is intentionally designed to be eventually consistent, and you can get into scenarios where things are out of sync. Now what we're going to do is try a few things to see if we can get an IP address that was assigned to sleep v1 recycled and assigned to sleep v2, because if that happens, sleep v2 could potentially call hello world v1. And it did in this case: we ran a test, cycled some IPs, and in fact a sleep v1 IP address got assigned to sleep v2, which means this kind of IP-based identity mapping can be tricked, and we don't want that, right? So let's take a look at another approach. Instead of delegating policy to an identity that's mapped to IPs, why don't we enforce policy tied to an identity that is asserted on the connection itself? That's what we're going to do here with Istio ambient. To do that, we'll put the cluster back into a state that allows the node to reconcile and get all of its information; we'll eliminate that communication issue between the node and the Kube API. Then we're going to install Istio ambient mesh. Istio ambient mesh is a sidecarless implementation of a service mesh that implements the SPIFFE spec and ties that to a TLS 1.3 connection, like we talked about earlier in this talk. So if we take a look and cross our fingers, hopefully things are coming up. Uh-oh.
Give me one second. You run a bunch of demos, and the last step, which is just setting things up for the live demo, that part gets skipped. So give me a second, let me download these. I was hoping to pre-cache the images so this would go a lot faster, but instead we're gonna have to wait for some of them to get pulled and loaded correctly. It looks like the istiod control plane and some of the other pieces are starting to come online now. Maybe I'll give it a second. So while we're waiting here, hopefully it'll eventually come up. What we're gonna do is set up Istio ambient mesh and bring those workloads into the service mesh. We're gonna run that same test that shows the node failing to talk to the Kubernetes API, and then show that, since the identity is asserted and authenticated on the connection, on the wire, it doesn't matter what these different IP caches are doing. And what this will show is how, you know, we talked earlier about some of the attacks we've seen at other organizations: they don't happen because there's just one big gaping hole. There's a series of different weaknesses in the system that can be exploited at just the right time, under just the right circumstances. What we want is defense in depth. Having network policy to control things at the IP layer is good, but we also need a layer that asserts authentication higher up in the stack, at the application layer as well. Okay, so we have Istio ambient mesh installed. We're gonna bring those workloads into the service mesh and specify authorization policies like we saw earlier: sleep v1 can only talk to Hello World v1, sleep v2 can only talk to Hello World v2. Right, so now we've got that in place. If we come over here and look at the istio-system namespace, we do have a number of components, including the control plane, and the workloads in the default namespace are, let's see, part of the ambient mesh.
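The "sleep v1 can only talk to Hello World v1" rule from the demo can be expressed in Istio as an AuthorizationPolicy keyed on the workload's SPIFFE identity rather than its IP. This is a hedged sketch: the namespace, labels, and service-account name below are assumptions for illustration, not taken from the demo repo.

```yaml
# Sketch: allow only sleep-v1's mTLS identity to call helloworld-v1.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: helloworld-v1-allow-sleep-v1
  namespace: default
spec:
  selector:
    matchLabels:
      app: helloworld
      version: v1
  action: ALLOW
  rules:
  - from:
    - source:
        # The principal comes from the peer's mTLS certificate
        # (SPIFFE identity), not from the source IP address.
        principals: ["cluster.local/ns/default/sa/sleep-v1"]
```

A matching policy for sleep v2 and Hello World v2 would look the same with the names swapped. Because the source is identified by its certificate, a recycled IP address does not change what the policy matches.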
We've labeled them correctly. And now we're gonna demo that wrong-identity scenario again. We're gonna get into that same state where the node cannot talk to the Kube API server. We're gonna cycle through and get to a state where a sleep v1 IP has been assigned to a sleep v2 pod, which would allow us to go around that network policy, but this time we'll cross our fingers and hope that, since we now have defense in depth, we'll catch that at connection time and not allow the connection. So let's see, we're starting to scale up, scale down, trying to get the system into a state where sleep v2 has an old sleep v1 IP address. And we did get into that state; now we're trying to make the call, and it's hanging. It's not doing anything. We're not allowing that connection to succeed. And this is because on the wire, as the connection is being made, we are doing the mutual authentication in line with the rest of the security. And we're doing this transparently, as I mentioned. If you look at these workloads, they don't have sidecars. Istio is running and applying its policy, but without sidecars. We're able to do this in a transparent way. So that's all I have for today. I think we did pretty well here on time. I'll leave you with a few additional resources, including links to things like ALTS and the WireGuard spec, for understanding a little bit more. I purposely didn't use the phrase zero trust in this session; it's become too much of a marketing term. I tried to cover the concepts directly, but obviously there's a lot more to this, and I've left a few links for that. I'll leave a link to academy.solo.io.
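Why the recycled IP no longer matters can be shown in a few lines. This is a conceptual sketch, not Istio internals: when the authorization decision is keyed on the identity asserted in the mTLS handshake, the source IP is simply not an input, and the new pod cannot present the old pod's certificate. The SPIFFE IDs below are illustrative.

```python
# Sketch: policy keyed on the authenticated peer identity from the
# mTLS handshake. A recycled IP changes nothing, because sleep-v2's
# workload certificate still identifies it as sleep-v2.

SLEEP_V1_ID = "spiffe://cluster.local/ns/default/sa/sleep-v1"
SLEEP_V2_ID = "spiffe://cluster.local/ns/default/sa/sleep-v2"

allowed_clients_of_hello_v1 = {SLEEP_V1_ID}

def connection_allowed(peer_cert_identity):
    # Note: source IP is deliberately not a parameter here.
    return peer_cert_identity in allowed_clients_of_hello_v1

# sleep-v2 now holds sleep-v1's old IP, but its certificate says
# sleep-v2, so the connection is refused at handshake time.
print(connection_allowed(SLEEP_V2_ID))   # False: connection denied
print(connection_allowed(SLEEP_V1_ID))   # True: legitimate caller
```

This is the defense-in-depth point from the demo: the IP-layer network policy can still be there, but the connection-time identity check holds even when the IP caches are out of sync.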
This is a place to go get hands-on with these types of technologies, and to do that for free with a self-provisioned lab environment. It's a couple of clicks, it's running on Instruqt, and we guide you through understanding how Cilium works, how Istio works, how Envoy proxy works, and some of the other components that can be used to build a modern and secure application network. So with that, I want to say thank you. I'll leave my contact information here, both my email and my Twitter handle. Reach out to me anytime; happy to take questions offline. People are asking for a copy of the slides: yes, the slides will be made available. People are asking about a copy of the demo: yes, that's already on GitHub, and I'll make sure to put a link to the demo in the slides as well. So with that, thank you all for joining, and like I said, reach out if you have follow-up questions. The slides will be available, and hopefully this was worth your time. So thank you. Thank you so much, Christian, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.