So first, I want to thank you all for coming. Parts of this talk will be technical, but we also want to dive into some of the organizational side as well. So with that, let's jump in. Let's start by talking a little bit about reality versus assumption. One of the problems we have is that as the industry changes over time, our technologies change, and that creates a gap: reality keeps shifting away from the assumptions we made, and attackers find their opportunities within those gaps. There are, of course, various effects of this; I don't have to enumerate them here because we're all very familiar with them. Part of the problem inside organizations is that the regulations and policies we generate tend to ossify those decisions: things that were valid at the time, but whose assumptions are not necessarily valid two or three years later, or further on. A couple of examples of things we used to do in the past: we had static network architectures and assumed they didn't move; we put them into diagrams and assumed that's exactly what the network looked like. We tracked our services using things like IP address and port rather than some other kind of identity. Network segmentation was mandated, but across boundaries such that once an attacker is inside, those boundaries no longer protect you, because the attacker has access to your sensitive systems. So part of what's driving things like zero trust is that the security posture is shifting: we're looking for more granular perimeters.
We're looking to move away from being reactive and toward something proactive. We're not just trying to respond to incidents; instead, we're trying to work out what incidents we're seeing now and how we can adapt to them. And the reality is that slow adoption of key technologies is becoming a risk, not only to security, but to the business: as others gain more capabilities over time, they can outperform your business. So we have to balance what we adopt against how we secure it, in a way that doesn't put us at too much risk over time. And with that change comes risk. For a quick definition of risk: there are multiple definitions, but one common one looks at magnitude and probability. Magnitude: if something happens, how badly will it affect your system or your business? Probability: how likely is it to happen? You have to look at both. The question then becomes how we start looking at the blast radius of a given failure, or of a successful attack. If you look at things from a blast-radius perspective, that actually gets you a bit closer; you'll see more of that as we move forward. At the same time, we also want to constrain complexity, because as we move toward a zero trust system, the systems we bring in have a lot more complexity than they used to, since they're much more granular. You have to make sure that what you're doing works across a wider and more diverse set of systems. So let's jump into a simplified definition of zero trust. This definition is something I wrote up before the NIST definition existed. The way I would put it to companies at the time, before we even called it zero trust, is: first, we want to establish an identity.
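The magnitude-times-probability definition and the blast-radius framing above can be sketched in a few lines. This is my own illustrative model, not a standard risk formula from any framework; the graph here treats "blast radius" as everything reachable from a compromised node over trust relationships.

```python
# Hypothetical risk model: risk = magnitude x probability, with blast
# radius treated as the set of systems reachable from a compromised node.
def risk_score(magnitude: float, probability: float) -> float:
    """Two-factor risk estimate: how bad is it, times how likely is it."""
    return magnitude * probability

def blast_radius(trust_graph: dict, compromised: str) -> set:
    """All systems reachable from `compromised` over trust relationships."""
    seen, stack = set(), [compromised]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(trust_graph.get(node, []))
    return seen - {compromised}
```

For example, with `{"app": ["db", "cache"], "db": ["backup"]}`, compromising `app` puts `db`, `cache`, and `backup` inside the blast radius, which is exactly the kind of reachability you want your controls to shrink.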
And when we talk about identity, we specifically mean a cryptographic identity you can bind against. We already have this with users. Every application used to have its own identity store. Eventually we got single sign-on, so all the identities could be managed from a single location. And then we got federation of those identities, so a single identity could span your external third-party services. That way, if you have to shut down an account, change a password, or add two-factor authentication, it works across all of your services. We want to do the same thing with workload identities. We want workloads to have cryptographic identities that are continuously redeployed and reissued, but in a way that we can federate across multiple systems. Once you have a comprehensive identity system for your workloads, and they can identify each other using it, you can drive that toward policy. The policy consumes the identity: you write your policy so it says application A is allowed to send these types of messages to application B, and B is allowed to send these types of responses. So instead of looking at IP address and port or similar, you're tying it down to: this cryptographic identity has these capabilities. And we also want control, by which we typically mean automation. You want automated deployment of these systems; you don't want to set things up so somebody has to do something manually. You want automation not only of the deployment, but also of the observability and of some of the responses. But before we drive further into that, we should contrast perimeter defense, where we were, with zero trust.
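The shift described above, from IP-and-port rules to capability rules keyed by identity, can be shown with a tiny sketch. The identity strings and rule shape here are illustrative, not a real schema from any policy engine.

```python
# Sketch: authorization keyed by cryptographic identity rather than
# IP address and port. The SPIFFE-style IDs below are made up.
POLICY = {
    # (caller identity, callee identity) -> allowed message types
    ("spiffe://corp/frontend", "spiffe://corp/backend"): {"GET", "POST"},
}

def allowed(caller_id: str, callee_id: str, method: str) -> bool:
    """Capability check: does this identity pair permit this action?"""
    return method in POLICY.get((caller_id, callee_id), set())
```

Notice that the rule survives redeployment, cluster moves, and IP reuse, because nothing in it mentions an address.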
So the simplest definition of perimeter defense I can give you looks something like this: something untrusted goes through a security control and is then connected to a trusted environment. Everything on the left is untrusted and goes through the security boundary, and once you've passed that boundary, you're in a trusted environment. In practice it typically looks like this: you end up with trusted networks, with VPNs in the center, and maybe firewalls at the VPN level. But generally, once you're in the network, the controls are limited. Not to say there are no controls, but they're limited. What we want to push toward instead is something where the networks themselves are untrusted. That doesn't mean uncontrolled, but untrusted. You create secure connections between the workloads, where the workloads validate each other's identities. This is the general consensus on what zero trust is. When an attacker gets in, because it's an untrusted network, they don't automatically have access to everything in there; they have to get through additional controls. So we want to move away from the purely perimeter model toward one where the controls are effective and closer to the workloads, so they have more context. If an attacker is trying to connect into a system they don't have access to, the controls there have more context about what they're trying to protect, so you can put in more fine-grained rules about what's allowed or not allowed on a per-service, per-communication basis. There are some risks here as well.
One that I'll call out immediately: beyond the complexity, there's also a lot more processing involved. You have to be careful about saying, hey, we're going to put gates everywhere, because that will also fail. You end up with a lot of latency, and in some scenarios that latency is unacceptable in the business context. There are techniques to avoid that, and we'll get into some of them later, but I want to point out that you want a balanced approach: check things coming in and going out at effective locations, but don't run really expensive rules at every single transition point where some bit moves from one spot to another. Just a heads up. So again, how do we achieve this? Going back to identity, policy, and control. We can use the earlier example of user identity. With user identity, we mentioned single sign-on. Does everything that requires a login use it? Many third parties don't actually support single sign-on, and that ends up being a major gap. This gets really difficult for major enterprises with controlled or very sensitive information, where a developer or user might log in to a third-party system to do their work, a system that's not necessarily under the enterprise's control. If they leave the company, how do you shut down those accounts? So gaps exist, but for the most part these systems tend to work really well. One thing we've been pushing for as an industry is to stick with the standards. There are a lot of custom protocols out there, but as more people converge around JWT, SAML, and similar, you have a smaller set of things to comply with, and it allows for a lot more integration.
So we started talking about workloads: how do we establish workload identity in a zero trust environment? These are the basic steps. First, you establish what's called a trust domain. You'll see an example shortly, but think of a trust domain as a set of things that belong to the same family of services; it could be an application, or a business unit, depending on where you want to draw the scope. Then you attest the workloads inside it. Then you establish policy for how they communicate with each other, and finally you can federate and establish trust across trust domains. In one scenario, you establish a trust domain using standard PKI: you set up a CA, and that CA represents the trust domain. Every CA you set up independently is a separate trust domain. Next, using that PKI, you register each workload, and each registration results in an X.509 certificate. With that X.509 certificate, you're attesting: this particular workload follows my process. I have attested that it meets my patch set, that it's on a device I own and control, that the device meets my requirements. There's a lot that can go into the attestation here. In more advanced scenarios, if you're on premises, you tie it to the TPM, so you get cryptographic attestation that the thing is what it says it is. In the cloud, you might tie it to your AWS instance identity document or your GCP workload identity. You want to tie it to something cryptographically sound whenever you have the opportunity. Not every system can do that, but many can. And once you have that in place, you can write your policy.
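The trust-domain idea above maps directly onto SPIFFE-style identifiers, where the trust domain is the host part of a `spiffe://` URI. Here is a minimal stdlib-only sketch of parsing one; the helper names are mine, and real implementations live in SPIFFE libraries such as go-spiffe.

```python
from urllib.parse import urlparse

# Sketch of a SPIFFE-style identity: spiffe://<trust-domain>/<workload-path>.
def parse_spiffe_id(spiffe_id: str):
    """Split an identity into (trust domain, workload path)."""
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError("not a SPIFFE-style ID: %r" % spiffe_id)
    return parts.netloc, parts.path

def same_trust_domain(id_a: str, id_b: str) -> bool:
    """Two workloads are in the same family iff their trust domains match."""
    return parse_spiffe_id(id_a)[0] == parse_spiffe_id(id_b)[0]
```

In practice the trust domain corresponds to one CA: every identity under `spiffe://example.org/...` chains to the CA you stood up for `example.org`.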
And your policy in this scenario says: this particular identity is allowed to perform these particular sets of actions. In other words, this one says some front-end service is allowed to make a request to some storage service. You're pulling the information directly from the X.509 certificate, something you've already validated, and making a determination on what to do with it. This example uses Open Policy Agent, but that's not the only one out there. There's Kyverno, there's a newer one whose vendor's name escapes me, and several companies are producing engines, both open source and proprietary, that you can bind into. The main point isn't to say go use OPA, although that's not a bad choice. The main point is to have something declarative: you're explicitly saying that this workload, based on the cryptographic identity within its certificate, is allowed to do these things and is allowed to communicate with these peers. Once you have that in place, you can also establish trust between organizations. Say we have two CAs in this scenario. If I exchange CAs between the two environments, a workload on one system can communicate with something in the other environment. That other environment might be a different application in your company, or it might literally be a third-party organization you're integrating with. As long as you can share those CAs and get them to the workloads, you can check the identity, especially if you're using mutual TLS. You can lift the identity straight from the TLS transport, so you're not handing out a long-lived token that can be stolen and reused.
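As a rough Python analogue of the declarative rule described above (the talk's slide uses Open Policy Agent's Rego; this sketch just mirrors the decision shape, with illustrative rule and field names):

```python
# Declarative allow-list: a request is authorized only if some rule matches
# the caller identity lifted from the already-validated certificate.
RULES = [
    {"source": "spiffe://corp/frontend",
     "target": "spiffe://corp/storage",
     "methods": {"GET", "PUT"}},
]

def authorize(input_doc: dict) -> bool:
    """Default-deny: return True only when an explicit rule allows it."""
    return any(
        rule["source"] == input_doc["source_id"]
        and rule["target"] == input_doc["target_id"]
        and input_doc["method"] in rule["methods"]
        for rule in RULES
    )
```

The important property is default-deny: anything not explicitly granted to a cryptographic identity is refused, which is what makes the policy auditable.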
Instead you're saying: I have this X.509 certificate and I am actively using it in this communication, which eliminates man-in-the-middle attacks as long as TLS remains unbroken and your keys are strong enough. In this particular scenario, you can see an example where the trust domain has been explicitly set, so I can say this specific domain, for this identity, is allowed to make these particular choices. What's really nice is that, combined with the earlier front-end and back-end example, this lets me specify the exact name of the organization or group I'm working with and treat it uniformly. I'm not creating special tooling for a third-party organization; I'm using exactly what I used before to validate the third-party systems. What all of this is leading toward is that you're not assuming the network is secure. You're taking the control and bringing it closer to the application. There's some debate about whether the application itself should own the control, or whether you should wrap something around it, maybe Envoy or something similar. But in both cases, you're bringing the control close enough to the application that it has the context of what the application is trying to do. And an advanced version of this is where I think many systems are starting to head. It's not just A connecting to B. In this scenario, we have multi-party edge compute, and there are several other examples of this as well. You can pretend the center of this diagram is a 5G edge environment; that whole area is coming as 5G infrastructure rolls out. The radios are out, but the back-end infrastructure is still being deployed.
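The CA-exchange step described above amounts to a federation table: each peer trust domain maps to the CA bundle you use to validate certificates from that domain. A minimal sketch, with made-up domain names and bundle placeholders:

```python
# Sketch of trust-domain federation: validate a peer's certificate against
# the CA bundle registered for its trust domain. Values are placeholders
# standing in for real PEM bundles exchanged between organizations.
TRUST_BUNDLES = {
    "corp.example": "<ca-bundle-for-corp>",
    "partner.example": "<ca-bundle-for-partner>",
}

def bundle_for(spiffe_id: str):
    """Look up the CA bundle for an identity's trust domain, if federated."""
    # "spiffe://<trust-domain>/<path>" -> trust domain
    domain = spiffe_id.split("://", 1)[1].split("/", 1)[0]
    return TRUST_BUNDLES.get(domain)
```

A `None` result means the peer's trust domain is not federated with you, so the mutual TLS handshake should simply fail.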
Once the back-end infrastructure is in place, you're going to see lots of edge data centers. And inside an edge data center, you might see things like firewall as a service, intrusion detection as a service, data loss prevention as a service, things various companies have put in that end up going through Equinix or some similar company. The point isn't to promote any of them, but to show that your traffic has to traverse multiple things. It's not just my workload talking to some service in the cloud; there are also things in the middle whose identity you have to consider. How do I prove that something went through a firewall? How do I prove it went through data loss prevention, or through other environments, some of which may themselves be delivered as a service? There are answers to this. One example is within SPIFFE; I didn't put it on the SPIFFE slide, but there's work going on there to address transitive identity. And transitive identity is a really good example of something missing from mutual TLS. Take three services, call them A, B, and C. A connects to B with mutual TLS, so they validate each other. B connects to C with mutual TLS, and they validate each other. How does A validate C? It can't; you have to trust that B is doing the right thing. One approach is to build up a miniature chain that says: what's the audience of the next thing I'm connecting to? The thing that receives it then validates that it received that particular token from that particular hop. You build this miniature chain on the fly for a given use case, so you can prove that things have gone through the path.
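The audience-chaining idea for the A, B, C problem can be sketched as follows. This is my own toy model of the approach, not the SPIFFE community's actual design: the "token" is a plain dict standing in for a signed, short-lived JWT, and signature verification is elided.

```python
# Sketch of transitive identity via audience chaining: each hop mints a
# short-lived token addressed to the next hop, appending itself to the chain.
def mint_hop_token(issuer: str, audience: str, prev_chain: list) -> dict:
    """Stand-in for a signed JWT with iss/aud claims plus a provenance chain."""
    return {"iss": issuer, "aud": audience, "chain": prev_chain + [issuer]}

def validate_chain(token: dict, me: str, expected_origin: str) -> bool:
    """Accept only tokens addressed to me whose chain starts at the origin."""
    return token["aud"] == me and token["chain"][0] == expected_origin
```

So A mints a token for B, B mints one for C carrying A's chain, and C can now check that the request really originated at A, even though C never spoke to A directly; in a real system each link would also be signature-checked.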
So there's work being done in the community to solve that, but I want to point out that this is currently a gap in many systems: there's no easy way to do transitive identity. I mentioned wanting to push identities into the infrastructure, and that includes the initial connections. In time, we're going to see these identities permeate things like VPN gateways and VPN concentrators. This produces a really interesting effect, because if you listen to a lot of zero trust proponents, you'll hear: everything needs to be on the network, everything needs to be connectable. But the reality is that when you look at many of the regulatory regimes, many of the regulations say you must have certain additional controls; you can't just do everything on the network or publicly. And sometimes we find bugs in mutual TLS, so we also want a layered defense. You can't assume mutual TLS will save you in every scenario. So we're still going to have VPNs; these things will still exist. But I believe the best case is that we work out paths to make them compatible, so that when those initial connections come in, we can establish them on the fly. They're attestable, they get built when needed, they go away when no longer needed, and everything has a cryptographic identity you can track. Which means you end up with an infrastructure that looks like this: an identity that drives all the way from the hardware TPM up to the app, and that same identity is compatible with the infrastructure as well. I'll give a few moments for a couple of photos.
So let's dive into the actual applications themselves and talk about how applications can move toward zero trust. Historically, you have something that looks like this. And I should be careful how I word this, because I'm not saying no authentication occurs at all. Very often the client has a user identity or something similar it uses to prove who it is. The request goes over the firewall, and the application server is responsible for validating the JWT to decide whether the user can log in. Assuming that's not broken, it gives you something defensible. The problem is: what happens when this particular system, the application server, has been compromised? Can it scan the database? Can it move laterally and gain a foothold into other environments that might be more sensitive? So again, as we mentioned before, you have to ask when designing a system: if this particular component is compromised, what is the blast radius of that compromise? So what if we tie in the same things we mentioned before, but tie them down to the application? A client comes in with a JWT, and we can require that a particular system have one. But the application server and database server also have their own cryptographic identities. In this scenario, the database server has an identity, and it says: I will only accept connections from the application server with this specific identity, and I will only process the request if the user's JWT has been forwarded to me, or an equivalent token that's been swapped through some third-party identity provider. So I have something cryptographically provable from the user before I process anything. Now say that application server has been compromised.
If it tries to just arbitrarily access the database server without that external identity attached, that should go to my observability platform and tell me I have a problem. So it increases the risk to the attacker and reduces the value of the attack. Again, I'm not saying this is 100% perfect and impossible to break, but a large part of security is economics. You want to increase the cost to the attacker while constraining the cost to yourself, and you want to decrease the value, so that if someone is successful, you minimize what they get out of it. It turns into an economics game: how expensive is it to break into the system? Note that the identity provider issuing that JWT should be a third-party system, not your own application. As much as possible, your application should never be an identity provider; it should be an identity consumer. The same goes for workload identities: the thing that actually issues the certificates should not be inside your application. It should be something your application works with, presenting evidence: hey, I'm an application server, I have this image hash, I have this TPM I can prove, et cetera. That way, in this scenario, you don't actually have a way to mint the token yourself, so it becomes difficult to impersonate. One of the standards I'm a promoter of is SPIFFE. SPIFFE provides a structured way of issuing an X.509 certificate and of structuring the identity inside it. It also allows you to generate JWTs from it, so you can identify your applications through either the X.509 or the JWT path. Both are important. You always want mutual TLS when possible if you're using the X.509 certificate.
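The database-side double check described above, requiring both the caller's workload identity and a forwarded proof of the end user, can be sketched like this. The identifiers and claim names are illustrative; in a real deployment the peer identity would come from the mutual TLS handshake and the user token would be a verified JWT.

```python
# Sketch of the database layer's admission check: serve a request only when
# (1) the caller's workload identity is the expected app server, and
# (2) a provable end-user token was forwarded with the request.
ALLOWED_CALLER = "spiffe://corp/app-server"   # hypothetical identity

def db_should_process(peer_id, user_jwt):
    if peer_id != ALLOWED_CALLER:
        return False   # wrong workload identity: refuse and raise an alert
    if not user_jwt or "sub" not in user_jwt:
        return False   # no end user behind this request: refuse
    return True
```

A compromised application server that tries a raw database scan fails check (2), because it cannot mint user tokens itself, which is exactly why the identity provider must live outside the application.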
The JWT is useful for the transitive identity problem I mentioned before. Again, not a perfect solution, but it gives you something extra to bind against, so you can say: I have something here proving that I at least spoke to the originator that had access to that token. So let's back up a little. We've now given a definition of zero trust, and of course it's not comprehensive. There's a lot more if you read through the NIST or CISA definitions: they talk about how devices are actually provisioned, how users get smart cards or similar, and they talk about data. They're actually still working out what to do with data, so something will happen there in the future. There's a lot there, but when you're trying to establish a new paradigm, you don't want to start with everything. Start with one thing, like identity or policy. Don't jump all in. The very first thing I like to focus on is actually the organization. You want to have a conversation with the executives so you get buy-in. Nothing will happen without executive buy-in; if you ever take the CISSP, that's one of the big things they hammer on. And when you're talking to a CISO, what is the purpose of a CISO? Granted, we're at a security conference, so I know many of you know the answer. At the end of the day, it's not just about improving your information security; it's also about constraining the risk, and constraining the cost of that risk, so the business can continue to do business. Information security 101: I'm sure you'll see this everywhere, confidentiality, integrity, and availability.
Confidentiality: you want to keep your secrets secret. Integrity: you want your data to remain untampered with. Availability: you can access it when you want. Which is, conveniently, CIA; if you have trouble remembering it, just remember CIA and it'll lead you to these. The way organizations tend to enforce this is through policy. The term "policy" is overloaded here, so from now on, when I say policy, unless I explicitly say it's an OPA policy, I mean organizational policy. Policies are things like: all data at rest must be encrypted. That's a simple policy. Standards say how you do that: our standard is AES with a specific key size. Procedures say how to do it on each platform: here's how you do it on Linux, here's how on Windows, here's how on macOS, and these procedures must be followed. Guidelines are best practices: a best practice might be, if you're wiping a drive, maybe you also trim it. You don't strictly need to, but it may be the best practice. Once you get the executive buy-in, or while you're pushing toward it, the next thing you want to do is find out who owns the policies and who owns the standards. Because the reality is, if you're looking at a zero trust implementation, it's very unlikely that what you're implementing will meet the standards, even if it meets the policies. It will not meet the guidelines either, which means whatever you do will be outside of the standards and guidelines that have been established, because it's a new paradigm. So you have to enter into conversation with those owners to work out: what are the policies? What are the current standards and guidelines? And again, this goes back to executive buy-in.
That way you can work out who you need to talk with to learn what's necessary for your infrastructure. Ideally, you want to find a greenfield application where this suits really well and where you want to experiment, and get the green light from whichever executive is allowed to make that final decision. And before you write a single line of code, loop in your security teams. This is something I think is good practice even outside zero trust. You want to loop in your security teams early in your application design, because what usually happens is that application teams start developing and then hit a brick wall: we have to do all these additional things to comply, and in the meantime the application's security has actually suffered because you never bothered to bring in the experts. Having those conversations early means you're more likely to hit the standards and get that approval, and the overall cost of getting to a secure place is lower. So it's important to get into the habit of working together, and if you're on the infosec side, to be receptive to it as well. If a developer comes around trying to follow this pattern, please help them out, because that's how you get transformational change. Then you architect it, starting with something small. I recommend identity and mutual TLS as the initial starting point, because even if you don't have the policy layer yet, the fact that you have membership that is cryptographically verifiable improves your total security posture. Flat out, it just improves it, even if that's all you get out of it. And then you iterate from there.
So once you have that initial first thing you want to do, and you've worked toward an implementation, you also want to look at observability. Every single action has to be observable. That second line I put up there: if a tree falls in the forest and you did not observe it, did it really fall? You can swap "a tree fell" for "you had a security incident" or "a security breach." Just because you don't know about it doesn't mean it didn't happen. Even worse, if a breach is discovered and you don't have the information to show its scope, you have to assume the worst case, which might be that everything is gone. Part of the reason I recommend treating observability as a mandatory part of your first step is that observability is often defined in terms of the previous set of practices. The number of places I've seen where the way they identify a service is by IP address and port! Take a simple example, not even on the zero trust side: I just come in with Kubernetes. That IP address is going to be reused across so many different clusters, so IP address and port is, I won't say meaningless, but pretty close to meaningless. You'll have a lot of extra work trying to reconstruct what happened. And if you look at what a pod has access to, the pod's network only sees its private IP address; it doesn't know its public IP address. So as the application, you just don't have the information. This is why I recommend adding that observability early on: it's crucial, and you have to find the gaps so your teams can build it into their SOC and various other systems. And you'll likely still need to capture IP address and port to comply with the previous tooling.
But the way you deal with that is to add the cryptographic identity you had before as a field within your observability data. That gives you something to start with: it doesn't break your existing tooling, but it gives you something you can bind against. The third part, after you've implemented something, as small as it is, is to document it and talk with others about it. From an education perspective, I tend to recommend my teams do three things. First, the initial architecture documents they wrote. You have to be careful here, because you don't want this to devolve into a waterfall method; you want it to be iterative, so those documents need to change over time as you learn more. But you want something you can point to and say: here's what we did. Second, a high-level deck of slides. Especially in larger organizations, when you're trying to convince others to follow the same path or integrate with your system, their architects are not going to have time to read through a 50- or 100-page document describing all the details. You want something short and to the point that covers the high-level information, with a pointer or two to where they can find the details when they need them. Third, the baseball cards. They may sound like a useless exercise, but again, executive buy-in: if your executives say, you know what, this is starting to look like a cost center, we're backing out, well, you're done. So you want the cards there, describing things to your executive teams, managing up the chain, giving them what they need to know to keep greenlighting you.
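Going back to the observability point for a moment: the "keep the old fields, add the identity" pattern is simple enough to sketch. The field names here are illustrative, not from any particular SIEM schema.

```python
# Sketch: keep legacy ip/port fields so existing SOC tooling keeps working,
# but add the cryptographic workload identity as a first-class log field.
def enrich_log(record: dict, workload_identity: str) -> dict:
    out = dict(record)                       # preserve the legacy fields
    out["workload_identity"] = workload_identity
    return out
```

An existing dashboard keyed on `ip`/`port` keeps working unchanged, while new queries can pivot on `workload_identity`, which stays stable even as pods and addresses churn.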
At the same time, make sure you're enumerating the business value: yes, we spent money to implement zero trust, but these teams were able to accelerate for these particular reasons — we got to market faster, we were able to integrate faster because of this work. If you're in a production environment, you can also attach numbers to the additional attacks you were able to prevent and discover: here are things we mitigated simply by the design. And make sure to present to others, not only at conferences but also internally — set up time to meet with others to discuss. As an aside, the reality is that very large organizations will have legacy infrastructure. Startups that grow into large companies will sometimes have a single modern platform you can work on and make all your changes there. But most legacy environments are different: pick a technology, and it's probably in there, especially at older companies. They'll have mainframes, they'll have every major supported version of Windows, they'll have various Linux distros; pick a database, it's probably in there too. From that perspective, you don't have a single thing to work on, and not every system can be retrofitted with zero trust. In those scenarios, you don't want to just say, that one stays behind perimeter defense. What you want to do is still issue it an identity. Say it's a mainframe you can't modify: you can still take the application the mainframe is running, wrap it in a proxy, give it an identity, and say, this mainframe can only receive messages from these systems and can only communicate with those systems.
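The proxy-wrapping idea above amounts to an identity-based allowlist enforced in front of a system you cannot modify. A minimal sketch, assuming SPIFFE-style identities and made-up names for the mainframe and its peers:

```python
# Allowlist enforced by a proxy fronting a legacy system that cannot
# itself participate in zero trust. All identities here are examples.
ALLOWED_PEERS = {
    "spiffe://example.org/legacy/mainframe": {
        # Who may send messages to the wrapped system.
        "inbound": {"spiffe://example.org/ns/billing/sa/batch"},
        # Who the wrapped system may initiate connections to.
        "outbound": {"spiffe://example.org/ns/ledger/sa/api"},
    },
}

def is_allowed(protected_id, peer_id, direction):
    """Return True if the wrapped legacy system may exchange traffic
    with peer_id in the given direction ("inbound" or "outbound")."""
    rules = ALLOWED_PEERS.get(protected_id, {})
    return peer_id in rules.get(direction, set())
```

The mainframe itself stays untouched; the proxy holds the identity and enforces the policy, so the legacy system can still be reasoned about within the broader environment.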
So you now have something there. Again, it's not the level we'd love to reach, but it's still a workable compromise: it gets you to a place where you can reason about that system within the broader environment. One thing I've also found useful is to split inter- and intra-service communication — I'll use Kubernetes as an example — into two different areas. Intra-service communication is things like one pod communicating with another pod. For how you secure that, you want some flexibility, because many applications are bound to the infrastructure they run on. There are certain applications that must have Istio to run — they will not run without it — and the same can happen with Kuma or Linkerd or similar. So when a vendor comes in and says their product works with one specific service mesh, you don't have a choice at that point. Inter-service communication, by contrast, is one application communicating with another application across boundaries. The reason you want to treat it separately is that it also lets you bring in the legacy systems from before: when one application talks to another, you can still reason across those boundaries. It also gives your infosec team something to bind against that is a little easier than policing every single service. The developers state what the application is going to do internally, but once traffic starts crossing those boundaries, it has to go through a policy that is controlled globally by infosec, where infosec can say whether a given communication is allowed or not. You can actually write those policies as pull requests: a developer says, I want to connect A to B, and an infosec person reviews it and approves or rejects it.
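The globally controlled cross-boundary policy described above can be sketched as a simple edge list that lives in a repository and changes only via reviewed pull requests. The application identities and the two-valued verdict are illustrative assumptions, not a specific policy engine's format:

```python
# Inter-application policy, owned globally by infosec. In practice this
# set would be loaded from a file in a repo; developers propose new
# edges via pull requests, and infosec reviews them.
APPROVED_EDGES = {
    ("spiffe://example.org/app/frontend", "spiffe://example.org/app/orders"),
    ("spiffe://example.org/app/orders", "spiffe://example.org/app/ledger"),
}

def check_cross_boundary_call(src_identity, dst_identity):
    """Return "allow" only for edges infosec has explicitly approved.

    Anything not on the list is denied by default; the remediation path
    is a pull request adding the edge, not an out-of-band exception."""
    if (src_identity, dst_identity) in APPROVED_EDGES:
        return "allow"
    return "deny"
```

Because edges are directional, approving frontend-to-orders does not implicitly approve the reverse direction — each flow is its own reviewable change.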
And if they reject it, they can give feedback as to why, so the developer can iterate — which turns the whole thing into policy as code. How you implement that is a topic for another presentation; there are many different ways to do it, but that's the pattern to aim for. Then, going back to automation: automation is just a set of control loops that manipulate state in some way. You observe the state, perform an action, observe or report on that action, and repeat, over and over. Part of the key here is that when you're working in a larger environment, you want to generalize what you're doing where possible. You don't want the automation bound to one specific application, with every application doing the exact same thing in a slightly different way. If you can generalize what you're doing, the first time is expensive, the second time is still expensive but much cheaper, and by the third time you're basically getting it for free. It also gives you a central place for changes: when you update the system, you're not updating each spoke individually — you update one spot and have a path toward updating everything across the environment. You do have to be careful there, because it also lets you roll out your mistakes quickly; part of the process is getting into good cultural habits around code review, testing, and so on. And then back to education: after you've done that, you want to educate your team, because at the end of the day, silo culture will break you. I'm serious — the fastest way to destroy a zero trust environment when you're rolling it out early on is to treat everything as a silo. So, to reiterate, you have this process of sharing that information, of going through the same cycle over and over again.
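The observe/act/report cycle above can be written as one generic loop that any automation plugs into; the scaling example at the bottom is purely illustrative. This is a sketch of the pattern, not a particular controller framework:

```python
import time

def control_loop(observe, act, report, interval=0.0, max_iterations=None):
    """Generic control loop: observe current state, perform an action to
    move it toward the desired state, report the result, and repeat.

    Keeping the loop itself generic means a fix or improvement lands in
    one place instead of being re-implemented per application."""
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        state = observe()
        result = act(state)
        report(state, result)
        iterations += 1
        time.sleep(interval)

# Illustrative use: a toy "scale up if below desired replicas" loop.
log = []
control_loop(
    observe=lambda: {"replicas": 2, "desired": 3},
    act=lambda s: "scale-up" if s["replicas"] < s["desired"] else "noop",
    report=lambda s, r: log.append(r),
    max_iterations=1,
)
```

The same loop body could drive certificate rotation, policy sync, or drift detection — only the three callbacks change, which is exactly the generalization the talk argues for.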
At the end of the day — and this is really the point I want to drive toward — I've shown you a lot of information: what zero trust is, how you can get an initial slice in place, how you communicate, how you set up some initial processes. But there is no single culture, single technique, or single tool you can bring in that will transform you, because these are paradigm shifts we're seeing here. You need to focus on the people, and you need to focus on the processes. If you're a vendor, sure, you can sell tools to help, but it's ultimately up to the consuming organizations to make these changes, and anything you do to help them through that cultural shift is really where you're going to get the value. It is ultimately about the processes: people considering the blast radius as they develop, asking what the cost of what they're doing is, what the right level of granularity is — constantly asking these questions as they architect — and driving that communication with their peers. So with that, I want to thank you. I don't know how much time we have for questions — I don't think we have time. So thank you very much. Thank you.