Hey, y'all, welcome and thanks for having us. We're going to talk today a little bit about how the DoD is using Istio to provide both end-to-end encryption as well as authentication. So I am Zach Butcher. I'm one of the founding engineers at Tetrate. I'm also a longtime member of the Istio community and one of the Istio steering committee members as well. Awesome. And I'm Jeff McCoy. I'm the CTO for Platform One, a DoD organization we'll talk about in a little more depth in the next two slides. But we're trying to help modernize software in the Department of Defense. Great. So today we're going to talk about a couple different things. One, we'll do just a little bit of context setting: tell everybody what Platform One is, what problems they're solving, and how they're going about solving those problems. Some of their use cases around service mesh specifically. Some of the pain that we've gone through on that service mesh journey. A little bit of the practical — as part of the use cases, we'll actually show you a little bit; we'll have a little demo there. We'll talk about some of the pain that Jeff and his teams have experienced as they've started down this mesh journey, and out of that pain, some advice for going down this path yourself. And finally, we'll mention very briefly a little bit of upcoming literature as well. So Jeff, take it away and tell us about Platform One, please. Yeah, so Platform One, in a nutshell, what we're trying to do is find a repeatable pattern that's somewhat open source, that's collaborative in nature, among the various DoD organizations, so that we can move the ball forward in how we think about Kubernetes, how we think about the cyber stack and all the tools that you get with Kubernetes. Service mesh is one of those components. It is not the only component. Service mesh is not a magic eight ball for us; it is one piece of a layered stack of security that we offer and that we try to optimize. And one of the goals of Platform One, hopefully over the next year or two, is to really push all this back to open source — both the things that we're building, as in writing code, greenfield applications, and also just the integrations we're building. You see a few tools listed there, like Repo One, which is essentially our version of GitHub, if you will. It's our open source site in the DoD, where we host all of this stuff. These URLs are changing in the next few weeks to dso.mil, but for now they're dso.io. And this is where we put a lot of our, you know, open source code. We're actually working to mirror this back to GitHub — the source of truth will still be our GitLab, but it will allow a wider audience to at least see it. And then with that, we also have Iron Bank, which is a way for the DoD to vet, validate, accredit, stamp, sign, and publish images in a way that we trust, using modern tools for scanning, whether that be Twistlock, Anchore, you know, OpenSCAP if that's your fancy, if you care about the STIGs. And all these other tools we layer on top of that just to validate the state of the images, to make sure that they're actually following a trusted baseline and a trusted supply chain, so we have a binary chain of trust that matters. We use the UBI or scratch or distroless images as our bases, and then we stack layers on top of those, sign those, and publish those out for people to consume.
And from that, we produce something called Big Bang, which is just a way for us to automate the deployments of things — on the DevSecOps side, we list out here that that's things like our chat solution, which right now is Mattermost; we use Jira, Confluence, GitLab with a lot of the stuff in that ecosystem, Fortify, SonarQube, OWASP ZAP, and a ton of other scanning tools we add to the mix — Twistlock, which we mentioned earlier, Anchore, and a few others. And we are continually evolving that pipeline. The whole point of the DevSecOps platform is to create a repeatable process for building code for weapon systems, both at the unclassified level and at the secret and top secret level, so that the engineers have the same experience whether they're sitting literally in a coffee shop or if they're sitting inside of, you know, a compartmented, secure facility with no windows and no cell phones. So we want to give a similar experience to engineers across the board, whether they're in that most restrictive environment working on nuclear weapon systems, or building web apps to show dashboards for bosses in the military or the department. And so really Platform One as a whole is a way for the DoD to modernize and optimize how we automate DevSecOps using Kubernetes and the various cyber tools that we'll go over in this briefing. Yeah, perfect. And so, yeah, some of the specific technology picks y'all have chosen to build the platform. Yeah, so I think a couple years ago when we first started talking about doing this in the DoD, some folks had already gone down this rabbit hole — you know, Rancher was a really great kind of gateway drug into this world for a lot of people who didn't know Kubernetes. So people who knew automation but didn't know Kubernetes in the DoD went down that path. My team chose to start with upstream Kubernetes using kubeadm, and we landed in JWICS after a few weeks of learning. That was with Istio, and JWICS is a top secret environment that was out here in Colorado. And so we had to learn a bunch through that. We made some of the news — we deployed this to F-16s last fall with the team out of Hill Air Force Base, using Istio and the same kind of basic stack. And what it proved for us — and this needed to be proven; this is not news to anyone at this conference — but it proved to the DoD stakeholders that you can take the same product, the same technology, and deploy it in a web app, in a SCIF, on a jet. We're preparing to launch satellites with the same technologies. The U-2 one was famous; we did that recently. And so the DoD started to latch onto this concept of what I would call more than infrastructure agnostic — it's really more about platform independence. And so with Platform One, we actually don't say, yep, thou shalt use upstream Kubernetes, or you should use Rancher, or you should use Konvoy or, you know, TKG or OCP — you pick your poison, right? We say just give us an N-minus-two compliant cluster. So, you know, a Kubernetes 1.17, 1.18, something like that, cluster. And then we will deploy our stack on top of that for you. What's really cool about that, though, is we have very different, inconsistent environments, so we try to segment those layers so we can move between classified and unclassified easily. And Istio is a very solid choice for the service mesh, because we had to have a service mesh. We needed the sidecar pattern. We live on Envoy.
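For readers following along, the "sidecar pattern" piece is usually nothing more exotic than Istio's automatic sidecar injection, which is driven by a namespace label. A minimal sketch — the namespace name here is hypothetical, not a Platform One namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mission-app            # hypothetical team namespace
  labels:
    istio-injection: enabled   # Istio's mutating webhook injects an Envoy sidecar into every pod created here
```

Once that label is present, every pod scheduled into the namespace gets an Envoy proxy alongside it, and that proxy is what the rest of the stack (mTLS, SSO enforcement, and so on) builds on.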
Istio has some growing pains we'll talk about, but it made a lot of sense even in the early days after we got through the initial hurdles of, holy crap, there's a lot of YAML, a lot of manifests, and a lot of things going on here. We've worked through most of those kinks, we think, now, so we're getting to a repeatable process finally with Istio. Yeah. And so today a lot of what we're going to spend time talking about is what we just bolded there, right? The kind of basic encryption in transit, which, if you're watching ServiceMeshCon or have watched it previously, you've probably heard a lot about, we'll talk about a little bit. And then we'll talk about that framework for securing applications and how we're starting to build that. Yeah. So the first use case — and I think, from my perspective, this is one of the primary reasons that I see mesh adoption — is encryption in transit. For a variety of regulatory reasons, for a variety of security reasons, this is what I almost call the standard mesh use case, right? Y'all had some interesting requirements though, some interesting snags, versus a lot of folks, in that you're doing this with a lot of legacy, a lot of off-the-shelf, and a lot of open source software that y'all were pulling into use, right? So I know we hit some pain points there that I think we'll talk about in depth later. But it was still not super easy to get encryption everywhere, correct? Yeah, it's a constant battle. Like you said, we'll talk about some more specifics, but the bottom line was the automation of encryption was super important to us. We have some hard requirements, especially in the classified systems. I mean, if you think about the consequences if we mess this up — the systems using this are not going to affect your Gmail, they're going to affect national treaties. So we have to be really careful that we get this right. So things like FIPS validation are becoming more and more important to us, which is something we're working through with the different Kubernetes layers. Right now we're going to be bringing that to other layers too, including the TLS rotation process. So there are all these layers here that we care about, but certainly it's not been an easy button everywhere. Every new tool we bring into the ecosystem is a new challenge for us to figure out how to make happy in Istio land when it's complex. Yep. On the flip side, though, the key rotation that Istio brings, I think, has really opened up a lot of doors, right? Because the kind of encryption in transit that y'all are getting today would not have been achievable with your legacy PKI, don't you think? Totally, yeah. Yeah, I fully agree. I think that's one of the biggest selling points — just the handling of cert rotation, the handling of issuance, the handling of verification, all these things that in PKI land we take for granted. And honestly, the DoD is one of the biggest PKI consumers on the planet with, you know, the common access card. It's how the entire force in the US government essentially does its business and authenticates into systems. So we're talking about millions and millions of active tokens, and so it's a massive system. So we understand the importance, but with that comes all the complexity. So the more we can make that more like cattle, the better off we are. Yeah. Yep.
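To make the encryption-in-transit use case concrete: in Istio this typically comes down to a single mesh-wide PeerAuthentication resource. This is a minimal sketch, not Platform One's exact policy:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars refuse plaintext; all service-to-service traffic must be mutual TLS
```

Behind that policy, Istio's CA issues and rotates the workload certificates automatically, which is the "cattle, not pets" cert handling Jeff is describing. And if you want those mesh certs rooted in an existing organizational PKI — the advice Zach gives next — Istio can be handed an intermediate signing certificate (the cacerts secret in istio-system) instead of self-generating its own root.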
Exactly. So that was just kind of brief — and again, we didn't spend a ton of time there on encryption in transit because, quite frankly, I think it's not super interesting for a lot of the folks here at ServiceMeshCon. You know, I will say, one of the big things that y'all are starting to do, and that we would recommend here: definitely try to root your mesh PKI in your existing PKI. That's going to make your whole life easier with respect to managing the mesh certificates and things like that. So that's just one learning I'll leave there. But if we transition over, I want to talk about the second major use case that y'all embarked on after encryption in transit, right? And that was giving — I'll call it SSO for free; not free for y'all, but effectively, how can you make it cheap for teams to onboard and start to secure their applications using the service mesh, right? Yeah, and, you know, we've talked about this quite a bit over the past year, but one of my first complaints when I first met you guys, you and Varun, was: hey, I just want automation in how I do SSO. And at the time it wasn't great with Istio, because it really wasn't like you could do authn and authz, right, against apps, that kind of stuff. There was no management of the user session — you either had a token or you didn't, and if you didn't, you got a big fat 403, and that just wasn't great. So getting to the point where we are now, SSO for free literally is: we want app developers to come in and build their software and not worry about the details; they just need to be secure. And we're doing this not just for the unclassified — the classified workloads work this way as well. And it's just a way to do a network break using the service mesh, you know, using Envoy filters on that connection, so we can stop traffic flow. And this is super interesting, too, for tools that we don't write and we don't control, that don't have OIDC or SAML or other typical standard SSO integrations — we can still protect those as well. And we've done some stuff with Keycloak to enable RBAC against those in the same realm, so that we can have fine-grained authz control over who has access to which workloads, even if the workload itself is ignorant of the RBAC being enforced upon it. Yeah, which was super cool. I actually don't know if I realized we get authz for some third-party apps with this as well — I obviously knew about the authn. And so the big way that this is achieved is using some tooling out of the Istio ecosystem, the authservice in particular. It acts as a shim between Envoy and OIDC using Envoy's external authorization API. I'm sure other people will give talks about that today, but Envoy provides, exactly like Jeff said, a set of filters that call this external auth API, the external authorization service, that effectively provide a network stop, right? They give us a handle to insert arbitrary authentication or authorization logic. And so we'll look at some of the config in a minute, but authservice y'all have basically deployed — you minted it as part of the standard stack that is the Platform One deployment, pointed at your existing identity providers.
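A hedged sketch of the kind of mesh-wide EnvoyFilter being described, in the Istio 1.7 syntax. The label value, authservice address, and port here are illustrative assumptions, not Platform One's actual Big Bang values:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: authservice-ext-authz
  namespace: istio-system            # the root namespace, so it can match workloads in any namespace
spec:
  workloadSelector:
    labels:
      protect: keycloak              # the label a team adds to opt a workload into SSO
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router   # anchor: insert our filter before the router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.ext_authz
        typed_config:
          # the "@type" field that caused confusion in the 1.6 -> 1.7 move
          "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
          grpc_service:
            google_grpc:             # the gRPC client shape that worked for this setup
              target_uri: authservice.authservice.svc.cluster.local:10003
              stat_prefix: ext_authz
            timeout: 10s
```

Because the filter lives in the root namespace, the workloadSelector matches labeled pods in any namespace, which is what makes "add a label and you're protected" possible.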
And then, yeah, like we said earlier, label the pod and it goes. I know there were some enhancements that we — I say we, I didn't actually make them, but y'all's team made to authservice as well, to actually make it fully usable for your use case, correct? Yeah. And, you know, in the ecosystem — you see this in all open source projects — there are people with the same problems, right, the same scenarios, the same challenges; somebody has the same issue and comes in and does a pull request, and so on and so forth. And we provided some code — there wasn't much to change really, frankly, it was very small — and others have provided code as well. So a Redis backend was one of the ones that was in the works, and some other tweaks we had there. There were also some issues with just the way parts of it worked that were breaking down, with Envoy going back to authservice and actually crashing authservice. So very minor things, very minor code changes, that fixed very breaking problems for us. And just the transition — which we'll talk about later on — as the API changed in Envoy and we had to change the Envoy filters, was pretty painful from Istio 1.6 to 1.7. Yeah. And we'll definitely dig into that in a second here. But actually, you can see here on the right side the outcome of that, which is a new and updated EnvoyFilter config for Istio 1.7. And this is one of the things that was a little bit painful in the migration; we'll get into it a little bit. But effectively, like we said before, we have this mesh-wide EnvoyFilter — notice it's in the istio-system namespace, so it's going to be the default: unless somebody has an EnvoyFilter more specific than this one, this one applies. And teams deploy, again, with that label — in this case the label is protect: keycloak. And what we'll do is exactly what the EnvoyFilter says: we'll go in there and insert the external authz filter for all Envoys that are handling HTTP requests. You can see we're inserting before the Envoy router, which means we handle it before the HTTP request flows onward. And it does the full OIDC flow, right — the redirect for authentication and then coming back. And I think we'll even show you, yeah. And so like we said, authservice is deployed in the mesh. One of the things I want to talk about too, that's an important idea, is the idea of using the mesh to provide operational assurances. We'll talk about this at the end very briefly as well. But, you know, the mesh is providing security between the authservice deployed in the mesh and the workloads in the mesh, right? Just like the mesh provides security between arbitrary services, effectively the same security that the mesh offers any arbitrary workload we can use to secure our authn and authz services and gain additional security benefits — those operational assurances that the mesh provides for our authorization and authentication services themselves, right? And so this gives us a powerful set of tools. It's really nice when we get to a system that can kind of model itself, that can represent itself, right? And with the service mesh, we can start to do that — there's a small sketch of that idea below. And so that's a really, really important idea that I want to call out. And with that, I think we can actually show you how the system actually works.
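As one small, hedged example of that operational-assurance idea — using the mesh's own machinery to protect the auth components — an AuthorizationPolicy can require that only mTLS-authenticated mesh workloads ever reach authservice. The namespace and label here are hypothetical:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: authservice-mesh-callers-only
  namespace: authservice          # hypothetical namespace holding the auth components
spec:
  selector:
    matchLabels:
      app: authservice            # hypothetical label on the authservice pods
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["*"]         # any verified mTLS identity; callers without a mesh identity are denied
```

The same policy language teams use for their own apps ends up securing the service that does the authentication, which is the "system that can model itself" point.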
Yeah, let's see if this transition screen sharing stuff actually works without blowing things up. Yeah, it's always fun — the wonders of conferencing. Did it work? Yeah, I think it did. Exciting. So, inside of Platform One we have this sort of concept we're trying to stamp out right now called Hello Worlds, which are just examples, distilled-down examples, of how to use parts of our Big Bang product. And this is the first one we did, that I've put the time in on myself directly, just because I wanted to get it right, because I've been stuck in Istio land for a little bit. But basically we have this simple little script. It uses K3D, the Istio CLI, and — I'm sorry, mkcert, not cert-manager — to generate basically a simple little Hello World setup for you. So I will go ahead and kick that off. This is actually already running, but I'm going to kill it; if I run the same script, it'll destroy the cluster and recreate it. And if you haven't used K3D, it's similar to KIND, but we have quite a few Rancher people on our team and they have convinced me to love K3D because it's just really dang fast, as you can see. It works really well. But KIND is also incredibly valid here; we just happen to use K3D a lot on Platform One. No hating at all against KIND, because we also love it. A little hate against minikube — that's a different story. Yeah. So what I'm doing is cluster delete, cluster create, all basic stuff; you guys all know this stuff. And then istioctl is going to do its thing. As a reminder, nothing fancy here: we're just creating a basic cluster with some 80/443 load balancers, installing some certs that we generated — that my browser will not trust, because they're mkcert certs — and going from there and deploying it all out. So with this setup here — and it should be just about done now — what we try to do is distill down what's what. So, Hello World. Podinfo, if you don't know, is a super great open source tool that just shows you a bunch of great stuff about a pod running in your cluster. You can use headers or environment variables and do some tests against it; there's a whole Swagger API you can check out. But we wanted to show, if you took some random workload, in this case podinfo, and then protected it, what it would take. And we use Kustomize to do the patching. If you're not familiar, it's pretty basic: point at a remote resource and add this patch to merge here — there's a sketch of that kind of patch just below — which is going to now do the enforcement for us. And then the config for authservice, which is frankly a little verbose right now. We don't super love this still; there's still some work to be done here. You know, we have tracing defined right now, so it's super highly verbose, just against localhost. And yes, this is a client ID that's publicly visible, but it's only valid against localhost — hence the "this is not a real secret, bro," so don't freak out. We pass in both the bearer and authorization headers, which is what you need if you're doing authn and authz types of stuff with the service mesh beyond the ext_authz service. We've actually removed all the code we used to have that created those extra filters per workload — teams can still do that if they want to, but they don't need to anymore.
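A hedged sketch of the kind of Kustomize overlay just described: point at a remote podinfo base and merge in a patch that adds the label. The remote URL and file names are illustrative, not the exact Hello World repo contents:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/stefanprodan/podinfo//kustomize   # upstream podinfo base (illustrative reference)
patchesStrategicMerge:
  - protect-patch.yaml
---
# protect-patch.yaml: the only change is the pod label that opts the workload into the SSO EnvoyFilter
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  template:
    metadata:
      labels:
        protect: keycloak
```

The workload itself is untouched; the patch only adds the label that the mesh-wide filter keys off of.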
So if you wanted to do further filtering — some sort of authz thing against the claims on the JWT — you could do that, but we just don't require that. For our engineers, literally we say: go read the header, the JWT, there's your token, parse it out. We've given an example of what that looks like from our SSO — some basic information, what it would look like to them, showing the format — and they go from there. Right, so this is booted now, and we can see these labels exist here, protect: keycloak, as expected. So I'm going to go over here to the browser, and I'm currently logged into a session — or hopefully I still am. So I'm going to go ahead and go to localhost with the network traffic logging up, so you can see this. All right, so you can see what happened here. I don't want to show you my tokens, but basically we were at localhost, it redirected to our production SSO — we do that on purpose; we want the engineers to see it live with their real credentials, so they can actually see their headers and see their claims. And because I'm already in an authenticated session — basically Keycloak has a running session right now that's still active — it redirected and now we're back here. Now, to show you another example here — we went back and forth on how to show this without exposing too much about credentials or, you know, how we do this stuff — we're just going to let it redirect to a login. Basically, as you can see here, it redirected to the login screen as expected, and now you're at our little baby login screen. And just as a gee-whiz fact: so we do Keycloak, and there's a whole bunch of extra stuff we're doing here. We built a whole custom framework around this so that we can do the RBAC and OIDC stuff at Keycloak before it even touches the workloads, before it even touches the OIDC consumer. So we actually do some interesting checks. Even beyond this, we actually have AppGate for authentication, so we do a bunch of layers above just this. But at the very basic level, you're at an OIDC provider trying to get authenticated, trying to follow the OIDC OAuth2 flow. We're stopping here, obviously, because we don't want to show any more about how we do the DoD side — you can go look it up, it's very basic, it's all the same auth flows. We also do PKI here, but because my smart card is removed, it's not prompting. But this would flow to podinfo and then you would see it. And that's kind of the workflow through there. Let's switch back over to you, Zach. Yeah, thanks. Awesome. And so, obviously, podinfo is a wonderful example of a third-party application that we're able to provide authentication and authorization over, right? And so with that, we want to pivot a little bit — but I just want to call out that this is incredibly cool, right? The fact that we can have SSO against production identity servers for any single application by adding a label. That is awesome, and a huge time savings as well for a lot of your developers. Because historically, if they had wanted to do this, they basically would have had to maintain this entire infrastructure themselves, correct? Not the identity servers, but they would have had to handle calling OIDC, the whole flow, right? And I would say just one other thing on that. We've done this a whole bunch; we've gone through dozens of iterations on this concept.
At Platform One, with many different systems — some of the most significant systems in the DoD, some very basic things, and everything in between. Initially, you know, we had apps going and building their own SSO — literally standing up a Keycloak or using some other third-party SSO (the Air Force has some that they provide as well), consuming those, and then writing these authentication brokers in front of their applications. Very complex, very risky. And then we had others who would say, okay, fine, we'll use this SSO via OIDC — or SAML, excuse me — and then, you know, consume it some way through the application and manage the state there. So all these different variances. Then we finally got to the point where, with authservice — and before that, a few other similar tools we used — we were using Envoy filters that were really complex. The Envoy filters were getting really messy, very big, a whole bunch of stuff going on; it took a dang degree to read through those. That was complicated, but then it was worse than that, because you also deployed authservice as a sidecar attached to the workloads. So you're not just deploying this EnvoyFilter configuration — the authn and authz filters in Istio — you're then also deploying this other sidecar thing. Oh, and by the way, if you have more than one replica running, which is what pretty much anyone does in these systems, you now have multiple sidecars, right, for each of those different replicas, and now your state isn't synchronized, so you have to do things like a backend Redis to manage state. So now you have this really complex thing — it was so complex that we actually used Cookiecutter to try to automate the deployment of all those manifests, because it was a lot of manifests, a lot of state to manage, for engineers who barely know Kubernetes and aren't expected to manage all those complexities. So we've now broken it back down to: literally add a label to whatever workload you want, and it will now be protected. Yep, which is just awesome. However, there was a lot of pain to get there. And we've kind of alluded to it throughout, right, but I think it's worthwhile for us to call out and call attention to what I think are some pretty important areas that need to be improved. And I think, you know, Jeff maybe can speak to this one, but maybe the single biggest area that is still rough today is upgrades. Upgrades are still really painful. I think they're a lot better than they have been historically — and maybe Jeff, chime in there if you want to confirm, kind of the earlier days versus now. Yeah, exactly. So today, for example, there is an API now — that's phenomenal. Having an API for upgrade is such a big improvement over before. Unfortunately, the discoverability of that API is really bad, and that has caused us quite a bit of pain, right? In particular, for example, there's a whole lot of values that used to exist in the Istio Helm charts that you would then set in the values stanza of the Istio Operator API, and it was basically the escape hatch, right? And so for that reason, it was not very well documented what the values you can set even are. Then, to kind of compound that, a lot of those values over time have been updated and promoted into first-class fields on the operator API itself, which is phenomenal — the sketch below contrasts the two. That's great, right?
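To illustrate the two layers being contrasted here, a hedged sketch of an IstioOperator resource mixing a first-class field with the legacy Helm-values escape hatch; the specific settings are illustrative:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control-plane
  namespace: istio-system
spec:
  profile: default
  meshConfig:                    # first-class, documented API surface
    accessLogFile: /dev/stdout
  values:                        # Helm-chart escape hatch; which keys are valid here is thinly documented
    global:
      imagePullPolicy: IfNotPresent
```

Each release tends to promote some `values` keys into first-class fields, and tracking which ones moved where is exactly the upgrade bookkeeping that's hard to do today.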
That's exactly what we want to see. The problem is there tends to be kind of lackluster documentation when that happens, right? And so there's not a very good guide for how you might keep your operator API, your operator CR, up to date over different versions of Istio. So I think that's one of the big areas that could use quite a bit of improvement today. Yeah, Jeff, I don't know if you want to add color to it, beyond just the weeping and gnashing of teeth. Generally speaking, yeah, from a consumer's perspective, Platform One, we are definitely probably third graders in Istio land still. I mean, that's why we brought in Tetrate, to help us when we get stuck. And I'll mention the Envoy filters — I had Zach read line by line with me through the Envoy filters to explain to me what's happening and why. There have been just lots of gotchas throughout the past year — we're, I guess, almost two years now into doing Istio stuff. And just from a consumer perspective, it's so much better. We were so excited about 1.5; we jumped on 1.5 immediately, and that had consequences in 1.6 land. We basically had to skip 1.6 because we had perpetual issues, but we're finally migrating our workloads from 1.5 to 1.7 and getting better every day. But it definitely — there's been a lot of me screaming at Istio, right, in our team, saying, why are we doing this? But we're finally starting to see the fruition of that work and the stability in the APIs, and it's come a long way. Nice, awesome, I'm glad to hear it. And then to carry on in that same vein — I just complained about the operator docs; other docs in general need some improvements. I think we talked about that EnvoyFilter specifically, so we'll revisit that one now. The syntax on the EnvoyFilter changed going from Istio 1.6 to 1.7. Now, there was actually nice documentation going over that change if you went and read the detailed upgrade notes for Istio 1.7. Unfortunately, the underlying documentation under that — so we were told, hey, the EnvoyFilter syntax changed, that's great. Unfortunately, that syntax change included adding some things like this @type field, and so one of the great mysteries is: what do I put in the @type field, right? And so there are still some holes in the documentation today. This is another example where I think Istio is getting a lot better — the fact that we had those detailed notes in the 1.7 upgrade that talked about the change in the shape of the EnvoyFilter API, that's great; we haven't had that level of detail before. Unfortunately, there's still a little bit of a gap that we need to cover, I think, in terms of really making them very usable and easy to use. Yeah. I think one of the things we ran into that's just kind of weird, that I still don't fully understand even looking back at it, was we had to actually change the EnvoyFilter gRPC stanzas we were using, because the envoy_grpc one was not working at all but the google_grpc one worked. And it's completely different syntax — like, it's totally different syntax; you can see the two shapes below. Yeah. And so this is another kind of key problem with EnvoyFilter. This is maybe a problem unique to EnvoyFilter, because it's that break-glass between Envoy and Istio. But we had kind of the double whammy of not just Istio 1.5 to 1.7 changing the EnvoyFilter API — in fact, the actual underlying Envoy configuration itself also changed. And so we had this double change that we had to figure out; like Jeff said, there's a change in how the gRPC configuration works inside of Envoy too.
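The gRPC stanza change Jeff mentions looks roughly like this — two different shapes for the same ext_authz grpc_service block. The cluster and service names are illustrative:

```yaml
# Shape that did not work for this setup: Envoy's built-in gRPC client,
# which points at a named Envoy cluster.
grpc_service:
  envoy_grpc:
    cluster_name: outbound|10003||authservice.authservice.svc.cluster.local
  timeout: 10s
---
# Shape that did work: the Google gRPC client, configured with a target URI directly.
grpc_service:
  google_grpc:
    target_uri: authservice.authservice.svc.cluster.local:10003
    stat_prefix: ext_authz
  timeout: 10s
```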
And so that kind of double whammy, again, really just goes to that lack of some of that second-layer documentation, right? Which, you know, in the EnvoyFilter case might be a little hard to provide, but is really needed for usability. You know, Jeff, I'm sure you have a lot of stuff to say here, right? But maybe the short summary is: it's still pretty challenging to roll out Istio to a large team. Yeah. I mean, all the love for the Istio team — it's definitely better. But there are challenges, especially when, in the DoD's case, we just lack Kubernetes expertise in the department overall. So it's contracted in, right? And the contracting quality is extraordinarily varied. I have met few, if any, teams at any point, in any service in the military, where I walked away and said, wow, they really get service mesh. I just haven't found it at all. And so it's hard. And then specifically with Istio, we've come a long way in trying to simplify that stuff, but there are always these crazy, weird gotchas — whether it's the gateways or service definitions, or service entries, or some egress gateway-ism we've run into. There's always something new, some new pain, we find. But we're also perpetually evolving the complexity of our stack, too. We're deploying both all these DevSecOps things, which is, you know, your collaboration tools, developer tools, developer environments; but then we're also deploying all these very custom, specialized workloads, some of them for embedded systems and weapon systems. And so it's a completely different ecosystem, and soon we'll be running real-time operating system workloads as well, for aircraft. So there are all these really complex things happening under the hood, such that having something like Istio is both wonderfully powerful for the security and the flexibility you get from it, and incredibly frustrating when you're trying to do debugging. And so istioctl analyze is one of my new best friends for Istio-isms. When I don't understand something, it does a pretty decent job of helping me understand what's happening. So I think one of the most valuable things the ecosystem's introduced recently is the CLI, and actually making it truly useful for us to help troubleshoot. Yeah, I'll just echo that and say that I think the istioctl work across a bunch of the different commands there — and a lot of that's come down to the Istio user experience working group — y'all are doing a great job, right? If we're going to call out one of the best areas of the project today, in my opinion, I think y'all are doing great stuff, and really a lot of the tooling has come a really, really long way. There's still a lot more to go, right? Even catching little errors — I have an example here, right? Jeff and the team had changed some gateway ports, and another group had a VirtualService that matched on port number. That seems innocent enough until you're trying to figure out why traffic doesn't flow for one group across 20 different YAMLs, and you just don't see that, oh, this 443 needs to be an 8443 — the sketch below shows the shape of that mismatch. And, you know, little things like that. And bonus points: it's using Argo to pull it in, so they're not even all in one spot — you have to go look at the cluster itself to see it all, because you won't see it all in one place. Exactly, exactly. And so that's some of the areas where I think things like istioctl analyze still have room to grow, right?
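The gateway-port example would look something like this: the platform team's Gateway moves to 8443 while an app team's VirtualService still matches on 443, so its routes silently stop matching. Hosts and names are made up for illustration:

```yaml
# Gateway owned by the platform team, updated to listen on 8443
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: main-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8443                  # changed from 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: wildcard-cert # illustrative TLS secret
    hosts:
    - "*.apps.example.mil"
---
# VirtualService owned by an app team, still matching on the old port
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
  namespace: my-app
spec:
  hosts:
  - my-app.apps.example.mil
  gateways:
  - istio-system/main-gateway
  http:
  - match:
    - port: 443                     # never matches anymore; traffic quietly stops flowing
    route:
    - destination:
        host: my-app
        port:
          number: 8080
```

Spread across twenty files pulled in by Argo, that one number is easy to miss, which is exactly where better static analysis would help.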
There's already pretty promising work there to begin with, and it's already super helpful. I think as we get more mature, we can start to stamp out a lot more of those kinds of small gotchas and misconfigs, right? Which, at least in my experience, are the vast majority of the angst of Istio config debugging. Oh yeah, for sure. It's always just little things. Yep. And then finally — and we'll go pretty fast here because we're nearly out of time — you know, Jeff, I don't know what you want to say about some of the OSS and pre-packaged work. You know, it can be challenging for some of these applications to get Istio to run there. I think there's opportunity for the Istio community to work with some of these other open source communities to provide packaged setups of, hey, here is Istio with GitLab or, you know, whatever it is. Yeah, I think for most of the third-party tools, where we're at right now is: our basic idea is North-South traffic we're going to protect — we're going to do the TLS, the mutual TLS enforcement, and all the good things we've been talking about — for third-party applications. That's just how it's going to work. For simple third-party applications, you know, East-West we can totally do — mTLS within that namespace is totally reasonable. But something like GitLab — and this is no slight on GitLab, it's an incredible tool — it's just massive. We deploy a lot of different GitLab instances right now, across multiple levels of classification and different environments, and it's just a very large Helm chart, essentially. And so there's just a bunch of stuff going on there. So trying to get mTLS enforcement across the mesh, you know, injecting sidecars for all those different workloads — it just doesn't work. I mean, we could play whack-a-mole all day trying to get those to work, but it just doesn't. Then we have weird ones like Jitsi, which is a VTC option. I actually met with the creators of Jitsi at 8x8 — super helpful, very informative about the architecture and how they do it at scale for millions of users. It's all VM-based orchestration, but it's not Kubernetes. And their statement to me was: it just won't run on Kubernetes. I understand why they said that now, because it was literally a nightmare, and it's still not fully done. We were actually waiting on Istio 1.6 for a change that allowed us to do an association by, I think it was request URI, for our sticky sessions. But 1.6 was broken for us, so we ended up doing 1.7. And we actually ended up leveraging some Istio stuff with Jitsi for North-South and East-West, but then only partially, because Jitsi under the hood uses WebRTC and a lot of UDP traffic — Envoy supports it, but the Istio support wasn't there. So we did this really complicated orchestration of network load balancers in AWS and then passed that through on host ports. A big mess. So definitely a mixed bag right now in the open source ecosystem. Yeah, but maybe slightly greener pastures elsewhere in the open source ecosystem: authservice has been pretty great, and y'all have been able to achieve quite a bit with it, although still some pain there, right? Yeah, I mean, we bet on that technology. We knew it wasn't as well supported, and we watched carefully the folks who did the work to build it — I believe they were out of Pivotal originally and now VMware.
And I actually intend to reach out to some folks who are on that side, and that we work with regularly, to find a way to get more involved there, because I know that those three or four core contributors are not really focused on that right now. So it is a little slower sometimes — you know, it's just typical open source pain. So Platform One's offering to help now. We're going to try to formalize that a little more, just to figure out how we can lend a hand to keep it running, and maybe down the road we'll see it more integrated with Istio itself, which would be wonderful. Yeah, no, I agree — I would love to see that. Awesome. And so, we'll go very quick here because I know we're running short on time. If you're looking to do this yourself, I think maybe the two biggest things that we would say are: standardize as much as possible, right? I think especially as we're getting the 1.5 to 1.7 upgrade kind of wrapped up — having snowflakes, having different special things, really cuts down on the speed at which the entire organization can move when you're in an org as big as Platform One and some of your customers, right? Yeah, absolutely. I think the snowflake comment is huge for us. Like, we can't afford to have pets and we need cattle, because our stuff has to be able to redeploy fast, to come up and down over and over. Yep, yep. And then, by the same token, we want to think very carefully about what exactly is being exposed to developers and what the cognitive load there is, right? I think Jeff did a wonderful quick summary of the iterations that they went through in getting to the "label your pod for SSO" that they deliver today, right? And a lot of that iteration was really getting at, finally: hey, what is the cognitive load that we really need to inflict on our developers for them to get the benefit, and how do we minimize that? Which is really just good API design, right? And the more you can spend some time thinking through what that needs to be ahead of mass onboarding, I think the easier time you'll have with your overall mesh adoption. Yeah, I just have one other thing there real quick. Beyond just developers' capacity to understand and even worry about something — the cognitive load, as you mentioned — there's also the point of security and focusing your efforts. So now we don't have 50 different ways of implementing SSO authentication. We have focused ways we're doing it, and we're consolidating this down more and more. We still have deprecated and grandfathered ways we're doing it today that we're moving over to this way over time. But the point being, we can focus our red team efforts now, and our assessors — and from a DoD perspective, accreditation is super important; in other industries this is important as well. There's a lot of paperwork, a lot of bureaucracy behind this, but we have certain checks and things we have to do, and now those questions that used to be every app team answering — you know, those five, six, seven questions about authentication and authz, authn, RBAC, etc. — we can now answer for them at the platform level, and app developers can inherit those things and focus on their core business versus context-switching junk that they could potentially mess up anyway and jeopardize their security footprint. Yep.
And this really goes to the heart of what I believe is one of the most powerful benefits of the mesh, which is that it allows these small central teams to create these big wins for the entire organization, right? Who knows how many engineering hours you save just in the security audit, right, that every individual program had to go through before. And so, in that vein, we really do believe that the service mesh is one of the best ways to start to enforce this — obviously the service mesh is just part of the stack, but we believe that these meshes really are the best way to start to ensure security across an organization. And one of the benefits of working with the government is that we can work with organizations like NIST to help publish standards around this, right? Previously, there had been NIST SP 800-204A, which laid out some basic security guidelines. And the thing I'll close on is that we actually have another upcoming SP, which will have a call for comment hopefully starting in December, that is really about this idea of the service mesh as an operational assurance framework, right? It will provide some guidance on how you can safely deploy a service mesh in a way that gives you these operational assurances, and we'll talk about what those operational assurances are in detail. And then we'll talk about how you can use those to deploy other systems, like authn and authz, and make the overall system more secure leveraging the service mesh, right? Really getting into the meta — how we use the service mesh to deploy even the things that the mesh itself uses. And so we're pretty excited about that, because like we said, we really do think the mesh is key for security moving forward. And with that, I think hopefully we'll take some questions. Jeff, anything you want to close with? Nope, we're excited. Yeah, awesome. Thank you all, and good luck to folks that are looking to do similar things with their own meshes.