All right, hey everyone, I'm John. I've been working on Istio for over five years now, so I'm really excited to be up here and answer any questions and whatnot. Louis? Hi, I'm Louis Ryan. I've been on Istio a little bit longer than John. I won't hold that against him. But yeah, just excited to be here, talk to you all, and hopefully get some inspiration as well. And I'm Mitch Connors. I'm a software engineer and product manager at Aviatrix, and a cloud native ambassador. And I think I was on Istio for like a month before John joined, so I just squeezed it in there. So yeah, really excited to be here and chat about whatever comes to mind for everyone. Yeah, that's awesome. I forgot to mention: every panelist has been purposefully selected because they also serve on the Istio Technical Oversight Committee. So thank you all for your service. The other person we're missing is Eric Van Norman. Unfortunately, he couldn't travel to KubeCon. He's the second or third largest contributor to the Istio project. All right, so I'm going to start with one question. And I would love, love for you all to come up to the mic there. If you have any question regarding Istio, the ambient announcement, or any of the things you want to talk about, any concerns in the Istio community, or even just to show us how much you love Istio, we would love to hear it. And I have some t-shirts. I believe I have around 10 t-shirts, the same t-shirt I'm wearing now, which I'd love to give away to the people who ask the first 10 questions. If the sizing doesn't work, you can always come to the Solo booth to exchange it for a different size. But the point is, I want you all to have the same t-shirt. John just told me it's very high quality. So the first question is: what are the current technical hurdles around Istio? I want one of the panelists to quickly share their thinking about the current technical hurdles we're seeing around Istio.
So Mitch, maybe start with you first this time. That's good. That means that no one else can claim my answer. So I'd say one of the biggest technical hurdles from my perspective is API maturity stagnation. We've set a very high bar for what a GA API is, or a stable v1 API. As a matter of fact, we set it so high that I'm not sure we'll ever reach it as a project. So we've been reevaluating that over recent months. We don't want to be sending the signal to our users that, yeah, it's a beta product. It's six years out there. It's been adopted in thousands of companies across the world. It runs all of your financial transactions, et cetera. But it's in beta. We're not really sure about it. We're not confident about it. And, by the way, our beta deprecation policy is three months, so we might just change our mind and turn it all off and break it all. That's not what we want to be telling our users at all. So we are making a concerted effort to move APIs to v1 this year to communicate to users that these are stable, they're ready for production, we are not going to yank the rug out from under your feet, and you can depend on them. So I think that's been a friction for users for a number of years. And I'm excited that, I think, it's predominantly the Microsoft team driving those APIs forward to maturity; I really appreciate their efforts. Yeah, I guess what I would add is, obviously, Ambient is kind of the new thing that is taking up a lot of the active development time. And there are all sorts of new challenges that have come up from that. We saw a great talk this morning about some of the challenges with running Ambient on kind of arbitrary clusters and all the different creative things we had to do to make that work. So that's a big area of development. A lot of the other technical challenges in Istio have been very long, multi-year processes that are kind of well known.
Ambient is kind of this newer thing that we're still learning as we go. And of course, we're approaching beta now very soon. So now we're kind of in the execution phase: we've figured out where we want to go and what we need to solve there. Yeah, Louis, what are your thoughts? There are lots of them. Istio is a big project that covers a large surface. One of the things you can tell from what Mitch said and what John said is that we obviously progress features to stability at different rates. That's actually one of the big motivating factors in Ambient: the fact that it provides for composition, not just at the API level, but also at the infrastructural level. So getting that to the place where people can start to leverage it is going to be pretty important. It was actually exciting for me to see somebody use Lua. But they were using it inside EnvoyFilter. If you were to pick one thing inside of Istio that we would like to work a lot better, it's the extensibility API. And so I think when Ambient settles down, we need to put more investment into some of that to really help users out, because those use cases are critical, and we see them every day. Yeah, the only thing I want to add is that in our 1.21 release, we introduced the compatibility version, and John here did a lot of work around that. I think upgrades have been one of the biggest hurdles we've had in Istio for the longest time. Over the project's seven years, we made a lot of decisions. Sometimes they made sense at the time, but after hearing from many of you, we decided that wasn't the right decision or the right move. So upgrades have continued to be the biggest hurdle we have in Istio. I'm really hoping the compatibility version we introduced in 1.21 makes that easier for every single user out there. With that (and I'm sorry about the emoji, I don't know what happened), I want to see if anyone has any questions. Who'll be our first question? Thank you.
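For reference, the compatibility version mentioned above is set at install time. A minimal sketch, assuming Istio 1.21 or later and the IstioOperator install path (the exact value path may differ by release, so check the release notes for your version):

```yaml
# Illustrative: run the newer control/data plane with 1.20-compatible
# defaults while validating behavior changes before fully upgrading.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    compatibilityVersion: "1.20"
```

The equivalent one-liner would be roughly `istioctl install --set values.compatibilityVersion=1.20`.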
I thought I'd break the ice. My company has recently started adopting Istio. We have two workloads on Istio now as of about a month ago. The thing that we are most interested in is multi-cluster. I'm on the SRE team, and the kind of challenge we're facing is, via automation or GitOps, actually configuring multiple clusters. Because you've got to do the certificate sharing, and then you've got to get a token from one cluster into the other and vice versa. Any advice around that? Or any upcoming features or plans that will make that easier? I think actually, if you were here for the previous talk from China Mobile, you saw a very compelling way to automate the setup of a multi-cluster mesh and onboard new clusters into a multi-cluster paradigm. Also, that pattern of having one central cluster that houses your control plane and your config, where each of the other clusters effectively acts just as a data plane to that central cluster, has proved a pretty solid pattern across the industry. And I think you'd do well to reach out. I didn't see where those folks went, but you should get in touch. Yeah. Perhaps the other thing I would add: the trust structure and CA provisioning are going to depend a lot on what vendor you've chosen. So take a good look at that as well, and the facilities that different CAs provide, whether it's one of the other CNCF projects like SPIRE, a commercial vendor (there's quite a lot of variability there), or a CSP solution. All right, go ahead. My question: we've been using Istio for some time at eBay, and one of the challenges that we've run into, that we don't necessarily have the expertise for, is getting it onto the ARM platform. Is that something that you have thoughts on, that you're heading towards with Istio? Do you recommend going there? We've had some trouble building it. Yeah, John, you want to talk about ARM?
Yeah, did you say you've tried it and had issues? Yeah. I mean, in Istio, we added ARM support a few releases back. It should just work out of the box, without any configuration or anything to do. If you have ARM nodes, it will just work. There have been plenty of people running on ARM on various clouds. We've seen some cool demos on Raspberry Pi and whatnot, which have also been fun. The best way to... I mean, 1.15 maybe? It was a while ago. It's been a while. If you're on a version that's old enough to not have ARM support, please upgrade for other reasons. Yeah, that's a good shout out: istio.io/security. The best way, though, to get support for a new architecture is to buy the developers laptops on that architecture. Yeah, awesome. Great question on ARM. To summarize, upgrade to Istio 1.15 or later if you need ARM support. The next question? Hi, my next question is probably for John. We are heavy Google Cloud users, and we love it so far with everything. But when we try to enable managed Istio and then set up some tracing and other stuff, it feels like, once you try to use it based on the Istio documentation here and there, it comes as a limited edition provided by Google. Is there any plan to extend it and have an even better integration? Or maybe it was just me not using it right? Yeah, I mean, we can chat afterwards. I don't think we should, you know, discuss vendor roadmaps on this panel. All right, so we're going to take that question offline. Do you want a t-shirt? Okay, I'll try it. Better? Awesome. Please ask your question. Thank you. Thanks. So assuming Ambient goes well and you've got loads of time to work on other features, can you talk more about how you would like to extend the extensibility API? Like, what are the features you would like to get out soon? So, you know, there's obviously the capability of the data plane to be extended, right? There are kind of two basic extension mechanisms, right?
Either the data plane calls out to some system using a standard interface, like ext_authz in Envoy, and I think there's probably a push now to use ext_proc a bit more than ext_authz, because ext_proc is a bit more general purpose, right? You can do things like content transformation in it. So there's the call-out pattern, and then there's also running some embedded code, whether it's Wasm or Lua, which are the two embedded runtime options. And so really, it's about enabling those use cases and giving an API that explains well the context in which that code is going to be run. So there's already a Wasm API that's still alpha, right? It needs to be progressed. It's probably missing a couple of features, but it's a reasonable outline of what you could expect for APIs in that area, in terms of: I want this piece of code to run when this happens, right? Wherever you want it to run. So that will be the focus of the API, and also making sure that that API works well with the Gateway API pattern and the GAMMA patterns that we're adopting as part of Ambient, right? It has some particular API design patterns and idioms, and I'd actually suggest people go to some of the Gateway API talks here at KubeCon to get an idea of what that API surface looks like and how these extensions fit within it. So that will be the focus, I think, in the project. All right, thank you. I think extensibility also goes hand-in-hand with ecosystem integrations. We decided a long time ago that Istio wasn't going to do everything that the network could possibly do. There are plenty of other tools out there that are great at being API gateways or other things. And so part of the Ambient pattern is going to be making it easier to integrate those other things into your service mesh, so that you don't need to find ways to translate, say, your Nginx behavior if you have a really long Nginx config that's lived for 15 years.
No one wants to touch it because you don't know what's going to break. You don't need to translate all of that to Istio config. You should be able to say, hey, for this service we're going to spin up an Nginx gateway, and Istio is going to send traffic to it, and we'll know that we have continuity there because of that. So extensibility and integration go very much hand-in-hand in Istio. Thank you. Thanks for that great question. Next, please ask. My question is regarding rate limits in Istio, and whether you recommend using the Envoy rate limit service for this purpose. In our company we still have workloads on our on-prem clusters, and we are implementing Istio both on-prem and on the GCP side. On the GCP side, Cloud Armor works quite well and solves our problems, but on the on-prem side we want to implement Istio, and we are trying to figure out whether using the Envoy rate limit service is a good way to go in production, and whether we can use it in a pattern like rate limits per team or per workload. Yes? So, obviously, the rate limiting support in Istio itself is fairly thin. Envoy has its built-in rate limiting feature, which is a local-only rate limiting feature where any one Envoy enforces a rate limit. Very often that's not what people want. They want a unified rate limit for their service across many instances. Then you're into the world of choosing one of the N different solutions. Istio is not in the business of recommending any one of those solutions. Obviously it's an ecosystem. I've seen a variety of them used. Vendors obviously provide their own that are tested. If you're working with a vendor, you should obviously be talking to them about what their solution is and evaluating whether you feel it meets your needs or not.
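As a concrete point of reference for the global rate limit service discussed here, the Envoy (Lyft) ratelimit service is driven by a descriptor configuration that looks roughly like this; the domain name, descriptor key, and limits below are purely illustrative:

```yaml
# Sketch of a config for the Envoy (Lyft) ratelimit service.
# "my-domain" and the PATH descriptor values are made up for illustration.
domain: my-domain
descriptors:
  # 100 requests/minute for the /api path
  - key: PATH
    value: /api
    rate_limit:
      unit: minute
      requests_per_unit: 100
  # fallback limit for all other paths
  - key: PATH
    rate_limit:
      unit: second
      requests_per_unit: 50
```

On the Istio side, the mesh's Envoys are then pointed at this service (today typically via EnvoyFilter), and the service enforces the limits globally, backed by Redis.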
There are limits to what Istio is going to take on, and that is one of them, I think, in terms of the project boundary between what we think we should be providing, which is an integration point, and what the ecosystem should be providing, whether that's the vendor ecosystem or people developing things themselves. We've certainly seen a lot of companies who have their own rate limiting systems that they've built, and they integrate with them via the standard integration APIs. So I actually don't have a specific recommendation, and I actually don't think these guys do either, because I just asked them. I mean... Zack, right? Sorry, I actually do. The rate limiting service is a good way to start. Let me just say: it is widely used in production. Lyft uses it in production; they built it and contributed it upstream to Envoy. So that is a great place to start, right? I'll just put that out there, because that's the one specific answer to the question; whether or not it has parity with other things, it's a good place to start. That's backed by Redis, correct? Yeah, that one's based on Redis, and that one's been around for quite a while and is pretty widely used. Yeah, I would definitely echo that. We've actually seen a lot of users using the rate limiting service. The only challenge, and I agree with this, is that the EnvoyFilter representation of rate limiting is a little bit harder to use. So it might be good to check out a vendor for that. Okay, we have about 10 t-shirts. So if you like the t-shirt, when you ask your question, stay at the end so I know you want the t-shirt; if you don't want the t-shirt, after you ask your question, you can go back to your seat. Either way, just let me know if you want the t-shirt or not. So I think you still want it, right? All right, go ahead, please. Thank you. We are running Istio in a heavily multi-tenant environment where we deploy a lot of workloads, and as the workloads scale, they receive the whole mesh configuration.
Now, the right direction would be using the Sidecar API so we can model what is and isn't of concern for the workload. However, where for some reason that's not possible, is there anything on the roadmap that can push only the configuration that is needed, using the metrics that Istio publishes? Is that something you think is a concern for Istio, or something that should be solely the responsibility of users? Yeah, it's a good question. We've considered whether we should automate things or have tooling to help do it. I think at one point, Mitch and I accidentally and independently both wrote a tool, and then neither of us finished it: a tool to look at metrics and give you suggestions. I think Kiali even has an integration that does this, but it's tricky to integrate into the core of the project because there are so many different metrics providers out there; there's not really one integration point. One of the other things is that with Ambient, the configuration size and scaling problem is radically different. Last KubeCon in Chicago, I think, I gave a talk on that if you want to learn more. So today, the choices are really semi-automatic, or, sorry, semi-manual, or whatever, but there's no fully automated solution for that. There are no current plans; I wouldn't say that's forever. That sounds like an excellent opportunity for a new CNCF Sandbox project right there, so we'll look forward to your merge request. Yeah, I think there are some, not core projects, but isolated projects that are doing things in this nature; I'm not sure of all the names or the details of them. All right, I hope that answers your question. Go ahead. Hello, our product is using ASM, through the Google Cloud on-premise service, very heavily. I want to know... I know Ambient is getting more and more stable recently, since it can work on top of any kind of CNI plugin, so I want to know: when will it be released on ASM?
So I think, because the current price is so crazy for our product team, and I know John is working at Google, could you please provide some inside information or the commercial plan for 2024? I cannot provide that information, I think. But like Lin said, in open source, Ambient will be going into beta in 1.22, which is very exciting. I can't give Google roadmaps, though. Okay, thank you, sorry. For what it's worth, we all worked on ASM as engineers in the past or present. That's true, all three of us. You should be talking to your product manager about that. They're the ones who really have the lever to pull. We are using it in our staging environment, but not in the production environment, so it's impossible for now. All right, this is just a reminder for everybody: if you ask a product-related question, this may not be the best place. This is a community conference, so we would love you to ask questions regarding Istio first. So thank you. Go ahead, please. Hello, we have been using Istio for a while. One thing I noticed is that when we have high traffic, the load balancing is not so good, and we have some things configured in Envoy, so this is more related to Envoy, but is there anything that can be done to have TCP connections better balanced across the downstream and upstream parts? Anything being developed further in that area? When you say that the load balancing is not ideal, you're getting too much traffic on particular nodes, not balancing to others? And is this HTTP or TCP traffic? Yeah, HTTP mostly. Like, I'm talking about Envoy: we have a number of worker threads, like 36 cores. What we have seen is traffic coming in on one TCP connection, and it goes to about 240 workloads, and all of it goes to one worker thread, which is running at 97% or 100% CPU, so the traffic is imbalanced across the worker threads.
So, I mean, we have been trying all these Envoy configurations, exact balance and all, but it's not helping the way we want, in that inbound and outbound are handled on the same worker thread. Does Envoy or Istio have anything more in that area, something that can balance out the TCP inbound and outbound, like upstream and downstream? There are a lot of knobs. It's hard to say, without knowing more details, which ones would help. A GitHub discussion thread or a Slack thread would probably be productive here. There's the exact balance thing I think you mentioned. There are also all sorts of knobs in DestinationRule, around things like the number of requests per connection. There's a pretty long list of things, load balancing algorithms, but it's a bit hard to be prescriptive without more info. Yeah, one thing I would make sure of, depending which version of Istio you're using: make sure you're using least-request style load balancing as the default, instead of round robin, which was the default in Istio. And actually still is the default? Oh, we switched. It switched, yeah. Obviously, with the kind of siloed or single-threaded nature of Envoy, and the association of a connection with one core, you can have specific situations in load balancing where that leads to suboptimal balancing. That's a very generic Envoy problem. There's no roadmap to change Envoy's threading model. So you compensate for that in other ways, but it certainly can lead to some load balancing issues that you can address somewhat with scale, or better backend load balancing, or better frontend load balancing, or multi-tier load balancing. But there are just specific limitations in the Envoy architecture, and that's how it gets some of its efficiency, which is a bit of a trade-off, right? You're getting more throughput at the cost of some latency in some cases.
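The DestinationRule knobs mentioned in that answer look roughly like this; the host name and numbers are illustrative, not a recommendation:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service-lb        # illustrative name
spec:
  host: my-service.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST  # instead of ROUND_ROBIN
    connectionPool:
      http:
        # Cap requests per connection so long-lived connections get
        # re-established and re-balanced periodically.
        maxRequestsPerConnection: 1000
```

Tuning `maxRequestsPerConnection` is one way to mitigate a single hot connection pinning one Envoy worker thread, since reconnects give the balancer a chance to spread load.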
Yeah, we had a limitation on the number of TCP connections, so I think we have to live with that somewhere. Can I ask one more question? Going forward with IPv6, what is the update with Istio? We had 1.17, which was the experimental release for IPv6; has there been further progress on that? Is IPv6 Istio alpha now? I'm not sure of the official status, but it's fairly stable for pure IPv6. What is a bit less stable, and more in development, is the dual stack support, which is still experimental and undergoing a lot of development right now. There are a lot of people working on that actively, so I think it will progress very well, but pure IPv6 is fairly stable. Is your interest in pure IPv6 or dual stack? Yeah, so mostly I'm looking at inbound traffic and outbound traffic doing conversion; if not conversion, then the workloads inside will have to handle dual stack. So if we want to avoid that, maybe Istio could do that conversion from IPv6 to IPv4 using a virtual service or something; that would be the best, right? I mean, Istio would handle the conversion of IPv6, and the internal workloads would not have to handle IPv6 in that case. Yeah, so that would be the single stack case with an ingress or egress gateway operating in dual stack mode. Is that the use case you're talking about? Yeah, so ingress and egress would be dual stack and do the conversion, and the internal workloads would be just IPv4. If that's feasible, that would be the best thing out of Istio. Is that something planned? If I'm understanding the use case right, I think the dual stack support that's being worked on would cover it, but like I said, that's still under active development, so you could try it out, but it's not necessarily production ready yet. Yeah, this is also an area where I think we would love some contributions. I believe dual stack is experimental right now, so we would definitely love some contributions on this.
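For anyone wanting to try the experimental dual stack support discussed above, it has been gated behind a pilot feature flag; a rough sketch (the exact flag and its maturity vary by release, so check the release notes for your version):

```yaml
# Illustrative IstioOperator snippet enabling experimental dual-stack mode.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        # Experimental: enables dual-stack support in istiod.
        ISTIO_DUAL_STACK: "true"
```

Since this is experimental, it should only be used in test clusters, as the panelists note.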
So just to sum up, we would recommend you open an issue about the load balancing and try some of the newer Istio builds, and we can come back to you to see if it's an Istio or Envoy issue, and hopefully we'll see you in the community for dual stack contributions. And thanks so much for your questions. All right. Hello. Go ahead. I have no questions, but I would just like to thank all the contributors and maintainers and developers of Istio, because it's making my life a lot easier: I don't have to bundle CA certificates in the container images and ship them, and there's less hassle upgrading the CA certificates when they expire. And I learned a lot about Istio when I was preparing for the certification, and I got it, thankfully. So thank you all. Thank you. Thank you. Congratulations to you. And I assume you want a t-shirt too. Thank you so much for the comments. Go ahead. Okay. I should probably ask this question in private or at the working group meeting, but I want to share it. So the L4 support for ambient is beta now. What's the plan for L7? And the other question is, last time I tried ambient, I felt it was hard to upgrade the data plane without any downtime. Is that fixed, or what's the plan for it? So certainly the plan for ambient L7 is to stabilize it as quickly as possible; if we could get some of that done for 1.22, I think we would. So certainly not making any commitments, but I would certainly prefer to see that happen. At the very least, we would stabilize the contract between L4 and L7, in terms of the relationship between ztunnel and the waypoint, even if Istio's waypoint implementation itself was not what we would consider beta quality, though I think that should actually be fairly achievable. We'll have to see. You want to take the other part, John?
Yeah, on the upgrade side, we've done some work on this, but for now a lot of the focus on getting to beta has not been around upgrades and migrations, because for the newer, less stable stuff we're kind of targeting greenfield. Right after we do that, the focus on onboarding existing sidecar users and doing upgrades between versions, I think, will be a priority. Vyval and Ben, who are both here, at least were this morning, are experts on this and have spent a lot of time on it. So if you want to know a lot more about upgrades, I'm sure they'd love to talk about some strategies that they've prototyped. Yeah, certainly the ztunnel upgrade, since we made that big change to how ztunnel is integrated, is much more seamless than it was before. So depending on when you tried it versus what's in 1.21, there's a big difference. Take a look at Ben and Vyval's demo; I don't know if you were able to make it this morning, but it should show a pretty compelling experience. All right, thank you so much for that great question. We have five more minutes... oh, I have like 10 minutes. All right, awesome. So we might be able to get to all the questions. Yeah, hello guys. Thank you for your work. We use not the newest version of Istio; we use 1.16. And recently we started monitoring the convergence time, that is, when all the configuration reaches the destination Envoys. We use a multi-primary setup. We have a lot of services. And we saw that the 99th percentile of convergence time was about five seconds. We started researching how we could lower it, because it's critical for us. We found the issues, and we found recommendations to disable caches. We disabled the CDS and RDS caches, and the timing became smaller, for example, three seconds. We tried to change the debounce timing; we tried to increase the replicas of istiod. We decreased the timing to two seconds, but we are not sure that in the future, when we scale, it will stay the same. And my question is about your roadmap.
How do you see it? Do you have any plans to decrease these convergence times? Maybe you have thoughts about using Envoy's delta xDS API to push not the full context, but the delta? Yeah, there's a lot of stuff here. So in terms of what you can do even today as a user, there's the configuration scoping that someone else mentioned, which can reduce the work that Istio needs to do quite a bit. That's really the most effective option, and it can be done in any Istio version. We just launched a doc on istio.io that describes all the different ways, because there are a few different options and questions of when you should use which one; it may be helpful there. Some other things: you said 1.16, I think. Since then, there have been a lot of optimizations in this area as well. So simply upgrading might be pretty meaningful. It's hard to know whether the issues you're seeing are because you're hitting some edge case that happens to be slow, and we've fixed many of those, or if it's just that you're sending a lot of work, like Istio just has to do a bunch of work; there are not many ways around that just by upgrading, right? In the next release, 1.22, we're aiming to enable delta xDS by default. So that will be there. I will caveat that it is not a perfect delta implementation. We do implement the protocol and do some incremental updates, but there are some cases where we still send a bunch of redundant configuration. But once we get our foot in the door and have the initial release, it enables easy optimization moving forward. Probably each release, I'm sure, will optimize it more and more. There are also some other things on the roadmap around protobuf handling; that's where a lot of CPU time is spent in istiod. We have some ideas on how we can cut that basically in half, which would be a nice improvement as well. As I mentioned earlier, in Ambient the configuration scale problem is totally different.
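The configuration scoping referred to here is usually expressed with Istio's Sidecar resource; a minimal sketch, with an illustrative namespace name:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace   # illustrative namespace
spec:
  egress:
  - hosts:
    # Only push configuration for services in this namespace and in
    # istio-system, instead of the entire mesh.
    - "./*"
    - "istio-system/*"
```

A namespace-wide `default` Sidecar like this can shrink both the config istiod computes and the config each Envoy receives, which directly reduces convergence time.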
So in many ways, I don't want to say it's a solved problem, but it's probably an issue that becomes not relevant unless you're at a ginormous scale. So there are all sorts of things. If you upgrade and try these out and you're still seeing issues, there's also a page on istio.io about analyzing performance. You can get a profile and post it on GitHub. I always love to see where people are bottlenecked, and if it's some edge case or something, it's usually pretty easy to figure out from that and make some optimizations. Okay. Thanks for sharing. Well, thank you for the great question. And I appreciate all the different things you tried; you're really working hard with istiod. Thank you. All right. We might be able to take two more questions. Go ahead. Okay. Hi. Thanks for your presentations, for your work. So our situation is, every half year I try to evaluate whether I can use Istio in our deployment. And every half year, I decide not to. The reason is that, okay, while I as an engineer can probably introduce Istio for a couple of services, most of our deployments are made by regular developers. And for me, sometimes it's even hard to persuade people to use Helm charts. I mean, yes, seriously, I'm serious now. Are there any plans for Istio to at least support some default configurations, like plug and play? Like some of your competitors at least try to do. So basically I just put labels on a namespace and say that I want everything, at least mutual TLS, in the namespace. I don't care a lot about proper load balancing maybe; at least that first very simple step. Are there any plans for that, maybe? So there are not. And I'll tell you, that's a good thing. Your question is a perfect advertisement for the practice of platform engineering. So products like Argo, Backstage, Crossplane, these are all going to help you create sane defaults that are best for you.
We could never choose the defaults that are going to be best for your environment or your developers. Everyone is going to have a unique set of constraints. And so the pattern that we've seen succeed the most at companies is where a few engineers have a deep understanding of Kubernetes and Istio and networking, et cetera. They build a platform that provides all those sane defaults, so that developers have very few choices as they go to deploy their application, and things just work under the hood. Okay, thank you. Well, I want to add to Mitch's point, right? If you look at the demo we showed for Ambient, the only thing you need to do is label your namespace. You get mutual TLS automatically. And then the layer seven functionality is more about leveraging what Mitch said, using an internal developer platform or Argo to make it easy, because you have to let us know what function you need. Okay, then another quick question. What if the node process, which is called ztunnel if I'm not mistaken, yes, fails? Will I lose all connectivity from this node until it restarts? Are there any plans to have two processes, one primary, another secondary, just to reduce downtime? Yes, I understand that I should set anti-affinity and so on, but... Is that in the context of ztunnel for the upgrade? Or just? For downtime. Yes, for downtime. Oh, okay. If we lose the daemon, I mean, if we go to basically one process which handles all communications on a node, not sidecars, yes? Then it's like a single point of failure, basically. Yes. We have not seen a reason to do that yet. So it will depend on what we start to see in production. It's certainly designed to be capable of that, but unless we see a reason to deliver it, it would just be complexity for no value. Obviously we're very focused on making ztunnel incredibly stable, right? As stable as a kernel module or something along those lines, right? Which are also logically single points of failure running on a node.
So until we see meaningful examples of that in production, it would just represent complexity. So I think the more pressing case is upgrade, actually, where you want to go from ztunnel 1.0 to 1.1, and how little downtime we can give you during that upgrade process. Right now, because ztunnel starts so incredibly quickly, right, we're fine just firing a new one up and then cutting traffic over, but we may look at providing better options there. But again, based on what we see, actual user experience. No, but upgrades usually follow set procedures. Usually it's not a problem, actually. I mean, usually when people do an upgrade, they know that there will probably be downtime. That's okay. But if it's stable, then thank you. Well, thank you for your great questions. You want one? All right, looks like we got extended a few more minutes. Go ahead. Hello, first of all, thank you for your work and for this panel. I wanted to ask about FIPS compliance. Are there any plans to support this in the upstream project? I know there are some alternatives, like third-party offerings or building from source, but maybe it is on the roadmap to deliver this as part of the upstream project. Thank you. Okay, so there's compliance and then there's certification, right? And it would be very hard for an open-source project to say that it's FIPS certified, and I'm sure Zach's gonna want to come up and maybe talk about this. Obviously we do a lot in Istio to make sure that we use best-practice technologies and libraries and tools in terms of dependencies. We're not a legal entity in the certification sense, right? So we can't provide you a kind of legal guarantee that Istio is FIPS certified, right? And specifically, you have to pay for that certification on major releases. And so on the open-source side, we actually have the FIPS build.
You can build it in FIPS mode yourself, but you have to pay somebody for that certification, and that's a cost the open-source project can't bear, right, because it's pretty pricey. Yeah, and you can work with that, right? Usually FIPS is part of a larger certification program, like FedRAMP or something else, that you have to go through as some solution provider, right, a compliance program. You can often use Istio as part of that certification process for what you're building, and have your certification auditor look at it and go, yes, you're using technology that follows best practice and therefore meets your needs in terms of compliance. But we can't say we're, like, rubber-stamp FIPS certified; we adhere to all the best practices that should allow you to go through that process much more easily. If you want that rubber stamp, you have to go to a vendor, right? That's not something I think you would see from any CNCF project. Right, then there's the other technical world, which is what should people actually be doing, right? In terms of algorithmic security, NIST recommendations, the differences between the regulatory bodies like FedRAMP versus the EU, that's a very complicated and involved long list of discussions which I don't want to bore people to tears about, but if you want to talk about it afterwards, there are plenty of people who would happily talk to you about it. But there are also some interesting kinds of concerns in that space as well. All right, thank you for that great question. The summary is: probably go to a vendor if you need FIPS certification. And you are the lucky person; this is the last shirt. It probably won't fit you, so go to the Solo booth to exchange it. Thank you. Go ahead. Thank you. I started doing Kubernetes and stuff while deploying Istio a few years back, so that kind of got me where I am today, so thank you guys for that.
And in the four years I've been doing Istio, I feel like I've barely scratched the surface of all the features and use cases there are. And just a question: do you have use cases or features you feel are underrated, or that you're just very proud of technically? And as maintainers, what parts of Istio are you really excited about that might not be as visible to others? The standard onboarding story is: I came to Istio for security; I realized I had no idea what auth policies I should be writing, because I don't know which services I own and what they connect to; and so telemetry helped me understand what I own, so that I'm capable of securing it. So for me, telemetry is sort of the undervalued player there. This is not really answering the question, but my favorite is when someone adopts Istio and, instead of trying to use every single feature all at once before they understand it, they say, ah, that looks nice, but I don't need that today, and I'm gonna reduce my complexity. The people that come and say, oh, Istio's too complex, they're the ones who go, I'm gonna go through every task and I must have one of each in my cluster. It's like checking boxes, which doesn't really go well, especially for new users onboarding. After years of experience, sure, maybe if you have all these use cases, it makes sense. So I'm always happy when someone says, that looks nice, but it's not for me yet. All right, thank you so much for that great question. Yeah, we were able to get to our last question. Thank you, go ahead. Thank you. Well, one of the things that I love about Istio is that we've been running it since 1.13, and the process of upgrading has been really smooth. I mean, we use releases and tags and so on. We are on the latest version. It was super easy for us to upgrade, so thank you for that. It was very nice. I tried it out at like 0.8 or something like that. Upgrading was hell back then, but now it's painless. Wow, I need to have you recorded. Thank you so much. You're welcome.
My question is: we have been running Istio for two years or so, and we want to increase adoption across our workloads in our company. We are at about 50, 55%; we want to get to 100, of course. But one of the things we are having trouble with is that we use an API gateway at our border, and a lot of APIs like to have fallbacks, for example to S3 static content, if they have some sort of trouble. We are struggling with that, because we had to try out a lot of EnvoyFilter and go really deep into Envoy, with its configuration model and so on. Do you guys see something related to that, to make it easier, for example, for me to fall back, not straight to S3 if I want, but at least to another workload inside my cluster that can proxy to S3 or something like that? Because I know that there are DNS problems with that, depending on the URL they are using. I don't know if this is something that came up in the research that you did, the forums and so on. So Envoy has a bulkheading feature. Well, Envoy has a lot of features that people could use to solve this problem in a few different ways. You're probably the third person who's asked a bulkheading question that I can remember in the last year. So I think the first problem we have is that we need some more consistent user feedback about what people actually want here. Because, right, there's a very wide range of solutions here, from built-in bulkheading features to serve static content, to integrating with a higher-availability cache, like S3 or say memcached or something like that, to tiered fallback and load-balancing mechanisms during retry, right? Where Envoy has this feature called the aggregate cluster, and you can say, well, pick from this cluster first, but if you retry, pick from that one, right? Those obviously add very different requirements in the API and the configuration mechanism, and we're not really sure which one to give people.
So we actually need some more user feedback in aggregate to actually decide if that complexity is worth carrying. And then the kind of compensation for that is, okay, if we're not gonna build it in, then can you use one of the extensibility mechanisms to do it yourself, which is really what we would like. And we make that too hard. So the one thing that we know we have to solve is EnvoyFilter. And so I think our focus should probably be on that in the short term, until we get more consistent feedback about what people might want from bulkheading, or look at some of the other extensibility mechanisms that we've talked about. I think we're gonna do a wrap-up. Do you wanna say something quick, John? Yeah, I will say, in core Istio, if there are workloads within the same service, you can configure a failover between those. But to do cross-service or cross-namespace failover, or failover to an external thing, that's what's missing. But, like you mentioned, having a proxy to S3, you could probably kind of make it work. All right. Amazing. Thank you. All right, thanks for the great question. And I want to thank all the panelists for answering all these tough questions. Thank you for being here. Thank you.
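For reference, the within-service failover John mentions can be expressed today with a DestinationRule using locality load balancing; Istio requires outlier detection to be set for failover to trigger. This is a hedged sketch: the rule name, host, and region names are hypothetical, and the exact fields may vary by Istio version.

```yaml
# Hedged sketch: fail traffic for one service over between localities.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: my-service-failover              # hypothetical name
spec:
  host: my-service.my-ns.svc.cluster.local   # hypothetical service
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:
        - from: us-west1     # prefer local endpoints...
          to: us-east1       # ...but fail over here when they're unhealthy
    outlierDetection:        # required for locality failover to activate
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

As the panel notes, this only covers endpoints of the same service; falling back to a different service or an external target like S3 still needs a proxy workload or deeper Envoy configuration.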