So hi guys, this is our little conversation about LTS. I know, everyone's favorite topic. I'm Mo, I work for Microsoft, and I have gathered some fun folks with all sorts of hats. So Micah, I know you've been working on the LTS working group. I see Bridget, you're here. Sadly, Jordan's not here, and Jeremy's not here, but we have them in spirit. I think Phil will join us hopefully shortly. And Rob, I know you love this topic from the Gateway API side, so I'm looking forward to your perspectives.

Before we get started, in case anyone wants to take any notes or anything, I have QR codes there. I'm being told this is recorded, so I will be able to retroactively take notes on this, which will be good. But that stuff is there in case anyone wants to do that, and I'll put it back up in a little bit in case anyone needs the link. Before we get started with our discussion and various topics, I wanted to signal boost a thread that Jordan has started, which is basically: how can we have releases that don't require you to read the release notes very carefully to make sure you're not going to get broken when you upgrade your stuff? Which comes down to a very strong and careful mentality around "no action required," right?

Is that Phil? OK. Phil, come on up. I don't think Phil realized he was going to be on stage. Yes, we need your containerd representation. Right up here, all right, all right. But yeah, so the link is there for anyone, and I will publish the slides later, in case anyone wants to read the various docs and the stories around things that have gone well and things that have not gone so well in people's Kubernetes upgrades.

So let's go on to the main topic, which is various conversations around LTS. So Micah, I think I might start with you, because you brought this up. It was about a year ago when you started conversations, around last KubeCon EU, right? How are things going?

Yeah, so last year at KubeCon EU, who was here last year? OK, we got a few hands. Great, awesome. For those of you who were not here, we had sort of an unconference session to talk about, hey, Kubernetes releases last 14 months, and it's really painful. There are a lot of customers that I see from my own experience, and a lot of people that I talk to, for whom that just doesn't work. They need a longer amount of time. The pace of change is a lot. Upgrading, even though Kubernetes now does three releases a year instead of four, is still a lot of work. And so we had a long, good conversation about what LTS means. What does that look like? Even the acronym LTS, long-term support, is a little bit contentious. It conjures up a lot of images, or just experience, from maybe a Canonical Ubuntu LTS, where you can skip versions. Should we apply that to Kubernetes? What about downstream things or other dependencies, containerd or CoreDNS or CNI? How do we coordinate that? So that was the basis for the discussion we had last time.

And your question, where we're at now: we now have a working group. There used to be an LTS working group a few years ago. Working groups, if you're not familiar with Kubernetes governance: SIGs are long-lived and generally own code (not all of them do), and they're open-ended, while working groups are typically formed around a specific problem. And in this case, LTS is the question of: what do we do? What are things we can change?
Does that mean we have fewer releases a year? Does that mean we maintain releases longer? Does that mean we allow skip upgrades? (And the answer's not yes to all of these questions.) These are the questions we're still trying to grapple with and answer to make things better. So that's the state of where we are now. I don't know that we have actual answers to all of those, or maybe even any of those questions yet. But it's definitely something we're exploring, and we just had a community survey about it. So if you haven't seen those results, I think we're going to get into that a little bit.

Did you have LTS survey results as part of this? I did not. OK, that's fine. Well, you can read the survey result summary; it's in the LTS working group notes in Google Drive. Yeah, so that's the first QR code, the one on the left. That'll take you there.

All right, thank you for that. So I guess I would ask you: I have to maintain some EOL stuff, and I'm sure you do too. Are things getting better for you for maintaining EOL stuff?

Yes. So if you haven't been paying attention or just haven't been close to this, a couple of things have gotten better over the last, not just year, but couple of years. New releases of Kubernetes only have v1, GA features on by default. Alpha features have always been off by default; beta features historically have been on, and that really got a lot of us into trouble. How many of you experienced any fun or pain around PSP, Pod Security Policy, deprecation? OK, yeah, so people here are familiar with that. That was an idea from an earlier time in the Kubernetes project lifecycle, when beta things just sort of stayed in beta and didn't really graduate to GA or have a full GA plan. And that caught a lot of Kubernetes users off guard. If you're an end user company, you're not necessarily reading Kubernetes release notes all the time; you're just saying, OK, I'm going to try to upgrade and see what breaks in my pre-production environment. And oh, wait, we have all these security controls in place around Pod Security Policy, and then it's suddenly gone? What do we do? Or things like, what do you mean Docker is not supported anymore, like in Kubernetes 1.24? Oh, there's this containerd thing. OK. So yeah, safer defaults are now the default.

Another big item has been the Go version support policy. Jordan has done a ton of work on this and worked with the Go folks. If you're not familiar: Kubernetes is built with Go, the programming language and runtime. Go has very, very robust backward compatibility for the language, but not necessarily for the behavior of the runtime or clients. So while the language will typically compile and work across versions, things like leading zeros in IP address parsing (URL parsing? URL parsing too) can change, where a leading zero used to not be an error, and now suddenly it is; see the sketch below. For Kubernetes to pick that up was a lot of work and involved breaking changes. Additionally, Kubernetes has its release cycle: we maintain for 14 months and release three times a year. Go releases twice a year and maintains two releases. And they're not coordinated with Kubernetes releases.
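As a concrete illustration of the kind of behavior change being described, here is a minimal, runnable sketch of the leading-zeros case. The attribution to Go 1.17 is an added detail from the Go release notes, not something stated in the session:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Go 1.17 changed net.ParseIP to reject dotted-decimal octets with
	// leading zeros, because some software interprets them as octal.
	// Code that parsed such strings fine on an older toolchain starts
	// seeing nil after a Go bump, with no language-level change at all.
	for _, s := range []string{"192.168.1.1", "192.168.001.001"} {
		fmt.Printf("%q -> %v\n", s, net.ParseIP(s))
	}
	// On Go 1.17 and newer this prints:
	//   "192.168.1.1" -> 192.168.1.1
	//   "192.168.001.001" -> <nil>
}
```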
So it was the case that Kubernetes would release a branch midway through a Go version's lifecycle, with maybe a lower bound of six months, and at most around a year, of support left for that version of Go. And so the version of Go that Kubernetes used would go end of life before that Kubernetes version was end of life. Any CVE fixes were not backported to that version of Go, and Kubernetes was still building with it, and then if you tried to upgrade, you might hit issues. So the Go project has done a lot of work to maintain compatibility: still shipping new versions, but supporting backward-compatible behavior in each new version of Go for up to a two-year period. So that encompasses basically each Kubernetes release. That's one other big line item that matters a lot to security-conscious folks. It isn't necessarily something end users would feel or that impacts them a lot, but yeah.

And Kube is special in the sense that I've worked on smaller Go projects where you just bump the Go version and it's fine; nothing happens. But with Kube, it is a process. There is an issue, and there's series after series of regressions caused by some Go change, and you very slowly work through it. And of course, somebody was using unsafe somewhere, and it made an assumption about a thing, and that used to work and no longer works, and now Kube stops working. So you've got to work through all those things.

All right. So, and I guess I'll ask this to all of you: have we defined what LTS actually is yet? I answered the last two, I get to go last. Maybe from containerd's perspective? Yeah. So I guess one interesting data point for us is that we did start an LTS line for containerd. I think we did announce it at EU last year: containerd 1.6 is LTS. And again, interestingly, you get to define what that means, because there's no standard; you can think of Ubuntu or whatever other LTS things you know of. So for us, we came up with a set of statements about what that means for containerd consumers. It seems like it's been successful; it seems to make folks happy. We have a bit of a longer train than Kubernetes or Go: on average we've done a minor release every year, so we had 1.3, 1.4, 1.5, 1.6, and 1.7 is the end of the 1.x line. We just cut our release candidate for containerd 2.0 yesterday. I think it was yesterday; maybe it was the middle of the night here. So essentially what 1.6 is providing is that stable base people can rely on, because we expect moving to containerd 2.0 to be a bigger step for many people.

So the trick in how we defined LTS, in reference to Kubernetes and the fact that the CRI API continues to evolve, is that the maintainers of containerd will try as best we can to bring CRI features back into containerd 1.6. Usually you think LTS means no new features, just bug fixes and CVEs. But we understand that with Kubernetes marching forward, if you adopt containerd 1.6 LTS and then can't use a new Kubernetes version, that puts people in a tough spot. We haven't existed long enough to really be tested on how that looks. But if you have suggestions for us, or when we do hit that point, it's like, oh, well, how is Kubernetes 1.31 going to work with containerd 1.6? You know, we'll figure that out as we go.
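That CRI coupling is observable from the outside, since the kubelet negotiates with whatever CRI version the runtime reports. Here is a minimal sketch of asking a runtime for its CRI version over gRPC; the socket path is the common containerd default and the error handling is deliberately bare:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI socket (default containerd path; adjust for your host).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version reports the runtime's name and version and the CRI API
	// version it speaks, the compatibility surface discussed above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s speaks CRI %s\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```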
Yeah, so I'm here to represent primarily Gateway API and Ingress, and I think of those as some of the most broadly implemented and built-upon APIs in the ecosystem. So we're not just thinking about the APIs; we're thinking about all the tooling that's built around them. There are 25 or 30 implementations of Ingress and Gateway, and then countless tools, CI/CD and so on, that also build on top of those APIs. And you think about that ecosystem and how painful it is for them to try to support the whole range of Kubernetes versions and API versions that exist. If you're familiar with the Ingress v1beta1-to-v1 change, I'm largely responsible for that pain. Sorry. I am very, very aware of the pain that causes now, and I'm doing everything I can to avoid it going forward.

And so although we don't have anything that's called LTS, I thought, oh, we'll solve this by doing this really cool thing of supporting at least the trailing five Kubernetes versions with every Gateway API release, because Gateway API is CRDs, right? You can take our release and deploy it on any Kubernetes cluster; you cover probably 95% of Kubernetes clusters just with that promise, and we aim for even more. I thought that covered everyone, and then somebody decided to start an LTS conversation, and, well, now what do we do? So it introduces pain points there.

But then you look at it from the other side: there's some really great work going on in this ecosystem, whether it's cool new features coming to CRDs or storage version migration, all things that Gateway API and others would really benefit from. And LTS, by extension, unfortunately delays our adoption of those things. Or we as Gateway API have to say, well, if you're using this version of Kubernetes, you can use this version of our release, and then we have to segment our releases out and create all that pain for controllers and the entire ecosystem. Or we wait, I don't know, 10 releases until it's safe to use the new CRD features, which are also rolling out very slowly. We're all trying to be safe and we're all trying to provide as broad support as possible, but unfortunately the direction Gateway API took, providing something resembling LTS but not called LTS, actually works against the other LTS discussion here. What was the original question again? I was just taken by the other two answers.

I'm trying to remember which one I asked, because the answers were so broadly different between containerd and Gateway API. I guess I would just make a quick comment, building off what you just said, Rob. From the outside, when you think about LTS, you're going to see a lot of positives: less churn, less work, because at the end of the day your job is probably not maintaining a Kubernetes cluster; it's delivering customer value with the stuff running on top. But there is a world where folks like Gateway API never adopt new features because they're too busy keeping old versions working. And then how do we get usage and feedback on features? How do you actually promote something from beta to GA if no one has ever used it? And by no one, I mean anyone outside of the people that developed it. Obviously we tried it, and it probably worked for the two use cases we had. Yeah, no, that's exactly it.
I've asked teams, hey, can we have this cool new feature for CRDs? Then they deliver it, and then a year later I provide feedback, because it's that long until we can actually pick it up and use it in Gateway API. And that's being generous; a year is probably not quite long enough. But yeah, it's an unfortunate consequence of how we've decided to build a long window of release support, just five releases, and then you bundle that on top of LTS, which is more, and I'm not entirely sure where we land.

Yeah, I think this goes to the second bullet point. At least in Kubernetes, as far as I understand, we're not taking the containerd route; at this point we are not going to backport any features. So that means for projects like Gateway API, if they support an LTS, all of the features from the last N years are not available to them, because you can't use a feature if it doesn't exist in the cluster you're on. But maybe stepping back to the "what is LTS" question: I think there's strong agreement on security fixes, because I don't really know what else LTS would have. Yeah, I think that's definitely true. Security fixes, and I think there's definitely strong interest in critical bug fixes: a big bug in an existing feature, data corruption, non-security but consequential bugs as well. Yeah, I left the question mark on "all bug fixes." I think the data tells us we can't do that, because when Jordan compiled the health of various release cycles, one of the biggest things we noticed from that data set is that we cause a non-trivial number of regressions backporting things. So I think we have to hold ourselves very honest to the fact that the data shows things aren't so easy and clear when you backport them. You really need a good reason that can't just be "well, someone might benefit from this." It really does have to be "no, I know why I'm doing this."

Phil, maybe for you: 1.30 is about to come out for Kubernetes, and containerd 2.0 is almost ready but not there, so the releases are going to miss each other. How did it impact you that the new Kubernetes release is building against 1.7 instead of 2.0? Impact in which way? Did it change any support lifecycles for you, or any thought process? I think you said 1.6 is LTS, right?

Yeah, 1.6 is LTS, and 1.6 we are supporting through at least February next year, I want to say. I should bring up the README on my phone so I can actually say the right things. So we actually just had a PR in the last few weeks: we have a table in our releases.md, actually, not the README, that gives you a support matrix of Kubernetes version, CRI API version, and containerd version. We were just updating that to get ready for 1.30, so that consumers know what matches up. So I think we're going to be fine: 1.6 and 1.7 will both be available when 1.30 is released. And 2.0, I can't predict; we just cut an RC, as I said, yesterday, but we're very close to 2.0 being released. Yeah.

I guess it is just the way things are, but it'd be kind of nice if we could somehow magically align all the stars. The reality is we can't make containerd, Kubernetes, Go, and everybody else say, "I release on February 2nd, and that's when we're going to release." It just doesn't work that way. Yeah, just a comment while I have the mic, since I want to be super careful.
So, you mentioned Kubernetes won't be backporting features the way containerd does. I think there's going to be maintainer discretion there. Sometimes adding a new CRI API is just wiring the CRI API to a feature we already have in the core of containerd. That's a much simpler discussion; with the checkpoint CRI API, for example, we didn't have to change core containerd code, we basically implemented the CRI API endpoint on top of the existing checkpointing code in containerd. So those are the nuances of what it means for a containerd LTS to support new Kubernetes features. We'll have to make those calls case by case. And there are other things we've discussed where it's like, wow, that would be significant changes to containerd's core code base; we may just not be able to do that.

That makes me feel better; I thought it was much broader. I guess the last thing I would say, thinking about the cost of LTS to Kubernetes and related projects: how do we deal with the fan-out of Kubernetes LTS? Because at the end of the day, when you tell a customer they have an LTS environment, they don't mean just Kubernetes. They almost certainly mean the host, but they also certainly mean the things running on that platform, because part of LTS for them is mostly "I want to limit the change to this environment," maybe because they're in a regulated industry, or there are very specific times in the year they're allowed to make changes, or whatever other constraints they're under. So, any thoughts on that?

Yeah. One of my thoughts is, well, first, it's going to be very challenging. I am very familiar with the ecosystem that I live in, which is Ingress, Gateway, and that world, and there are a lot of controllers that want to have one release that supports the broadest range of clusters possible, and that's it. They don't want to have this release for version 1.22 and above and this older release for the rest; there's a sketch below of the kind of version probing a single broad-range release ends up doing. It's going to be challenging. I can imagine that every individual project will have to decide if it wants to support LTS. So if you're an Ingress controller, maybe that's a per-project decision: I want to support this older mode, and this is what that means. Maybe for Gateway API, as a subproject of Kubernetes, we need to decide (and this is not a commitment, just a maybe) that we will support some subset of our capabilities: GA-only, LTS, legacy CRDs that don't have any of the fancy new features. I don't know. It's a cost to every project, and this is just my little corner of the world.

Yeah, I think one of the other costs to consider, and it's definitely a cost, but it's also about clarity for the ecosystem on top of Kubernetes: obviously Kubernetes works closely with containerd and some other core dependencies, but think about Fluent Bit, Fluentd, any other project that runs on top of Kubernetes. They're all going to have to decide: what do we do about this? So the cost will also extend beyond the scope of the project itself.
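To make "one release that supports the broadest range of clusters" concrete: a controller in that position typically probes the API server for the API versions it can use instead of assuming them. A minimal client-go sketch, assuming a reachable kubeconfig at the default path; the fallback order is illustrative, not how any particular controller actually behaves:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Probe for the newest group/version first, then degrade gracefully.
	// This is how a single controller build can span many cluster versions.
	for _, gv := range []string{
		"gateway.networking.k8s.io/v1",
		"gateway.networking.k8s.io/v1beta1",
		"networking.k8s.io/v1", // fall back to Ingress as a last resort
	} {
		if _, err := dc.ServerResourcesForGroupVersion(gv); err == nil {
			fmt.Println("using", gv)
			return
		}
	}
	fmt.Println("no supported API found")
}
```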
To your point, though, and your question: yeah, there's definitely going to be a cost. I'm on the security committee with Mo, and now we're going to accept security reports for versions not just from the last 14 months, but from potentially the last two years. That's the time range we're thinking about. Or maybe it's 26 months; I don't know, we haven't exactly figured that out yet. But that's a long time. If no one's touched code in almost two years, other than backporting security fixes, that's a much bigger surface area. So there are those cross-cutting costs too. And SIG Release is going to have to have more people spending more time on these releases, to still be able to cut them.

All right, y'all. With that, if anyone has any questions in the audience... I see a hand.

Thanks for continuing to push on this. I've always said the problem is not maintaining a release for longer. For the end user, it's what happens at the end of that LTS. And what I observe is that customers drift to the last maintained version in extended support and then get bumped at the end exactly the same way, which means they live in exactly the same world, just time-shifted back a year. So we're making it observably worse for every consumer of LTS, and I think we really have to address that. We have to make it better to upgrade, safer to upgrade. That's where we're putting our time and energy, and we would love some more help and support on that. Another comment: you talked about the ecosystem on top of Kubernetes and the cost there. One thing I haven't heard is what's underneath Kubernetes, the base OS image. If I read the AWS terms of service right, you just bump Amazon Linux when it needs to happen, which is kind of disingenuous when you talk about an LTS, and yet I think that's the only reasonable solution to that problem. And so I think it's kind of misleading to say that we can and should support an open source LTS. I'm all for leaving release branches in a great state, and if that becomes a need for regulatory reasons, I get it. But I think the transitive dependencies of the Kubernetes community make it really difficult. And a final point: the path Go went down was not to maintain versions of Go for two or three years, but to implement a compatibility mode for older versions. That reduces the surface area where they have to implement security fixes and so on, and I'd love to see more exploration of that idea in Kubernetes too. Thanks.

I'll make a comment on some things at least a little bit related to that. We have seen designs that have firmed up the amount of kubelet drift you can have from your control plane. That makes it easier to do upgrades across many releases, because kubelets require draining if you want to be safe, which the control plane does not; there's a sketch of checking that skew below. The other thing is, I forget the number of the KEP, but there is a design where, within certain constraints (say, for example, you only use GA features), we could maybe allow you to do a skip upgrade of your control plane. So instead of having to go from 1.25 to 1.26, you could go from 1.25 to 1.28. But yeah, I think you are right that LTS on its own, supporting releases for longer, does not inherently solve the problem that upgrades are painful, they cause stress, and they just feel dangerous, so people avoid them until they're stuck. And oh yeah, if you're stuck two years behind versus one year behind, you're right, you're still stuck.
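For reference, the skew being described is directly visible from the API. A minimal client-go sketch that prints the control plane version next to each node's kubelet version, assuming a reachable kubeconfig at the default path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The API server's version is the reference point for allowed skew.
	server, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane:", server.GitVersion)

	// Each node reports the kubelet version it runs; the gap between these
	// and the control plane is the drift the version skew policy bounds.
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: kubelet %s\n", n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}
```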
Yeah, maybe I'll just address one more thing. One of the things I've seen as common in LTS as it's evolved is that there's a premium to pay for LTS in vendor support. One of the questions I'd have is: when it comes to a pure open source project like Gateway API, how do we fund that support? Because it's a significant additional cost if we were to support that. But we don't send a bill to anybody for providing Gateway API or open source Kubernetes. I know this is a much broader question, but I think it's part of the LTS discussion. No comment.

So yeah, I think a lot of what you said is completely true, and it's a lot of the reason why, in the initial iteration of LTS, we got done what we did in the, what was it, two, three years? Yeah, three years, I think, that the working group was active previously. The biggest change that came out of it was the change to the version skew policy and the release cadence, moving to 14 months of support instead of nine. Because at the time, the support window was nine months, which meant that a lot of people said, I can only really safely do an upgrade once a year, in the quiet period after Christmas for non-retail folks, and for retail people it's the opposite. So everybody was screwed in that case. So things are getting better, and I think the stuff about safe upgrades is really the key here. Like you said, and you put it well, LTS kicks the can down the road and gives everybody a problem in another 12 months or whatever: how do you get from one LTS to another? You need skip upgrades, or safe upgrades, or being able to turn on backwards-compatibility modes, or something like that. We need those things anyway to make upgrades easier. And that's why, rather than saying let's just do an LTS, people have been putting the effort towards making upgrades easier, so that when we do want to do an LTS, it's more achievable and it's safer for end users to upgrade. If you can do skip upgrades, doing an LTS is much less of a big deal in terms of upgrading. So I think where we're at is a pretty good spot; the working group is doing a great job of identifying things we can do to make upgrades easier. And that's one of the reasons why, from the outside, it can look like we're not making progress on LTS. We are; it's just that merely marking something as LTS is going to end up with way more pain than doing the job right.

Yeah, thanks again for the discussion. My question is related to how we maintain Go version upgrades in the Kubernetes project; that's one of the areas I help out with. I think JG mentioned Go making those changes, and so far Kubernetes has probably been one of the biggest influences on why Go made those changes to begin with. And Go isn't changing its release cadence; it's still supporting the last two versions. But if we do an LTS, that's that many more release branch bumps that we need to do.
So I'm just curious whether you've thought about not just the ecosystem on top of Kubernetes, but the things that Kubernetes consumes as well, because you have a huge fan-out in terms of the infrastructure you'll be using to test, for example, Go version bumps. Each Go version has two RCs that we need to test in order to get feedback to the Go team on time. So I'm curious whether you've thought about the dependencies consumed by Kubernetes, rather than Kubernetes as a dependency itself.

Yeah, I think it's definitely something we've discussed. Because of the way Go is doing this, where you can have configuration to support specific functionality, the idea is that we can keep bumping versions of Go but keep the old compatibility; there's a sketch of that mechanism below. So you're right, there is maintenance there, in terms of specifying the version or the specific behaviors. But the idea is, again, with testing, that we can have the guarantee that the functionality is the same. It's not something I'm super close to, but that's my understanding of it.

So if we keep more versions around longer because we're doing long-term support, there's also, just for ourselves, the infrastructure cost to keep all that CI around. We already do things like testing the older releases less frequently, but that only gets us so far. You still need to be able to test patches that are coming in, and you have to keep all of that running. And that stuff's pretty expensive, and only a few of the vendors are actually backing it. How do we get people who want LTS to help pay for the costs? There's the engineering time, and there's also all the resources we need to qualify it.

That's a great question, and I don't know that I have a great answer. I think this goes back to what you said earlier, Rob: the impact of LTS on open source is hard to amortize, because there isn't a path to revenue in that stream; you don't have a customer relationship, you have an end-user, community, and maintainer relationship. There's no money being exchanged. In the same way, we're not charging anyone for running extra branches, because we don't have anyone to charge. It's the opposite: we actually pay out, to pen testers and to people who find security bugs, security bounties. So now we're just paying them longer. So I don't think I have a good answer there other than: three big cloud providers, please show up and donate tons of credits. Not a great answer. Okay. There is also the associated cost of running more tests and more branches for the project, and people's time. That's something we need to factor into this equation. Yeah, I think it just comes down to the fact that we don't have revenue coming in from open source, because that's not how it works; that's just not the spirit of the entire premise. Thanks.

I think Rob brought up the other point that's really critical: the fragmentation, and not adopting new features for longer. A couple of years ago that wouldn't have bothered me as much, but what we'll see at this KubeCon again is that it's all about AI/ML, and inference is the new web app. And I really see it as an existential risk for Kubernetes to get pinned at 1.27, where dynamic resource allocation and multi-network don't exist, and we end up in a Python 2.7-versus-3.x scenario.
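The Go configuration mechanism referenced in that answer is, in current toolchains, the GODEBUG system: behavior changes default to the semantics implied by the go version a module declares, and a program can pin individual behaviors explicitly. A minimal sketch, assuming Go 1.22 or newer; the specific knob shown, httpmuxgo121 (which restores Go 1.21 http.ServeMux matching), is just one documented example:

```go
// The //go:debug directive (Go 1.21+) sets a GODEBUG default for this
// program, so it can build with a new toolchain while keeping one piece
// of old behavior. httpmuxgo121=1 restores the pre-1.22 http.ServeMux
// pattern matching for programs that depended on it.

//go:debug httpmuxgo121=1
package main

import (
	"fmt"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Under the pinned setting, patterns are matched with the old Go 1.21
	// rules rather than the method-and-wildcard-aware Go 1.22 parser.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe(":8080", mux)
}
```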
So I think we really need to weigh the benefits of LTS against the risk of outright displacement, or fragmentation. I don't want to live in a world where vendors and users have to know what version of Kubernetes they're running on and behave completely differently. So I think that's a real risk, and I don't see it in the LTS discussion enough.

Please come, JG specifically, but anyone who's interested and has thoughts here, please do come to the bi-weekly LTS meeting. We want to have these conversations. Those are Tuesdays at 10, right? It's 10 Eastern, 7 Pacific. It's terrible for the Pacific coast. We try to cover the EU as well there, but yeah, it's tough.

Yeah, that's actually a really good point, because with containerd we've already made some hard decisions for folks who wanted to adopt NRI, the Node Resource Interface, which we put in 1.7. It enables some of the work around dynamic resource allocation, basically inserting devices into the container's OCI config; a sketch of what that adjustment looks like follows below. So 1.7 had a bunch of experimental features that are going to be GA in 2.0, but folks who were playing around with inference and using containers for ML basically had to choose containerd 1.7 to use NRI, which gets them off of 1.6 as an LTS. So yeah, these are the complications that don't really have good answers, but that's a very concrete example of the choices people are having to make.

Yeah, very related to that: one of the things I can't help but think is, if we go too far down this path, do we end up in a world where everything moves out of tree? Is it just too expensive to develop anything in-tree, because your LTS users are two years behind, or whatever it might be? And then do you have a million Gateway-like projects? That may not be the end of the world, but it is another significant question we need to understand: is that an outcome we want? I'm not sure.

I was about to say that almost seems relatively desirable. You move everything out of the tree, and then the tree can be stable. There's less churn in the tree, so you can have the core parts that everyone is using be relatively stable, and then you add on the bits, and it's up to you, the operator, to decide how alpha you're going to be. And the flip side is that it also gives you better experimentation. I want to try this pre-production feature; yeah, you get back into the feedback loop. Oh, I'm depending on this pre-production thing and it breaks. But from a core project perspective, it gives better experimentation, because you're not bound to that release cycle.

Yeah, I think Gateway API is the canonical example here, where right now we're still depending on this stuff because CRDs need evolution, but hopefully, hopefully, at some point CRDs will be finished enough that we can say, look, we don't need any more changes from this version. Hopefully. I said hopefully. Yeah, let's just wait till everything good about CRDs is merged, and then we can talk about LTS after that. Yeah, that's the point I'm making. But presumably there's some sort of asymptotic effect here, where at some point there will be fewer and fewer changes required.
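For a picture of what "inserting devices into the container's OCI config" amounts to: an NRI plugin ultimately produces an adjustment to the container's OCI runtime spec before the runtime creates the container. Here is a sketch at the spec level using the opencontainers runtime-spec types; the device path and numbers are illustrative, and this shows the shape of the result, not the NRI plugin API itself:

```go
package main

import (
	"encoding/json"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// A bare-bones OCI spec for a Linux container.
	spec := specs.Spec{
		Version: specs.Version,
		Linux:   &specs.Linux{},
	}

	// What an NRI-style adjustment boils down to: appending a device node
	// to the spec. The path and major/minor numbers here are illustrative.
	spec.Linux.Devices = append(spec.Linux.Devices, specs.LinuxDevice{
		Path:  "/dev/nvidia0",
		Type:  "c", // character device
		Major: 195,
		Minor: 0,
	})

	// Print the resulting config.json fragment.
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(spec)
}
```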
So maybe just a little counter to that: sometimes you do encounter things that you cannot build out of tree. And so what do we do then? And do we ever get any feedback there? And on the CRD thing, the rule basically now is that if you build functionality for core APIs, you have to have a mechanism to express it in CRDs at some point. I think back to when we added warnings. We could have implemented it very quickly in-tree, but then we said, no, we have to add it to webhooks too; conversion webhooks need to be able to send warnings and other things, because we didn't want to lose parity with that external system. So it's still going to evolve indefinitely, and one day there might be another new feature that Gateway API wants, and then you'll be like, well, I guess I don't get to use that for two years now.

Anyone else? I think we might be done. Okay, since I have the mic in hand, I'm going to ask a question myself. This has been a good checkpoint of where we're at, but I would love to hear from each of you: where do you think we're going to go? What conversation are we going to be having at the next KubeCon about this? Where do you want this to go?

So personally, where I would love this to go is some form of extended support, especially for security fixes. I think that's probably table stakes, and there's pretty broad consensus there, just because it can be hard for people to move. And there's definitely recognition, I know at least for AWS, and I know from Microsoft too, that the vendors are already doing this work: we're upgrading old versions of Go and old versions of Kubernetes and backporting security fixes, because we have to be FedRAMP compliant. It's work we're already doing, so getting it a little more formalized, and not all doing it so siloed, just makes sense. These other questions around skip versions are great questions; I don't know where that lands. But in my mind, security and critical bug fixes are where we have the most consensus, and in some ways the easiest part to agree on.

Speaking purely as a Gateway maintainer, I'm really hoping we can keep the community experience as good as it possibly can be, with minimal additional cost to maintainers of the project. What that means from my perspective is: I want to be able to release a set of CRDs in Gateway API that can be available to 95-plus percent of Kubernetes users, and I also want some kind of path to use the latest and greatest CRD features within the next year or two. I don't know how we achieve both, because it feels like they push against each other, but if I had a wish list, that would be it.

Mike could drag me into this discussion, but from a containerd perspective: if both Kubernetes and containerd have ideas of LTS, it probably makes a lot of sense for us to be better synced. I see Dawn here, and containerd as a maintainer team is very poorly represented at SIG Node; I think Mike Brown is usually there, and it sits on my calendar and I always feel bad that I don't join. But yeah, if you're going to have multiple projects each with a definition of LTS, there should probably be some better alignment, to make sure we're not making it extremely confusing how they work together.
Yeah, so for me personally, I want us to start solidifying what a path to a skip version actually looks like. And I think that requires, I'm forgetting the name of the design some folks have had, I think Jordan was part of it, where we were talking about, I don't know if it was feature sets, but reducing the scope of compatibility mismatches you'd have across upgrades. But in general, I think we're at the point in the lifecycle of our project where upgrades are much more critical than perhaps they were in the past. They shouldn't induce fear; they shouldn't cause stress. I don't know how we get there, but I'd love to have more upgrade tests in open source CI. It's just, if you look at a KEP, there's a... I know, I didn't want to say it out loud, because it hurts to say it out loud: we have no upgrade tests. There are jobs that upgrade clusters; those are not upgrade tests. And there are very few even of those. There's really not much investment in this right now. That's part of why I'm a bit skeptical, and why I feel bad alongside you: even with the current set of releases, the question of how users sustain this is missing. Yeah, so when you review a KEP, or when you write one, the document asks: did you manually upgrade-test this? And I'm like, at what point? Once in the feature's lifecycle? Yeah, sure, I totally did that, I promise. Not that I have any proof.

There are some upgrade jobs that have caught some regressions; that's what Fabrizio and those kubeadm-type jobs are doing. But there is a problem even bigger than this: there is no testing of different features enabled at the same time. And if you add upgrades to that, it's even worse. We once demoted a beta feature all the way back to alpha, because when it was running alongside other features, it was breaking the cluster. So it's not just upgrade after upgrade; it's upgrades plus all the things running in the cluster having to work at the same time. And that's why I think this is a very big problem that all of us need to invest time in and figure out how to solve. Upgrades plus all the things that are running: all very good points.

I'd like to make one public plea: let's not make the number of months of support an arms race. One vendor supports two years, then another; then the first one supports three, then the other; and then the next one supports four years, and so on. Then we really won't ever get past 1.24, .25, or .26. So if you are involved in those conversations at your vendor, please point out that this creates a strain on the community, and that we need those people to actually be merging security patches upstream for the versions we already committed to support, because those people are short-staffed already. Thanks.

All right, I see the folks for the next session coming in, so I think we're out of time. Thank you all. Thanks.