It is awesome to be home. I don't know who thought to put this in November, but you got really lucky. When I first proposed this talk so many months ago, I figured it'd be really easy to fill 45 minutes on this topic. Then some fool made it a keynote, and now I get to do it all in 15 minutes, which is way harder, so let's get going.

So a project like Kubernetes has many parents. As one of those parents, I think a lot about my child's future. What is it gonna be when it grows up? Will it be successful? When kids are young, you don't wanna put too much pressure on them, right? But you want them to be good at something, so you teach them what you can, and you try to prepare them for the real world. It feels like just yesterday we were talking about Kubernetes 1.0, arguing about what the scope was, what the mission was, what we were good for, what we did. And now we are in the run-up to the end of the first decade of Kubernetes. Our baby's grown up, but not fully grown yet. Like a real child, I'm still thinking about its future. Now, I admit this talk is a little premature because it's only nine and a half, but if you're a parent, if you have kids, you know that the half matters, right?

So to prepare for this talk, I sent out some seed questions, just some thought-provoking questions, to Kube long-timers, maintainers, SIG leads, and top contributors, and I asked them what they thought the next 10 years might hold for Kubernetes, both opportunities and threats. And surprisingly, very few people actually talked about technology topics, but there were a few really obvious, clear themes that emerged from the feedback over and over and over again. So I want to highlight these things today. I want to talk about them in these people's own words, and since it's me on the stage and not them, I'm gonna add all my own color. So the first one is a real doozy. I want you to think about this one.
Jago McLeod from Google said the next trillion core-hours. Trillion. Think about just how big that is. And the crazy part is I believe it, and I think he might be underestimating. It's a huge number. So what is going to drive a trillion core-hours? Well, what has changed in our industry in the last 10 years or so? Cloud adoption through the roof. Containers are everywhere, yeah. Web3, maybe not that one. The obvious answer, even factoring for recency bias, is AI.

I'm not that old, and don't let my hair fool you, that's just genetics, but I've been around long enough to see a lot of things change. I've seen the rise of the internet. I've seen Linux happen. I've watched smartphones become a thing, electric cars, the cloud, and now AI and ML. Janet is one of our SIG leads in SIG Apps. She pointed out that AI and ML are obviously going to drive the need for compute resources at a new level of scale and efficiency, and I think that is saying it really lightly. Clayton Coleman got straight to the point and said inference is the new web app. And remember the early days of Kubernetes 1.0, what was it good for? It was good for web apps. Well, just like Kubernetes was for web apps then, everybody has an inference app now, right? And for those of you who didn't immediately know what inference is (like me, I had to learn about this), that's the part where you use the AI instead of training the AI.

So in other words, our success may hinge entirely on our ability to satisfy the needs of AI and ML users. And I'm sure you're all tired of hearing about all the work that we're trying to do around AI and ML, but we have more. I do think that Kubernetes is really well positioned to be the platform of choice for AI and ML workloads. We have a lot of great primitives, we have a ton of brain trust, a lot of momentum and energy; this conference will show you that.
But we don't really know yet all of the things that we're gonna need to do to make AI and ML successful on Kubernetes. So we need to be listening and watching and probing and asking questions. And frankly, we need to consider things that we've maybe ignored or disregarded before. We need to be really active in this. This is a call to all the maintainers and all of the users: let's look at what we need to build to make AI successful.

The way I see it, the advent of AI is about as impactful as the internet. Now, do you remember what the internet was for 25 years ago? It was for sharing recipes and for talking to your friends on AOL, right? So what is AI going to be for in 20 years? I don't think we have any clue. Everything we know about it, we're gonna be wrong. But that's okay, let's go for the ride. What does that mean for Kubernetes? Honestly, I'm not sure. I don't really understand AI. I've tried to learn the math, but it's beyond me. But at the bottom of the AI/ML stack is a ton of software that runs on a ton of hardware. And that's a problem that I think we understand. Dawn had a different and slightly more optimistic perspective, which I really appreciate: AI and ML are an opportunity for Kubernetes, not just an obligation. Perhaps 10 years from now, natural language processing will be the way that we all interact with our systems and run our workloads.

One thing that I'm sure is going to fall out of all of this, though, is an increased focus on clusters, and in particular, how to manage many clusters. Jeremy, who runs SIG Multicluster, had this to say. He said multi-cluster is unavoidable, but he also said that users need to work at a higher level, which might seem like contradictory statements, but I don't think they are. The truth is clusters are not an interesting abstraction for most users, but we have made them important.
Lots of decisions are being made about clusters: what version you use, what the policies are, how the networking works, how the storage works, how it scales, and so on. Running a big app across multiple clusters is still too hard. If you look at these huge ML jobs that we're talking about this week, they need a ton of compute, right? And as far as the users are concerned, that's one job. They just want to do one thing: run my training job. So maybe we, Kubernetes, should support a 100,000-node mega-cluster. Sounds fun, right? I think it's not a great idea. But also telling people, hey, that's cool, but go run your app across 40 different clusters? Also maybe not a great idea.

So my take on this is that we really need to head in a direction that de-emphasizes clusters for most users most of the time. Now, I don't think we're gonna go fully cluster-less, but we can go less cluster-full. And I don't mean PaaS, I mean high-fidelity Kubernetes, but making things just work when multiple clusters are in play. Clusters should mostly become an invisible implementation detail for users. And for a slightly contrarian point of view, Vojtek expressed a thought which I interpret as: are we using multi-cluster as an excuse to avoid solving single-cluster problems? We should be really careful not to offload our own problems and our own complexity onto our users.

Playing off that, the next theme that emerged is really about complexity. Almost everybody who emailed me back in response to these seed questions had something to say about complexity. Kubernetes is a big project. It's not exactly known for being simple; simplicity is not something people charge us with very often. So one of the seed questions I asked was about existential risk. Antonio said something that really resonates with me; I say it all the time: we can't do everything for everyone. And Dawn hit on the same point, almost exactly. And Vojtek came back with a similar concern, but he focused on an interesting aspect.
We, the maintainers of Kubernetes, are very familiar and comfortable with Kubernetes, and that complexity doesn't really affect us. We are not our users. We are not all of you here in the room. He cites this as the biggest risk to Kubernetes.

And what do I mean by complexity here? Let's talk about that for a second. I think complexity can be broken down into internal complexity and external complexity. Internal complexity is our ability to maintain the project, to evolve the project, to add new features and APIs, to solve new problems, and to debug it. Mostly our end users don't have to deal with this internal complexity, until something goes wrong, and then it leaks right out. External complexity, on the other hand, is things like the API and user experience. It's all the things that you have to deal with in order to use the system. Every API field we add increases complexity. These fields show up in YAML dumps and diffs, they show up in the documentation, and they demand room in your busy brain.

So Mike asks: where are the rails that make Kubernetes dead simple? And Tim Bannister joined in the same chorus, talking about the complexity. And at the risk of getting repetitive, Maciej went here too, emphasizing stability. And Jordan also touched on this: the more complex Kubernetes gets, the harder it gets to do anything at all, until eventually we can't move. We can't add new features or solve problems.

Clayton dropped this pearl on me: Kubernetes won not because we're the best at any one thing, but because we can run almost everything reasonably well. And I really like that; the emphasis on "reasonably well" is mine. It suggests that there's more we could do, but we're choosing not to. In particular, for the last 10 years, we've always made a trade-off between squeezing out all the performance versus getting better automation, more fungibility, or a simpler experience. More recently, I've seen a strong push to squeeze out every last drop of performance from these machines.
And AI/ML is often at the heart of this. This manifests in the form of narrower and more special-purpose features with more complex use cases and more sophisticated APIs. That's code for complicated. The result is increased operational and conceptual complexity, and that's a tax that we all pay, users and maintainers alike.

So my thinking, my reflection on all this feedback: something has to give. I have this idea of a complexity budget in my head. It's a pretty simple idea. There's a finite amount of complexity that we can absorb into the project over a certain amount of time. The more complex something we add is, the more of our budget it eats up. And when the budget runs out, bad things happen. We can't fix the project, we don't understand it, things crash. And how do you manage this? How do you measure this? Honestly, I have no idea. But as engineers, we generally know when we are overspending our budget.

So for every new feature that comes in, we have to ask: do we have the complexity budget for this in the project? Do our users have the complexity budget to absorb this? Is this what they want to spend their budget on? Remember, Kubernetes is just one of many tools that all of our users use, that all of you use. If I could squeeze out 5% better performance, but it comes at the cost of some extra complexity, is that a fair trade-off? Or if I could enable some new, relatively niche use case, but it comes at the cost of an API feature that everybody has to see, is that worthwhile? Should we do it? And if not, how do we say no?

Jago called this out: as Kubernetes evolves, our next generation of users will necessarily be less invested in the details. We've been pretty lucky so far. We've had a great user base that has invested a ton of energy and time and passion into learning what Kubernetes is and how it works. But that won't continue.
As we have crossed the chasm, and as we are in sort of general adoption, the willingness to do that amount of learning just necessarily goes down. Jago also called out that Kubernetes is not just for games and web stores anymore; real lives depend on it. Vojtek calls for actually raising the bar. We have a wonderful community of incredibly talented and passionate contributors and maintainers. You've met some of them here this week, hopefully. But raising the bar requires all of us to be willing to say no. To be willing to say no to things that we really do want, things that are not bad ideas, things that seem obvious and easy on their own, and things that, honestly, the companies paying us might really want. Another way to look at this, from the inimitable Solomon Hykes: "No" is temporary, "yes" is forever. We can always add something later, but we can never take it away. This tweet from many years ago still sticks with me.

Complexity isn't just about what's in Kubernetes, though; it's also about our ecosystem. Having a rich ecosystem is absolutely critical to the success of Kubernetes, but our cloud native ecosystem is huge. So Janet thinks we should help our users navigate it more. We should have fewer "could do" statements and more "should do" statements. We need to pave the path to success and make sure that users walk it. If you combine this with the idea of the complexity budget, you end up with something that might be contentious: maybe we should shrink the landscape. If we're here 10 years from now and the landscape is 10 times the size, I think we made a mistake. We're so accustomed to the idea that more is better, but I actually do think we're at a point where less is going to be more. It's a good thing when winners emerge. It's a good thing for users when there are fewer choices, when there's an obvious successful path, when you can go to Stack Overflow and find an answer to your questions. Sftim touched on the same point.
We have so many ways to plug into Kubernetes, it's hard to know what works. And Dawn also said this; she even said the S word. Michelle called out a related point: fewer and fewer ideas are coming back into the Kubernetes project. We want to be a place where people build up, but we also want to take the best ideas back so we can make them available to the most users possible. And as we get to the end here, I'm gonna offer one thought without any comment at all. I'm just gonna let you read.

So to wrap up the session today, I want to offer one last quote that really, really stood out to me from all the quotes I got: Kubernetes should stay unfinished. And this hit me like a ton of bricks. It felt like something we need to say to each other all the time. Whenever we see each other sort of succumbing to the desire to say yes to things, we can trot this out. It's our moral support for each other. It's okay not to be done. The next 10 years have a lot of evolution for us, and we don't really know what's ahead of us, but it's okay to accept that we're not done. In fact, I think this should be our motto. If I had done enough planning and done my homework more than three or four weeks ago, I would have put it on a T-shirt, and I'd be wearing that T-shirt up here today. So this should be our motto, right? We'll translate it to Latin and we're done.

So I've thrown out a lot of ideas today, not all mine. I'm sure some of you disagree with me. If you disagree with me, I challenge you: come to the Google booth afterwards. I'm gonna be hanging out. I would love to talk to you and hear what you think the next 10 years are gonna hold for Kubernetes, or what you hope will happen over those next 10 years. Thank you very much for your attention today.