Everyone, thank you for being brave enough and courageous enough, and hopefully feeling safe enough, to show up in person. This is my first conference since the world changed, so I'm actually pretty excited, although I suspect it's going to take us a while to feel like this is totally normal. So I appreciate you showing up today, and hopefully what I'll talk about is something that you couldn't have found any other way. Being at OpenShift Commons, this is an important time for us, because we're starting to think about what comes next. As Architect at Red Hat, I've been involved in OpenShift since the very beginning, and I've been with Kubernetes since the beginning of that project. We've learned and achieved a huge amount. Karina was walking through all of the things that have come in OpenShift after 20 releases of OpenShift 3 and 4, adding them together, and 20 or 21 releases of Kubernetes; I can't count anymore. Call it roughly 20 releases of iteration in the Kubernetes community. It's been customer success, community success, ecosystem success. But part of my job is to think about the things that aren't working. What are the gaps? Where are the places that we can do better? So the talk is Kubernetes as the control plane for the hybrid cloud. If you attended KubeCon virtually in May, Joe Fernandes and I gave a little bit of a teaser for this, and I also gave a keynote where I talked about some of the ideas that led into this. Now that we've had an extra three or four months to work through those, I want to talk a little bit about the broader context and try to tie it together for you. Even though this is still very early for us, thinking about what comes next is an important part of my job, and so I'd like to share it with you. So there's a bunch of problems that we all face, and there are problems that we can fix by adding something.
That's the additive approach: adding a feature, adding a new component to OpenShift, or adding a new technology that accelerates a particular part of our workflows. But there are certain problems that you cannot fix just by adding one more thing. Well, actually, that's not quite true, because what you can do is build a layer on top. And the layer on top is what I'm going to talk about today. But first I want to describe what I mean by layers. Over the last 25 years at Red Hat, we've been involved in all of the technologies and areas that you see on this list. It's roughly chronological: each of these was an abstraction that either made a previously existing problem disappear or simplified a problem in a way that allowed us to reach another level. There are a lot of commonalities between these. Open source drove every single one of them, or was a fundamental part of how we reached it, and most of these are key parts of our lives today. Obviously, if you're here at KubeCon, I hope Kubernetes is a key part of your life. But the common factor that I wanted to reinforce is that all of these, to some degree or another, have a purpose, and the purpose is helping us run, build, and sustain applications. And "applications" is a really generic way of saying the stuff that we do that either drives business value or creates value for others but doesn't necessarily drive any business value for ourselves. It might be the things that we have to run for regulatory requirements. It might be the things that we choose to run in our personal labs or in our homes. This is a little bit of a Red Hat-centric view, and I chose it specifically because at each of these layers, the open source communities either built on, built around, or were integral to the evolution of these. Open source is a fundamental part of every technological advance of the last 25 years.
And so at KubeCon, which is about celebrating all of us working on our problems together, what we're all roughly trying to do is get better at building and sustaining applications. So when we talk about another layer, we're kind of asking: what could come next? What might come next? Have we been successful at Kubernetes? If you're not familiar with it, this is the CNCF landscape chart. I had to shrink it to make it fit on one page. We laugh about how complex this diagram is, but the way I prefer to look at it is that these are solutions to real problems. We may not all agree on what the actual fix should be. We might say, oh no, that technology is not for me, I'm much more interested in this other technology. But all of us have contributed to the breadth of this ecosystem, and every one of these technologies or tools or certified platforms or services is something that helps us build and run applications. And all of these are possible because of the layers below us. We're able to deal with more. We're able to solve more problems. A great example, which you can just barely see, because this is almost too big to fit on even a high-resolution laptop screen, is observability. Observability has been a huge open source transformation in the last five to ten years. There were capabilities before that made observability easy, if you were willing to pay lots of money and you were using a particular infrastructure; observability wasn't a new concept. But the idea of observing everything, of getting deep into the details and all the way up the stack, across tens of thousands of applications or services, across multiple different sets of hardware, that's something we're still coming to terms with. That problem is a consequence of how successful we have been at building and running more. Ultimately, every time we go up a level of abstraction, we're adding more. We're adding more things to deal with.
We have more technologies we need to understand or integrate. So I like to think that part of my job, and I suspect part of the job of a lot of the folks in this room and at KubeCon, is to come up with an answer to this problem, which is that for the last 25 years our answer has been: we want to do more. We're building more. We have to handle more. We have to manage more. And "yes, please, more of everything" is, I think, one of the most succinct definitions I've heard for the phrase hybrid cloud. Hybrid cloud is the reality, because the reality is there's too much of everything at too many layers for any of us to really understand, or to control, or to even predict what we're going to need next. So another way of describing hybrid cloud is that hybrid cloud is the problem: there's too much of everything. How do we bring it together in a way that makes sense? What's the simplifying assumption, the simplifying abstraction, that takes everything on that CNCF landscape chart and boils it down to something that we can all appreciate and address without worrying too much about all of the details all at once? If that's the problem, how does it connect to our day to day? I talk to customers and community members, to partners, to technologists, to individuals. I listen to what the product organization hears from customers. I listen to what customers say about the product organization's priorities. And I look at the places where what we're trying to do is move above Kubernetes. We're all above Kubernetes. Karina, in that very first section, talked about the breadth of the things that exist above Kubernetes. And all of that basically boils down, I think, to three questions I hear the most, questions that all of us end up answering on top of the latest layer of the stack. How do I go beyond the limits of one?
Whether that's one cluster, which, as Karina said, we've all gone beyond; almost everyone I know of who's deploying Kubernetes at scale is running more than one. Sometimes, not always, people ask: how do I go beyond one cloud? Not necessarily because they planned to, but because they acquired someone, or they got a good deal, or someone made a strategic directional change, or someone got a better cost opportunity. Or it turns out that half of the stuff they didn't even know about was actually already running on one of those other clouds. It might be a question like: how do I go beyond the limits of a region? Kubernetes is fundamentally a technology designed to solve a simple problem: a closely co-located set of computers running software in a manner that hides the details of those individual machines. But that doesn't give you a solution for what happens when you want to run an application across seven geographies at once. What happens when your workload follows the sun? What happens when your workload has to deal with a large failure? These are questions that we all answer, and we're all coming up with approaches. The second question: how do you integrate more? Obviously the cloud native landscape was about new technologies, but there are also services. One of the biggest transformations of the last ten years has been delegating to others to do things on our behalf through an API. Infrastructure as code, or infrastructure as APIs: asking someone to take on a portion of the problem, giving them that responsibility so that you don't have to worry about it and can accomplish more. And so there are more services than ever before that we integrate, where someone has taken on the responsibility for keeping them running. We do this internally. We do this within organizations.
Vendors are increasingly moving from packaged software to managed services. This transition will only accelerate, because, again, another way to deal with complexity at scale is to put the problem in a box and talk about it from the outside. And finally, we have more teams. Every one of those applications is run by somebody. Those teams turn over. Those teams have to pass on knowledge. They have to be educated. All of us at some level are trying to deal with the complexity that we've created as we create additional business value. The solutions for these problems are out there. OpenShift, as part of our ecosystem, focuses on things that we can stand behind, but we can't stand behind everything. There's a huge ecosystem of partners and vendors here at KubeCon today who track and solve these problems. The open source communities are even larger than that: people scratching their own itch, taking a problem they know how to solve and solving it in a way that others can consume. In fact, nothing is wrong with the approach we've taken, which is that we all solve our individual problems, and at some point all of us end up solving very similar problems. We talk about CI/CD, continuous integration and continuous deployment. We talk about doing that across clusters. We might bring in abstractions like service mesh, which has been one of the hot topics, or observability. There are disagreements about how we do this; different use cases and different requirements lead us in different directions. But at a certain point, and this is, I think, one of the strengths of open source as a fundamental way of working: open source is relentless, because it's all of us trying to solve the problems we face. And it doesn't always mean that someone else has solved that problem already, or that the way they've solved it applies to my problem.
Or maybe I can't adopt it, or maybe I don't even know that someone else has solved the same problem as me. So open source is, I like to think of it as, a bit of a relentless wave, where we all bash our heads into the wall, write some code, and share it with the world. We move a little bit forward. We try all the paths. And occasionally, some smart person somewhere will get ahead of everybody else. They'll be like: ah, CI/CD, this is brilliant, I'll start a lecture circuit and make lots of money sharing my knowledge with others. Or maybe it's a vendor who gets a great idea. But that's just one smart person, or 10 smart people, or 100 smart people. For every one of those people, there are 1,000 others who are also smart and also solving that problem; they just don't necessarily have the time to focus on it. So as a group, both here in this room in the OpenShift community and at KubeCon, we're always working on common problems and trying to move them forward. And we've been doing this long enough, and I think the CNCF landscape, and cloud native in general, speaks to this, that we can start to ask: what are those common problems that all of us are trying to solve? If we stepped back, looked at them the same way, and said, you know, actually, our problems are the same; if we can simplify, if we could agree that maybe we don't get everything we want, or if we can pick the 90% of things that we all agree on, could we arrange those together into something that streamlines, unifies, and aligns? And the answer may be no; maybe we have to wait for another smart person to come along. But as part of my job, I sit all day and ask: what are we all doing that's very, very similar, such that maybe, just maybe, if we bring two or three or five or 15 separate problems together, they actually all turn out to be one problem? And if we simplify that problem, we might be able to make and share that simplification.
Just like every layer of the stack before us has done. So, describing the problem: we talked about layers, about problems that can't be solved by just adding more. Imagine an abstraction that's application-centric. Kubernetes is application-centric. Linux is application-centric. What are the things about Kubernetes that allowed us to move forward? One of them, and this comes up when I talk about things we have in common, is this: I hear that Kubernetes adds complexity. There has to be a payoff for Kubernetes to be worth adopting. What is that payoff? I think the most obvious payoff, though it's not always obvious to everyone who's new to the ecosystem, and after we work on Kubernetes for a while we forget about it, is this: for the most part, most of us were deploying chunks of Linux applications, and some Windows applications, into environments, onto machines. We were trying to roll them out, expose them to external users over a network, connect them up to other applications, perform rollouts, and survive machine failures. We were all doing that differently, and very smart people before Kubernetes had solutions for it. But we were all, for the most part, individually iterating on the approaches. That didn't make us wrong. It just made us inefficient, because we were focused on solving problems that somebody else, or the set of us collectively, had already mostly dealt with. If we could share that, if we could put that consistency in place, we would hopefully never have to solve that problem again. That let us move up to a higher level and focus on what we actually need to do.
We want to deliver hundreds of thousands of cloud native applications. We're bringing new geographically distributed capabilities and supporting global services. We need 24/7 uptime. We need to be able to survive failures of data centers, regions, geographies. Or sometimes we just have tens of thousands of developers who are increasingly solving different problems that all need to come together, and we have to make sense of all of that. Kubernetes gave us a way of standardizing a part of the problem. So what are the kinds of standardizations that would be useful? I don't have a complete answer for this. As I said, hybrid cloud is the problem: too much of everything. I'm going to talk about one possible approach that feels right to me, looking at the problems we're all facing. It starts incrementally; it has to build off what we know. I'm going to give three examples, and these are going to be pretty basic examples. You might say, Clayton, that's not obvious at all, go back to the drawing board, you should be smarter. And that's okay. I'm going to frame each one as a constraint, an opportunity or a superpower, and a consequence. And I'll talk a little bit about how we're thinking about this and how you can get involved. I think the fundamental constraint is kind of a no-duh thing: if I stand here at KubeCon and say, hey, we need to do something new, go rewrite everything, you're all going to laugh at me, right? To justify that, I would have to give you a 10x improvement in some area. Generally, the rule of thumb is that nobody's going to change anything unless the benefit is so obvious it's staring you in the face, and the bigger the hill you have to climb, the bigger the benefit has to be. So I definitely know that in this room, if I told you you had to throw away Kubernetes, you'd probably be like: oh sure, we don't love Kubernetes.
There are some problems with it; we're all totally psyched to go out and rebuild everything we do. That's, of course, a non-starter. So this is a very useful constraint, because it eliminates a huge class of possible approaches. And it's incremental: if I can promise that you can take most of your applications and get the benefit just by adding one thing, then I am only adding one thing to the ecosystem. The question is whether the benefit is going to be worth it, and whether it helps simplify the problems that come after. Now, you're going to laugh at how simplistic this diagram is. I'm trying to boil this down to its very essence because, again, we don't know exactly what the future looks like. But one of the things I like to think about is that if there's one problem everybody in Kubernetes agrees on, it's that if you want to keep something truly isolated, and truly isolated means to the utmost level of paranoia, you have to give it its own space. There are a couple of reasons for that. One is that Kubernetes is a collaborative environment; it was not designed for hard isolation at every level. It tries, and OpenShift has been relatively successful at giving you isolation at some levels, whether that's inside of containers or, as Karina mentioned, sandboxed containers, things like running containers inside of VMs inside of containers, which gets awfully complex. But some part of Kubernetes is fundamentally cooperative, collaborative. If you don't like what someone's doing on the cluster, you can catch it using ACS or other technologies. That's part of the price we pay because we brought the problem together and standardized it. So I'm going to start by saying: we could tease different teams apart, but if you give every team a cluster, you just end up with too many clusters. That's not what Kubernetes was about. Kubernetes was not about running tens of thousands of clusters.
Maybe some people benefit from running tens of thousands of clusters, but I think we can do better. At a very high level, we would all agree that teams like the things a cluster provides, and they want that isolation; we want that isolation between teams. If team A screws up, you don't want team B to pay the consequences. So, thinking about an abstraction: let's imagine, just for the sake of argument, a control plane, an abstract layer that sits between an application team that thinks it's using Kubernetes and an actual Kubernetes cluster. This comes with a whole host of abstraction challenges, and many people in the ecosystem have built or tried things like this. Every time you use GitOps, you're basically working in a mindset that uses this. Argo emulates parts of this pattern. ACM emulates parts of the pattern. We're all building similar tools. Can we take the idea, strip it down to its bare essence, and look at it through a different lens? This is something that gives us a new opportunity. If you have someone who thinks they're on a cluster, obviously they're going to say: okay, I want to kubectl apply my service and my deployment. I want to apply a Helm chart. I want to use GitOps. I want a CI/CD process. I expect that somewhere a container runs. So obviously, as part of this constraint, we would have to take a workload that someone specifies at a fairly high level and make sure it runs. But the best outcome would be that the team doesn't actually have to care about what a cluster is. This is something I think all of us fall victim to: we are technologists, so we think, I have a technology, I will solve a problem with it. When we come to KubeCon, we show icons of technologies. But the point isn't the technology. The technology is a tool that gets us to the next step. What's an abstraction that takes the cluster away from Kubernetes? Because the cluster isn't the point; the applications are the point.
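To make the "team that thinks it's using Kubernetes" idea concrete, here's a minimal sketch of the kind of workload such a team declares today. The names and image are placeholders; the point is that nothing in it mentions a machine, a cluster, or a cloud:

```yaml
# A minimal, illustrative workload. Nothing below names a machine,
# a cluster, or a cloud; all names and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: web
        image: quay.io/example/my-service:v1   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - port: 80
    targetPort: 8080
```

Everything cluster-specific lives below this declaration, which is exactly what makes an intermediate control plane plausible: a layer could accept this same manifest and decide separately where it actually runs.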
So when I say I want to run a service and a deployment, I don't care what machine it runs on the vast majority of the time, so why do I care what cluster it runs on? Could we make Kubernetes clusterless? How do we separate the high-level problem, I expect a service to be running somewhere, from the low-level problem, I've got to maintain and run a cluster? Someone's still going to be doing that; at the end of the day, there's still a physical machine running a bit of software. But we're getting to the point where I just don't care about that problem anymore. That's a low-level detail that someone can abstract for me. If we do this right, and again, this is a constraint, we don't necessarily know that we can do this in a perfect fashion, but if I can start with a service and a deployment and not care what cluster it runs on, and as a user I don't see the difference, then all my tools still work. My user experiences still work. Sure, there may be some tweaks; nothing is free, and you're going to have to adapt. But if I can keep that idea isolated, we get something we don't have today, because today, when a cluster fails, bad things happen. All of us, to some degree, have to deal with this. We build solutions that let us say which cluster an application runs on. I suspect many people using a GitOps flow or a CI/CD flow have a mapping somewhere that says this application goes to this cluster. These aren't new ideas. We had these ideas back when we were just talking about machines and software moving across them. What is the common pattern that we could all agree on for taking an application and putting it on a cluster, so that when it moves, we don't care? How do we make that part of our normal operations? A superpower that Kubernetes gave development teams is the ability to delete a pod, or to stop a machine, and verify: hey, my app kept working. You couldn't do that before.
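Many GitOps and CI/CD flows already keep the app-to-cluster mapping described here, usually as ad hoc pipeline configuration. Purely as an illustration of what a shared, declarative version might look like, here is a sketch; the `Placement` kind, its API group, and every field are hypothetical, loosely inspired by patterns in tools like ACM and Argo CD, not a real API:

```yaml
# Hypothetical placement resource (illustrative only, not a real API):
# "run the matching workload on two healthy clusters anywhere; if one
# fails, reschedule it," so cluster failure becomes a routine event.
apiVersion: placement.example.io/v1alpha1
kind: Placement
metadata:
  name: my-service-placement
spec:
  workloadSelector:
    matchLabels:
      app: my-service        # selects the team's Deployment and Service
  requirements:
    region: any              # the app does not care where it runs
  clusters: 2                # keep it running on two clusters at once
  onClusterFailure: reschedule
```

The specific schema doesn't matter; what matters is that if the mapping is declarative and common, "this cluster failed, move the application" stops being a bespoke runbook and becomes an operation everyone's tools understand.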
But now all of us have a common vocabulary for dealing with the failure of one machine. How would we get the superpower of not caring about a cluster, a geography, a region, a cloud, half of our data center infrastructure, a key vendor? How do we build the abstractions that let us test, tolerate, and try failure on a much larger scale? How do we do that in a common way, so that when you add a new person to your team, they don't have to learn your way of doing application mobility, but come in with a common understanding of a common problem? It absolutely isn't going to be the perfect solution to the problem, but it's a common problem with a common solution. And that, again, is what open source is about: grinding down all of the problems we have into nice smooth pebbles that we can fit together, so we can move on to solving our real problems. So in this example, this level of abstraction is really about trying to hide the fact that there are two different clusters underneath. Sometimes you can't hide that; at the end of the day, we're still running software somewhere. But if we, as a group, as an ecosystem, as a community, are doing this the same way, that becomes the center of gravity. It becomes the path well traveled, the downhill path that gets better and better. If you can test cluster failure day in, day out, if deployment looks like the same process, if rollout looks like the same process as a cluster failing, think about the options that gives us. And there are real constraints. In a cluster, there may be hardware that's only available on a certain machine; your app's not running if you don't have access to that hardware. That's the part of your application that's specific, that is not infrastructure agnostic. But for all the other parts, for all the other bits of the application, we don't want to focus on how our applications are special. Or actually, no, I take that back.
We do want to focus on how our applications are special, and let all of the other problems be common problems. Something is special when an application isn't cloud agnostic or cluster agnostic, when where something runs matters. How do we break those into smaller and smaller problems, so that at a high level we can say: this is an application, I don't care where it runs? I'd be willing to wager that covers about 97 or 98% of all the applications that run on Kubernetes today. The other 2%? Judging by my conversations, 90% of OpenShift customers have at least one. There's always something special: I need hardware. I'm running network infrastructure. I'm the backbone of an important payment processor and this has to run in a specific regulatory area. Those are the unique characteristics of the app, not what cluster it runs on. A consequence of this abstraction is that we actually have to start thinking about how applications connect with each other. We actually do this today. There are at least 15 or 30 or 75 service mesh projects out there that talk about the abstractions that help us link services together in large organizations, because that's what we do: we build upon and connect and integrate these different bits of technology. As for the characteristics of a service mesh, every one of us approaches the problem slightly differently because we have slightly different requirements. Service mesh is an organizational construct. It is the ability to scale services and applications across a wide enough group that it all doesn't just collapse into a flaming pile of ashes. The technology helps us, but unless we address, at the same time as the service mesh problem, the organizational problems of policy, control, and placement, we're unable to truly solve the whole problem. In fact, a lot of the time when I look at the information we gather about how people are using service mesh, the biggest problem isn't the service mesh.
It's how they pick a pattern for the way that service mesh is used. This is an area where, if we can consolidate and come together, we can find common approaches that allow us not just to use the technologies for cross-cluster work, but to figure out the common way that most applications across all footprints should run. Again, we don't know all of the details here. In fact, I'll be quite honest: is this the future of Kubernetes, a diagram like this, a layer that lets you run different types of applications with different sets of APIs, with integrations on the side, that picks a winner or a set of winners, or has a pluggable layer for all of these different technologies: identity, service interconnectivity, networking, storage, backup, data consolidation, data duplication, the ability to represent the data of your application distinct from where it runs? I don't know. We're just at the beginning of this process. But I will say, based on what I've heard and the people I've talked to: we are absolutely all solving the same problems. What I would like to do, and what Red Hat is very interested in doing, is working with customers, partners, users, and this community to try and find the commonalities. Diane is probably jumping up and down because I keep saying "the things that we have in common," which is the motto of OpenShift Commons: finding the points where we work together, where we're ready to say not just "this is a technology for you," but "this is a pattern that we all agree on," one that simplifies, standardizes, and aligns where we're going. We're very early in this process. In fact, it's really too early to even pretend I'm going to talk about a roadmap. But what we are doing is prototyping these ideas in the open. We're talking about them here in the community and in our one-on-ones. We're working concretely through the list of features that Karina showed around multicluster.
Each of those capabilities is part of a discussion with customers, partners, and users about what problems they're trying to solve. We're trying to work through and offer those incremental capabilities to give you options where you might diverge, so that we can use that as input to the next step, which is: how do we converge? I've added two email addresses up here. Rob Szumski is a PM who is helping coordinate the early parts of what we talked about here: Kubernetes as the control plane for the hybrid cloud. The prototype project that we have is called kcp, a Kube-like control plane, although we're very cagey about whether that's what it actually stands for. If you are interested in these ideas, if you have use cases, if you'd like to see another layer that feels Kube-like and standardizes these problems, we want to gather your feedback over the next year. This is going to be a very active area of investigation for us, so I hope you'll come along for the ride. And I hope the ideas and concepts here are ones you can take and use to ask questions of us: why aren't you helping us converge? Why aren't you helping us simplify what we're doing? So thank you very much. I hope you all have a great KubeCon. If you'd like to talk about any of these ideas, I will be here all week. Thank you.