Good morning, good afternoon, good evening, and welcome to the very first live episode of KBE Insider. It's a brand new show. I'll hand it off to my illustrious friend Langdon White to tell you a little more about it, but I'm very excited to have the show coming to the channel, and the folks helping us with it are awesome. So thank you very much. Langdon, tell us a little more about the show.

Definitely. I'm Langdon White; you may know me from another show on the channel called The Level Up Hour, and you should definitely come check out the other things we do here. This show works in combination with a site we relaunched during Summit, I guess two weeks ago now, called Kubernetes by Example, at kubernetesbyexample.com. The idea of the site is to be really focused on how-tos and walkthroughs at the truly basic levels of Kubernetes. So if you don't really understand what a pod is, there's a whole area about what a pod is, explaining not only how to use one in a brass-tacks sort of way, but also the reason for it, so you can get a deeper understanding of what Kubernetes is and how it functions. What we were trying to do with the site is take a multi-learner approach: there's video content, actual training-class-style content, and written content, so that depending on what kind of learner you are, you can approach it in the way that works best for you. That was one aspect of it. But as part of that, we also wanted to do some live content, because we know a lot of people, as we've seen with the OpenShift.tv channel, have really started to engage with the Twitch, or streaming, style of content.
So this show is a nod to that; we're trying to deliver that type of content as well. But specifically, what we're doing with this show is trying to give you some of the philosophy behind Kubernetes, or the ethos, or whatever word you like to use. It really helps, when you're trying to learn something new, to understand what its goals are and why it does what it does, because then things become more intuitive when you're looking at the actual content: it all starts to fit together, because you understand the larger thing it's trying to build. So we're trying to give some of that color through interviews with the people who actually make Kubernetes go. As part of that, we also need to give you the context in which all of that is happening. And so Mina, who is going to be a regular on the show, and who also updates the news content on the website, is going to tell us what's been happening lately in Kubernetes land over the last month, because this show appears every month. You can keep going back to the site to learn that information as well. And this will get a lot tighter as I repeat it more often. But without further ado, I'd like to introduce Mina. Do you want to tell us a little about what's happening in Kubernetes?

Yeah, absolutely. Hi, everyone. I'm Mina. If you saw episode zero of our show, you may already be familiar with my face, but I'm here to tell you a little about what we've uploaded to the KBE website since we launched a couple of weeks ago. As you may know, Kubernetes turned seven this month. So with that, we wanted the first week, a couple of weeks ago, to be an introduction to Kubernetes. We definitely wanted to cover some of the latest news, but we also wanted to talk about the five tips we wish we'd known sooner about Kubernetes.
Those are: successful automation requires diligent auditing; ignore Kubernetes pod labeling at your budget's peril; understand your application's resource needs; don't play around with etcd; and you don't need to go it alone. Those were the five tips we wanted to bring to you so that you know them before you actually have to. Then there was Siloscape, the first malware to target Windows containers, which broke out of Kubernetes clusters to plant backdoors and raid nodes for credentials. That was pretty important; it was everywhere, and people were freaking out about it. And then we wanted to bring you a case study. Flipkart is India's leading e-commerce company, and they recently adopted OpenEBS for storage on Kubernetes. The key lessons the Kubernetes platform team at Flipkart learned from this migration: being production ready is really important, obviously; managing storage resources; creating a volume group construct; LVM partitioning; and disk failure response. Those were the five things they focused on most as they were migrating. And then in the second week, we wanted to bring you more opinion pieces from thought leaders in the space. Most notably, Matt Asay said that we're thinking about Kubernetes all wrong: he told us smaller teams should try using Kubernetes like an app server, instead of treating it like a centralized cloud. Then we had David Linthicum, who said it's time to get more aggressive with Kubernetes. Kubernetes is really mature now, as I just mentioned, it turned seven, and he's saying it's time to take some risks and develop the next generation of applications. He even said that perhaps we can weaponize it to build a better business. And then we also gave you the top 25 Kubernetes experts to follow on Twitter.
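Two of the tips above, labeling your pods and understanding your application's resource needs, show up directly in a pod spec. Here's a minimal, hypothetical manifest illustrating both; every name in it is made up for the example, not taken from the articles:

```yaml
# Hypothetical example: labels for ownership/cost attribution and
# explicit resource requests/limits, per the tips above.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # made-up name
  labels:
    app: example-app
    team: payments           # label ownership so costs can be attributed
    env: production
spec:
  containers:
  - name: web
    image: nginx:1.21        # stand-in image
    resources:
      requests:              # what the scheduler reserves for the pod
        cpu: 250m
        memory: 128Mi
      limits:                # hard caps enforced at runtime
        cpu: 500m
        memory: 256Mi
```

Unlabeled pods with no requests still run, but you can't attribute their cost or predict their scheduling, which is what the "at your budget's peril" warning is about.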
Whether you're just learning Kubernetes or you're already a seasoned container buff, you'll need the right resources, such as tutorials and monitoring tools, which is what we're trying to give you with the KBE website anyway. That also includes following the right people on Twitter, who can open your eyes to what you can do with Kubernetes. So that was a quick highlight of what we've covered in the KBE News section since we launched. I'm going to drop some links in the chat in case you want to go read the articles, but you can always come to the KBE News section of the website to see what's going on in the world of Kubernetes and keep yourself up to date. And with that, I'm giving it back to Chris Short. Take it away.

Yes, I think it's vitally important that if you're mucking with etcd, you do not muck with etcd unless it's just to give it the performance profile it needs. So thank you, Mina. Awesome work. Gordon Tillmore dropped the link to the overarching news page in chat, so feel free to drop the rest of those links too, Mina. So Langdon, we have a special guest with us, right?

We do. We should probably introduce this special guest of ours. Right. Although, how much introduction does he really need? This is Clayton Coleman, maybe architect of Kubernetes for Red Hat; I don't know what your actual title is. As we often talk about on another show, Red Hat is really good about changing group names and titles and all that stuff on a really regular basis. So we always like to say: Clayton, could you introduce your title yourself? Because you likely know.

So today, my title is probably something like Architect for Hybrid Cloud Applications in a Changing and Complex World Where Kubernetes Is Super Important and Helps You Get a Lot of Stuff Done but Can Be Even Better. That's my title today.
But you have more, Red Hat. That's what I'm going with. Yeah. Nice. Cool. So what is it that you do most of the time with Kubernetes, in your job role?

Sure. For the last seven years, since just before Kubernetes was publicly announced, I've been part of the project, and I've shifted my role over time. I still contribute heavily. Well, I'd like to say I contribute heavily; maybe a trickle compared to what I was lucky enough to do for the first three or four years. I participate in SIG Architecture and a number of other SIGs, trying to help smooth over the gaps. We've got a pretty effective community system these days, between the SIG contributors and the community that's built up around Kubernetes, all the people who participate, from big companies to individuals using it in their home labs. So I function almost as a background cog, making sure the stuff ticks over. I spend a lot of time focused on the thorny or gnarly issues, and I try to help teams within Red Hat, teams across companies, and individuals catch trends early. Kubernetes the project has a really firm boundary, and there's a whole bunch of stuff out there beyond it; I try to help people move across that interface. Is it something Kubernetes needs to improve? Okay, let's sort it out, work with some teams and people, and bring together the folks who care about the issue. And sometimes it's around or above Kubernetes. I spend a lot of time with OpenShift, the OpenShift community, and people using Kubernetes in production. I spend a lot of time listening to what they say, and not all of it is positive. When you make something this important to people, it matters when it breaks, when something goes wrong, when you start hitting limits, when you hit the question of: what is Kubernetes great for?
And what is Kubernetes not great for? When you start hitting those limits, I try to help think about where Kubernetes can go, where the ecosystem around it can go, or how Kubernetes itself should change. So it's a mishmash. I write a lot of PowerPoint presentations; I do most of my coding in PowerPoint these days. But it's a lot of communication: helping people come together and find that lucky person who cares about the same problem they do. That's actually what I really enjoy. When I can connect people who have the same problem, they can go fix it in Kubernetes, and we can make the world a little bit better every day.

Right. Yeah. I'd actually like to explore the communication part of the problem a bunch more, but before we do that, I'd like to ask, and we're trying to set up a theme here for the show: can you tell us a little about how you got into open source to begin with? What brought you into that community, or into that world?

Well, it was really interesting. I went to work for this company called Red Hat, and I'd certainly used open source prior to that. I actually worked at IBM, ironically enough; started out of college and worked at IBM in North Carolina for about 10 years. I did a lot there, and I knew open source and was familiar with it. A bunch of friends worked at Red Hat, and they said, we're working on this really cool thing, it's going to change the world, it's called containers. And I was like, man, that's not interesting. Well, okay, fine, whatever. So in the early days I used open source, and Red Hat is very intense about open source, so it was kind of like, oh, okay, this is interesting. And it's actually been awesome working at Red Hat.
Part of the mindset there is that you're doing it for three sets of people: communities, customers, and partners. Your job is to balance those and make sure everybody's working together. You don't go out and just make something and hope people use it. You don't focus only on the customer and forget how the community can benefit the customer. And partners could be anybody, but it's other people trying to make stuff that matters, stuff they can keep going for a long time. Communities are diverse things, but customers and partners help anchor the community. For me that was really helpful, because I've got a lot of stuff to do; I love programming, but I like having a purpose, and that was a really great purpose to have. So I enjoy that constant feedback loop around something awesome that someone has shared. These days a lot of that is large companies, or an engineering team at a large company, saying: we made this, we want it to be useful. And there are a lot of question marks after that. Here's this open source stuff; what happens if that company goes away? What happens if those people don't want to work on it anymore? Trying to figure out that loop is what I've learned over the last nine years: there are a lot of ways to do it, and it's super important for everybody.

Although what I'm also hearing there is that you see a lot of value in communication again, right? You have those three groups, and making them work together is a lot about communication between those groups.
So then, more specifically, when we're talking about Kubernetes: you got pulled into the container world pretty quickly when you connected with Red Hat. And because it's been nine years, you were doing a bit of this before Kubernetes launched. What made you think: this container thing's got some legs, and maybe this Kubernetes thing does too?

It's funny; even my memory is starting to get hazy about that period, the early days of Kubernetes, but this is one of my favorite stories. Docker came out in, I think, 2013. Containers existed before that; OpenShift used them, and there were different parts of it, cgroups and process containment in Linux, and lots and lots of other stuff. Docker crystallized it: they had those three pieces. You could download something, and we all download stuff off the internet and run it on our systems, that's what we do; you could get a reproducible environment; and then it mostly just worked. That combination, that year, I remember this sense of excitement. Everybody was saying this could be the next big thing, because it was something that worked well, put together in a novel way. A little bit like the first iPhone, right? The world before Docker and the world after Docker are very different. And so through that year, on OpenShift, we'd been doing containers for a while, and we said, we want to modernize, because we had done the first phase.
And then we started talking to a number of people in the background. We talked to a few people at Google; I think Brendan Burns and Tim Hockin did a demo for us of what at the time was called Project Seven, with the prototype kubelet they'd built internally at Google. They showed some UI, and we said, oh, that's interesting, we're working on some stuff too; tell us if you're going to launch this. And we got a call one week in, I think it was the end of May or the beginning of June, just before DockerCon, and they said, hey, we're actually going to go through with this, are you guys in? And we were like, sure, sounds awesome. It was one of those fortuitous accidents for us: the right place and the right time for us to say, we think this is an awesome idea, we're willing to do it in the open, and we're willing to, I don't want to say throw away, but we were willing to throw away everything we'd done before, because it used containers. And Kubernetes wasn't really about Docker containers; it was about generic containers, and about containers at scale. Docker works great on your local laptop, but for that scaling-up factor, everybody was building their own container orchestration systems. This one felt like it had something. The Google folks, I really respect them; in those early days there was a bunch of domain knowledge they shared, and they were willing to listen. And a bunch of other people in the community had a lot of experience too. On OpenShift we had a bunch of experience with dev loops on top of containers versus dev loops on top of VMs, or what happens when you want to do something more complex than a 12-factor app.
Like, how do you do a development or software-update loop for a database? So even from those early days, we had a lot of compatible worldviews. So they launched it at DockerCon that year, 2014, or I'm getting my years wrong, but that's how long it's been at this point. I think I was one of the first contributors; I got the commit bit on the repo when it went public. We all showed up in chat, and I think it was pre-Slack, or maybe we decided to use Slack very early, but we showed up in chat and started opening GitHub issues. And it kind of snowballed. There wasn't really much in the repo: it was a basic idea, it wrote some stuff into etcd, and the kubelet had this really janky loop that wrote back and forth. Then we added an API layer and designed some APIs. Some of the ideas we had around declarative config, and being able to kubectl apply, the seeds of them were there, but it took years to see them realized. That was a really awesome period in my life, and I'm very grateful to have had the chance to be at ground zero of it.

Yeah, and I'm not sure if I'll hit the number right, but I think it also helped that both Red Hat and particularly Google had done this before: this was like Google's third try, maybe even the fourth, at orchestrating at scale for their own internal systems, right?
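The declarative-config loop Clayton mentions, describe the desired state in a file and let `kubectl apply` reconcile the cluster toward it, looks roughly like this today. The manifest is a hypothetical sketch; the name and image are made up for illustration:

```yaml
# deployment.yaml -- a hypothetical desired state. Re-applying the
# file after edits converges the cluster toward it; you never issue
# imperative "start/stop" commands.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # made-up name
spec:
  replicas: 3                # declare "three copies", not "start three"
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.21    # stand-in image
```

You'd apply it with `kubectl apply -f deployment.yaml`, and re-running the same command after changing `replicas` is the whole update story, which is the seed idea Clayton says took years to fully realize.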
One of the things I think people who are new to software, or outside the software world, don't realize is that it's actually really good for us to rewrite things, often from scratch, because then we recognize some of the choices we made early on that may not have been the best, as the thing evolves. And it's been so nice in the past 10 or 15 years, maybe even less, that our software has gotten so quick to rebuild. Languages like Python make it so you can redo things much more efficiently than you ever could in the big waterfall development models of the early 2000s.

Well, and it was interesting too, because a lot of the Googlers brought things that didn't work, experiments and investigations, and in a bit of a change for Google, they were very willing to share. I always joked with Brian Grant, who was one of the Google architects and helped a lot; even today, Brian's Twitter is on that list of the top 25, and Brian always has these great, insightful connections. We used to joke that sometimes on GitHub issues he'd drop a paragraph summarizing how they'd thought about a particular problem, and I would laugh and say, that's like $10 million of R&D, engineering time, pain, and effort, nicely summarized into a single paragraph so that we can avoid it. So there was a lot of knowledge sharing. And to be honest, the way we envisioned Kube in the early days is not, I think, the Kube we have now.
There are certainly a lot of areas where, even early on, the average size of a Kube cluster was probably one to two nodes, just because lots and lots of people run really small clusters for testing, for trying things out, or on their home machine. They run minikube, and now they run kind, or in the early days they ran oc cluster up; there are a million solutions for running these small clusters, and your problems are different when you're just doing local dev. I think that's something where Kubernetes, even though it's designed for 10 to a thousand nodes, the things people look for in their local iterative dev loop aren't always the same, and we can still improve that. There are still things we can go do. So there's a rich vein of things we didn't achieve, and if we come back and look at them a second time, maybe there are some really new ideas still lurking there, because we've got a mature Kubernetes we can depend on, but we can take it in new directions.

Well, that's what I think we want to talk about a little more, specifically: what did you have in mind there? What are some examples of the places where you see the biggest change happening, or the biggest opportunities, in the next steps?

So, and this sounds bad just on the surface: Kubernetes has calcified a bit, which happens when you have a big, mature project with lots of people helping in little areas. It's not like when you have a small team. I think everybody in the world knows that with a small team, you do a bunch in one direction, you sketch out an arc, and then you have to fill in the details, and you fill in those details over time.
So you have fixes, or you figure out that the stuff you hacked together in a weekend mostly works but has subtle problems. I have a PR open right now against Kubernetes for a really subtle issue in the kubelet, and all of the code I'm changing is five years old or more. And as contributors come and go, as we get busy, like the folks on SIG Node, we've got a second or third generation of SIG Node contributors going through now. I don't want to say domain knowledge gets lost, but you've got to bring it back into context, and everybody's busy. So there are a lot of layers to Kube. Going forward, there are maybe two dimensions I think are super interesting to try. One of them is the really small clusters: what do you actually want when you're doing local iterative development, or when you want to test something locally, or when you want to test just the basics of an idea around something declarative? On our laptops we have Git repos and we run commands all day long, but when we start checking things into source control, we're trying to describe the idea and have it survive for a long time. That leads into GitOps: the idea that your source code, your configuration, your documentation go into source control, where you can see their history and capture your idea. And even though code and configuration seem different, they're really not. Sometimes they have external dependencies, like libraries, and sometimes they depend on external systems or APIs that might change. It's like those infrastructure-as-code ideas, where you write some code and it goes and changes the system. There are a lot of similarities.
When you do local development, most people sitting down just want to learn a concept and hope it doesn't change. In languages, you get an API contract from your language, like Go, and the Go team tries not to break you. With your dependencies, sometimes you use stuff from people who don't really care about long-term thinking. Open source has a lot of this: you write a library, you get bored or burned out, you move on and leave it. And what do you do as someone who consumes that dependency, that API? Conversely, Kube is designed to be flexible. The most successful thing about Kubernetes, beyond basic deployment, is that extensibility, where people said: hey, I can add an API, and that API represents some idea Kube doesn't have. So the big idea I talked about at KubeCon was trying to figure out a way to do that without the full machinery: APIs are really important, but you don't always need a full Kube cluster. What if we could tease apart the config, the defining of the world, like an API for code? We have examples already: a Dockerfile is an API, or a Travis YAML file that you stick in your Git repo to tell the CI system what to do is an API. Those are just represented as config in your code, and they define a process. So the idea is something really small that lets you deal with a loop where you take source code and config, put them into a Git repo, and have it show up on a local thing you can then deploy to other systems. It works fine today; most people are just using Kube as Kube. One of the things, and this connects to the second idea, is that when you have lots of Kube clusters, you've got a definition of your application, you put it in source control, and then you put it on one of those clusters.
Sometimes you're doing GitOps, sometimes you're using a tool like Argo, and sometimes it's something you've built yourself. In fact, almost every large organization running Kubernetes has some system on top of Kubernetes that tells Kubernetes what to do. Sometimes that's a light touch: it just makes some changes and deploys your code. Sometimes it's a whole platform built on top that might be older than Kubernetes; it's going to evolve, it's been adapted, or you're in the process of tearing it down. We tend to talk about Kubernetes as this thing that provides value, but what's really important is all the stuff around it that people use, whether it's the development-side story or the control story above. That's what I'm really interested in: how can we take some of the Kube ideas, tease them apart, and use them for that area above or the area below? KCP, which is a prototype that we demoed, and it's very specifically a prototype, not a project yet, because it's an idea of something for the future, shows that you don't need a Kube cluster to have a Kube API. And if you don't need a cluster to have a Kube API, you can use it to do multi-cluster: you talk to KCP as the control plane, and it talks to the clusters for you. There are people out there doing this; it isn't a novel idea by any means, but it feels like Kube. And for local development, if you could just run one of these locally, we could tie it into other systems, not just Kubernetes: maybe Docker Compose, or systemd, or if you hate systemd, Bash. The idea is that the stuff you created, the Deployment and the Service, well, what does a Deployment and a Service mean? It can be a little flexible.
We're exploring this idea right now, but it's a lot of big ideas, and it's really early. We're trying to get to the point where we can show a prototype that feels awesome, as awesome as Docker did. I'll be very humble and say I don't think it's going to feel as awesome as Docker did the first time I used it, but that's what we're looking for: something that shakes off the boringness and the resiliency of Kube and says, here are some new ideas, where can we go with them?

So, specifically around that, it's funny: systemd, I actually think, is a great example, because systemd in concept is a really good one. One of the things I don't like about systemd is that it's an interface with an embedded implementation, all the time. What I really wish systemd were is an interface with pluggable implementations. So what I'm curious about is, with what you're describing, where are there similarities to that systemd idea, or even the Linux kernel, in the sense that you're starting to offer an API with almost pluggable implementations? And also, in some of the stuff I've heard you talk about before, pluggable APIs as well: you don't necessarily have, I don't know, all 32 APIs; you only have three of them in a particular instance, because you only need three out of the 32 for that project.

And, you know, this is the beauty of computers, which is really fun: sometimes you think, I could use this to solve any problem. And then you go through the list: which problems do people really need solved, and which ones do they not care about? I do like the question of, what is a Deployment?
A Deployment has an image in it, and some containers, and it assumes that something can set up a whole bunch of containers on the same network. And if you've got something underneath it that can do that, maybe not every flag is useful. People have been translating Docker Compose to Kube and Kube back to Docker Compose for a really long time. The thing is: if you have a Docker image, you can run it anywhere. Okay, well, what does that look like on a given system? Most people don't care; they just want to see the image run. So you've got to follow some rules. But there are a lot of declarative-style problems where you can make it really easy, and going back to pre-Kubernetes, there were a lot of people looking at systemd at large scale. CoreOS did Fleet: it was a systemd unit, and it got put on individual machines. A lot of those ideas are still useful. What does a unit look like? It's just your API, like a unit file for systemd. So our short-term goal is, I would say, making it really easy to come up with new APIs that feel declarative, that feel Kube-like, and that you can stitch together with your existing applications. I think there's a lot of room, if you want to declaratively control a whole bunch of machines; say you're at the edge and you have tens of thousands of them. One of the ideas has been: I don't have to use a Deployment to create something at the edge; I just want some of those pieces. I might want to say, I want to run three containers on this really stripped-down ARM device that doesn't have Kubernetes on it. But given the definition of "I just want to run three containers", something like Podman or CRI-O or containerd or Docker, or systemd, actually, could go do that on those machines.
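That "Kube-like definition, different implementation underneath" idea can be sketched as a plain systemd unit. Everything here is hypothetical: the unit name, image, and container name are made up, and this is one hand-written possibility for what a translator might emit, not anything KCP actually produces:

```ini
# web.service -- a hypothetical translation of "run this container"
# into a systemd unit, using Podman instead of a kubelet.
[Unit]
Description=web container (stand-in for one container of a pod-style spec)
Wants=network-online.target
After=network-online.target

[Service]
# --rm plus Restart=always approximates a Deployment's "keep it running"
ExecStart=/usr/bin/podman run --rm --name web docker.io/library/nginx:1.21
ExecStop=/usr/bin/podman stop web
Restart=always

[Install]
WantedBy=multi-user.target
```

A higher-level tool could generate files like this from a declarative app definition and distribute them to thousands of edge machines with something like Ansible, with no Kubernetes on the devices at all.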
Could we have a kind of Kube-like definition up here? And then instead of having that go straight to a Kube cluster, right, where the interface is the implementation, have something in between. It's like, well, I can take that definition and turn it into a systemd unit file and then put it into maybe my special distribute-this-to-thousands-of-machines tool, whether it's Ansible or something. That kind of flexibility, being able to do that alongside the applications that are going to talk to that edge device, might actually be useful. And sometimes it isn't, right? Two different teams have different life cycles. So we're trying to open the door to more than just Kube. And the example I like to use here is: if you have a Kube app, sometimes you have 12-factor apps alongside it. Maybe you're still using Heroku. Maybe you're actually using Lambdas. You use two different config systems today, or you use Terraform. That combination of config and experiences, like, well, wouldn't it be awesome if I could deploy something to Netlify? I could deploy my static documentation website or my homepage to Netlify, and I could use the same tool at the same time to deploy to Kubernetes, but not just one Kubernetes cluster, maybe three Kubernetes clusters. And then I can connect that service to other cloud services, like a database service. And sometimes that database is running on my laptop or on my cluster, and sometimes it's a service like MongoDB Atlas or something like that. The idea that I really just want to define my app, and I talk to stuff like SQL databases or NoSQL databases, and I don't really care about the details. Could we make that easier, as a loop you could do together? So you could deploy both, check both of them into Git, and have a GitOps flow that just applies them to a server. That kind of loop, we're still really early on.
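A tiny sketch of the "turn a Kube-like definition into a systemd unit" step described above. The input shape, the helper, and the image name are all hypothetical, purely for illustration; it renders a unit that launches one container with Podman, which something like Ansible could then push to each edge machine.

```python
# Sketch: render a Kube-like "run this container" definition into a
# systemd service unit that starts the container with Podman.

def container_to_unit(spec):
    """Emit a systemd unit file (as text) that runs one container."""
    args = ["/usr/bin/podman", "run", "--rm", "--name", spec["name"]]
    for port in spec.get("ports", []):
        args += ["-p", f"{port['hostPort']}:{port['containerPort']}"]
    args.append(spec["image"])
    lines = [
        "[Unit]",
        f"Description=Container {spec['name']}",
        "After=network-online.target",
        "",
        "[Service]",
        f"ExecStart={' '.join(args)}",
        "Restart=always",
        "",
        "[Install]",
        "WantedBy=multi-user.target",
    ]
    return "\n".join(lines) + "\n"

unit = container_to_unit({
    "name": "sensor-agent",
    "image": "quay.io/example/agent:latest",  # hypothetical image
    "ports": [{"hostPort": 9100, "containerPort": 9100}],
})
```

The point of the sketch is the shape of the idea: the declarative definition stays up top, and the implementation underneath it (a Kube cluster, or systemd plus Podman on a stripped-down ARM box) is swappable.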
But we'd like to show those demos, and we actually are prototyping towards that in the KCP project. And we're getting a lot of ideas like this. This is still super early. But I've heard from others in the community, like, this is really interesting, I've been doing something like this, could we work together? And that's what community is all about. So that's kind of where we are today. So I wanted to pause here for just a second and ask Mr. Short: did we have any questions? One question. Okay. So Clayton, I think all of us can give an opinion here. Where do you see Kubernetes sitting in the future? Entirely bare metal? Or do you also see a space for underlying virtualization? And I'm curious what your answer is, Clayton, before I give mine. I feel like all of computing is a "yes, and" kind of conversation, where we never get rid of anything. We either reinvent it, we redesign it, or we just keep using it. So I actually think we're going to keep all of it. That's right. You know, the interesting thing is we keep getting better at all of it, right? So why was virtualization invented? Because it really, really sucks to deal with bare metal. And I remember the first day at work that I fired up a VM, and I was like, this is really slow and janky, but I could see how this could be awesome. It was a little bit like the first time I fired up Docker, right? It gave me something new. And then over the years, virtualization matured, but it wasn't just that virtualization got better. Linux changed, or the types of apps we wrote for VMs changed, or we developed new tools that made dealing with VMs easier. So I think it's a "yes, and", and I think Kubernetes is really, really well suited to both. But I do think Kubernetes is increasingly going to be something that people run through services. And the services give you a little bit of flexibility to cheat.
And by cheat, I mean maybe it's not the exact Kubernetes code base underneath. A little bit of what we've been doing in KCP is: if it looks like a Kube server, and it walks like a Kube server, and it quacks like a Kube server, does it matter what version it is? The API is what's really important. If the API works, you don't really care what version it is. I think we're getting to a point where you probably want to not care what infrastructure it runs on. You want it to run well in all of the places. And if the open source community does its job right, and that's really all of us just working on our own best interests, collaboratively, then that idea of, oh, it's a place I can run apps, I don't have to learn 15 different systems, I don't have to glue it all together with duct tape, that interface of deploying apps on Kubernetes can spread pretty far. And I think we'd like to bring in more things. As I was saying before, like connecting out to other services. I don't want to have to make a decision about where my dependencies can run because I'm running on Kubernetes. I just want to use a dependency. Find me a database. Somebody gives me that database. In a dev flow, though, I might spin up a really cheap local copy. How do we give you the flexibility to do both extremes? That's kind of where we're going, I think. That's awesome. And I agree wholeheartedly, right? There's room for everything. And who knows? There might be something new that comes along and supplants everything else, right? It's just the nature of tech. It's usually a new paradigm or something that makes the previous thing much easier. But then the old thing is still there, and then a bunch of people build the adaptation between the old thing and the new thing. I mean, even going back to systemd: systemd changed a lot of things, but I think Linux is better for it. I certainly did not grow up doing rc init scripts and SysV init.
And every time I had to debug something in an Apache start script, I was like, why am I reinventing starting a Linux process for the 500th time, poorly, because I don't understand Bash as well as people who have been doing it for 20 years? For all of those cases, something new comes along and takes over that part of the system. The underlying system is still there; the new layer sits on top, and it solves a bunch of problems. I hope that somebody comes up with a super awesome idea. I think the Kube ecosystem is flexible enough to be like, oh, we'll just integrate that too. Or we'll get integrated too. That's, I think, what makes tech awesome: it's up to us to really adapt to change. Yeah. And I think it's funny, because you say that and I agree with you, but at the same time, it's also one of our biggest challenges a lot of the time: it's really difficult for most software to adapt to change. So I think the ideas that you're talking about with Kubernetes make a lot of sense to me. And getting back to what I was talking about at the beginning of the show, understanding the ethos behind a tool chain or whatever: I really do think that's one of the things about Kubernetes. It's like you can't really do anything interesting in Kubernetes without using CRDs. It even has its own slang for them: custom resource definitions. So in other words, the core of Kubernetes by itself is not sufficient to do most of what you want to do. That's the value, in my mind, right? You do have that flexibility, and you can entertain ideas like doing KCP or other things like that. So I think recognizing that propensity for change can make you better as you try to become more active in Kubernetes. I think that's a really important message to take away: Kubernetes is about being able to change, and you make trade-offs to do that. And so actually, what I would like to ask is: do you see any of those trade-offs? Where does it make things tougher that it is so flexible?
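Since CRDs come up here: a CRD is itself just another API object you submit to the cluster, after which the API server starts serving a brand-new resource type. Below is a minimal manifest built as a Python dict; the `example.com` group and `Widget` kind are hypothetical, for illustration only.

```python
# Minimal CustomResourceDefinition manifest, expressed as a dict. POSTing
# this to a cluster would make "widgets.example.com" a servable API type.

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    # The name must be <plural>.<group>.
    "metadata": {"name": "widgets.example.com"},
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "versions": [{
            "name": "v1",
            "served": True,    # this version is exposed by the API server
            "storage": True,   # this version is the one persisted in etcd
            "schema": {
                "openAPIV3Schema": {
                    "type": "object",
                    "properties": {
                        "spec": {
                            "type": "object",
                            "properties": {"size": {"type": "integer"}},
                        },
                    },
                },
            },
        }],
    },
}
```

This is the "pluggable API" idea in miniature: the core stays small, and new resource types are data, not code changes to Kubernetes itself.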
So I think Kubernetes, and this is a complaint, and it is a valid criticism of Kubernetes: it's just complex enough to solve 80% of your problems and let you solve the 20% that it doesn't. A Microsoft Word architect made this comment a long time ago: they found that everybody uses 20% of the functions in Word, but everybody uses a different 20%. I don't think Kube is quite there, but it's a complex system because of the problem it's trying to solve. You've got a bunch of machines. You need to define something that lets you survive any one of those machines going down, and you want it to be stable enough that you can depend on it. The people writing Kube are not perfect, and they're not magical. They can't predict exactly how all these things will play out. And so Kube is a reasonably complex system, but it's probably about as simple as you can get and still represent the problem it's trying to solve. The next generation, though, is: what are the simpler ideas that keep the core? 12-factor apps are a great example. 12-factor apps work until they don't. When they stop working, because you have a problem that's more complex, you have to go build a second system to do it. And I think one of Kubernetes's successes is that you don't have to have a second system to run the vast majority of software in the world. Instead of 80% of apps being 12-factor and needing a different system for the other 20%, I think Kube moved that ratio: Kube can run 97% of applications, and you can, with some effort, make the other 3% work over time. What we have to be open to, though, is: what makes it more complicated, and what are the layers on top that make it easy? And the answer is that nobody has fully solved that. There are teams that have done self-service on top of Kubernetes for a long time. But then, as people get more and more clusters, that kind of self-service stops working.
That's another angle with KCP, which is: most teams, most individuals in an organization, are looking for something to help them self-service their development journey, something flexible enough that they're happy. Meanwhile, the infrastructure teams need to put rules in place, because they're afraid of security breaches that cost the company hundreds of millions of dollars, or expose customer info, or, if you're a hospital, well, hospital applications are a little more formalized, but there are big, complex masses of software that run our lives. You've got to have some responsibility there. That balance, between a development team yoloing it and an infrastructure team saying you can't do anything, is where all of us who make software for a living eventually sit, whether you know it or not. And so, while Kubernetes sits as an infrastructure piece, I think the problem we're all trying to solve is: how do we let people accomplish most of the things they want to accomplish, more easily, without thinking about it? That was the goal of Kubernetes. That's been the goal of platform as a service. That's what everybody is building in their large companies: they cobble it together, or they put a lot of time into it. I want to really focus on that ecosystem of people who want to make self-service work, and on the control point between the developers doing anything they want and the operations teams, or the security teams, or the SRE teams, or the CISO whose job is on the line if we get it wrong. We want to tighten that and have a really tight loop between those two sides, where today everybody builds their own approach. I think that's the real opportunity. It's not about cloud. It's not about on-premise. It's not about edge. You're building an app, and it's got to run someplace. How do you get that interface right between teams? I think that's what Kubernetes is a first stage of. And there are plenty of other projects that are going to be completely unrelated to Kube.
Terraform does this great. Ansible does this great. GitHub, through their source code and Actions, is part of this story. How do you keep iterating in the open source world so that you have this nice layer you can rely on everywhere, plus the flexibility above it to do whatever you want, and those work well together? That's what I get to do every day, and it's awesome. One of the things I regularly use as an example is that software is ridiculously young compared to most other human endeavors. We've been doing medicine for several thousand years, whereas computer science, even in academia, is arguably nearing maybe 80 years old, maybe a hundred. That is a ridiculously short amount of time, and it's evolving at a ridiculous pace. And so I regularly talk about how funny it is that we're still trying to solve the same problem from when I was first starting out in development: I would yell down the hall to the guy who was running the server room and be like, okay, what version of PHP can I use? Because that guy is the one who's going to have to operate it. I would never build anything without knowing that I could actually put it into production. And I think, in some ways, we're not trying to formalize that exactly, but we're trying to make that communication scale, because now it's not just me and the one guy down the hall with one server. We're talking global. We're talking thousand-person development teams. We're talking thousand-person SRE teams, plus sysadmins, plus database admins, all these people involved now. So we're trying to articulate that same conversation in a way that is flexible enough that we can actually have all of those conversations. Well, and not just flexible enough, but explainable, right? Like we talk about explainable AI.
There's so much power available in modern infrastructure, whether it's a cloud service or what you can do locally. Someone made a joke the other day that I thought was awesome, which is that one person can spend $10 million in a day if they don't have the right controls or the right quotas on a cloud. That power, that connection between "I've used a service, I can get it up and running" and what open source does every day: every time you bring up an instance of a Rust app or a PHP app or a Perl app or Java, you're bringing up hundreds of millions of dollars of investment and years of people's lives, and you don't even think about it. The folks who have to think about it every day are asking: okay, how do we keep your supply chain going? How do we keep people from spending all of your money? How do we know what you're spending the money on? And what if you have hundreds of people in different places building stuff? You don't want to stop them from building, because the goal for most organizations, most engineers, most teams is: I just want to get this one thing going, and then I want it to keep working forever. Even though we would say, of course we're going to do tests and CI and we'll have a rigorous release cycle, the reality is that 99% of it is, oh, it's working, don't touch it. Trying to find that balance. We're doing it at scale, and increasingly, we don't want to think about the infrastructure. How do we build those layers of interface, where it's like, yep, I'm building my app, I'm done, and somebody else can use that interface productively? And I think we're bracketing down, right? We had 12-factor, and PaaS was a little too far up. Yeah, too far up the stack.
And we had virtualization and Kubernetes and different types of things that integrate with Terraform. You can do amazing things with Terraform and Ansible, like deploy a huge fleet. I've got to admit, some days I don't really want to, because each of those little bits is designed by an individual: they take somebody else's API and they make their own API on top of it. I'm not just depending on the cloud provider not to change the API; I'm depending on the open source volunteer who, out of their own time, built the interface on top of this representation of an API provided by a cloud provider. It's got to keep supporting that thing, so it's a new API. So we have all these APIs, and then we have all these high-level things. Can we bring in that layer so there's a nice, thin way to say: here's how to build 99.99% of all the apps you'll ever need, go run them on whatever infrastructure, and I'm not going to think about it anymore? And we're getting close. In the next five or ten years, I mean, we've learned a lot. When I started, we were talking about infrastructure as API, or API-driven infrastructure. Nobody talks about that anymore. They talk about declarative config, or declarative infrastructure, or infrastructure as code. We take it for granted. I think another ten years takes us much further down that scale, to where there's probably a standardized way to deploy every application you'll ever need. Someone adds a new one, and you don't have to change your tooling; you just add a new API. Right. And that's, I think, where I want KCP and Kube and the things that we work on to go: how do we help people bridge that layer? So, weirdly enough, first up, I have to throw out there that I was really, really hoping for a Brewster's Millions reference in there.
And if you haven't seen that movie, I'm dating myself, and you should go watch it. But one of the things that's both the good and the bad of this: I find it kind of amazing these days that you can manage to accidentally share your Amazon API key on the internet, right? Because your deployment tool chain and all that other stuff is so codified, you can write code that does it, and you can actually just drop your key in there and publish it by accident, because you can automate the whole thing. So, cautionary tale: don't do that. But on the flip side, I think it's really impressive that we've come so far along that I don't reference it on a piece of paper anymore, right? I just embed it somewhere and magic happens. And especially to anybody who's ever had to swap a hard drive in a server, it really does feel like magic these days. I think it's pretty amazing. On that note, unless, Chris, there were any other questions you wanted to cover? I don't know, it's a pretty quiet chat this morning. Yeah. You know, even though I did tweet about the fact that it's a little later in the day, it is a Tuesday, so maybe that's why people aren't as engaged. I mean, there are people watching, a lot of people watching, but yeah. Well, it's the Tuesday before the Fourth of July, so I don't know about the rest of you, but I'm already on vacation this week. Right. I'm just hoping that no fires erupt, that nobody finds a horrific bug in Kubernetes, that I don't suddenly get work. Like, please, let's just get to the Fourth of July weekend and have a nice long weekend. I will say, one of the things that I really appreciate about Red Hat, and I think it's starting to become much more industry standard, is that people don't do deployments on Fridays as much anymore, or are at least getting away from it. If you've got good monitoring, you should be fine. Oh yeah.
Yeah, everything's perfect. You know, that's actually a good question. So I'll get this factoid to you, Chris: I'll go look up when people do their OpenShift upgrades, and we'll see whether people actually avoid doing their OpenShift and Kubernetes upgrades on Friday or not. Yeah, that would be good. I'll get that back to you. We should, at the beginning of the show, since it's going to be every month, just pause for a second and tell people: this is a good time to do your OpenShift upgrade. Watch the show, and it'll be done when the show's over, thereabouts. Yeah, that's right. There you go. It's like getting a cup of coffee, except even better. So, Clayton, thanks so much for coming. We really appreciate you putting up with a little bit of noisiness on our first real production episode. Do keep in mind, we did do a behind-the-scenes, why-are-we-doing-this-show episode with a couple of the people who are more in the background, and you should go watch that on Kubernetes by Example. I think we put the link in earlier; I can drop it again. And obviously, if you weren't able to catch this whole show, it will be on the YouTubes in perpetuity, or at least as long as Google's around. So please go check it out there. We'll be back again. Our cadence is going to be the last Tuesday of the month, so we will see you again, and anybody can do quick monthly math. Let's see, July, what's that, the 27th? The 27th. So we'll be back, and between now and then, we will announce who our next guest will be. It will be another insider from inside the Kubernetes world. But again, thanks so much for coming. Thank you to the audience for showing up, and for the couple of questions, we appreciate it. And we'll see you next time. Take it easy out there, folks. Stay safe.