Good morning, good afternoon, good evening, and welcome to the very first live episode of KBE Insider. It's a brand new show. I will hand it off to my illustrious friend Langdon White to tell you a little bit more about it, but I'm very excited to have the show coming to the channel, and the folks helping us with it are awesome. So thank you very much. Langdon, tell us a little bit more about the show.

Definitely. So I'm Langdon White; you may know me from another show called The Love of All Power and other places on the channel, and you should definitely come check out the other things we do here. But this show goes hand in hand with a site we relaunched during Summit, I guess two weeks ago now, called Kubernetes By Example, at kubernetesbyexample.com. The idea of the site is to be really focused on the how-tos and the walkthroughs at the really basic levels of Kubernetes. So if you don't really understand what a pod is, there's a whole section about what a pod is, explaining not only how to use one in a brass-tacks sort of way but also the reason for it, so you can get a deeper understanding of what Kubernetes is and how it functions. What we were trying to do with the site is take a multi-learner approach: there's video content, there's training-class-style content, and there's written content, so depending on what kind of learner you are, you can approach it in the way that works best for you. That was one aspect of it, but as part of that we also wanted to do some live content, because we know a lot of people, as we've seen with the OpenShift.tv channel, have really started to engage with the Twitch or streaming style of content.
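To give a taste of the walkthrough style Langdon describes: a pod is the smallest deployable unit in Kubernetes, a group of one or more containers that share a network namespace and storage. A minimal pod manifest looks something like this (the name, label, and image tag here are placeholder values, not anything from the site):

```yaml
# A minimal pod: one nginx container listening on port 80.
# "my-first-pod" and the "app: demo" label are arbitrary placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.21
      ports:
        - containerPort: 80
```

You would typically create it with `kubectl apply -f pod.yaml` and check on it with `kubectl get pods`; in practice pods are usually managed by a higher-level object like a Deployment rather than created directly.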
This show is a nod to that streaming style of content; we're trying to deliver that type of content as well. But specifically, what we're doing with this show is trying to give you some of the philosophy behind Kubernetes, or the ethos, or whatever word you like. It helps, when you're trying to learn something new, to understand what its goals are and why it's trying to do what it does, because then things become more intuitive when you look at the actual content: it all starts to fit together because you understand the larger thing it's trying to build. So we're trying to give some of that color by giving you access to interviews with the people who actually make Kubernetes go. But as part of that, we also need to give you the context in which all of that is happening. And so Mina, who is going to be a regular on the show and who also updates the news content on the website, is going to tell us what's been happening in Kubernetes land over the last month, since the show appears every month. And you can keep going back to the site to learn that information as well. This will get a lot tighter as I repeat it more often. But without further ado, I would like to introduce Mina, if you want to tell us a little bit about what's happening in Kubernetes.

Yeah, absolutely. Hey everyone, I'm Mina. If you saw episode zero of our show, you may already be familiar with my face, but I'm here to tell you a little bit about what we've uploaded to the KBE website since we launched a couple of weeks ago. As you may know, Kubernetes turns seven this month. So with that, we wanted the first week,
a couple of weeks ago, to be the introduction to Kubernetes. We definitely wanted to talk about some of the latest news, but we also wanted to share five tips we wish we had known sooner. Those were: successful automation requires diligent auditing; ignore Kubernetes pod labeling at your budget's peril; understand your application's resource needs; don't play around with etcd; and you don't need to go it alone. Those were the five tips we wanted to bring to you, so that you know them before you actually have to. Then there was Siloscape, the first malware to target Windows containers, which broke out of Kubernetes clusters to plant backdoors and raid nodes for credentials. That was pretty important; it was everywhere, and I was kind of freaking out about it. And then we wanted to bring you a case study: Flipkart is India's leading e-commerce company, and they recently adopted OpenEBS for storage on Kubernetes. Some of the key lessons the Kubernetes platform team at Flipkart learned from this migration were that being production ready is really important, along with managing storage resources, creating a volume group construct, LVM partitioning, and disk failure response. Those were the five things they focused on the most as they were migrating. Then in the second week we wanted to bring you more opinion pieces from thought leaders in the space. The first notable one was Matt Asay, who said we're thinking about Kubernetes all wrong: he told us we should try using Kubernetes like an app server for smaller teams, instead of treating it like a centralized cloud. And then we had David Linthicum, who said it's time to get more aggressive with Kubernetes. Kubernetes is really mature now (I just mentioned it turned seven), and he's saying it's time to take some risks and develop the next generation of applications.
He even said that perhaps we can weaponize it to build a better business. And then we also gave you the top 25 Kubernetes experts to follow on Twitter. Whether you're just learning Kubernetes or you're already a seasoned container buff, you'll need the right resources available, such as tutorials and monitoring tools, which is what we're trying to do on the KBE website anyway. That also includes following the right people on Twitter, who can open your eyes to what you can do with Kubernetes. So that was a quick highlight of what we've covered on the KBE news section since we launched. I'm going to drop some links in the chat as well, in case you want to go to the articles and see what they're talking about. And you can always come to the KBE news section of the KBE website to see what's going on in the world of Kubernetes and keep yourself up to date. With that, I'm giving it back to Chris Short. Take it away.

Yes, I think it's vitally important that if you're mucking with etcd, you do not muck with etcd, unless it's just to give it the performance profile that it needs. Thank you, Mina. Awesome work. Gordon Tillmore dropped the link in chat to the overarching news page, so feel free to drop any of those other links too, Mina. So, Langdon, we have a special guest with us, right? We do. We should probably introduce this special guest of ours, although how much introduction does he really need? This is Clayton Coleman, maybe the architect of Kubernetes for Red Hat; I don't know what your actual title is. Yeah, as we often talk about on another show, Red Hat is really good about changing group names and titles and all that stuff on a regular basis. So we always like to say: Clayton, could you introduce your title yourself, because you know it better than we do.
So, today, my title is probably something like architect for hybrid cloud applications in a changing and complex world where Kubernetes is super important and helps you get a lot of stuff done but can be even better. That's my title today. But yeah, Red Hat. That's what I'm going with.

Cool. So what is it that you do most of the time with Kubernetes, in your job role?

Sure. For the last seven years, since just before Kubernetes was publicly announced, I've been a part of the project, and I've shifted my role over time. I still contribute heavily. Well, I'd like to say I contribute heavily; it's maybe a trickle compared to what I was lucky enough to be able to do for the first three or four years. I participate in SIG Architecture and a number of other SIGs, trying to help smooth over the gaps. We've got a pretty effective community system these days, with the SIGs and the community that's built up around Kubernetes, all the people from big companies to individuals using it in their home labs. We've got a pretty good system, and so I function almost as a background cog, just making sure the stuff ticks over. I spend a lot of time focused on the thorny or gnarly issues, and I try to help teams within Red Hat, or other companies or individuals, catch trends early. There are things that are important in the Kubernetes project, and Kubernetes has a really firm boundary, and there's a whole bunch of stuff out there beyond it; I try to help people move across that interface. So, is it something that Kubernetes needs to improve?
Okay, let's sort it out and work with some teams and people and bring together the folks who care about an issue in something that's around or above Kubernetes. I spend a lot of time with OpenShift and the OpenShift community and people using Kubernetes in production, and I spend a lot of time listening to what they say, and it's not all positive. If you make something this important to you, then when it breaks, when something goes wrong, when you start hitting limits, when you run into the question of what Kubernetes is great for and what it is not great for, I try to help figure out where Kubernetes can go, or where the ecosystem around Kubernetes can go, or how Kubernetes itself should change. So it's a mishmash. I write a lot of PowerPoint presentations; I do most of my coding in PowerPoint these days. But it's a lot of communication, helping people come together and find that lucky person who also cares about the same problem. What I really enjoy is when I can connect people who have the same problem, and then they can go fix it in Kubernetes, and we can make the world a little bit better every day.

Right, right. I would actually like to explore the communication part of the problem a bunch more, but before we do that, I would like to ask you (we're trying to set up a theme here for when we do the show): can you tell us a little bit about how you got into open source to begin with? What brought you into that community, or into that world?

It was really interesting. I went to work for this company called Red Hat. I had certainly heard of and used open source prior to that; I actually worked at IBM.
Ironically enough, I started out of college at IBM in North Carolina and worked there for about 10 years, did a lot there, and I knew open source and was familiar with it. A bunch of friends worked at Red Hat, and they were like: we're working on this really cool thing, it's going to change the world, it's called containers. And I was like, man, that's not that interesting. Well, okay, whatever. So in the early days I used open source, and Red Hat is very intense about open source, and it was like: oh, okay, this is interesting. It's awesome working at Red Hat because there's a bit of a mindset that you're doing it for three sets of people: communities, customers, and partners. Your job is to balance those and make sure everybody's working together. You don't go out and just make something and then hope people use it. And you don't focus only on the customer and forget how the community can benefit the customer. And then partners: that could be anybody, but it's other people trying to make stuff that matters, stuff they can keep going for a long time. Communities are diverse things, but customers and partners help anchor that community, and for me that was actually really helpful, because I love programming, but I like having a purpose, and that was a really great purpose to have. So I enjoyed that. It's that constant feedback loop around something awesome that someone has shared. These days a lot of it is large companies, or an engineering team at a large company, saying: we made this, we want it to be useful.
There are a lot of question marks after that: here's this open source stuff, but what happens if that company goes away? What happens if those people don't want to work on it anymore? Trying to figure out that loop; what I've learned over the last nine years is that there are a lot of ways to do it, and it's super important for everybody.

What you're hearing there, right, is that you see a lot of value in communication again: you can have those three groups, and it's not just about making them work, it's about making them work together, and that's a lot about communication between those groups. So then, more specifically, since we're talking about Kubernetes: okay, you got pulled into the container world pretty quickly when you got connected to Red Hat. But it's been nine years, so you were doing a bit of stuff before Kubernetes launched. What made you think this container thing has legs, and maybe this Kubernetes thing does too?

Yeah, so even my memory is starting to get hazy about that period, the early days of Kubernetes, but this is one of my favorite stories. Docker came out, I think it was 2013. Containers existed before that; OpenShift used them, and there were different pieces of it, cgroups and process containment in Linux, and lots and lots of stuff. Docker crystallized it. They had those three pieces: you could download something (and we all download stuff off the internet and run it on our systems; that's what we do), you could get a reproducible environment, and it mostly just worked.
And that combination: that year, I remember the sense of excitement. Everybody was saying this could be the next big thing, because it was something that worked well together in a novel way. A little bit like the first iPhone, right? The world before Docker and the world after Docker are very different. And so through that year, we were on OpenShift, which we had been doing for a while, and we wanted to modernize; we had done the first phase. Then we started talking to a number of people in the background, including a few people at Google. I think Brendan Burns and Tim Hockin did a demo for us of what at the time was called Seven, the prototype kubelet that they had built internally at Google. We were like: oh, that's interesting; we're working on some stuff too, so tell us if you're going to launch this. And we got a call one weekend, I think it was the end of May or the beginning of June, just before DockerCon, and they said: hey, we're actually going to go through with this, are you guys in? And we were like: sure, sounds awesome. It was one of those fortuitous accidents for us: the right place and the right time, and it was an awesome idea.
We were willing to do it in the open, and we were willing to, I don't want to say throw away, but we were willing to throw away everything that we did before. Because it used containers, and Kubernetes wasn't really about Docker containers, it was about generic containers, right? It was about containers at scale. Docker works great on your local laptop, but then there's that scaling-up factor; everybody was building their own container orchestration systems, but this really felt like it had legs. The Google folks, I really respect them; in those early days there was a bunch of domain knowledge that they shared, and then they were willing to listen. And a bunch of other people in the community had a lot of experience too: OpenShift had a bunch of experience with dev loops on top of containers versus dev loops on top of VMs, or what happens when you want to do something more complex than a 12-factor stack, like how do you do a development or software-update loop for a database. So even from those early days we had a lot of compatible worldviews. They launched it at DockerCon that year, in 2014; I'm getting my years mixed up, but that's how long it's been at this point. I think I was one of the first contributors; I got the commit bit on the repo when it went public. And we all showed up in chat (it wasn't even Slack at that time, I think it was pre-Slack, though we decided to use Slack very early) and we started opening GitHub issues. And it kind of snowballed. There was nothing really in the repo; it was a basic idea. You wrote some stuff into kubecfg, and then the kubelet had this really janky loop.
Back and forth, we added an API layer, we designed some APIs, and we came up with some of the ideas around declarative config and being able to kubectl apply. The seeds of those were there, but it took years to see them realized. That was a really awesome period in my life, and I'm very grateful to have had the chance to be at ground zero of that.

Yeah, and I'm not sure I'll get the number right, but I think it also helped that both Red Hat and particularly Google had been at this before; for Google this was like their third try, or maybe even the fourth, at orchestrating at scale for their internal systems. One of the things that I think people who are new to software, or outside the software world, don't realize is that it's actually really good for us to rewrite things, often from scratch, because then we recognize some of the choices we made early on that may not have been the best choices, because the thing evolves. It's been so nice in the past 10 or 15 years, or even less than that, that our software has gotten so quick to rebuild. Languages like Python make it so you can redo things much more efficiently than you ever could in the big waterfall development models of the early 2000s.
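The declarative-config seeds Clayton mentioned are easy to make concrete: you describe the desired state in a manifest, check it into Git, and let the cluster converge on it. A minimal Deployment manifest might look like this (the name and image tag are placeholder values for illustration):

```yaml
# Desired state: three replicas of a stateless web app.
# "hello-web" and the image tag are placeholders, not real project names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.21
```

Running `kubectl apply -f deployment.yaml` records this as the desired state; editing the file and re-running the same command reconciles the live object toward the new file, which is the declarative loop whose seeds Clayton describes.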
Well, and it was interesting too, because a lot of the Googlers brought things that didn't work, experiments and investigations, and, in a bit of a change for Google, they were very willing to share. I always joked with Brian Grant, who was one of the Google architects and helped a lot (even today Brian's Twitter is on that list of the top 25, and he always has these great, insightful connections). Sometimes on GitHub issues he'd drop a paragraph of summary about how they thought about a particular problem, and I would laugh and say: that's like $10 million of R&D and engineering time and pain and effort, nicely summarized into a single paragraph so that we can avoid it. So there was a lot of knowledge sharing. And to be honest, the way we envisioned Kube in the early days is not, I think, the Kube that we have now. Even early on, the average size of a Kube cluster was probably one to two nodes, just because lots and lots of people run really small clusters for testing or trying things out on their home machine. They run minikube, and now they run kind, or in the early days they ran oc cluster up; there are like a million solutions for running these small clusters. And your problems are different when you're just doing local dev. Even though Kubernetes is designed for 10 to 1,000 nodes, the kinds of things people look for in their local iterative dev loop aren't always the same, and we can still improve that. There are still things we can go do, so there's a rich vein of things that we didn't achieve that, if we come back and look at them a second time,
maybe there are actually some really new ideas still lurking there, because we've got a mature Kubernetes, we can depend on it, and we can take it in new directions.

I think what we want to talk about a little more is specifically that: what did you have in mind there? What are some examples of the places where you see the biggest change happening, or the biggest opportunities, in the next steps?

So, and this sounds bad on the surface: Kubernetes has calcified a bit, which happens when you have a big, mature project with lots of people helping in little areas. It's not like when you have a small team. I think everybody knows that when you have a small team, you do a bunch in one direction; you sketch out an arc, and then you have to fill in the details, and you fill in those details over time with fixes, or you figure out that the stuff you hacked together in a weekend was, well... I have a PR open right now for Kubernetes for a really subtle issue in the kubelet, and all of the code I'm changing is five years old or more. It's stuff that mostly works. But as contributors come and go, as we get busy (we've got a second or third generation of SIG Node contributors going through now), a lot of domain knowledge is lost, and you have to bring it back into context, and everybody's busy. So there are a lot of layers to Kube. Going forward, there are maybe two dimensions I think are super interesting. One of them is really small clusters.
What do you actually want when you're doing local iterative development, or when you want to test something locally, or when you want to test just the basics of an idea around something declarative? On our laptops we have Git repos and we run commands all day long, but when we start checking things into source control, we're trying to describe an idea and then have it survive for a long time. That's GitOps: the idea that your source code, your configuration, your documentation all go in source control, where you can see their history and capture your idea. And even though code and configuration seem kind of different, they're really not. Sometimes they have external dependencies, like libraries, and sometimes they depend on external systems or APIs that might change. You've seen those infrastructure-as-code ideas, where you write some code and it goes and changes the system; there are a lot of similarities. When you do local development, most people sitting down just want to learn a concept and hope it doesn't change. In languages you get an API contract from your language, like Go, and the Go team tries not to break you. Your dependencies, on the other hand: sometimes you use stuff from people who don't really care about long-term thinking. Open source has a lot of this, a library where they just move on, they get burned out, and they leave it. And what do you do as someone who consumes that dependency, that API? Conversely, Kube is designed to be flexible, and the most successful thing about Kubernetes, beyond basic deployment, is that extensibility: people said, hey, I can add an API, and that API represents some idea that Kube doesn't have.
Trying to figure out a way (and this is the big idea I talked about at KubeCon): APIs are really important, but you don't always need a full Kube cluster. What if we could tease apart the config and the act of defining the world, like an API for code? A Dockerfile is an API. A Travis YAML file that you stick in your Git repo to tell the CI system what to do is an API. Those are just represented as config in your code, and they define a process. So imagine having something really small that lets you deal with that loop: you take source code and config, put it into a Git repo, have it show up on a local thing, and then deploy it to other systems. That works fine today; most people are just using Kube as Kube. One of the things (and this connects to the second idea) is when you think about having lots of Kube clusters: you have the definition of your application, you put it in source control, and then you put it on one of those clusters. Sometimes you're doing GitOps, sometimes you're using a tool like Argo, and sometimes it's something you've built yourself. In fact, almost every large organization running Kubernetes has some system on top of Kubernetes that tells Kubernetes what to do. Sometimes that's a light touch: it just makes some changes to place your code. Sometimes it's a whole platform built on top that might be older than Kubernetes, and it's been evolved or adapted, or you're in the process of tearing it down. So we tend to think of Kubernetes, when we talk about it, as this thing that provides value, but what's really important is all the stuff around it that people use, whether it's the development-side story or the control story above. And that's what I'm really interestedted in: how can we take some of the Kube ideas
and tease them apart and use them for the area above or the area below? kcp, which is a prototype that we demoed (and very specifically a prototype, not a project yet, because it's an idea about the future), shows the idea that you don't need a Kube cluster to have a Kube API. You can use it to do multi-cluster: you talk to the control plane, to a kcp, and then it talks to the clusters for you. There are people out there doing this; it isn't a novel idea by any means, but it feels like Kube. And for local development, if you could just run one of these locally, we could tie it into other systems, not just Kubernetes, but maybe Docker Compose, or systemd, or if you hate systemd, you can tie it into bash. The stuff you create, a Deployment and a Service: what a Deployment and a Service mean can be a little flexible, so we're exploring that idea right now. But these are a lot of big ideas, and it's really early, so we're trying to get to the point where we can show a prototype that feels as awesome as Docker did. I'll be very humble and say I don't think it's going to feel as awesome as Docker did the first time I used it, but that's what we're looking for: what's some stuff that shakes off the boringness and the resiliency of Kube and says, here are some new ideas, where can we go with them?

So, specifically around that: it's funny you mention systemd, because I actually think it's a great example. I think systemd as a concept is a really good one. One of the things I don't like about systemd is that it's an interface with an embedded implementation, all the time. What I really wish systemd were is an interface with pluggable implementations.
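Underneath both of these points, kcp's Kube API without a Kube cluster and the wish for interfaces with pluggable implementations, sits the same engine: a declarative reconcile loop that compares desired state (from an API server or a Git repo) with observed state and computes the actions that close the gap. A toy sketch in Python, with every name invented for illustration and none of it actual kcp or Kubernetes code:

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Compute the actions needed to move `observed` toward `desired`.

    Both dicts map object names to a spec (here just an image tag).
    Returns (action, name) pairs; a real controller would issue API
    calls instead of returning a list, and would run this repeatedly.
    """
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name))      # missing object
        elif observed[name] != spec:
            actions.append(("update", name))      # drifted object
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))      # orphaned object
    return actions


desired = {"web": "nginx:1.21", "worker": "batch:v2"}
observed = {"web": "nginx:1.20", "old-job": "batch:v1"}
print(reconcile(desired, observed))
# [('update', 'web'), ('create', 'worker'), ('delete', 'old-job')]
```

The pluggable-implementation idea is then just a question of what executes those actions: a Kube cluster, systemd, Docker Compose, or anything else that can realize the desired state.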
So what I'm curious about is: with what you're describing, where are there similarities to that systemd idea, or even the Linux kernel, in the sense that you're starting to offer an API with almost pluggable implementations? And then also, at least some of the stuff I've heard you talk about before is pluggable APIs as well, right? There are, say, 32 APIs, but you only have three of them in this particular instance, because you only need three of the 32 for this particular project.

I think you could, and this is the beauty of computers, which is just really fun: sometimes you can say, I could use this to solve any problem. And then you go through the list and ask which problems people really need solved and which ones people don't care about. I do like that question of what a Deployment is. A Deployment is like this: it's got an image in it, we've got some containers, and it assumes that something can set up a whole bunch of containers on the same network. And if you've got something underneath that can do that, maybe not every flag is useful, but people have been doing Docker Compose translation to Kube, and Kube back to Docker Compose, for a really long time. Thinking about the problem, though: if you have a Docker image, you can run it anywhere. Okay, well, what does that look like on a given system? Most people don't care; they just want to see the image run, so you've got to follow some rules. There are a lot of declarative-style problems where, if you can make it really easy... and going back to pre-Kubernetes, there were a lot of people looking at systemd at large scale; CoreOS did Fleet.
And it was a systemd unit, and it got put on individual machines. A lot of those ideas are still useful. What does a unit look like? It's just an API; a unit file for systemd is an API. So I would say our short-term goal is making it really easy to come up with new APIs that are declarative, that feel Kube-like, and that you can stitch together with your existing applications. Say you want to declaratively control a whole bunch of machines, like you're at the edge and you have tens of thousands of machines. One of the ideas has been: well, I don't have to use a Deployment to create something at the edge; I just want some of those pieces. I might want to say, I want to run three containers on this really stripped-down Arm device that doesn't have anything on it. But the definition of "I just want to run three containers" is something that Podman or CRI-O or containerd or Docker, or actually systemd, could go do on those machines. We have a Kube-like definition up here, and then, instead of having it go straight to a Kube cluster, where the interface is the implementation, you have something in between that says: I can take that definition and turn it into a systemd unit file, and then put it into my special distribute-this-to-thousands-of-machines system, whether that's Ansible or something else. That kind of flexibility, being able to do that alongside the applications that are going to talk to that edge device, might actually be useful, and sometimes it isn't, right, because different teams have different life cycles. So we're trying to open the door to more than just Kube. The other example here is: if you have a Kube app, sometimes you have 12-factor apps alongside it. Maybe you're still using Heroku. Maybe you're actually using Lambdas.
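The translation step Clayton sketches for the edge case, taking a Kube-like container definition and rendering it as a systemd unit, is mechanical enough to show in a few lines. Everything here (the function name, the fields, the `podman run` invocation) is an invented illustration of the shape of that interface-to-implementation step, not kcp code:

```python
def to_systemd_unit(name, image, args=None):
    """Render a minimal container definition as a systemd service unit.

    A real translator would handle ports, volumes, restart policy, and
    more; this only shows the definition-to-unit-file mapping.
    """
    cmd = ["podman", "run", "--rm", "--name", name, image] + (args or [])
    return "\n".join([
        "[Unit]",
        f"Description=Container {name}",
        "After=network-online.target",
        "",
        "[Service]",
        f"ExecStart={' '.join(cmd)}",
        "Restart=always",
        "",
        "[Install]",
        "WantedBy=multi-user.target",
    ])


# The same "run this container" definition could instead be sent to a
# Kube cluster; here it becomes a unit file you could ship with Ansible.
print(to_systemd_unit("web", "nginx:1.21"))
```

The point is the indirection: the declarative definition stays constant while the thing in between chooses whether Kubernetes, systemd, or something else realizes it.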
You can have, you know, you use two different config systems today, or you use Terraform. That combination of config and experiences is like, well, wouldn't it be awesome if I could deploy something to Netlify? I could just deploy my static documentation website to Netlify, or my homepage to Netlify, and I could use the same tool at the same time to deploy it to Kubernetes, but not just one Kubernetes cluster, maybe three Kubernetes clusters. And then I can connect that service to other cloud services, like a database service, and sometimes that database is running on my laptop, or running on my cluster, and sometimes it's a service like MongoDB Atlas or something like that. You know, the idea is that I really just want to define my app, and I talk to stuff like SQL databases or NoSQL databases; I don't really care about the details. Can we make that easier, as a loop that you can do together, so you can deploy both, check them both, and have a GitOps flow that just applies them to a server? That kind of loop, we're still really early, but we'd like to show those demos, and we actually are kind of prototyping towards that in the KCP project. There are a lot of ideas like this. This is still super early, but, you know, I've heard from others in the community, "this is really interesting, I've been doing something like this, could we work together?" and that's what community is all about. So that's kind of where we are today. So I wanted to pause here for just a second and ask Chris Short, do we have any questions? Okay. So, Clayton, and I think all of us can give an opinion here: where do you see Kubernetes sitting in the future? Entirely bare metal, or do you also see a space for underlying virtualization? And I'm curious what your answer is, Clayton, before I give mine. I feel like all of computing is a "yes, and" kind of conversation, where we never get rid of anything. We either reinvent it, we redesign it, or we just keep using it.
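The "define the app once, deploy it to Netlify and three clusters at the same time" loop described a moment ago boils down to rendering one app definition per target and then reconciling only what drifted. Here's a toy sketch of that loop; the target names, fields, and render rules are all made up for illustration, and a real GitOps tool would of course talk to live APIs instead of dicts.

```python
# Toy sketch of a "define once, deploy everywhere" GitOps-style loop:
# one app record is rendered into per-target desired state, and a
# reconcile pass reports which targets need an apply. All names and
# fields here are invented for illustration.

app = {"name": "homepage", "image": "quay.io/example/homepage:2.0"}
targets = ["netlify", "kube-us-east", "kube-eu-west", "kube-dev"]

def render(app: dict, target: str) -> dict:
    """Render the same app definition into a target-specific shape."""
    if target == "netlify":
        return {"kind": "static-site", "site": app["name"]}
    return {"kind": "deployment", "name": app["name"], "image": app["image"]}

def reconcile(desired: dict, actual: dict) -> list:
    """Return the targets whose live state differs from the desired state."""
    return [t for t in desired if actual.get(t) != desired[t]]

desired = {t: render(app, t) for t in targets}
# Pretend only one target currently matches the desired state.
actual = {"kube-us-east": desired["kube-us-east"]}
print(reconcile(desired, actual))
```

The interesting property is that the app definition never mentions its targets; adding a fourth cluster or swapping Netlify for something else only changes the render/apply side of the loop.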
So I actually think it's all of the above. You know, the interesting thing is we keep getting better at all of it, right? So why was virtualization invented? Because it really, really sucks to deal with bare metal. And, you know, I remember the first day at work that I fired up a VM, and I was like, this is really slow and janky, but I could see how this could be awesome. And it was a little bit like the first time I fired up Docker, right? It gave me something new, and then over the years virtualization matured. But it wasn't just that virtualization got better; it was that Linux changed, or the types of apps we wrote for the VMs changed, or we developed new tools that made dealing with the VMs easier. So I think it's going to be a "yes, and," and I think Kubernetes is really, really well suited to both. But I do think, you know, Kubernetes is increasingly going to be something that people run through services, and the services give you a little bit of flexibility to cheat. I mean, maybe it's not the exact Kubernetes code base underneath. A little bit of what we've been doing in KCP is, if it looks like a Kube server, and it walks like a Kube server, and it quacks like a Kube server, does it matter what version it is? The API is what's really important. If the API works, you don't really care what version it is. I think we're getting to a point where you probably want to not care what infrastructure it runs on; you want it to run well in all of the places. And if the open source community does its job right, and that's really all of us just working in our own best interests, collaboratively, that idea of "oh, it's a place I can run apps, I don't have to learn 15 different systems, I don't have to glue it all together with duct tape," that interface of deploying apps on Kubernetes can spread pretty far, and I think we'd like to bring in more things.
As I was saying before, like, connect out to other services. I don't want to have to make a decision about where my dependencies can run because I'm running on Kubernetes. I just want to use a dependency on your database; somebody's given me that database. In a dev flow, though, I might spin up a really cheap local copy. How do we give you the flexibility to do both extremes? That's kind of where we're going, I think. That's awesome, and I agree wholeheartedly, right? There's room for everything. And who knows, there might be something new that comes along and supplants everything else; it's just the nature of tech, right? Yeah, it's usually a new paradigm, or something that makes the previous thing much easier, but then the old thing is still there. And then a bunch of people build the adaptation between the old thing and the new thing. I mean, even going back to systemd, you know, systemd changed a lot, but I think Linux is better for it. I certainly did not grow up doing rc init scripts and, you know, SysV init, and every time I had to debug something in an Apache start script I was like, why am I reinventing starting a Linux process 500 times, poorly, because I don't understand bash as well as people who've been doing it for 20 years? You know, in all of those cases, you come in and the underlying system is still there. The new system layers on top, and it solves a bunch of problems. I hope that somebody comes up with a super awesome idea, and that the Kube ecosystem is flexible enough to be like, oh, we'll just integrate that too, or we'll get integrated into it. I think what makes tech awesome is that it's up to us to really adapt to change. Yeah, and I think it's funny, because you say that and I agree with you.
But at the same time, it's also one of our biggest challenges a lot of the time; it's really difficult for most software to kind of adapt to change. I think the ideas that you're talking about with Kubernetes make a lot of sense to me, and, getting back to what I was talking about at the beginning of the show, they kind of show the ethos behind the tool chain, or whatever word you like. Like, I really do think that's one of the things about Kubernetes, right? You can't really do much in Kubernetes without using CRDs, right? It even has its own slang. So in other words, Kubernetes by itself is not sufficient to do most of what you want to do, kind of in the core of it. That's the value, in my mind, right? You do have that flexibility, and you can entertain ideas like doing KCP, you know, or other things like that. So I think that recognizing that propensity for change is a really important message to take away if you're trying to become more active in Kubernetes: Kubernetes is about being able to change, and you make trade-offs to do that. And so actually what I would kind of like to ask is, do you see any of those trade-offs? Where does it make things tougher that it is so flexible? I think this is a complaint, and it is a valid criticism of Kubernetes: it's just complex enough to solve 80% of your problems and let you be able to solve the 20% that it doesn't solve. Someone, a Microsoft Word architect, made this comment a long time ago: they found that everybody uses 20% of the function in Word, but everybody uses a different 20%. I don't think Kube's quite there, but it's a complex system because of the problem it's trying to solve: you've got a bunch of machines, you need to define something that lets you survive any one of those machines going down, and you want it to be stable enough that you can put it in production.
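Since CRDs came up as the thing you can't really avoid in Kubernetes: a CustomResourceDefinition is how you register a brand-new declarative API type with a cluster, and the `apiextensions.k8s.io/v1` structure below is the real shape of one. The `Widget` kind and `example.com` group are invented for illustration; it's sketched as a Python dict here simply to keep one language across the examples in this writeup.

```python
# Sketch of a CustomResourceDefinition: it teaches the cluster a new,
# declarative API type. The group ("example.com") and kind ("Widget")
# are invented; the surrounding structure follows apiextensions.k8s.io/v1.
import json

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    # The CRD's name must be "<plural>.<group>".
    "metadata": {"name": "widgets.example.com"},
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"kind": "Widget", "plural": "widgets", "singular": "widget"},
        "versions": [{
            "name": "v1",
            "served": True,    # this version is available via the API server
            "storage": True,   # and is the one persisted in etcd
            "schema": {"openAPIV3Schema": {"type": "object", "properties": {
                "spec": {"type": "object", "properties": {
                    "replicas": {"type": "integer"},
                }},
            }}},
        }],
    },
}

# Sanity-check the naming rule, then emit it as it would be applied.
assert crd["metadata"]["name"] == (
    crd["spec"]["names"]["plural"] + "." + crd["spec"]["group"]
)
print(json.dumps(crd, indent=2))
```

Once a definition like this is applied, `kubectl get widgets` works like any built-in type, which is exactly the "the API is the interface, not the implementation" point from earlier.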
People writing Kube are not perfect, and they're not magical. They can't predict exactly how all these things will play out, and so there's a reasonably complex system. But it's probably about as simple as you can get and still represent the problem it's trying to solve. The next gen, though, is: what are the simpler ideas that keep the core? And 12-factor apps are a great example: 12-factor apps work until they don't. When they stop working, because you have a problem that's more complex, you have to go build a second system to handle it. And I think, you know, one of Kubernetes' successes is that you don't have to have a second system to run the vast majority of software in the world. So instead of 80% of apps being 12-factor, with a different system for the other 20%, I think Kube moved that ratio: Kube can run 97% of applications, and you can, with some effort, make the other 3% work over time. What we have to be open to, though, is that it's more complicated, so what are the layers on top that make it easy? And honestly, nobody has fully solved that yet. So there are teams that do self-service on top of Kubernetes, and there have been for a long time. But then, as people got more and more clusters, self-service got harder. That's another angle with KCP, which is: most teams, most individuals in an organization, are looking for something to help them self-service their development journey that's flexible enough that they're happy. And then the infrastructure teams need to put rules in, because they're afraid of security breaches that cost the company hundreds of millions of dollars, or expose customer info, or, if you're a hospital, you know, hospital applications are a little more formalized, but there are big, complex masses of software that run our lives. You've got to have some responsibility there. That balance between a development team yoloing it...
...and the infrastructure team saying you can't do anything: that's where all of us who make software for a living eventually sit, whether you know it or not. And so I think, maybe Kubernetes sits as an infrastructure piece, but the problem we're all trying to solve is: how do we let people accomplish most of the things they want to accomplish, more easily, without thinking about it? That was the goal of Kubernetes. That's been the goal of platform as a service. That's what everybody is building in their large companies; they just, you know, cobble it together, or they put in a bit of time. I want to really focus that ecosystem of people who want to make self-service work, and balance it against the developers doing anything they want. And the operations teams, or the security teams, or the SRE teams, or the CISO whose job is on the line if we get it wrong: we want to tighten that and have a really tight loop between those two sides. And everybody builds their own approaches today. I think that's the real opportunity. It's not about cloud, it's not about on-premise, it's not about edge. You're building an app, and it's got to run someplace. I think that's what Kubernetes is a first stage of. And there are plenty of other projects that are going to be completely unrelated to Kube: Terraform does this great, Ansible does this great, GitHub, through source code and Actions, is a part of this story. How do you keep iterating in the open source world so that you have this nice layer that you can rely on everywhere, and you have the flexibility above it to do whatever you want, and those work well together? That's what I get to do every day, and it's awesome. Yeah, yeah.
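Since 12-factor apps came up a couple of times above, it's worth pinning down the one habit that makes the "works until it doesn't" model portable at all: config lives in the environment, so the same artifact runs unchanged in dev and prod. Here's a minimal sketch; the variable names and defaults are illustrative, not from any particular app.

```python
# Minimal 12-factor-style config: everything comes from the environment,
# with explicit local defaults, so the identical image runs in dev and
# prod and only the injected values differ. Names here are illustrative.
import os

def load_config(env=os.environ) -> dict:
    """Build app config purely from environment variables."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(env.get("PORT", "8080")),
        "log_level": env.get("LOG_LEVEL", "info"),
    }

# Dev: nothing set, so local defaults kick in.
print(load_config(env={}))
# Prod: the platform (Kubernetes, Heroku, etc.) injects the real values.
print(load_config(env={"DATABASE_URL": "postgres://db/prod", "PORT": "80"}))
```

On Kubernetes that injection is just `env:` entries, ConfigMaps, or Secrets on the pod spec, which is part of why the 12-factor subset maps onto Kube so cleanly.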
One of the things I actually kind of regularly use as an example is, you know, software is ridiculously young compared to most other human endeavors, right? We've been doing medicine for several thousand years, right? Whereas computer science, even in academia, is, I don't know, arguably nearing maybe 80 years old, maybe 100. You know, but that is a ridiculously short amount of time, and it's evolving at a ridiculous pace, right? And so I regularly talk about how it's funny, because we're still trying to solve the same problem from when I was first starting in development: I would yell down the hall to the guy who was running the server room and be like, okay, what version of PHP can I use, right? Because that guy is the one who's going to have to operate it, and so I would never build anything without knowing that I could actually put it into production. And I think in some ways we're trying to, not formalize, but make that communication scale, because now it's not just me and the one guy down the hall with one server, right? We're talking global, we're talking 1,000-person development teams, we're talking 1,000-person SRE teams, plus sysadmins, plus database folks, all these people involved now. So we're trying to articulate that same conversation in a way that is flexible enough that we can actually have all of the conversations. Well, and not just flexible enough, but explainable, right? Like we talk about explainable AI. There's so much power available in modern infrastructure, whether it's a cloud service or what you can do locally, that one person can spend $10 million. You know, someone made a joke the other day that I thought was awesome, which is that one person can spend $10 million in a day if they don't have the right controls or the right quotas on a cloud.
That power, you know, that connection between "I've used a service, I can get it up and running," and open source does the same thing every day, which is, every time you bring up an instance of a Rust app or a PHP app or a Perl app or a Java app, you're bringing up hundreds of millions of dollars of investment and years of people's lives, and you don't even think about it. The folks who have to think about it every day are like, okay, how do we keep your supply chain going? How do we keep people from spending all of your money? How do we know what you're spending the money on, when you have hundreds of people in different places building stuff? How do we keep people building stuff? Because the goal for most organizations, most engineers, most teams is, I just want to get this one thing going, and then I want it to keep working forever. Even though we would say, like, oh, of course we're going to do tests and CI and we'll have a rigorous release cycle, the reality is that 99% of it is, oh, it's working, don't touch it. Yeah. So, trying to find that balance. Yeah, we're doing it at scale, and increasingly we don't need to think about the infrastructure. How do we build those kinds of layers of interface, where it's like, yep, I'm building my app, I'm done, and somebody else can use that interface productively? And I think we're kind of bracketing down, right? We had 12-factor; you know, PaaS was a little too far up. Yeah, too far up the stack. And we had virtualization and Kubernetes and, you know, different types of things that integrate with Terraform, like, you can do amazing things, you know, deploy huge fleets. I've got to admit, some days I'm like, I don't really want to, because each of those little bits is designed by an individual; they take somebody else's API and they make their own API on top of it.
I'm not just depending on the cloud provider not to change the API; I'm depending on the open source volunteer who, out of their own time, built the interface to this representation of an API provided by a cloud provider. It's got to support that thing, so it's a new API, so we have all these APIs. And then we have all these high-level things. Can we bring that layer in so there's a nice, thin way to say: here's 99.99% of all the apps you'll ever need to build, go run them on whatever infrastructure, I'm not going to think about it anymore? And we're getting close. I mean, I don't know, in the next 5 to 10 years. We've learned a lot. When I started, we were talking about infrastructure as code... or sorry, when I started, we were talking about infrastructure as API, or API-driven infrastructure. Nobody talks about that now; they talk about declarative config, or declarative infrastructure, or infrastructure as code. We take it for granted. I think in another 10 years we'll be much further down that scale, where there's probably a standardized way to deploy every application you'll ever need. Someone has a new one? You don't have to change your tooling; you just add a new API. Right. And that's, I think, where I want KCP and Kube and the things that we work on to go: how do we help people bridge that layer? So, weirdly enough, the first thing I have to throw out there is that I was really, really hoping for a Brewster's Millions reference in there. And if you haven't seen that movie, I'm dating myself, but you should go watch it. But one of the things that, you know, is kind of a good and bad thing, but I find it kind of amazing...
These days, you can manage to share your Amazon API key on the internet, right? Because your deployment tool chain and all that other stuff is so codified, you can write code that does it, and you can actually just drop your key in there and publish it by accident, because you can automate the whole thing. So, cautionary tale: don't do that. But on the flip side, I think it's really impressive that we've come so far along that I don't reference it on a piece of paper anymore, right? I just embed it somewhere and magic happens. And it really, especially to anybody who's ever had to swap a hard drive in a server, it really does feel like magic these days, and I think it's pretty amazing. On that note, unless, Chris, there were any other questions you wanted to cover, I think... Yeah, it's a pretty quiet chat this morning. Yeah, you know, even though I did tweet about it. It's a little later in the day, and it is a Tuesday, so maybe that's why people aren't... I mean, there are people watching. But yeah, well, I mean, it's the Tuesday before the Fourth of July. Oh, I don't know about the rest of you, but I'm already on vacation this week. I'm just hoping that no fires erupt, that nobody finds a horrific bug in Kubernetes that suddenly means I'm going to get work. Like, please, let's just get to the Fourth of July weekend and have a nice long weekend. And one of the things that I really appreciate about Red Hat, and I think it's starting to become much more industry standard, is people don't do deployments on Fridays as much anymore. You know, or if you do, with less monitoring over the weekend, hopefully you'll be fine. Oh yeah, yeah. You know, that's actually a good question, so I'll feed this factoid to Chris: I'll go look up and see when people do their OpenShift upgrades, and we'll see whether people actually don't do their OpenShift and Kubernetes upgrades on Fridays or not. Yeah, I'll get that back to you.
We should, at the beginning of the show, you know, since it's going to be every month, just actually pause for a second and tell people: this is a good time to do your OpenShift upgrade. Watch the show, and it'll be done when the show is over, thereabouts. Yeah, right. There you go. It's like getting a cup of coffee, except even better. So, Clayton, thanks so much for coming; we really appreciate you putting up with a little bit of noisiness on our first kind of production episode. Keep in mind, we did do a behind-the-scenes, "why are we doing this show" episode with a couple of the people who are more in the background, and, you know, you should go watch that on Kubernetes by Example. I think we put the link in earlier; I'll get it again. And then, obviously, if you weren't able to catch this whole show, it will be on the YouTubes, you know, in perpetuity, or at least as long as Google's around, so please go check it out there. We'll be back again; our cadence is going to be the last Tuesday of the month. So we will see you again, and anybody can do quick monthly math, let's see, in July, what's that, the 27th? So we'll be back. And between now and then, we will announce who our next guest will be, and it will be another insider from inside the Kubernetes world. But again, thanks so much for coming. Thank you to the audience for showing up, and we had a couple of questions, we appreciate it. And we'll see you next time. Take it easy out there, folks. Stay safe.