Hello out there, folks. Looks like there are a few of you out there listening today. Welcome to Fields Tested on cloudnative.tv. A reminder before we get started that this is an official live stream of the CNCF and as such is subject to the CNCF Code of Conduct. Please don't add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all your fellow participants and presenters. Let me make sure I have the chat up. If you're joining for the first time, make sure you go ahead and follow the Twitch channel; that will give you access to join the chat. And I would really love it if you would chat with me today and help me as I explore an interesting topic of some sort. I'm gonna play it a little freeform today. But before I get into that, let me check to see if there are any upcoming shows to tell you all about. I have a doc with some of the upcoming shows, but I think we've passed everything for this week; I think I'm the last one on the channel for this week. But we'll have more, of course, next week. We try to have shows most every weekday, so definitely check out what's gonna be going on next week. One of the standouts is the CNCF FaceOff, which is so much fun. It's a game-show type of show, so if they're having that next week, definitely check it out. I should probably bring up the schedule and check, but no. And Cert Magic is also usually on the next week, it looks like. Watch out for those. Oh, I need to change the icon, sorry about that. Fields Tested, not .edu. But if you haven't checked out Kunal's show, definitely check out the recording of .edu. He does a great job, and he focuses on students and what it's like to go through education and start learning open source and cloud native technologies. So let's see here, there are a couple of topics I was interested in maybe researching with you all today. My plan today is to learn along with everyone.
And those two topics are either StatefulSets or Windows containers, because I've been exploring Windows containers in Kubernetes a little bit for work, so I could pull up some of the stuff I've been working on there. I asked on Twitter and got a couple of folks interested in the Windows container thing, so I'll start pulling that stuff up, but let me know in the comments if that's something you wanna hear about or if it's not. So let me tell you a little bit about Windows containers and why I offered that as an option to explore today. I've been interested in Windows containers for a long time, since, I guess, Windows Server 2016, which was the first one where they announced the private preview of Windows containers. I started researching the concept a little before Microsoft said they were actually gonna do Windows containers. So what is this Windows containers concept? Linux containers are what we're used to working with; when we talk about containers, we're usually talking about Linux containers. They're based on cgroups and namespaces, which are capabilities within the Linux kernel. I have a comic that I like to use to explain those, so if you all wanna hear about cgroups and namespaces, let me know. Windows containers, on the other hand, are Windows-native containers. A Linux container makes use of the existing Linux kernel: you basically layer a lightweight operating system on top of the kernel that's already there, which makes more efficient use of the resources on your machines. Windows containers instead make use of the Windows kernel. So it's a completely different set of technologies that you're using to run containers, and that's why it's a really interesting technology. You can't just take a Linux container, any of the regular container images you've been working with, and run it on Windows Server. They have to be their own kind of separate thing.
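Since we brought up cgroups and namespaces, here's a tiny sketch you can try on a Linux box to poke at those kernel primitives directly. This is my own illustration, not from any Kubernetes docs, and the `unshare` step may be blocked on some systems, so it's guarded:

```shell
# Every Linux process runs inside a set of namespaces; they show up
# as symlinks under /proc/<pid>/ns. Container runtimes create fresh
# namespaces like these for each container.
ls /proc/self/ns

# The cgroup hierarchy this shell belongs to (cgroups are what
# enforce CPU/memory limits on containers):
cat /proc/self/cgroup

# unshare asks the kernel for new namespaces. Here we start a shell
# in its own hostname (UTS) namespace inside a new user namespace,
# so changing the hostname doesn't touch the host. Guarded, since
# some environments disallow unprivileged user namespaces.
unshare --user --map-root-user --uts sh -c 'hostname demo && hostname' \
  || echo "unshare not permitted in this environment"
```

That's all a container runtime is doing underneath, just with a lot more namespaces and a filesystem image layered in.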
So they're a really interesting space. And in 2019, Kubernetes released support for Windows containers, Windows Server nodes in Kubernetes, which is a little mind-blowing to me because Kubernetes is such a Linux-centric technology. Kubernetes really grew up around those concepts of cgroups and namespaces and containers in Linux. So Kubernetes adopting Windows is a bit of a shift, and it's kind of interesting to explore. That's why I offered that one. And the reason I offered StatefulSets, to give a full look at both of those topics, is that on the first couple of episodes of this show we worked on deploying WordPress to Kubernetes, and we used a tutorial on kubernetes.io to do that. It uses Deployments, Kubernetes Deployment objects, to deploy both WordPress and the database that goes with WordPress. But Kubernetes also has this concept called StatefulSets, which honestly, personally, I almost never use in my demos and the use cases I explore. So I wanna do a more in-depth exploration of StatefulSets and learn a little more about how they're used and which stateful applications are a good fit for them. That's why I offered those two areas to explore. If you're watching and you have opinions about what you wanna learn about today, please post them in the chat and I will happily talk about either one. But since folks mentioned an interest in Windows containers, maybe we should start there. So let me set up some stuff over here. An interesting thing that came up on Twitter a few minutes ago, really less than 30 minutes ago, when I mentioned Windows containers was the concept of container runtimes in Kubernetes. So maybe I should start there. Let's close this, go to kubernetes.io, and I'll share my screen in just a second once I get this documentation up. Let me go Google "container runtime interface." If container runtime interface is not interesting to you, please let me know.
I wanna do something cool and explore fun new technologies. So let's share. I also just recently upgraded my Mac, so I've got this beautiful colorful background here that I'm still learning to work with. So this is a blog post from 2016 about the introduction of the Container Runtime Interface into Kubernetes. Someone asked on Twitter, what kind of containers does Kubernetes run? Does it just run Docker containers? Does it run runC or something? This is a pretty common thing that I think people don't think a lot about, but when they do, they start being like, huh, how does that work? Because it seems like it just kind of works for a lot of the Kubernetes implementations people use, and so it's not clear exactly what container runtime Kubernetes is using. Okay, adjusting my screen layout here. So it can be kind of unclear what container runtime Kubernetes is using, but the reality is that Kubernetes can run a variety of different container runtimes, and it does that using this Container Runtime Interface. And I see that there's a question: can you talk about the CNCF, Kubernetes, and the Data on Kubernetes community, and what's the difference between these three? Interesting. Yeah, let me bring that up. So here's the website for the Cloud Native Computing Foundation. Here's how I relate to these things: I'm an ambassador of the Cloud Native Computing Foundation, and I'm a member of the open source Kubernetes contributor community. I'm not really part of the Data on Kubernetes community, so I'll have to look that one up too. But I can think of other meetup groups and online communities that do a lot to support people who are starting to learn Kubernetes, so I imagine it's kind of like that.
But I'll talk about CNCF and Kubernetes first, at least, since I'm part of those communities. So here's the webpage for the CNCF. Let's go to the About page, and maybe Who We Are. Yes, so it has this little blurb at the top that says the Cloud Native Computing Foundation hosts critical components of the global technology infrastructure; CNCF brings together the world's top developers, end users, and vendors, and runs the largest developer conferences; CNCF is part of the nonprofit Linux Foundation. That's probably the most useful sentence there for understanding what CNCF is. So CNCF is a nonprofit; technically, it's part of the larger Linux Foundation. And it's sometimes difficult to talk about the projects that the CNCF manages or supports, and Kubernetes is one of those. If we look at Projects here, at the graduated and incubating projects, you'll see Kubernetes listed right here. I don't know if you can see it; it might be kind of small, maybe I should zoom in a little. But you can see Kubernetes is right here. So Kubernetes is one of the CNCF's graduated projects that it supports. CNCF has these levels, graduated, incubating, and sandbox, and those are determined by a committee run by this nonprofit organization, its Technical Oversight Committee, that looks at all of these open source projects. You can see there are so many just in incubating and graduated; so cool how it's grown. And if you look at sandbox, I'll just overwhelm you here: look at all of these open source projects the CNCF helps with. So that committee within the CNCF looks at all of these open source communities. It looks at who's contributing to them, how much adoption the project is seeing, the kinds of problems it solves, and whether those are finding significant traction in the user space. And it determines where each project falls in this sandbox, incubating, graduated leveling system. Let me see.
There should be a page in here somewhere that explains the levels a little better. Yeah, here we go. You can see here that sandbox projects are projects that are kind of new. They probably don't have many contributors, or maybe not many contributors from different companies, which is something the committee looks at. They wanna make sure the project is sustainable, that the group maintaining it is gonna keep being there for a good amount of time, that it's not just owned by one company, that kind of thing. And they wanna make sure it's seeing good adoption. So as a project gets more adoption and grows a larger community that's contributing to it and maintaining it, you'll start to see it move from sandbox into incubating. And once a project has gained such notoriety, such a large contributor base and such a large user base that it seems mature, they will put it in the graduated status. You can see the CNCF projects that have that status right now: Kubernetes; etcd is one you might have heard of; Linkerd, a service mesh; Open Policy Agent is graduated now, I forgot that happened, that's exciting; Prometheus is a really great one. So that's a little bit of a look at CNCF and Kubernetes. I hope that helped. And yeah, Data on Kubernetes I'm not that familiar with, but they're on Twitter and they like all of my posts, so you should probably learn more about that community. Data on Kubernetes: looks like there's a podcast, the DoK community. I looked this up recently, actually. It's an open community for data on Kubernetes. Kubernetes was originally designed to run stateless workloads, but of course today there are a lot of stateful workloads running on Kubernetes, which is why I wanted to bring up the StatefulSets thing, as I've seen a lot of stuff recently about people running databases on Kubernetes, which I've always said you can do.
And there are good use cases for why you would wanna do that. But it's really cool to me to see that a lot of people really are doing it, and I think their reasons for it vary, and I really wanna learn more about those use cases. Awesome, great to hear that that helped. So this stateful area of using Kubernetes is a really interesting one, and it looks like that's what this community is: a community aimed at stateful applications on Kubernetes, so relevant to StatefulSets. So folks out there listening, if you wanna hear more about StatefulSets or runtimes in particular, please ask me questions and I'll use those to guide my exploration. Without that, I'll just kind of start looking at things. So I was talking about the Container Runtime Interface and how it's the piece of Kubernetes that allows Kubernetes to run a variety of different types of containers. I remember, in the early, early days of Kubernetes, I had been into containers for a little while, and I started getting customers at the company I was at asking me questions about Kubernetes. When I came into the room to ask what they were doing with Kubernetes, one of the first things I would ask was, what kinds of containers are you running on Kubernetes? And they'd always be like, huh? What, are there other kinds of containers? Because Kubernetes does such a good job of hiding that from people. But yeah, you can use a variety of different container runtimes with Kubernetes, and this has really come into play recently with the Dockershim piece. So I'm gonna go over that; I keep going over it in things and I wanna do a better job of it. So recently, Dockershim was deprecated from Kubernetes. What does that mean? Docker is often talked about as the most popular container runtime, right? It's a really easy way to develop containers, and I personally love to use Docker; it's so much fun.
In the days when it was a question whether Docker Swarm or Kubernetes would come out on top, I personally really liked Docker Swarm, to be honest. It was really easy to use. But Docker is actually more of a usability tool: it's meant for humans to be able to create containers and work with containers really easily. And the containers that Docker creates are created using the containerd runtime. So if you actually look at the CNCF landscape (this threw me off at a meetup once, it was very embarrassing), which is of course gigantic, I apologize for putting this in front of your eyeballs. If you look into the container runtime piece, so we're zooming in, getting a little closer view, you'll see that Docker isn't even here. It's not even listed in the cloud native landscape, because Docker uses containerd, which is a project within the Cloud Native Computing Foundation, as its container runtime. And containerd is compliant with the Container Runtime Interface that Kubernetes uses to run containers. So that's good. But when you're using Docker, it means that Kubernetes has to kind of get around the Docker pieces and get to that containerd piece that's running inside, and Dockershim is what allowed it to do that. So it's really a component of how your Kubernetes infrastructure is set up. When you set up a cluster, you set up the control plane and you install the kubelet on the nodes. And the kubelet can use a variety of different container runtimes, as long as they're compatible with the Container Runtime Interface, to actually run your containers. So you could develop your containers in Docker, and they will actually be containerd containers, and on the nodes Kubernetes can kind of go, oh, I need to run a containerd container, so I'll just do that. Yeah, so the Dockershim deprecation has been really confusing for people.
But if you're using something like a managed Kubernetes service, you probably aren't setting up that piece of the nodes. You probably weren't even thinking, what containers am I running in Kubernetes, to begin with, because the managed service provider already set up a container runtime for you on those nodes and made sure Kubernetes could use it to run your containers. So you didn't have to worry about that. And that's still the case: managed Kubernetes services generally just set those up for you, and you pretty much don't have to worry about it. You can run and develop your containers the same way you always have. Yes, the images produced from docker build will work with all CRI implementations. All your existing images will still work exactly the same, as long as those nodes are set up to know they need to use containerd as their runtime. So if you're setting up your own Kubernetes clusters from scratch, you should learn a little bit about this. That's pretty much it: if you're the one setting up the Kubernetes clusters and setting up those nodes, then you might have to worry about Dockershim. So, something I wanted to explore a little bit. Hello, someone said hi. I love it when people say hi. So I've been exploring a little bit about the CNCF and its projects and container runtimes. We went over Dockershim, and we talked a little bit about stateful applications on Kubernetes, which was the other area I was looking at doing some hands-on stuff with today. So, does Kubernetes support more than one CRI runtime? I'm assuming not, but I might as well ask rather than assume. I am not 100% sure. Let's find out together: container runtime interface. Is that in the Kubernetes documentation tree? Introducing the Container Runtime Interface: this is the blog post I have up, from when it was originally introduced.
What is CRI and why is it needed? Overview of CRI. By that I mean running at the same time. Yeah, like, could you be running an rkt container or a runC container as well as a containerd container on the same Kubernetes cluster node at the same time? I'm not 100% sure if that's a thing you can do. If anyone watching has the answer, please do share; otherwise I'm just gonna explore our documentation here and try to find out. Kubernetes has a declarative API, the Pod resource; an imperative, container-centric interface; exec, attach, and port-forward requests; current status. So this was in 2016, when CRI was still in its early stages. Integrate container runtimes using CRI: CRI-O was popular, rkt was popular, I don't even know what this one is. Hypervisor-based container runtimes: okay, so that's related to Kata Containers and runtimes that use basically a lightweight VM to do their isolation, so a different security posture for your container runtime. Container runtimes, exploring them, so fun. So if you're interested in trying alternatives, you can follow the individual repositories for information about that. Integrating a new container runtime. So it doesn't say. Let's see, I'm just gonna try Googling "multiple container runtimes." A comprehensive container runtime comparison from Capital One, that's interesting. Here's a page on container runtimes from the Kubernetes documentation; that sounds like a good place to look, which I probably could have found in the other tab. And a question: "Not related to container runtimes, but I'm new to Docker and Kubernetes. Could you please brief me?" Absolutely, yeah, and let me read this other comment: "Every time the CNCF stream goes up, the notification message that goes out is something about Cert Magic. Would be great if that was changed to something more generic." Good to know. Did I mess up my stream name again? I thought that I fixed it.
Pop is who helps us run the Twitch channel here; he's one of the chairs of the project. And so I'm asking him about this question you asked, about seeing Cert Magic pop up as what's happening on the Cloud Native Twitch every time. That's not good and not what we wanna see, so we'll see if we can do anything about that on our end. But anyway, I'm gonna go back to Docker and Kubernetes. I pinged Pop to let him know that there's a question for him. "I'll screenshot it next time I see it and tweet it." That's great, yeah, tweet it at Cloud Native TV; we have our own Twitter for the Twitch channel, so that'll be a great place to get that feedback. Thank you. Love to see that people are engaged in spotting our mistakes; we would like to fix them for you. Yeah, so, could you explain Docker and Kubernetes? I don't know how long you've been watching; a while back I talked a little bit about these. Why don't I go to Docker for a little bit, and, well, maybe I should start here. A container runtime is something that runs containers for you. We were talking about Windows Server containers earlier, so that one's kind of its own thing (actually, that uses Docker too, which is pretty cool, but it's a little complicated to explain). So if you're running a Mac, you might install Docker on it, which I have, so maybe I should just show this. I have a bash window here. If I zoom in so you all can actually see, and I run docker, you'll see that I have the Docker commands installed. So Docker is a really popular way to create containers. I could create a container right now. Let's see what I have. Ctrl-L, and then, okay, docker images: let's list the images I have right now. Oh, it can't connect. It's been so long since I've seen this error.
Wow, how do I wanna deal with this? Can I do docker ps? No. Okay, so Docker on my Mac is not happy right now; it's been a long time since I've seen this issue. One way I can deal with that is to go to the Windows server I was running earlier and play with Docker there, so I might bring that up. Oh gosh, now I have to log into it; let me do that real quick. I'm remote-desktoping right now into a... okay, that's broken too. This is what I get for doing things live. Oh, no, it's actually on. Okay, hi. Let me minimize this and bring over my remote desktop. All of this just to show you some Docker commands; this is silly. So here I am on a Windows machine. Let me just show off. Oh gosh, that's the wrong command. And I just zoomed in on the window on the Mac, which was a bad idea, so you probably can't even see these commands, but I'm running a command terminal in Windows right here. Sorry for all of this exploration. If I run docker here, I have Docker installed. If I run docker images, you'll see I don't have any here right now. That's silly, but it's just docker run, something like docker run nginx, and it'll go try to pull nginx. It didn't find it, because I don't have my registry set up, so it won't actually run that container. Okay, so this was quite the aside. Anyway, if you have Docker for Mac installed on your Mac, for instance, you should be able to run Docker commands and create Docker containers, which you'd build using Dockerfiles. I should actually have one in Visual Studio Code. Welcome to my work environment, where I have all sorts of interesting things set up. Do I trust the authors? Yes, I trust the authors; I believe that I created those, so that's fine. I'm looking for a file in here called Dockerfile.
I thought I would probably just have one up, but I don't, so I'll go look up an example: the sample application tutorial from Docker, so we can see a Dockerfile. Here's what it looks like to set up an application that you're gonna containerize. You start with FROM, and you specify the base image you wanna use. A Linux container is based on the Linux kernel, and then you can build on top of that kernel any type or flavor of Linux that you wanna use within your container; Alpine is a really lightweight version that's really commonly used in containers. Here we're running a command to install Python and make, so we're probably doing some Python stuff in this build. We're switching our working directory to one called /app, where presumably our app lives within the container. There's a COPY command to bring the application files in. Then we're running yarn install --production, so we're using yarn to install the app's dependencies into the container. And there's a final CMD at the end: we're gonna run node src/index.js. So this is an example of what a Dockerfile can look like. Essentially you say: I want to run this application in a Linux environment, Alpine is fine and super lightweight; here are some steps you need to do before you run the application; and then it basically runs the application. So that's what Docker is for: Docker is for running applications in containers. And Kubernetes is for running applications in containers at scale. It's a container orchestrator. So say you're running copies of this across many different computers: virtual machines, cloud maybe, physical machines like right here. Kubernetes manages the machines that those applications are gonna run on.
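For reference, the Dockerfile we just walked through looks roughly like this. I'm reconstructing it from memory of Docker's getting-started sample, so treat the exact base image tag and file paths as assumptions:

```dockerfile
# Base layer: a small Node.js image built on Alpine Linux
FROM node:12-alpine
# Build tools that the app's native dependencies need
RUN apk add --no-cache python2 g++ make
# All following commands run relative to /app inside the image
WORKDIR /app
# Copy the application source from the build context into the image
COPY . .
# Install only production dependencies with yarn
RUN yarn install --production
# The command the container runs when it starts
CMD ["node", "src/index.js"]
```

You'd build and run it with something like `docker build -t getting-started .` and then `docker run getting-started`.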
Kubernetes runs the containers so that they're isolated from each other within that environment, and it has tools for you to connect those applications together. So Kubernetes is all about running these containers. I'm sorry I got a bit out of order as I went through that explanation. So, for the viewer who asked me to brief them on Docker and Kubernetes, let me know if there's more I can explain about Docker and Kubernetes for you. If it didn't make sense to you, tell me, and I will bring up my comics next time and walk you through the way I more commonly explain this. There's been some fun with the recent Twitch changes that we're working through. Yeah, sorry about that with the show names, but thank you so much for telling us about it. Cool. So I talked a little bit about Dockerfiles and about Kubernetes, and I looked through this Container Runtime Interface introduction blog post. Thanks, Pop; Pop said that he was popping out. We talked about the Dockershim deprecation. If you missed that earlier, I explained basically that Docker uses containerd as its container runtime. When you install Kubernetes, Kubernetes says, I'm gonna have to run containers on these virtual machines that you want me to run applications on, so it needs you to install a container runtime in order to do that. If you're using a managed service, you'd never think about it; if you're not setting up your own Kubernetes clusters, you'd never think about it; even with some of the tools like kubeadm these days, you don't really think about it. But the relationship between Docker and containerd is that Docker uses containerd. So I covered that. I had another page up here about container runtimes. We were trying to figure out, coming back to that from a while ago, whether the Container Runtime Interface can run multiple container runtimes on the same node at the same time. And I do not know. Let's see; there might not be a clear answer in these documents.
I'm not seeing a clear one here; I'm not 100% sure. Maybe the easiest thing would be to stand up a Kubernetes cluster from scratch, install a couple of different container runtimes on it, try to run containers meant for those runtimes, and just see if it works. That might be the quickest way for us to see if it's a real thing. But I don't know that I can do that right now. I could run a Minikube cluster on the virtual machine I'm running right here, but Minikube is kind of its own automated process to spin up a Kubernetes cluster that's just running on your own computer. It usually uses a Linux virtual machine: whether you're running Mac or Windows, it'll spin up a little virtual machine and run a tiny little Kubernetes cluster on it. So it's not a great tool for exploring this container runtime question, because you'd have to look into Minikube and figure out what container runtimes it requires. Let's look at Minikube. Hmm, it expects you to be running... oh, these are virtual machine managers, so it's not talking about container runtimes here. Container runtimes on Minikube: let's see what Minikube can do with container runtimes. Well, nothing in particular. Configuration, that seems like the place where we'd want to look. There are a lot of different variables you can set when you're setting up a Kubernetes cluster, and I imagine some of those are part of this container runtime configuration, which I haven't had to deal with in a while. So I'm not seeing it here, but I think the easiest way for me to test this would be to use some Intel NUCs or Raspberry Pis. I have an Intel NUC here that I use; I have three of these, which I could spin up into a cluster, but I only have one of them on my desk right now, and it's running Windows, so I can't spin up a cluster on them right now.
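For what it's worth, Minikube does let you pick which runtime a cluster uses; as far as I know it's a per-cluster choice rather than several runtimes at once. A sketch of what that looks like (I'm writing the flag values from memory, so double-check them against `minikube start --help`):

```shell
# Start a local cluster whose node uses containerd as its CRI runtime
minikube start --container-runtime=containerd

# CRI-O is another CRI-compliant choice:
#   minikube start --container-runtime=cri-o

# Afterwards, the node reports which runtime it's using; look at the
# CONTAINER-RUNTIME column (e.g. containerd://...) in:
kubectl get nodes -o wide
```

That's a much lighter way to experiment with runtimes than rebuilding a bare-metal cluster, even if it doesn't answer the multiple-runtimes-on-one-node question.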
That might be the easiest way, because then I could install the different container runtimes on the different nodes and then try to run different containers, like an rkt container image. I haven't tried to run anything except basically Docker containers and containerd containers in quite some time. I know there's Kubernetes the Hard Way, Kelsey Hightower's guide, which you may have heard of. Back in 2016, when the Container Runtime Interface became a thing, Kelsey Hightower updated Kubernetes the Hard Way to use CRI-O, a different container runtime, but it looks like today it uses containerd. Yeah. So this is a great way to go through setting up a Kubernetes cluster, and this page might offer some hints about multiple runtimes. Probably not this one. Multiple container runtimes, I'm not sure. But yeah, generally, the process of setting up a cluster from scratch might give you some hints. Bootstrapping the Kubernetes worker nodes is where I would look. Provisioning a Kubernetes worker node: disable swap, so this is stuff you have to do to make Linux okay with running Kubernetes, basically. Disabling swap, installing the worker binaries you'll need for the node components of your Kubernetes cluster, configuring the Container Network Interface networking. This stuff is intimidating to me, quite honestly. And here we're configuring containerd: we're creating a config file for containerd, then creating the containerd.service systemd unit file. And then we're configuring the kubelet, in which we probably have a specification of which runtime we're using. It's gonna be somewhere in here, and we can search for "containerd" to figure out what to look for. Container runtime equals remote; container runtime endpoint is containerd. So here you can see where we've got this. Oh, let me zoom in, it's probably hard to see. Thanks, glad that you tuned in too.
I totally didn't expect this to go this direction today. I have friends who are better experts in container runtimes than I am, but I love to explore it, so I'm glad that you all are here exploring it with me. So here we are in Kelsey Hightower's Kubernetes the Hard Way, looking at how we would set up the kubelet. I mentioned earlier that when you set up Kubernetes, there's a whole bunch of options you can set about how your Kubernetes cluster is gonna run. It's been a while since I've had to deal with them, because I haven't run a cluster from scratch in a while, but you can see this one (sorry, I keep highlighting things), the one I was looking at, which says --container-runtime-endpoint, and it's pointing to this containerd socket that we set up in the previous section above. Before we configure the kubelet here, we set up containerd: we did a containerd config, so we have containerd.service, the containerd service that runs on the Linux machine. The use of "service" can be really confusing in the Kubernetes world, because Kubernetes has the Service object, which you use to connect applications running in Kubernetes to each other. But if you're from the Linux world and you're a Linux administrator, you're probably familiar with the way that Linux runs services using systemd, I guess is the right way to say that. So this is about running containerd as a service in Linux, defining all of the things that containerd needs to know about. In setting up containerd to run as a service on our Linux machine, we created this .sock file, and then we tell Kubernetes, hey, go connect through this socket to containerd. So now, when the kubelet is running and you tell it, hey, I wanna make a new pod that's running this container, the kubelet goes, okay, I need to talk to the container runtime to run your container, and this is where it's gonna go look for that container runtime.
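To make that concrete, here's a trimmed-down sketch of the kind of kubelet systemd unit we were just looking at in Kubernetes the Hard Way. I'm reproducing it from memory, so the exact paths and flag set are approximate; the important part is the single runtime endpoint flag pointing at containerd's socket:

```ini
# /etc/systemd/system/kubelet.service (abridged sketch)
[Unit]
Description=Kubernetes Kubelet
# Start only after containerd is up, since the kubelet needs its socket
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Notice that `--container-runtime-endpoint` takes one socket, which is itself a hint about the one-runtime-per-kubelet question we've been chasing.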
So if I look at the kubelet, I can probably check whether it accepts an array or just a single value for the container runtime, and get my answer. Yes, I think that's actually how you would do this. Is that a little... that seems like way too deep to have to go to answer this question. I'm probably gonna see if I can ping some people who work on container runtimes about whether we need some updates to our documentation or something to help make that easier to find. Thank you so much for asking the question. Maybe we should go look at the kubelet code. I don't know if we wanna dive that deep into this and try to find that, 'cause it can be really hard, in my experience, to find the API definitions of things within Kubernetes so that you understand exactly what pieces you need when you're defining something. So I don't know if I wanna dive down that rabbit hole on stream. It's a silly question, sure, but I think it's a really interesting one, and it kind of hits at the base knowledge that I think a lot of people lack about container runtimes. If you could run multiple container runtimes on a machine, I feel like figuring that out teaches you a lot about how container runtimes run in Kubernetes. So I'm gonna make a note for myself on this, and I'm hoping in a future episode to get several of my little NUCs, or maybe some Raspberry Pis, and spin up a local cluster here at my house, in my office, and I might try that out on that. So I'm gonna make a note to myself. I use Post-it notes to make notes of all the things that I wanna do, and I have a board over here to my right, by the way, that you all can't see, where I put all of my ambitious dreams of things that I would love to explore with containers. "Container runtimes on a local cluster." That's the note I'm gonna take for myself. Cool. So we've already used like 45 minutes just exploring this awesome stuff. Thank you so much for your questions.
Are there things that folks out there are wondering about and would like me to explore deeper? We established that if you wanna learn whether or not the kubelet can handle running multiple container runtimes on a single node, you could probably look in the kubelet code and see if you can specify multiple container runtime endpoints. "I find Post-its far better than digital notes, and satisfying when you throw them in the recycling when done." So true. I also color code them with push pins. I have like a soft board on my wall that I put them all up on, and I use different colors for work tasks versus at-home tasks and personal exploration tasks. And then I mark them with green if they're in good shape, yellow if I really need to be working on that, red if it's about to be overdue and I need to put it above all of my other priorities. So I have a whole thing with organizing my tasks, if you all wanna talk about that. And we also talked today about the dockershim, and about the Cloud Native Computing Foundation and how it relates to the projects that it supports. Thank you for saying that I'm organized; I have to put some considerable effort into it to make it work. So we went over the CNCF and how it relates to the projects that it helps with. We talked about the Data on Kubernetes Community, and kind of the general concept of communities that use Kubernetes; Data on Kubernetes is one that covers stateful applications that run on Kubernetes. And we talked through the dockershim deprecation and what it means for people, and that really it's about whether you run Kubernetes clusters yourself and have to set up that kubelet component that we just looked into. So cool, thank you for guiding me on this exploration today. If you have to deal with setting up this component, you maybe have to create this file and run the kubelet on a Linux machine, or a Windows machine for that matter.
More likely a Linux machine, if we're talking about different container runtimes, because with Windows containers, you're just talking about Windows containers. It's a much narrower scope. There are only Windows containers; there's not a containerd for Windows or an rkt for Windows or anything like that. But if you're doing this piece, then you need to care about the dockershim. And if you're not doing this piece, you've probably never thought much about running your containers and what container runtime you're using. You probably just use Docker, and Docker makes it really easy to create a container. We looked at an example Dockerfile and how it sets up your operating system and runs some stuff so that it'll run your application properly. So Docker is really easy to run your containers with, but as long as your kubelet, your nodes, know how to handle the type of container you're running (which in the case of Docker means containerd-compatible containers), you'll be good to go. But if you are doing this piece, it gets a little more complicated, because you have to worry about what container runtime you're actually setting up for your kubelet. So I hope folks learned some cool stuff today. We're getting up to the top of the hour, so I don't wanna do too much more. Oh, what are kubelets? Thank you so much for asking. Let me see if I can find a good... I'm gonna add another tab here. "Kubelet Kubernetes architecture diagram," that's what I'm gonna look for. Are there any good ones? Kubernetes.io, "Kubernetes Components," let's see. Yes, this is good. Okay, this is what we wanna look at. Can I zoom in on this to explain what the kubelet is? So here's a Kubernetes cluster. This diagram is presuming that you're running one on a cloud provider, but it doesn't really matter. You can run it, like I said, on-prem or wherever you want. But wherever you're running your Kubernetes cluster, there's a control plane that has the Kubernetes API. So whenever you run a kubectl command... can I demonstrate that?
Do I even have kubectl on this computer? No, I don't. I don't actually have kubectl installed on my Mac, it looks like. I do most of my work in the cloud, so that's why. But if you were to run a kubectl command, that would be interacting with the Kubernetes API, and that's being run by this control plane, which is one or more nodes. And each of those nodes has the API server. It has a scheduler, which takes in information from the API server. You say, hey Kubernetes, go run me a pod running nginx, and Kubernetes says, okay, I have several virtual machines I could run that on. And the scheduler is what makes the decision of, I'm gonna run that nginx pod on node two, for example. And etcd is the single source of truth within the Kubernetes cluster. Everything you tell Kubernetes, all of those pods that you told it to run, all of the deployments, all of the things that Kubernetes is managing, those are all reflected in etcd, which is a key-value store. And that's how Kubernetes knows what should be running. Say, for instance, you told Kubernetes to run three copies of that nginx container, and one of them goes down, so you only have two. etcd says, hey, there's supposed to be three. And so Kubernetes has controllers, replication controllers, which is, I assume, what these are: the CCM is the cloud controller manager, and this is the controller manager. So the controller manager hears from etcd (I'm not exactly sure how that interaction works) that we don't have three nginx pods running, we need two more... we need one more. And so the controller goes and says, hey, let's go run another pod. So this is how Kubernetes works declaratively. And the kubelet is what's actually running on each node. So you said to the API, hey, I wanna run this nginx pod, and Kubernetes picks which node it runs on. How it knows what nodes are in the cluster is that kubelet piece that's installed on every node. How does it actually run that pod on the node? It talks to the kubelet.
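The declarative flow described here can be seen in a manifest like the following sketch: you declare three nginx replicas to the API server, etcd records that desired state, the controller manager and scheduler reconcile it, and each chosen node's kubelet starts a pod. Names and image below are illustrative, and the manifest is written to a local demo file; on a real cluster you'd apply it with kubectl.

```shell
# Sketch of a declarative workload definition.
# On a real cluster: kubectl apply -f demo-nginx-deployment.yaml
cat <<'EOF' > demo-nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3            # desired state, recorded in etcd
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
EOF
```

If one of the three pods dies, the controller manager notices the gap between desired and actual state and creates a replacement, which is the reconciliation loop being described in this part of the stream.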
It says, hey, kubelet on this Linux node, make a pod happen. And then the kubelet talks to the container runtime and actually makes that pod happen. You can see there's also a proxy on here; Kubernetes is often deployed with kube-proxy alongside the kubelet on the node to help with some of the networking stuff in Kubernetes. So did that help? Do you understand what kubelets do? They are the node piece of Kubernetes; they're what makes Kubernetes able to create clusters. So if you work with minikube, you technically, I think, have a kubelet as part of that minikube cluster, but it's all in one machine, which of course is not a production configuration. You generally want the control plane pieces to be on separate virtual machines from your nodes, or bare metal machines, or what have you. But in a minikube case, you'd have both the kubelet and the control plane pieces all on the same machine. But generally, a kubelet is what you put on all of the extra pieces of the cluster, the extra virtual machines in the cluster that you're gonna run your workloads on. Let me know if that helped or if it was confusing; I really wanna help you understand. We've got like five minutes. I'll probably shut down here at two-ish, though I don't have to, I don't think. No, I have nothing else after this. "Got your point. I am new to this, thank you for the explanation." Thank you so much for asking. Please don't be afraid to ask questions; this stuff is really confusing. As we saw when we started looking into the container runtimes on the kubelet, this stuff isn't always straightforward, and it can be a bit much to wrap your head around, especially if you're coming from a background where maybe you were a Linux administrator. Maybe you have really deep knowledge of Linux, but that doesn't necessarily mean that you will understand Kubernetes easily. Kubernetes is running a kubelet on Linux; it's running its control plane on Linux.
So it's making use of Linux to run these containers, but it kind of abstracts that piece away. So it can be pretty confusing coming from either a Linux administrator background or from the background of a student, someone who's totally new to this space. Then you have to wrap your head around both: what does Linux mean, what is an operating system, how does that relate to things, and then, what am I trying to do with Kubernetes, what is it giving me? So I'll go over that piece real quickly, just the highlights that I talked about earlier. Containers are for running an application. Kubernetes is for orchestrating containers at scale, running a whole lot of applications at scale. Yeah, that's the relationship of containers and Kubernetes. We've talked a lot about operating systems and how they relate to Kubernetes. Please ask more questions, though, if anyone has more questions. I'll look through this document a little bit and see what else it has to say about kubelets. "The kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod." Because Kubernetes, by the way, doesn't care about containers. I just said that Kubernetes is a container orchestrator, it's for running containers at scale, but it actually doesn't care about individual containers. It wraps those containers into a pod, and so the kubelet is the piece that actually cares about the containers. "I am a competitive programmer slowly moving into DevOps." Oh, that's awesome. "I often wonder why they can't make all of this a little simpler." Yeah, the reality is that a lot of people grew up in, basically developed their expertise in, spaces where they described it from their point of view of, this is how it makes sense. You, coming in from a different perspective, have the opportunity as you start learning these things to help us understand it from your perspective. I live in the Kubernetes space.
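A minimal pod manifest sketch shows that wrapping: what the kubelet actually manages is the Pod, and the container sits inside it. Names and image here are illustrative, and the manifest is written to a local demo file for the transcript.

```shell
# Sketch: the smallest unit Kubernetes schedules is a Pod, not a container.
cat <<'EOF' > demo-nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:        # one or more containers wrapped inside this pod
  - name: nginx
    image: nginx:1.21
EOF
```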
A lot of us folks in the Kubernetes Slack get so deep into how it all works from our own perspectives that it can be really hard for us to explain it to other people, because we don't understand your perspective as well. So please help us understand when you're asking questions: this is where I'm coming from, and this is why I kind of see it this way. And we'll try to help you from your perspective. And as you're learning, make your own blog posts or your own explanations of things, and I'm sure they'll help others. That's something that I've done. I'm gonna go to my website now. Close out on my website, that'll be fun. I really need to update it though, I'm so sorry. If you go to kaslin.rocks, you will find my website, because I am very humble. And you'll see that I have these comics where I try to explain these concepts in a really accessible way. This one is from the perspective of a systems engineer, like I was talking about the perspective of a Linux administrator earlier. That's the perspective this comic is coming from, but it's introducing this analogy of cgroups and namespaces as dogs. The Linux operating system is a doggie daycare that takes care of your applications; your applications are the dogs. And your applications are cared for by the operating system which is running them, which of course needs infrastructure in order to function, some kind of machine or virtual machine. And applications don't always get along well. Dogs don't always get along well either. They have to share resources like memory and CPU. So one way to address that is to have virtual machines, but the solution that we're talking about is containers. So we're talking about using resources more efficiently. But I see there's another question here: "As you're ending this, can you please explain to me, is this a workshop, or what's happening? I landed here accidentally." So wonderful that you're here! Thank you for coming. So this is Fields Tested.
It is a show on the CNCF Twitch channel. So this channel has information about cloud native technology; about once a day during the week, we try to have a new show. There are different shows. There are some that are specifically interested in certifications that you can get in this cloud native area. There are some that are focused on students, the student experience, and how you get into cloud native technology; that's run by Kunal, that's .edu. The certs one is Certs Magic. We have a few that explore different technologies in the cloud native landscape. So they have different goals, but all of them love to hear from the people who are watching. So I hope you'll join us some other time here on Twitch. You must follow the channel to be able to chat with me, and you should get notifications, if you set up notifications, when we're streaming. It's a bunch of experts in this cloud native space, and we would love to help you. We're just streaming and learning things together, and we wanna help you out. It's not really a workshop series or anything; it's just kind of people talking to people and learning things. That's how I see it, anyway. So thanks so much for joining today. I hope... yeah, you said you look forward to attending again. I'm... oh, I think I accidentally hit it. I showed it. Oh, I can show it up here! That's so cool. Sorry, I just learned how this stream thing works. Wow, I've been clicking this this whole time, and you all probably think I don't know what I'm doing. "Yeah, I attend Kunal's." That's awesome. No, I'm from Google, actually, I work at Google. Yeah, so you'll see different people from all sorts of different companies on here. We all help out with the Cloud Native Computing Foundation, which is a nonprofit that works on these cloud native technologies. So we all just love to help people learn these technologies. Look at that, I can show the questions here. I learned a new thing today. So thank you so much for coming.
If folks have any other questions, I'll hang out for a little bit, but aside from that, I think I'm pretty much done for the day. I hope you all will join me again next time. I should tell you all what I'm gonna be doing. So two weeks from today, I'm gonna be on a little bit later; I'm gonna be on at 4 p.m. Pacific time, is my plan right now. And I'm gonna be exploring a capture the flag on Kubernetes that you all can do with me. So definitely join next time if you wanna get hands-on with Kubernetes. We're gonna explore Kubernetes security with a capture the flag, and I'm gonna have on some of the experts who created the capture the flag. It's gonna be my first ever capture the flag, and we're gonna do it together. I hope you'll join me. I'll probably try to make a little promo for that so that people know it's happening. Yeah, thanks. I'm glad that you got to join, and I'm glad that you're feeling helped by this. Awesome. And I'll give you all a sneak preview here: securekubernetes.com. If you all wanna check out the capture the flag before I do it in two weeks, you can go to securekubernetes.com. This was run at KubeCon North America 2019, and it's a capture the flag that's just available here online that you can do yourself. So I'm excited to do my first capture the flag in two weeks. Oh, you looked at the source code for the kubelet, and it's not definitive yet, but it sounds like it's only one container runtime per kubelet instance, which makes sense. Thank you so much. Russ, is that a good way to refer to you? Thank you so much, Russ, for that question and for helping us explore that today. And if you wanna ask more questions about that, definitely join the Kubernetes Slack. And which group would it be? There are different special interest groups within the Slack that own different things. I think it's SIG Node that owns the kubelet, or something like that. There's also a working group for infrastructure that might know.
Regardless, find a group within there, maybe a general one if you're not sure. And if you wanna get in contact with the kubelet folks and ask about that, you could. I just wanna make sure that... I know it's kind of a silly question and you don't necessarily have a need for that, but I think it's really cool. So definitely chat with the kubelet folks; I bet they would find your perspective really interesting. Thanks so much, everyone, for joining, and don't forget that KubeCon North America registration is open now. It's gonna be hybrid; that is the information that we have at this point. We are planning to try to do a hybrid event, in person in Los Angeles, California ("LA," I never say it all out like that), as well as online, virtually. So you can attend either way. And they just released the schedule, so you can see what's on it. I'll be giving a panel there as well as a keynote. Very excited. I'm gonna be doing a sponsored keynote for Google. It's gonna be a little five-minute thing, a lightning talk where hopefully you'll learn something cool and interesting. It's not Google-specific; it's just gonna be super fun. So thanks so much for joining today, everyone. Hope to see you again on CloudNative.tv. Make sure you follow us for notifications. Thanks. Thanks for the good luck wishes. Yeah, have a great rest of your day. Bye, everyone.