Hi, welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm your host today. My name is Whitney Lee. I'm a CNCF ambassador and I'm a developer advocate at VMware Tanzu. So every week we bring new presenters who showcase how to work with Cloud Native technologies. So we build things, we break things and we answer all of your questions. Today we have Cornelia Davis here with us to talk about how to Kubernetes all of the things. I'm so excited for today's presentation. So this is an official livestream of the CNCF and as such it's subject to the CNCF code of conduct. So that basically means be respectful, be nice to yourself, be nice to other people in chat, be nice to our presenter, be nice to me please and I'll do the same for you. Friends who are joining us live, please say hi in the chat. Tell us where you're tuning in from. I love the magic that is streaming. And if you have questions during the presentation today, please add them to chat. We want today's presentation to feel like a conversation. So ask your questions, don't be shy. And with that I'm gonna hand it over to Cornelia Davis to kick off today's presentation. Cornelia, take it away. Thank you so much. And I'm gonna do a little bit of fan girl stuff here for a moment. I'll tell every one of you who are listening that I joined 15 minutes ago, a little early for this live stream, and was so surprised to see Whitney. I didn't know she was gonna be hosting and she doesn't know me of course, but I know her of course, because I've watched lots of your videos, Whitney. So thank you so much for all the work you do. Super excited for today's stream. I think it's gonna be such a great conversation. It's gonna be pretty great. So all right, so with that, some of you maybe know my presentation style. You know that I use slides, but I promise, promise, promise, we will not get bored and we will not read slides. Got some diagrams and those types of things.
But so without further ado, let me jump over to my slides. Oh, I guess I need to share it, right Whitney? I got you. It's on the screen now. Oh, your Slack's on the screen. There we go, my Slack. You're on it. I had to go back and, okay, so let me go ahead and start the slideshow. So the first thing that I wanna tell you, by the way, Whitney and I talked about this ahead of time. So Whitney is gonna be watching the chat. I can't see it, of course, because I'm presenting my slides. But if you have questions, we're gonna take questions throughout the presentation. So Whitney will absolutely interject and bring your questions forward. And all of you know Whitney as well. You know that she's gonna ask some really great questions from her perspective as well. Before we jump in, Cornelia, can I just tell you we have people here from Malaysia, France, India, Phoenix, Arizona, London, Nigeria, Taiwan. It's so amazing. That is awesome. That is really amazing, super cool. Well, good morning, good afternoon, good evening, and maybe good middle of the night for some of you. Thank you for coming. I don't take that lightly. So I really appreciate you spending this time with us. So the first thing that I wanna tell you is when I say Kubernetes all the things, some of you might be thinking, well, hang on, I can't containerize everything. There are some things where it just doesn't make sense to containerize. Well, I'm not actually gonna ask you to containerize everything. Kubernetes all the things means containers and other stuff. So that's a little bit of a spoiler alert. This is not a talk about containerizing all the things. This is a talk about Kubernetes, so we'll jump from there. Let me just take a few minutes to introduce myself. I am a trained computer scientist. I spent most of my career as a developer. Wasn't Ops, but about 10 years ago, I had the great fortune of starting to work with Platform as a Service, specifically Cloud Foundry.
I was part of EMC, went as a part of the Pivotal spinoff and worked on the Cloud Foundry team for about seven or eight years, and I say I was so lucky to have that because that was really the beginning of this transformation over to the world that we live in today. For those of you who don't know Cloud Foundry, some of the architectural principles, in fact the ones we're gonna talk about here today, were there in Cloud Foundry. We had container images. We instantiated Linux containers. We had a scheduler. We had eventual consistency. All of those things that we're gonna talk about today. But by the way, all of those things before we had Docker, before we had Kubernetes. And so I had the great fortune of starting to learn some of these patterns we're gonna talk about today. And I realized very quickly that even though we called it a developer platform, we called it a PaaS and we thought it was all about the developer, there's as much value in ops. It's maybe even arguably more valuable for operations than for development, these types of platforms that Kubernetes is a part of. As a part of that, I got to know everything Cloud native, microservice architectures, all of that. At Pivotal I spent a lot of time on Cloud Foundry, but toward the end of my time at Pivotal, I worked with VMware very, very closely on PKS, Pivotal Container Service, which was a Kubernetes as a service. So I think I've been touching Kubernetes since 2017 now, or maybe 2016, somewhere around there. So quite some time. And then the other thing that I'll mention, a little bit of shameless self promotion, is that I'm also the author of a book with Manning called Cloud Native Patterns, which is a book aimed at kind of application architects and developers to help them understand things like retries and circuit breakers and configuration management for these highly distributed, constantly changing web architectures. So, okay.
So anything, any interjections right now? No questions yet. Yeah, let's get right into it. All right, so if you'll all bear with me, I know that many of you probably know the content that's in the next two or three slides, but bear with me a second because I might be looking at it in a slightly different way than you're used to. I also am sure that there's some people who are relatively new, who maybe don't understand these patterns. So I'm gonna spend just a few minutes on Kubernetes 101. And I'm really gonna focus on what I kind of call the Kubernetes API. I'm gonna use containers, actually container images, in kind of the standard use case for Kubernetes, which is, hey, I wanna deploy a couple of instances of an application. It happens to be a Hello World application, so you can see the things that are highlighted here. What I want running is a couple of instances of this application, and I want it to be accessible via a URL. That's all I wanna do; I just wanna have these things. Now, in the pre-Kubernetes, pre-Cloud Foundry days, we would have had to deploy this app and then we would have had to stand up a load balancer and all of that stuff. And it was very imperative. It was very step one, do this, step two, do that. The Kubernetes API takes a different approach and it says, I like to call it the Jean-Luc Picard approach, which is, hey, just tell me what you want done. And then the machine will say, make it so, and it'll be done. And so it's a very declarative thing. So what you see here is just an expression of what I want. So that's the API, if you will. Now, the way that this basic pattern works is that we have this declaration on the left-hand side and we want to see it instantiated in some runtime environment. So for those of you who are a little bit more versed in Kubernetes, you might be getting a hint of where I'm going with this. It's some runtime environment.
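For readers following along at home, a declaration of the kind being described here, a couple of instances of an app reachable via a URL, might look roughly like the sketch below. This is not the slide's actual manifest; the names, image, and ports are invented for illustration.

```yaml
# Hypothetical "hello world" declaration: two replicas plus a
# load-balanced Service. Names and image are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: registry.example.com/hello-world:1.0   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer        # asks the platform for an external load balancer
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080
```

Note that nothing in this file says how to create the pods or the load balancer; it only states what should exist, which is exactly the declarative point being made.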
We'll talk a lot in the next 40 minutes or so about what those environments look like, but we've got some runtime environment. So then how is that API implemented? Well, what happens is that declaration gets sent over to Kubernetes. Technically, it goes through the API server. I'm actually just showing it here as the place that it eventually gets stored. There can be some transformation, but in general, just think of it as, hey, here's what I want running and I'm gonna store it in the store for Kubernetes, which is etcd. And there's something called a controller, which is watching that store and it picks up on what that definition is. And this is the implementation of the API. So this is the thing that's gonna make it so. So when I say make it so, there's a whole bunch of people on the Starship Enterprise who are gonna get all this stuff done. That's the controller. It's gonna get all this stuff done. So what the controller does is it sends things over and it instantiates things over in the runtime environment, but it does more than that, because it also watches the runtime environment and continually makes sure that the runtime environment is matching what's on the left hand side. Now, again, bear with me. I know many of you know this, but in this particular case, in this example for containers and container deployments, what is that controller gonna instantiate? Well, the first thing I wanna point out is that it's not a single controller, it's generally multiple controllers. So for things like container-based deployments, we have a deployment controller, we have a replica set controller, we have a services controller. These are all these loops that are going through standing things up in the runtime environment and, if necessary, correcting things in the runtime environment to make sure that it always matches up.
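The controller behavior described here, compare the desired state from the store against the actual state of the runtime environment and act to close the gap, can be sketched as a toy loop. This is a model of the pattern only, not Kubernetes code; every name in it is invented for illustration.

```python
# Toy model of a controller's reconciliation pass: converge the actual
# state (a list of running "pods") toward the desired state (a replica
# count). All names here are illustrative, not Kubernetes APIs.

def reconcile(desired_replicas, actual_pods, create_pod, delete_pod):
    """One reconciliation pass; returns the actions taken."""
    actions = []
    # Too few pods running: create the missing ones.
    while len(actual_pods) < desired_replicas:
        actual_pods.append(create_pod())
        actions.append("create")
    # Too many pods running: delete the extras.
    while len(actual_pods) > desired_replicas:
        delete_pod(actual_pods.pop())
        actions.append("delete")
    return actions

# A trivial in-memory "runtime environment" for illustration.
pods = ["pod-0"]
counter = [0]

def create_pod():
    counter[0] += 1
    return f"pod-{counter[0]}"

def delete_pod(pod):
    pass  # a real controller would tear down the container here

# Desired state says 3 replicas; one pass of the loop makes it so.
print(reconcile(3, pods, create_pod, delete_pod))  # ['create', 'create']
print(pods)  # ['pod-0', 'pod-1', 'pod-2']
```

A real controller runs this pass continuously on a watch loop, which is why it can also correct drift, for example recreating a pod that a node failure took away.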
So in this particular example, when we're doing something with the deployment, we're going to instantiate pods on worker nodes in the Kubernetes environment. But you'll notice that I also just showed you that the runtime environment isn't just worker nodes, it's not just Kubernetes, it's also infrastructure that sits outside of Kubernetes. So for example, in this case, I asked for a load balancer to give me that connectivity to my Hello World app. Well, that controller is also gonna set up the load balancer. So you can see that that's why I'm saying that the runtime environment isn't just Kubernetes, it can be anything. Spoiler alert, that's where I'm going: it can be anything. Okay? So the last thing that we need to think about here though is that the controller doesn't necessarily have to run inside of Kubernetes. And I was on a live stream a number of years ago with Kelsey Hightower, yes, that Kelsey, where he actually did a demo where he was running the controllers just natively on his local machine. He wasn't running them as a process within Kubernetes, but usually we're installing these controllers into Kubernetes, and I'll talk a little bit more about how that happens. So usually we're gonna run those controllers, and in this particular case, when it comes to these core Kubernetes controllers, they are in fact running on the control plane nodes of the Kubernetes cluster. So there's a single cluster in this picture; it spans kind of diagonally, it's those control plane nodes and those worker nodes, that's kind of cohesively the Kubernetes environment, and then we have infrastructure as well. Okay, now the last thing about this Kubernetes 101 is, again, I wanted to point out to you that these controllers often run in Kubernetes.
And you'll see later on that oftentimes those controllers are running as pods, but the controllers that I talked about here, the deployment controller, the replica set controller, the services controller, they don't run as pods, because you've got this chicken and egg problem. Like, I need those things to run Kubernetes, to be able to run containers. So how do I bootstrap it? Well, the way that it's bootstrapped is that they're actually running as processes on the control plane node. So what I've done here is I've shown you that I have a kind cluster, I exec'd into that kind cluster, it's a single node, so it's the control plane. And what you can see here is that I just did a ps -ef. So you can see the processes, and of course I didn't show you all of them, but you can see right here the kube-controller-manager; that's what's running that continuous reconciliation that's looking over at the runtime environment. Okay, so it's kind of fun to dig into the lower level details to see where these processes are running. And you'll see how it changes as we go along. All right. I have a question. Is the controller manager always run as a service? Like, is it an operating system process, or can it sometimes be a container? That is a fantastic question. And you know what, Whitney, I'm gonna hold off on that because there will be an example of that later. Cool, thank you. So when you're running a Kubernetes cluster, my kind cluster, or I'm running it on virtual machines or I'm running it on bare metal, yes, it will be processes like this. But you'll see in just a little bit that there are some exceptions to that. Cool. It's pretty fun, so. Okay, so just to summarize, the core Kubernetes pattern is this. Oh, and what happened? I updated the slide. I'm not sure where it went. Uh-oh, I think maybe I updated it in the wrong place. We'll see. So here's the core pattern. I've got a declaration. And these are the terms I'm gonna use from now on.
I'm gonna use desired state and actual state. And then the API implementation is the controller that's reconciling those two things. Okay, so that's the core pattern that we're going through. So then what else can I apply this core pattern to? One of the things that I'm very, very fond of saying is that Kubernetes is often thought of as a container orchestration engine, but that's just use case one. When I just went through the 101, that's just use case one. The folks that created Kubernetes, and I just had the great pleasure of watching Brendan Burns talk about this a few weeks ago at a meetup up in Seattle, he talked about kind of the genesis and the whole history, they were so smart to think about these extensibility points: the abstractions that allow us to take that same core pattern and apply it to a number of different things. Now those extensibility points were there when I first started working with Kubernetes in 2016 or 2017. I will tell you from experience that we were talking about applying this pattern to other things maybe five years ago, but it was still pretty nascent. There was work on Cluster API, which we're gonna talk about in just a moment. There was work in a few other places, but there wasn't a lot of it. These days, oh my gosh, this pattern is being applied in so many different places, and that's what I'm gonna show you today. So what else can we apply this pattern to? Well, I'll tell you that we're gonna look at three other examples and they're all gonna be kind of infrastructure-y, and you'll see where I'm going with this as we go along. Okay, so we talked about clusters, and what I'm talking about here is Kubernetes clusters, not other virtual machine clusters or anything like that, talking about Kubernetes clusters.
So remember just a moment ago, I talked about the fact that I had a Kubernetes cluster and that I had these controllers running in the cluster, but how do I actually instantiate that cluster? And again, Kelsey was very famous for writing up Kubernetes the Hard Way, and it has been hard for a long time. And so one of the things that we're doing is we're applying this Kubernetes pattern, which gives us all sorts of advantages like higher levels of resiliency and easier management and maintenance and all of those things. I'm not gonna talk about all of those in detail today, but we get all sorts of benefits from this basic pattern. Can we reap those benefits applying this thing to clusters? And the answer is yes. And that's where Cluster API comes in. Now Cluster API is an open source CNCF project. And what you're gonna see here is exactly the same pattern that I went through before. Here's my declaration. What is it I want? Well, I want a cluster. There's my kind, the type of thing. This is the type of object that I wanna create. So I want a cluster. I have to give you some details of what I want that cluster to look like. So I'm gonna tell you that the control plane is gonna be bootstrapped a particular way using kubeadm. I'm also going to give some references to some other resources that I'm gonna have in this manifest, that it's gonna be an AWS cluster. I'm gonna tell you some things about what I want that cluster to look like. For example, I want three machines that are going to make up this cluster. And then in other places I can specify how I'm gonna distribute those machines across control plane nodes and worker nodes. So I'm just expressing what I want. Here's what I want, and we're gonna make it so. So what does that look like then with this basic pattern? So we've got the manifest on the left-hand side. On the right-hand side, we've got the runtime environment.
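A Cluster API manifest along the lines being described might look roughly like this. It's a trimmed sketch: the names are invented, the API versions should be checked against the current Cluster API documentation, and real resources require more fields (machine templates, bootstrap configs, and so on) than shown here.

```yaml
# Sketch of a Cluster API declaration: a Cluster that delegates its
# control plane bootstrapping to kubeadm and its infrastructure to AWS.
# Names, versions, and counts are illustrative; required fields omitted.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  controlPlaneRef:                     # reference: how the control plane is built
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
  infrastructureRef:                   # reference: which provider hosts it
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: my-cluster
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  replicas: 3                          # three control plane machines
  # machineTemplate, kubeadmConfigSpec, etc. omitted from this sketch
```

Again the declarative shape is the same as the hello-world case: the manifest says what the cluster should look like, and controllers make it so.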
Now in the past, I showed you that the runtime environment was a Kubernetes environment. So we had the worker nodes. We also had some infrastructure. In this particular case, I don't have Kubernetes nodes in my runtime environment. I just have infrastructure. That infrastructure could be a cloud provider, one of the big or the smaller cloud providers, could be VMware, could be bare metal. I've got some infrastructure. That's all I have. But remember that the pattern is that a controller is the implementation. It's the thing that makes it so. So I've got a controller, and again, there's multiple controllers. I've got a controller that's gonna handle provisioning of machines. I've got a controller that's gonna handle how I distribute those machines across the cluster, number of control plane nodes, number of worker nodes and so on. So I've got this controller, and I'm gonna ask you to suspend disbelief for just a moment about where this controller is running. We're gonna get to that in just a moment. But what is it that the controller is going to be provisioning and then watching and making sure stays healthy? Well, it's gonna create some control plane nodes and it's gonna create some worker nodes. So it's actually going to create the Kubernetes cluster. So the thing that it's managing: it doesn't have a Kubernetes cluster. It's creating and managing a Kubernetes cluster. Okay, cool. So now here's the question. Where does that controller run? Now, I told you just a little bit ago that controllers, technically they're just processes and you can run them anywhere. And so you could concoct a whole way that you can have these controllers deployed, and then of course the controller is gonna need to look at some state store to get the state, which is the declaration on the left hand side. Well, you know what? I have a kind of ideal environment to run these types of controllers, and it also includes a state store like etcd.
Why wouldn't I just run this controller inside of Kubernetes? And the answer is, yeah, that's usually how it works. Usually we're deploying Cluster API into an existing Kubernetes cluster. So you might have a bootstrapping cluster. It could be a kind cluster. It could be another infrastructure cluster. You're generally gonna have something where you're gonna deploy this controller, and you're gonna configure that controller with pointers over to the infrastructure, cloud accounts, those types of things. And then that controller is gonna be able to do its business. Okay. Makes sense. If you use just like a local kind cluster to bootstrap real infrastructure that you think you're gonna run later, that seems like a lot of pressure to put on your little local laptop. Like, once your real infrastructure is bootstrapped, can you move that management cluster to be something that's more formal than just a kind cluster on your laptop? Indeed. So I promise y'all that we didn't practice this. Whitney's just so good at asking just the right question at the right time. So that's exactly what we do. So what you might notice on the slides as I just clicked a couple of times is that what we're gonna do, and this is part of the Cluster API specification, this is all implemented in that open source project, is what Cluster API calls a pivot. And what we do is we effectively move the state from that bootstrapping cluster, because now once we've set up that control plane, and in fact, usually we just do it in the control plane, usually we will bootstrap a control plane node, then we'll do the pivot. You can do it a number of different ways.
But what we do is as soon as we have enough Kubernetes over in the runtime environment, we take everything, we take that state that's right in the middle, we take the state that's in etcd, we take the controller that's running in the bootstrapping cluster, and we move them, we actually move them over into the very cluster that we just bootstrapped. I have another question. Yes. Maybe the problem I see with doing it this way is that if the cluster goes down, you also lose the definition of your cluster. It seems like a good idea to have a definition stored outside of your cluster. So there's a couple of answers to that. One is, of course, backup and restore. So leveraging backup and restore capabilities, and many, many folks do, in fact, in their production clusters, will have backup and restore capabilities that are happening on a regular basis, and they have that as a part of their workflows and so on. Now, one of the things that I didn't talk about is that after I left Pivotal, I spent 18 months at Weaveworks. So you might also know that I'm a bit of a GitOps fan. So one of the other things is, and I haven't emphasized this, and Alexis would be so disappointed in me, that one of those advantages of this declarative approach is that I can leverage things like GitOps, where I can have all of my declarations, all of these declarations that you see on the left-hand side, they're all version controlled, everything's in GitHub. So one of the ways that I can actually recover from a failure is not necessarily to restore from backup, but to just instantiate all over again. And it depends on your use case. If you've got a use case where you can tolerate that, it also depends on how you're going to lay out your infrastructure.
If you're already going across different regions, maybe you don't do a backup and restore, because you expect a region to be up and your load balancers are just doing adjustments, and you can use more of a GitOps approach to stand up another region because you've lost a region or you've lost something. So it really depends on how your overall infrastructure looks and what your SLOs, and SLAs, are, basically. Make sense? Makes sense. Okay, cool. All right, so we've done this pivot, and now as soon as we've done the pivot, that same pattern that we see in the middle of the slide is happening on the right hand side of the slide. And oh, I lost a, there's supposed to be a little blue thing that came up here. I was struggling with my animations, but this loop that we saw here, there's a loop here that's happening as well. So on the right hand side, I don't know, can you see my cursor when I move it? I can, but we have had some words that the slide font is small. I think people are just having trouble once you get to that second level. But I do have a question. Sure. Or a question from the audience, which is, do you recommend any certain learning source to learn how to do proper backup and restore capabilities? You know, I don't have anything off hand. You might know something better than I do. There are CNCF projects that, remind me. Velero. Velero, there we go. Velero is used in a lot of open source installations, but also in a tremendous number of commercial products. So I don't have any resources off the top of my head. Of course, you have to understand the patterns as well. Anything from you, Whitney? Besides Velero, that's all I have. Yeah. And I'll look up the website and put it into chat right now. Super, thank you. Okay. So I wanna show you a couple of other things. And by the way, when the font gets really small on the right-hand side, it's exactly the same thing that I had in the controller in the middle of the screen.
So really, just think of it as an icon at that point. So here, it's just a kubectl get pods. So I'm just getting the pods here on a CAPI-managed Kubernetes cluster. So this right here is the cluster on the right-hand side after I've done the pivot, okay? So I've done the pivot, the controllers are running inside of Kubernetes. I've got etcd, all of that stuff running inside of Kubernetes. And what I wanna point out to you here is, I've bolded a few of the things that are running in here. And I'm sorry, this isn't super beautifully formatted, but I really bolded the things that I wanna draw your attention to, which is that here are those controllers. So you can see here that there is a CAPI controller. This is kind of the base controller that is part of the open source. But if you remember when I was showing you the declaration, I called out that we're doing things with AWS. Well, here's a controller at the very top, the CAPA controller, C-A-P-A, that is the controller that's doing AWS specific things. It's communicating out to the AWS cloud, for example. So that's where it's doing these AWS specific things. Here are the things that happen across all of the different providers. So those are the CAPI things. Remember that the declaration said something about, hey, you're gonna choose how you're gonna bootstrap this cluster. And I said, it's using kubeadm. Here you can see the controller. And again, you can see in the name, these are all controllers. So it's a series of controllers that are doing the job. And there's of course some webhooks for communication as well. The actual controllers are down here, but up here we've got a number of things that are the webhooks that are connecting into those controllers. So in this particular case, I showed you that the original controllers, for like deployments and services and those types of things, were running as processes on the control plane node.
In this particular case, we're not doing deployments, we're doing clusters, and the controllers are all running as pods in the Kubernetes cluster itself. Cool? Cool. All right. So that was clusters. Now, what about virtual clusters? Now, first of all, many of you are probably like, what the heck is a virtual cluster? Cause this concept is a bit newer. So this is not a talk on virtual clusters. There are a lot of really great talks out there on virtual clusters, and if you ping me, or maybe we'll put it somehow in the show notes or something, I watched a fantastic video the other day on vcluster, so I can share that as well. But I wanna use this diagram just to explain at a high level what vcluster is doing. What vcluster is doing is saying that instead of standing up this big old cluster where I've got, let's say, virtual machines or physical machines for control planes and for workers, I just want something that's gonna look and act and be a Kubernetes cluster, but it's far more lightweight. So how can I do that? Well, I'll tell you, I've been doing Kubernetes long enough that there have been a number of different ways that people have tried to do this. What vcluster in particular has done is it has said, well, how about if we take a big old cluster, that's what's showing here on the lower part, the host cluster. That's what we're calling a host cluster. That's a Kubernetes cluster. How about if we take that and we partition it up into smaller pieces, and have each one of those smaller pieces act like a full fledged Kubernetes cluster. This is different from namespaces. With namespaces, you still have a single cluster and you're partitioning that cluster up into namespaces, but there's certain things that are only at the global cluster level. So things like these installations of controllers, for example, those are Kubernetes cluster wide.
I've got various policies, security policies, some of those are cluster wide. And so namespaces are still a single cluster, and we're partitioning in a different way, at a different level of granularity. What vcluster does is say, well, no, let me take that same big old host cluster, partition it into smaller pieces, but for each one of those smaller pieces, I'm gonna add just enough stuff to it to have it act like a full fledged, full on Kubernetes cluster. That's what vcluster does. So I'm gonna rephrase what you said and you can tell me if I have it right. So Kubernetes is difficult to make multi-tenant. It can be difficult to have lots of different teams share the same cluster, but it's expensive to give every team their own full cluster, so that's the problem. And so instead of giving every team a namespace, which is problematic because teams can't install tools into their namespace without it being installed at the root level, you're gonna give teams a virtual cluster to work with. Exactly. Is that correct? That is exactly right. And so what we do with the virtual cluster, it's so clever, is when you deploy things into those virtual clusters, and I'll show you in a moment how the virtual clusters get created, but when you deploy things into the virtual cluster, you deploy pods, you deploy services, we are in fact running those natively in namespaces on the host cluster. Okay, so the actual bits and pieces, they're still running on the host cluster, but some of the higher level abstractions like these policies, those are just getting deployed into the virtual clusters. So when I say virtual cluster though, I kept emphasizing that it's a full-fledged Kubernetes cluster, so it has an API endpoint. It has controllers that are running, that are doing things like deployments and those types of things.
Like, remember the very first ones that I showed you, where it was running in the kind cluster, those controllers, those deployment controllers and those types of things, those things are still running full-fledged within that Kubernetes environment. The only thing that we're bringing down to the host cluster are the workloads that we deploy into those virtual clusters. It's extraordinarily clever. And it does it via this syncer and so on. And that's kind of like VMs are to a host machine as virtual clusters are to a Kubernetes cluster. What do you think about that? So say that again. Virtual machines, like what a hypervisor does with virtual machines on a host machine, a host computer, the way it breaks it up into isolated pieces that don't know about each other. Yeah, the concept is exactly the same, exactly. So, okay, so that's what vcluster is. So it gives you full-fledged Kubernetes APIs, core controllers, all of that stuff is running for each one of the virtual clusters, and we see two virtual clusters here, and the workloads are running on the host cluster. And by the way, there's a whole bunch of super clever stuff that's being done to actually secure those namespaces with fixed policies so that you don't have to; it turns out that you can get namespaces pretty secure, but you have to figure out how to do it. Well, all of that's implemented in vcluster. By the way, there's a common theme here: vcluster is an open source project, it's a CNCF project. Yeah. All of this stuff is in the open. Okay, so let's go back to our core pattern. So here's the vcluster API, here's the declaration. So you can see that it's a vcluster kind, and it has certain things that are specific to vcluster in there.
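The syncer mentioned above can be illustrated with a toy name-mapping function. vcluster's real naming scheme is an internal implementation detail and may differ; the mangling below is invented purely to show why two virtual clusters can each run a pod named "web" in namespace "default" without colliding on the shared host cluster.

```python
# Toy illustration of what a virtual-cluster "syncer" does: objects created
# in a virtual cluster are rewritten into a host-cluster namespace, with
# names mangled so different virtual clusters can't collide. This mangling
# scheme is made up for illustration, not vcluster's actual scheme.

def to_host(vcluster, v_namespace, name):
    """Map a virtual-cluster object to a (host namespace, host name) pair."""
    host_namespace = f"vcluster-{vcluster}"
    host_name = f"{name}-x-{v_namespace}-x-{vcluster}"
    return host_namespace, host_name

# Two virtual clusters each run a pod called "web" in "default";
# on the host cluster they land in distinct namespaces with distinct names.
print(to_host("team-a", "default", "web"))  # ('vcluster-team-a', 'web-x-default-x-team-a')
print(to_host("team-b", "default", "web"))  # ('vcluster-team-b', 'web-x-default-x-team-b')
```

The mapping only needs to apply to low-level resources like pods and services; higher-level objects such as policies stay inside the virtual cluster, which matches what was just described.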
So for example, one of the things that you can do, and Whitney, the analogy that you drew to virtual machines is a great one, because what you're doing when you're carving up this host cluster into these virtual clusters is that you're doing things like setting limits and setting quotas and things like that for each one. So you're partitioning up a shared resource, but you're using policies to do that. So vcluster is all about partitioning a shared resource, and so you have to have these policies and these quotas to do exactly that. So you can see here that there's limits on CPU and memory and storage. There's also gonna be some settings around some of those security settings. So you can say, how tightly do I wanna secure those namespaces on the host cluster? Those are some of the types of settings that you get to specify when you're declaring what you want your vcluster, your virtual cluster, to look like. Okay, so here's our pattern again. So there's controllers, and I'll show you what those controllers are in just a moment. There's a vcluster controller, and again, you get to set things like the Kubernetes distribution and the Kubernetes version and how many control plane nodes and worker nodes you have. What's the runtime environment in this case? Well, remember I said that vcluster is running on top of an existing Kubernetes cluster. So now for my runtime environment, I'm not starting from scratch like I did with cluster API. I'm not starting with raw infrastructure. I'm starting with a Kubernetes environment. Well, gosh, if I'm starting with a Kubernetes environment, remember that question of where does the controller run? Well, the controller might as well just run in that Kubernetes cluster to start. So it's the same pattern. It's exactly the same pattern, but we already have a home for our controller, and I'll show you that in just a moment when I show you the CLI screenshot.
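To make the kinds of limits and security settings described here concrete, a vcluster configuration (for example, a values file you might pass when creating the virtual cluster) could look something like the sketch below. The exact field names vary by vcluster version, so treat these as illustrative and check the vcluster documentation.

```yaml
# Illustrative vcluster settings (field names are version-dependent):
# quotas on the host-cluster namespace plus a tighter security posture.
isolation:
  enabled: true            # turn on the hardened "isolation mode"
  resourceQuota:
    enabled: true
    quota:
      requests.cpu: "10"
      requests.memory: 20Gi
      requests.storage: 100Gi
  networkPolicy:
    enabled: true          # restrict cross-namespace traffic on the host
```

This is the same declare-and-reconcile pattern: you state the limits and isolation you want, and the vcluster controller puts the corresponding quotas and policies in place on the host cluster.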
Now, what is that controller provisioning and managing? It's provisioning and managing the namespace and then all of the things inside of that namespace, the entire Kubernetes API and the core services, all of that stuff it's managing. Okay, makes sense. I have a question. So if you have one host cluster and, let's say, five virtual clusters, and let's say four out of five of those virtual clusters all want to use the same open source tool like Knative. If they all install Knative, are you running Knative four times? Yes, we are. Okay. Absolutely, because they are separate clusters. And so what you're gonna be doing is you're gonna be bringing, so for Knative, all of the things that run in pods, those are all gonna be down in the host cluster, they're all gonna be running in their separate namespaces. Okay. So there's nothing in vcluster to recognize that you're in fact running exactly the same instantiations, because the whole point is that you want to be able to have potentially different settings. So vcluster doesn't do anything to recognize that; in fact, it doesn't do any deduplication. Okay. Kind of by design, because from a security perspective, you don't want multiple tenants sharing the same running instances. It's not making Kubernetes like magically multi-tenant or anything like that. Okay, thanks. Yeah, you bet. Okay, so here we go. This is what virtual cluster looks like. Now, this is where you asked that really great question, Whitney, at the very beginning, where you said, am I always running these as root processes? And I said, ooh, not necessarily. And in this case, remember that we are running these full-fledged virtual clusters inside of Kubernetes. So rather than having to run these things as root processes, we can in fact run them as containers in that Kubernetes cluster. And you can see that these are some of the things that are shown in this particular screenshot.
So you'll notice here again that this, by the way, is a screenshot of the pods that are running; again, I have an EKS cluster. So again, you can see at the very top, it's not bolded because that's not what I'm talking about in this particular case. But earlier in the screenshot, I had the CAPA controller manager stuff highlighted. I've let those fall into the background, but I now have a CAPVC. So this is the virtual cluster provider. So the way that virtual cluster works here is it's actually an extension to CAPI. It's using the same basic approach as cluster API. In fact, it's using cluster API, but it's provided a new cluster API provider: the virtual cluster provider. So you can see here that some of those core processes like CoreDNS are running there. And right down here, this cornelia-eks-1-vc, because it's running on my EKS cluster, virtual cluster, this is where all those processes are running. Right inside here, you can see there's three containers. One of those containers in this particular case is K3s. And that K3s is where the deployment controller and the services controller and the ReplicaSet controller are running. They're all running in that K3s. For the virtual machine. For the, not the virtual machine, but the vcluster. vCluster. Okay. Yeah. Exactly. So in this case, it's running inside of a pod that is running K3s. It's super clever. Makes sense? It's Kubernetes all the way down. Yeah. Yep. Yep. Okay. So I am keeping an eye on the time. I think we're doing good. I have a third example beyond the Kubernetes 101 where I want to show you this same pattern in play. Before we jump in, we have a comment from before that I'm getting to a bit late. So when we were talking about backup and restore with Velero, someone said they're looking for something similar. Like they store their manifest files in GitHub. And then once the infra is set up again, they deploy again into the cluster. Yeah. I wanted to call it out.
We talked a little bit about GitOps, but this seems like a good use case for GitOps. Yes? Absolutely. Absolutely. So whoever made that comment, I don't know what you're using to do that instantiation from Git. But the thing about GitOps tools is that they are themselves controllers. So there's a pattern here. There's controllers. And some of the open source projects, again both in the CNCF, that are implementing controllers using the same basic pattern are Flux and Argo CD. Now, what you're doing is you're connecting from your GitHub repository, drawing those definitions into your Kubernetes cluster. So putting them into etcd, for example. And that's what that Flux or Argo CD controller does: its job, if you will, its runtime environment is the Kubernetes state store, and it's drawing the declarations in from GitHub. So it's the same basic pattern. It's a really great call out that it's the same pattern that we're looking at across the board here. And then we have a question about vCluster. What are the potential security risks associated with using vCluster? Yeah, so again, we can put it in the show notes when we do post this later. I'll direct you to the vCluster GitHub repository, and there's a super great chart. I didn't put it in here because I didn't wanna spend a lot of time on vCluster, but there's a great chart that talks about a number of different characteristics of shared clusters, individual clusters, and then this kind of middle of the road which is vCluster. And they talk about things like ease of use and those things, and security is one of them. And to sum it up, what they say is that with shared clusters using just namespaces, the multi-tenancy is pretty soft. The security is not the strongest. You can put some settings in there, and it also depends on a number of other elements which I'll talk about in just a moment. You can do a lot more, but you have to know what you're doing.
And so if you're just creating a namespace and you don't have those policies in place, it's pretty weak. And by the way, vCluster does allow you to deploy it with that weak security posture. If you're doing separate clusters, well, everything's separate. You've got a separate state store, separate processes, everything's separate. So that is the strongest multi-tenancy, or that's the strongest security posture. vCluster, what it does, is it has something called isolation mode. And when you're in isolation mode and you create a virtual cluster, it puts in place a whole bunch of policies, both inside of Kubernetes itself, so it sets up policies that do not allow any type of communication between namespaces. So if you go into one namespace, you can't see what's in another namespace. I'm sorry, that's not communication, that's just being able to see. And then it also, and this is where it depends on not just core Kubernetes but on things like your CNI and your CSI, your networking plugins and your storage plugins for Kubernetes. vCluster will communicate with those to also set up network policies. So the sum of it is that when you're in isolation mode in vCluster, what you're getting is pretty strong isolation. It's not as strong as truly separate clusters, but you're getting really quite strong security boundaries between those tenants. So take a look at the GitHub repository, but that's a fantastic question. And the answer is you can get pretty darn good. And we are seeing people like Codefresh, for example, who bring you Argo CD as a service; they are using vCluster in prod and they're in a multi-tenant environment. So it's pretty solid. That's cool, that was a thorough and great explanation, I appreciate that. And we have one more vCluster question before we move on. Is it possible to communicate between services within the vCluster's environment?
So for example, app1.svc.namespace.cluster.local. I'm not sure that I'm completely understanding. Oh, what you want, yes, yes. So what you're saying is that if I have separate virtual clusters and I've got services running in those, can I do communication across those without going out and back in? So without going through a load balancer, for example, using just kind of the host cluster's networking. I don't know for sure. I think that there are ways that you can configure vCluster, but at that point, you're giving up some of those security guarantees. If you're doing it in isolation mode, no, you're gonna have to go out and back in. I would be a little hesitant. I mean, one of the things that we're getting from vCluster and all of the IP that's in that open source project is we're getting security people who know what they're doing. And if you start mucking with those security things, you might be putting yourself at risk unless you're an expert. Yeah. Even if you're an expert. Yeah, yeah, absolutely true. So, okay. All right, so let me push into this final one. So you'll notice that, again, the question I posed is: this Kubernetes way, what else can I apply it to? And I told you at the beginning that I'd be talking mostly about infrastructure things. Some of these questions are great because we were able to talk about how GitOps is doing the same basic thing. It's being applied all over the place. But here's my last infrastructure thing: virtual machines. And I promised you at the beginning that when I said Kubernetes, all the things, I wasn't gonna ask you to containerize everything. Virtual machines. There are decades of investment that have already been put in place around virtual machines. We have hardened virtual machine images. We've done so much work around that. We have workflows around virtual machines. Gosh, do I really have to move everything over? I mean, first of all, it'll never happen.
But boy, that's an awful lot of work, and what's the business benefit? And so the reality is that we still wanna run virtual machines. Is there anything that, can I bring Kubernetes and virtual machines together? Well, the answer is yes. And so, oh, sorry. This title should not say cluster API. I made some changes. I don't know what happened to them. Forgive me, I'm gonna show you behind the curtain here for a moment. This is KubeVirt. Why is it not changing? And now it's not, oh, interesting. It's because I'm in a version that isn't allowing me to make my changes. So anyway, this should say KubeVirt API. So here we have a virtual machine definition. And as you can imagine, you've seen this with every declaration: we have a virtual machine, we have the kind of thing that we wanna do, and then we give it some parameters. And so what are the things that we're gonna say about virtual machines? Surprise, surprise: compute, storage and network. So we're gonna specify all sorts of compute, storage and network in there. And then here's my pattern again. Now, here's the thing. And the project here is KubeVirt. I've used the name a couple of times already, but KubeVirt is again an open source project in the CNCF. And what does it do? It runs virtual machines inside of a container running on Kubernetes that has a hypervisor built in. So what we are assuming here for our runtime environment is that we have Kubernetes. We're leveraging that Kubernetes investment to actually be able to run our virtual machines on there as well. So my runtime environment is Kubernetes and I have a controller. This controller is going to handle these virtual machines: compute, storage, network and all of that stuff. Well, just like with vCluster, if I've got a runtime environment that's already Kubernetes, can I just run the controller over there? And the answer is yes. I can just run the controller right over there. Now, what is it that it's controlling? Well, it's controlling pods.
It's controlling PVCs, the storage, all of the different objects within Kubernetes. It's leveraging those Kubernetes objects and it's connecting those down via CSI and CNI. It's connecting the needs of the virtual machine down to these compute, storage and networking primitives that it's getting from Kubernetes. And all of that is happening inside of this Kubernetes environment. Now, yep, I'm worried here. So here's the architecture diagram. I think I had a network blip earlier and I made some updates to slides, and I think it made updates to slides that I'm now no longer using, even though they're Google slides. But here's the key. Thank goodness I have my last slide here, my last CLI slide. What you can see in here is that I again have the controllers. This is the KubeVirt controller that's running and applying that same pattern. And what is it controlling? Well, these are all of the containers in my environment. These are all the different virtual machines that are running in my environment. Each one of those pods has a hypervisor in it, and in that hypervisor is running the virtual machine that has been instantiated into there. Okay, now the final thing I wanna show you here is KubeVirt. Now, I've been talking about how the Kubernetes API is extensible, and this isn't a talk about CRDs and controllers and how you install them. I know that Whitney has hosted lots of other people who have talked about this. So take a look at many of the videos that she has available. But what I wanted to show you here is that this is the set of API extensions that are in place to run something as sophisticated as virtual machines on top of Kubernetes. There are so many different resource definitions, and there are so many different controllers that respond to these different resource definitions. So these are all those things that we can declare and that get declared as sub components.
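To make the top-level declaration being described concrete, here is roughly what a minimal KubeVirt VirtualMachine looks like. This is a sketch: the name and disk image are illustrative, and the lower-level resources (the VirtualMachineInstance, the launcher pod with the hypervisor, the PVCs) are created by KubeVirt's controllers from this one declaration.

```yaml
# Minimal KubeVirt VirtualMachine sketch: compute (cpu/memory),
# storage (disks/volumes), and network (interfaces) all declared.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true              # the controller keeps the VM running
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}  # NAT onto the pod network
      networks:
        - name: default
          pod: {}             # use the pod's network via CNI
      volumes:
        - name: rootdisk
          containerDisk:      # VM image shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

Same pattern again: one declaration of desired state, and a family of controllers reconciling it down to pods, PVCs, CNI and CSI primitives.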
There's so much sophistication that's happening in here. And that's what the Kubernetes API has allowed you to do, and that's the Kubernetes, all the things. So just... I'm gonna interrupt you briefly. Since we're wrapping up, if y'all have any more questions, please do ask them now or forever hold your peace. Now's the chance. All right. And so this is just, this is gonna be my last slide, and we're out of time anyway, but I wanted to put it all together on one slide here. What's the thing? I said, Kubernetes, all the things. Well, what we looked at is we looked at container images. We looked at clusters. We looked at virtual clusters and we looked at VMs. They all have their CLIs that allow you to work with these different open source projects, and they all have getting started guides. And they're all open source projects, and they're all open source projects that are in the CNCF. I mean, I think this is just like a poster child of the good of open source and the innovation that's happening in the open source space, because so much is possible. And you'll notice there that I bolded "many more". There are so many more open source projects, and by the way, commercial products that are following these patterns as well. So I'm kind of in love with it all. It's super cool. Back when I learned Kubernetes, which I don't feel like was that long ago, I definitely was taught, it's a container orchestration platform. And in fact, I was re-giving an old presentation and I had that definition in there and I had to be like, well, not really anymore. It's an everything orchestration platform. Yep, exactly. We have a comment: awesome presentation, thank you. Yes. And I just wanted to second that. This was an amazing presentation. Really appreciate you very much. Many thanks, many thanks. Thank you. I'm going to, if you feel ready, I don't see any last minute questions, so I'm gonna close it out. Okay.
Does that sound good? Thank you. Well, today it has just been such a delight, and so great to partner with you today, Whitney. Yeah, I hope we can do it again soon. Yeah, me too. So thank you everyone for joining today's episode of Cloud Native Live. It was really great to have Cornelia Davis here teaching us about how to Kubernetes all of the things. Y'all in the audience were especially wonderful. I loved your interaction and your questions. And here at Cloud Native Live, we bring you the latest Cloud Native code on Tuesdays and Wednesdays at noon US Eastern. So thanks again for joining today, and thanks to those of you who watched the recording, and we'll see you again soon.