Okay, let's get started. I have so much content. My name is Carson Anderson, I'm here from Domo, but I'm here to talk about Kubernetes, so don't worry, this isn't a pitch for Domo at all. Right there is my GitHub, at carsonoid. That's where you can contact me, and you can get the full source for this presentation; the whole thing's open source. To start off: who read the description for the session and thought to themselves, how is he going to fit all of that into 35 minutes? Yeah, I thought I'd have an hour. The good news is, I have a link at the end of the slides. I've prerecorded the whole thing unabridged, and you can go watch that; I go through things a little slower there. I'm going to skip some stuff and tailor this to you. So raise your hand if you've used Docker before. All right, that's good, we can skip some stuff. Raise your hand if you've used Kubernetes before. Okay. Raise your hand if you really want to know the dirty work of Kubernetes. Oh, good. Good, good, good. We are going into the dirty work of Kubernetes. I know the keynotes are all about magic, but I'm all about what's real, behind the scenes. We're going to pull back the curtain. Before I do that: I've been thinking about how to describe this talk. It's not a deep dive. It's whatever the inverse of a deep dive is. This is a low-altitude flight over the landscape of Kubernetes. We're going to breeze by everything, and I don't have time to focus on any one bit, because we're going to run right by it. But hopefully this will give you an idea of how everything fits together, and you can go look at the details later. Let's get right into it. I'm going to start with the basic user section. This is where we just make a really basic Kubernetes application, so we can talk about the magic behind it in the deeper sections. So, as a basic user, I don't care about the details. I'm going to just really quickly make some stuff, and I'm going to start with containers. We're all Docker people, so I can skip this. Say we have something we want to containerize, like this application, this presentation. It's really just a file system. You tar that file system up, stick it inside a box, put some metadata on it so you know what kind of runtime information it needs, and stick some labels on it so you can identify it amongst all your other images. Awesome: you've got one container, one application, one process tree. You're doing containers right. But we all know that's not really enough. A lot of times you want to run things together. One application per container, and don't do init systems inside containers; that's a bad idea. You want something better. Maybe you want to run a Prometheus exporter as an extra application, tightly coupled. Maybe you want an Envoy sidecar; you've heard that a lot. Maybe even some shared volume data. Docker doesn't really provide the low-level tools for that, but Kubernetes does: the pod. We all know what a pod is, so I'm going to skip a lot of this. A container, multiple containers, a container and a volume, any combination of all those things: that's what Kubernetes executes. You'll see "pod" and "container" used interchangeably, and the docs even do this, but really it's always pods.
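To make that concrete, here's a minimal sketch of the kind of multi-container pod I just described. The names, images, and the shared emptyDir volume are all mine, purely for illustration:

```yaml
# A hypothetical two-container pod: the main app plus a tightly
# coupled sidecar, sharing one volume and one network identity.
apiVersion: v1
kind: Pod
metadata:
  name: presentation
  labels:
    app: presentation
spec:
  containers:
  - name: web
    image: nginx:1.15            # the main application
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: my-exporter:latest    # hypothetical sidecar image
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}                 # scratch space shared by both containers
```

Both containers share the pod's single IP, so the sidecar can reach the app on localhost; that's the tight coupling pods give you.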
So we can make a pod. We can deploy this presentation in one pod. Awesome. But we want fault tolerance, we want load balancing: we want two pods. We could make two pods manually; we all know this. No, that sucks. So we're going to make a deployment, right? And a deployment is just everything you need to know about how to create the pod: the template, the replicas, and how to label it. So we make a deployment, and it makes the pods. Quick note on the symbology here: everything in Kubernetes is going to look like this. There are lots of properties underneath the name, and I'm only going to point out the ones I care about at any point in time. There's way more than just a port defined in these pods. So we make the deployment, and we've got our application running. That's really cool. But we need to get to it, and Kubernetes gets us to it with a service. We take our service, point it at our pods, and that's how, from inside the cluster, you go to either the service IP or the service name and get to your pods. And then you do cool things with the selectors: as things change, say we change the image in the deployment, the pods cycle out and the service keeps up. Again, I'm skipping this; watch the video. We just point at the service. It's magic. I'll tell you how that happens later, but for now I just want to understand what we're creating as a basic user. Finally, even those of you who use Kubernetes a lot may not have used ingress yet. Ingress rules are the way we get from outside the cluster into the cluster. We create rules that say: hey, if you're coming into a predefined load balancer (as a basic user I don't care where it came from, I'm just going to assume it's there; I will break it down later), destined for that host, go to this service, and traverse all the way through. Awesome. We do all of that with master access: we have a place to go, and credentials, and we're going to use kubectl. I know they say don't use kubectl for everything, but I'm a really basic user. So we'll use an imperative command, create a namespace called this thing, and declaratives: create whatever is defined in this YAML. And this is boilerplate, day-one Kubernetes stuff. We've got a deployment with all the selectors and the pod spec. We've got the service using the load balancing, discovery, and abstraction parts of services. Notice the service listens on 80, but we're going to some weird port on the back end. That's fine. And we're creating that ingress rule, which again is just a rule: if you're going to this hostname, anything under this path, go to this service and this service port, using the service abstraction. So we take our credentials and our files, send them to Kubernetes, and Kubernetes spits out the things we asked for. That makes sense. It also spits out some things we didn't explicitly ask for, but that we expected. Also note these weird generated names; I'll tell you where those come from in a bit. Let's do it. While I'm doing it, you can start loading either of those URLs, and it should stop returning 404s here in a bit. I've also got it at the bottom; one just redirects to the other. For the basic demo this is just auto-type, because I'm not going to fat-finger stuff live. You configure the master address you're going to and the credentials (we knew we needed those), use a context to tie them together, and then use that context. That's how you switch around between clusters. Awesome. Now let's make things, basic-user style: really simply. I imperatively create the namespace, and then, one at a time, throw my YAML files at it. This is not advanced, but I'm a basic user; this is simple. So there we go, we're making all our stuff.
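For reference, here's roughly the shape of those three files. This is a sketch matching what I described; the names, image, and the odd backend port are placeholders of mine:

```yaml
# Deployment: pod template, replicas, labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slides
spec:
  replicas: 2
  selector:
    matchLabels:
      app: slides
  template:
    metadata:
      labels:
        app: slides
    spec:
      containers:
      - name: slides
        image: my-registry/slides:v1   # hypothetical image
        ports:
        - containerPort: 8080          # the "weird" backend port
---
# Service: listens on 80, forwards to the backend port.
apiVersion: v1
kind: Service
metadata:
  name: slides
spec:
  selector:
    app: slides
  ports:
  - port: 80
    targetPort: 8080
---
# Ingress: host + path -> service:port. Just a rule; something
# else has to enforce it (more on that in the cloud section).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: slides
spec:
  rules:
  - host: slides.example.com           # hypothetical hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: slides
          servicePort: 80
```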
So someone raise your hand as soon as that loads; just start hitting refresh. And from now on, if you're looking at your laptops, I'm going to assume you're looking at my slides. This is how you can follow along if you can't see what I'm doing. So, this is not fancy, right? Anyone got the application up yet? OK, we'll move on. It's up? All right, awesome. It's all the slides; it's just web server files. So I'm a basic user: yay, I've deployed my application. But you're here for the meat, so let's get into the meat. As a cluster admin, I'm not satisfied. I want to point out these symbols. I've scattered them all over this section and a little bit through the rest. They mark places where you can replace, extend, and heavily configure Kubernetes. I'm not pointing out all of them; there's stuff I didn't even know you could add on to. But you know now: it's built to be extended. I'm going to point out some big ones throughout the session. So, we sent files to Kubernetes, and we got stuff. But we know Kubernetes isn't just one nebulous thing, even though we've been calling it that the whole conference. It's actually a bunch of things, and even more than this; these are just the big ones. The top three are master components. The bottom two are primarily node components, though they can run on the masters too. I'm going to cover them all one at a time. Also notice that basically everything is pluggable, replaceable, configurable. Even the kubelet: I've learned recently that it's replaceable; rktlet, for example, swaps out the kubelet's built-in runtime integration. So really, basically everything in Kubernetes can be replaced or extended. Starting with the API server: when we sent our files into the API and got stuff back, we could guess that was the API server. That makes a lot of sense. Notice we've got our first extension point here: custom resource definitions and API server aggregation. They talked about this in the keynote, and I don't have time for it, but they're really cool ways to let Kubernetes do the dirty work of adding your own stuff to the API; there's a minimal sketch of a custom resource definition just below. We send things in, we get things out. But those things don't live in the ether; these resources have to live somewhere. And again, you know this: it's etcd, or more commonly an etcd cluster. etcd is great, because a lot of the power and reactivity of Kubernetes is just etcd features surfaced up through the API. They didn't even write them themselves; they just used etcd features. It's distributed, fault tolerant, all that good stuff. And that's all the API server is. It's not magic. It's the heart and veins of Kubernetes: it's how everything connects up and talks to everything else. But it's not doing anything for you. None of that magic is really there yet.
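Before moving on, here's that CRD sketch. It's minimal and hypothetical; the group and kind names are mine:

```yaml
# A hypothetical CustomResourceDefinition. Once applied, the API
# server stores and serves "ExtraNode" objects just like built-in
# resources, with no extra server code.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: extranodes.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    kind: ExtraNode
    plural: extranodes
    singular: extranode
```

After that, `kubectl get extranodes` just works. Of course, the objects don't do anything by themselves; a custom controller watching them is what makes them mean something, which is exactly the pairing I'll get to in the controller manager bit.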
Let's move on to the scheduler. The best analogy I've heard is that the scheduler is the maître d' of Kubernetes. It hooks up to the API server with a long watch. That's one of those etcd features, and it's really cool: it's basically a pub/sub event model. Nothing in Kubernetes, if you're doing it right, is listing things, checking for differences, and reacting. You just say: I'm subscribed to all of these, tell me about changes, and I'll react instantly. Everything uses that. So the scheduler is watching the API server for pods that need a place to live. Like a maître d' sitting at the front of a restaurant: I've got people coming in, I've got lots of tables, I'm going to decide who sits where. So the scheduler says: OK, you. You live there. You live there. And there are lots of ways to influence that decision; it's really, really highly configurable. I don't have time to cover all of these, but basically there are things you put on the pod or the node that give the scheduler information about what your pod needs, and it will make smart decisions about where to run it. But if that's not good enough for you: replace the scheduler. Write your own custom scheduler. I've seen it done in a few lines of bash, as simple as: pod doesn't have a place to live? First node. Next pod, next node. It's not smart, but you can do it. A custom scheduler is really useful when you need to make scheduling decisions based on outside information that the regular scheduler just doesn't know about. And if you're worried about doing that, do both. You can run the kube-scheduler and let it do the default for all pods, and then per pod say: no, for this pod, use that scheduler to decide where it lives. There's a sketch of exactly that at the end of this section. So that's the scheduler; it's the maître d'. The controller manager: if the API server is the heart, this is the brain. It's the thing that made all that extra stuff. If we had time to poke around in the API, we'd see we made almost none of it; we made two things, and tons of other stuff got created. It's the controller manager doing that. But notice it's the controller manager, and just like a real-life manager, it's not doing any work itself; it's in charge of the guys who do the work. Inside the controller manager (and there are way more than the three I'm pointing out) are all these core control loops that sit there, watch, and make one specific bit of logic happen for Kubernetes. We've got another extension point here. You heard it this morning: the operator pattern, or custom controllers. This is even easier now with the metacontroller, which I'm really excited about, and it's almost always paired with custom resource definitions. We're using this right now to add nodes to our cluster by creating an extra resource in Kubernetes that says "I want more nodes," and a controller makes them. All of these controllers talk to a bunch of things in the API. It is not one-to-one; I'm just showing you that there are lots and lots of connections. All the controllers are watching and making things happen. These are the proactive parts of Kubernetes. For example: a user creates a namespace, a controller makes a service account, another controller makes a secret. A user makes a deployment, a controller makes a replica set, another controller makes the pods. A user makes a service, a controller makes an endpoints object and points it at the pods. I'm not going to cover endpoints; they're really just the actual routing meat behind services. I just want you to see that this is the reactive part: when you say Kubernetes, quote unquote, "did something for me," it's a controller in the controller manager that did it. And with audit logging and RBAC used extensively, you can actually get really nice logs that say exactly which piece did what action to each of your resources. So those are the master components: the heart and veins, the maître d', and the brain. The bottom two do sometimes run on the masters, and I'll cover that in the cloud section, but they're primarily node components.
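Before we get into those node components, here's the per-pod scheduler sketch I promised. Minimal and hypothetical; the scheduler name is made up:

```yaml
# A hypothetical pod that opts out of the default kube-scheduler.
# Only a scheduler registered under this name will bind it to a node;
# every other pod in the cluster still gets default-scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: special-workload
spec:
  schedulerName: my-custom-scheduler   # defaults to "default-scheduler" if omitted
  containers:
  - name: app
    image: busybox:1.29
    command: ["sleep", "3600"]
```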
The kubelet hooks up to the API server like everything else does, and its job is to live on every single node and make containers real. That's all it's there for. It's watching and saying: I'm on this node, you have a pod scheduled here, the maître d' said go sit there, and somebody has to make it real. That is the kubelet. It talks to your container runtime. There are actually way more than the two I'm going to talk about; anything that implements the standard container runtime interface can do this. It talks to the runtime, creates the pod, makes the containers real. It also does cool things like liveness probes, because your process running doesn't mean you're alive, and readiness checks: are you ready to receive traffic? So it's constantly checking up on these containers and reporting information back up to the API server. It just makes containers real. kube-proxy I'm going to gloss way over, because it's covered in depth in the network section in a second. But basically, its job is to talk to the API server (everything connects to the veins) and make services real. It's constantly watching all services, and on every node, every single service is made real, all the time, by kube-proxy. If something changes about the underlying service, say a pod behind it changes or goes away, kube-proxy makes it real. A new one comes in: kube-proxy makes it real. That's its whole job. And you can completely swap out kube-proxy entirely. Different network providers do it, load balancing companies do it, and I've seen it done with plain Linux kernel features. There are a lot of ways to do that. And that's it. That's the magic behind Kubernetes, quote unquote: where everything you made went, and how it all worked. Now let's talk about networking in more detail. As the network admin, I want to know how things actually talk to each other. I care. I know we're not supposed to, but I do. So we're going to take the basic stuff we made and, starting with the pods, explain how networking works in Kubernetes. I'm only covering default Kubernetes networking; the provider you use may change a lot of this. But every pod in Kubernetes, as a fundamental tenet, has an IP that is unique across the entire cluster. Pods live on nodes, because they're containers; they have to run somewhere. Nodes have a unique IP across the entire cluster too. Each node also has a CIDR range that says: every pod on me gets an address in this range. That's mostly for routing reasons, and it can change entirely based on the network provider, but I'm covering the default, where each node just gets a big chunk of a larger range. So let's make some pods and put them on nodes. All the IPs make sense; they line up like you'd expect. Now they have to talk to each other, and they do that through a network provider. I'm going to go way more in depth here, because this is, arguably, the most replaced part of Kubernetes. If you've ever gone through the install page and looked at all the tabs for network providers (and that's not even all of them), there are tons of ways to do this. And the reason is: it's not that hard. It really isn't. To be a network provider in Kubernetes, you've got to do three things. Check these three boxes and you are a valid network provider. Technically, you have to ship CNI plug-ins, but I'm talking about functionality. First rule, and note that the docs say "containers" where I've been saying "pods"; they use the words interchangeably. This is straight from the docs:
All containers (pods) can communicate with all other containers (pods) without NAT. So this is simple. You've got lots of pods, and it's flat. It's a very simple architecture: you just need to be able to get from everything to everything directly. If this scares you as a security person, do not worry; I have some solutions for that in the power user section. But this is the network architecture. It's very flat. Second rule: all nodes can communicate with all containers (pods) without NAT. So nodes, pods: flat. Everything just talks to everything. I drew it one way so it's easy to see, but it's actually a full mesh. Again, everything needs to reach everything at the network layer. The third rule is a weird one: the IP a container sees itself as is the same IP everyone else sees it as. This basically means don't munge the IPs. Don't let me think I'm sitting somewhere while everyone else thinks I'm somewhere else. It's very confusing. Just keep it simple. According to the docs, this is mostly done to make it easier to move from a VM architecture to a pod architecture. So: you do those three things, ship the CNI plug-in, and you're a network provider. There are so many ways to do this. That covers the pods; let's move on to services and the details of how those work. When I make a service, it's got a selector that points it at a subset of pods. It's also got at least one port, hopefully. It listens on one port and forwards to one on the back end, and they can be different. That's the abstraction, and the load balancing, and everything else. You can have multiple ports; they have to be unique on the listening side, but they can forward to the same backend port. That's fine; you can use the abstraction that way. We're going to talk about just one. Services also have a type. The YAML I used in the basic user section didn't specify a type, because there's a default: the bottom one here. There are actually four types, by the way; I'm just covering the three big ones, and I'm going to start from the bottom up, because they build on each other. So when we create a service, whether we say so explicitly or not, it definitely gets a cluster IP, as you'd expect from the ClusterIP type. When you do that, a controller in the controller manager says: here, let me give you a cluster IP. That makes a lot of sense. To illustrate what it's for, we're going to make some web server pods and a cache pod, and the web servers need to use the cache. We could point them straight at the cache pod's IP. It's flat; it'll get there. But that sucks, because what happens if that pod goes away, or we want to scale up the count, or any of those other things? We'd have to reconfigure. So we're going to use a service. That makes a lot of sense. ClusterIP service; we care about the cluster IP. We point our web server pods at that IP, they get to the cache behind it, and it never changes. I do want to point out real quick, for those of you who don't know: you can use names instead. The DNS add-on in Kubernetes is optional, but really highly recommended. You can go to the service name, the name with the namespace, or the full cluster qualification. They're all valid ways to get to your service, and they never change. I'm going to keep talking about IPs, because the names just resolve to them anyway. So: we point our web servers at the service IP, get to the cache pod, and we never have to change anything. If something new comes up behind it, we don't care; we just use the abstraction.
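A minimal sketch of that cache service; the names and port are hypothetical:

```yaml
# A hypothetical ClusterIP service in front of the cache pods.
apiVersion: v1
kind: Service
metadata:
  name: cache
  namespace: demo
spec:
  type: ClusterIP          # the default; shown here for clarity
  selector:
    app: cache             # matches the labels on the cache pods
  ports:
  - port: 6379             # what clients connect to
    targetPort: 6379       # what the pods actually listen on
```

With the DNS add-on, clients in the same namespace can just use `cache`; from elsewhere, `cache.demo` or the fully qualified `cache.demo.svc.cluster.local`. All of them resolve to the same cluster IP.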
Let's talk about how that hop actually works. So far we've just said, oh, it's magic. Well, I care about the magic. So we're going to put these pods on different nodes, because they're going to be on different nodes more often than not. Notice the pods live on nodes, but the cluster IP doesn't. You can go to every machine in your cluster and list all the addresses, and you'll never find that IP anywhere. It's not on an interface. With default networking, it's just a target in iptables. That's all it is: a thing that says, traffic coming from here, destined for that IP, gets randomly assigned to one of the active pods behind it. And the thing that writes those rules is the thing that makes services real. That's kube-proxy, sitting there watching all of the services and endpoints and making them real. It talks to iptables. Something changes in the API server: it makes it real. A service scales up: another change, it makes it real. That's kube-proxy's whole job, unless you've replaced it, in which case this might look a little different. So that's ClusterIP. Let's move on to NodePort. ClusterIP is great inside the cluster, right? Pod to pod, easy. But not all our communication is pod to pod. We know that. We need to get to services from the rest of our infrastructure, and that is where the NodePort service comes in. So we'll make a NodePort service. Like I said, it builds on ClusterIP; it still gets a cluster IP. But it also gets, ta-da, a node port. That node port comes from a weird high range, because there's a controller in the controller manager assigning it for you. And node port means nodes: that port, via iptables, becomes basically another target, another entry point into the same iptables rules that pod-to-pod communication uses, except the node port is reachable from anything that can reach the node. So you can point your clients at that weird node port, and they'll get load balanced by the normal mechanisms from there. That's awesome, but ugly. If you're looking at the slides right now, or you look at them later, you're not going to some weird node port. I'm not going to give you that URL. So we need one more step: the LoadBalancer type service. This is really cloud specific, by the way. The first two are always there; whether this one exists, and exactly how it works, depends on the cloud. But in general, suppose we're in a supported cloud. We make a LoadBalancer type service, and a controller in the controller manager (because that's where all the real brain action happens) sees it, talks to the cloud provider, makes the load balancer, and points it at the node port. Then you point your clients at that load balancer, which can be static and never change, and traffic traverses in and reaches all the nodes as they scale up and down. Again, that's a little different in different clouds, but the general idea is that you let Kubernetes make and provision those load balancers for you. You can also just make NodePort services and build the load balancers yourself, if you want. So that's how we get all the way through. At the network layer, that's the meat of how Kubernetes functions.
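To recap the three types in one place, here's a hypothetical LoadBalancer service. The nodePort is normally auto-assigned from the high range; I'm pinning it here just for illustration:

```yaml
# A hypothetical LoadBalancer service. It layers all three types:
# a cluster IP, a node port opened on every node, and (on a
# supported cloud) a provisioned external load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80           # what the load balancer and cluster IP expose
    targetPort: 8080   # what the pods listen on
    nodePort: 30080    # optional; usually left blank and auto-assigned
```

`kubectl get service web` then shows the addresses filling in: the cluster IP immediately, and the external IP once the cloud controller finishes provisioning.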
On to the cloud section. As a cloud admin, I care about execution. I care about where things live, right? This is all well and good, but I've got to spin stuff up. And in Kubernetes, we make a big distinction between worker nodes and master nodes. They can be the same machines, but in big environments they're not, because there's a lot of security and safety we get from separating them out. It's not an elected role like in Swarm. So let's put our components in places. All of these represent pieces of Kubernetes code that need to execute; they need to live somewhere. I'm going to describe a default HA master scenario. It may look different depending on what you're doing, but we take the API server and run it on every master. That is not a surprise to anybody, right? We take the scheduler and put it everywhere. Kind of. The code is there on every single master, all the time, always running. But the schedulers talk amongst themselves through their local API servers and elect a leader: you, you're in charge. The leader is the maître d' now, and the other two are just waiting for him to die. They're just waiting. If he doesn't check in, one of them takes over his job. We do the same thing with the controller manager. You don't want more than one brain; that would be really confusing. We're not like a Stegosaurus, which supposedly has a second one in its tail. So again, the controller manager runs on all the masters, but if the active one explodes, another takes over. And the elected leaders don't have to be on the same master; notice they are not. You just need one of each active at a time. I said the kubelet was a node component, but it really runs on all the masters too. We also know that the API server has to talk to the data store. It has to put data somewhere, and that's etcd. As a cloud admin, I could use a hosted etcd service: just make etcd somewhere and let the cloud deal with it. That's great. Or you do what we do and hyper-converge: every master runs a copy of etcd, they cluster amongst themselves, and each API server uses its own local copy. That way I have full control over etcd, and I reduce the number of VMs I need. Then there's our master access. That's backed by a cloud load balancer pointed at the API servers. We point the kubelets, all our node components, at it, and even our client libraries from outside the cluster. If you're a pod inside the cluster talking to the API server, you actually go straight to the masters' private IPs through an endpoints object, like normal service discovery. But if you're a node component or an outside user, the load balancer is what you should use, because you get all the cloud magic of normal load balancing and fault tolerance. Now, ingress rules. I didn't cover those in the network section; I don't know if you noticed. That's because they're really more of a cloud thing than a network thing. Ingress rules are just rules. We made one earlier, but we need something to make it real, and the thing that makes it real is an ingress controller. You will probably hear a lot of offers for these all over the expo floor. There are tons and tons of companies doing this right now, because it's open: Kubernetes doesn't provide one, it says "go make one." So the rules have to be enforced: we run ingress controllers, and we also need a load balancer to get into them. I'm going to walk through a couple of different ways we've done ingress, because we learned some things as we went. We've got nodes. They are normal nodes; I'm just not showing the other components. At first, we set aside a subset of our nodes and called them ingress nodes, and ran the ingress controller on just those nodes as a daemon set with a host port mapping. I'll explain why in a second, right after this sketch.
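A minimal sketch of that setup, assuming hypothetical labels and a hypothetical controller image:

```yaml
# A hypothetical ingress controller pinned to dedicated nodes.
# hostPort maps the node's port 80 straight into the container,
# old-school Docker style: very direct, but it invites port madness.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-controller
spec:
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      nodeSelector:
        role: ingress          # only runs on nodes labeled as ingress nodes
      containers:
      - name: controller
        image: my-ingress-controller:v1   # hypothetical image
        ports:
        - containerPort: 80
          hostPort: 80         # node port 80 -> container port 80, no extra hop
```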
I didn't cover host ports earlier because you shouldn't use them. But we did, and there's a good reason. It's very direct. This is the old-school Docker port mapping: this port on the host goes straight to this port on this container, which is very direct and neat. And if you're really latency sensitive, you might want that. But you get into port madness: you have to start managing which ports things listen on, and no two things can listen on the same host port. We accepted that for ingress because we'd set aside dedicated nodes, and it's a straight hop. But if you don't want to go through that, you can instead just create a LoadBalancer type service for the controller. You'll get a node port (that's not a surprise), you'll bounce in through that, and then you'll hop again from the service. The downside is you balance twice. You've got a double hop: you get load balanced once at the front, and then the service may bounce you again somewhere else. For most workloads, honestly, it's fine, but it's something to be aware of: if you're not jumping straight in, you might get jumped twice. You then take your clients, point them at that, and you're almost there. We've reached the controller, but it's enforcing rules that point at a service, so we have one last hop to get to our service. So we've got our service and our pods, even on some more nodes; that's fine. Once you've hopped in from the load balancer to the ingress controller's service to the ingress controller itself, it can just do normal, standard in-cluster load balancing to the backend services inside your cluster. And you've gotten all the way through. That's exactly what you're doing right now if you're looking at the slides on your phone. Now, on to the Linux section. I have to skip a lot of this because of time, but I'm going to cover one part of container essentials, because I think you need it to understand Docker and Kubernetes. And that is Linux kernel namespaces. Not Kubernetes namespaces: Linux kernel namespaces, which are ways of isolating processes in Linux. You can take your application (your process, really, but we're talking about containers) and isolate it in a bunch of ways. You can stick it in its own process namespace, its own file system namespace, its own network namespace, and really it's all of them at once: when you make a Docker container, you get split along all of those axes. But the really cool part of the feature is that it's not one-to-one. It's not one container, one set of namespaces. You can join other containers or processes into an existing namespace. This is how you take that one container and make your pod: you join all the containers into one existing network namespace so they can share that single, cluster-unique IP. I'll go over how Docker does that in a minute; I have to skip the rest of this. Needless to say, cgroups are the Linux way you split up resources, and they also provide some built-in accounting and stats management. Again, watch the video if you want more detail on this. And overlay file systems I only put in here because the way they work and save space is neat. Again, no time for them.
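One thing worth seeing before we move on: that namespace-joining idea shows up directly in the pod spec. Containers in a pod always share the network namespace; in newer Kubernetes versions (v1.10 and later, initially behind a feature gate) you can opt into sharing the PID namespace too. A hedged sketch:

```yaml
# A hypothetical pod opting into a shared PID namespace, so each
# container can see (and signal) the other's processes. The network
# namespace is shared regardless; that's what makes it a pod.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true
  containers:
  - name: app
    image: nginx:1.15
  - name: debugger
    image: busybox:1.29
    command: ["sleep", "3600"]
    # from inside this container, `ps` now shows nginx's processes too
```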
I do want to break down the nodes, though. We covered kube-proxy in the network section; now we care about the kubelet. We care about running things, right? So it's watching the API server, checking: hey, do I have pods that you want on the node I'm on? OK, you do. Let me make them. We're going to make an interesting pod just for argument's sake: nginx, a Prometheus exporter, and Envoy. I'm putting all those containers in there because I want to show you how they share an IP address. The kubelet talks to your container runtime, and right now that's mostly Docker; I think it's safe to say most Kubernetes clusters are running Docker. Again, it's not the only one, there are lots of options, but I'm going to talk about how Docker does it. So when the kubelet sees that you want that pod running, it talks to Docker, and it first makes this weird "infra" container, which I'll cover in a second. This is why, if you go on any Kubernetes node and run docker ps, you see about twice as many containers as you thought you needed. It makes that infra container, then joins all of the other containers into the network namespace of that infra container, which is how they all share one address. And the important part of the infra container is that it's really just a tiny piece of code whose whole job is to be there and not die. That's it. Just stay alive, so I never have to restart or recreate you, so the pod always keeps its address. Then everybody else gets joined in. I don't have time for CNI; it's the standard way the pod's IP address gets allocated and discovered. rkt is the other big alternative for Kubernetes, and there are some important and cool differences between using Docker and using rkt. rkt can do CNI the normal way. CNI actually came out of rkt, so the kubelet doesn't have to drive it; it can be native. But more importantly, that whole song and dance with the infra container? Gone. rkt is pod-native: its lowest-level unit is already a pod, not a container. So when you want to create a pod in rkt, you just create a pod in rkt. You also spin it off, meaning there's no long-running daemon. There's no rkt engine just sitting there the whole time, because each invocation is kicked off self-contained. If you want another pod, you spin off another rkt. It's all file systems and process trees underneath. And one last thing: you can do hypervisor isolation with rkt. You just annotate the pod to pick its stage1 (a rkt term) and it basically runs the pod inside a minimal, super-lightweight VM. So if container isolation scares you and you want some extra protection, there are limitations, but you can absolutely do it, and you can do it for just some pods. I don't really have time for logging. We know we use kubectl logs, and most of you understand that it connects all the way through to stream your data directly. Also, basically any logging driver that supports Docker supports Kubernetes; you just hopefully grab some metadata on the way out so you can tag your messages and filter them.
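Here's a sketch of that exact pod. The images and tags are hypothetical, and I've added the kubelet's liveness and readiness probes from earlier so you can see where they live:

```yaml
# A hypothetical three-container pod. All three containers join the
# same network namespace (via the infra container under Docker), so
# they share one IP and can reach each other on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecars
spec:
  containers:
  - name: nginx
    image: nginx:1.15
    ports:
    - containerPort: 80
    livenessProbe:             # kubelet restarts the container if this fails
      httpGet: { path: /healthz, port: 80 }   # hypothetical health endpoint
    readinessProbe:            # kubelet removes the pod from endpoints if this fails
      httpGet: { path: /, port: 80 }
  - name: exporter
    image: nginx/nginx-prometheus-exporter:0.2.0   # hypothetical tag
    args: ["-nginx.scrape-uri=http://localhost:80/stub_status"]
  - name: envoy
    image: envoyproxy/envoy:v1.6.0                 # hypothetical tag
```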
Let's get to the power user section. This is kind of a cool grab bag of fun Kubernetes features. I'm going to add a bunch of stuff on top of my normal deployment with no code change: more features without having to change anything in my image. So go ahead and reload that page if you want, or just add /markdown to the URL. I'm kicking off the power demo, and it's doing an apply. I'm a power user, right? I'm not going to create things one at a time. I just apply the whole directory, which means "create or update; do what you have to do." And then, hopefully in about a minute or so, you should have that markdown page running, with links that break things down. Basically every feature I'm about to show is explicitly demonstrated in the markdown examples, pretty well commented. So: fun features I want you to be aware of. Security context is a feature where you define a bunch of rules about the runtime posture of your pods and containers: things they can and can't do. You create that and apply it at the pod layer, the container layer, or both; it can even differ between containers in the same pod. It's just extra security information telling the kubelet how you want your pod run. Network policy is the answer if the flat network architecture scares you: it's how you create rules that block access between things. And it's just labels, it's just YAML. Policy is just rules. By the way, you need something on top that enforces network policy; there are a lot of options, and you need to go find one that reads those rules and makes them happen. But you can say: pods with these labels can only talk to pods with these labels, on this port. And you can start locking things down declaratively, where you don't care about the details; you just say, make this happen. And it's explicitly decoupled from the network model. The downward API is really cool. It's a way of taking metadata about your running pods (and this isn't all of it: labels, annotations, everything) and feeding it through to the containers at runtime. If you click either of the proxy links on the markdown page, they'll spit back their pod information and their node information, because they're getting it from the downward API at runtime. ConfigMaps I didn't cover. They're just key-value data pairs, config maps and secrets; secrets are handled a little differently, but that's all they are. You can mount them through to your containers as volumes, and it's even live, which is really cool. The markdown you're looking at right now comes from a config map, and if I changed that config map right now, in about a minute the page would change, without me restarting the pods, because the volume refreshes live. You can also take those same key-values and inject them as environment variables, so your Docker images that expect configuration that way will work just fine. Affinity is one of those things I wish I had time to cover. It's one of the cool advanced scheduling features that says: here's how these pods relate to each other. I want this pod always scheduled next to that pod; no matter where you put one, if it's there, put the other one there too, tightly coupled, all the time. It's just extra information for the scheduler about what you want. You can also couple loosely and say, try to, but I don't care if you have to split them up, and you can even weight that. And you can do the same thing for nodes: this pod has to be on a node like this, or try to put this pod on a node like this. There's a sketch right below.
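A minimal sketch of hard node affinity plus soft pod affinity; all the label keys and values here are hypothetical:

```yaml
# A hypothetical pod with both flavors of affinity:
# - required (hard) node affinity: only schedule onto GPU nodes
# - preferred (soft) pod affinity: try to land next to the cache
apiVersion: v1
kind: Pod
metadata:
  name: gpu-worker
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # "next to" means same node
          labelSelector:
            matchLabels:
              app: cache
  containers:
  - name: worker
    image: my-gpu-workload:v1   # hypothetical image
```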
That kind of rule is useful if you've got workloads that have to use GPUs: you say, you have to be on a node with a GPU, or you're not going to work. Anti-affinity is the opposite of all those things I just said: hard and soft rules for things that don't like each other. And they're usually paired, actually. If you have GPU nodes, you'd want hard affinity for your GPU workloads and hard anti-affinity for your non-GPU workloads, so the GPU nodes stay reserved. And that is the power user section. On to the credits. I use Sozi as my presentation software. It's open source: you make an SVG, you can animate every layer independently, and it just spits out plain web server files. So if you like Prezi but you don't want to pay for it, because we're all open source here, or you just really want to hand-build an SVG, this is so cool. And it can be as hard or as easy as you want it to be; I could probably teach a class on it by now. All the logos I used are property of their respective companies, and OpenClipart was immensely helpful for all of the art and the diagrams I made. I stole a lot of it and altered it, so I highly recommend you use it, or contribute to it. And I do have a few minutes for questions. Here are all the links. I'll keep the demo up as long as I can; I'll probably move it to one of those free Red Hat clusters at some point, so why not. If you want the video, it may or may not be there yet. They told me they're trying to finish editing, so there's a placeholder, but that link will always be the right one, even when it redirects you. Does anybody have any questions? Yes. ...I don't know, I'd have to double-check on that. Yeah, it might just be a detail I missed. So, thanks for coming. Feel free to come talk to me. Thank you.