Good afternoon. Okay, I brushed my teeth. I took a shower. I'm clean. I smell good. Why are y'all so far away from me? If you don't mind, if you can, it'll just make me feel a whole lot better if y'all came a little bit closer. You don't have to come to the front row, just a little bit closer. Let me look into your eyes. Will you do that for me? Please? Pretty please? One knee? What do I have to do, please? Just a little bit closer. Thank you. I appreciate y'all. Thank you. Thank you. We have family here in OpenStack, so I want to feel like we're all a part of the same crew. I can see you, talk to you, interact with you. How's everybody feeling? Feeling good? Anybody go to those keynotes this morning? That was pretty exciting. Some exciting happenings in the keynote. Definitely enjoyed that. Have y'all been attending? Oh, let me start this. So I am from Texas, and in Texas, y'all is a real word, so I will use it throughout this conversation. So when I say y'all, y'all know I'm talking about y'all. So have y'all been attending the Kubernetes, the cloud native computing sessions today? Good sessions? Yeah? Good? All right. How many of y'all are using Kubernetes right now? Nice. How many of y'all are using OpenStack? How many are using Kubernetes and OpenStack? Two hands. Okay. Okay. There's a couple of you all. Good. All right. Well, today, we're going to talk about self-driving Kubernetes on OpenStack. That is a mouthful, but hopefully by the time I'm finished, you understand exactly what I'm talking about.

All right. So a little bit about me. My name is Tony Campbell. I'm director of educational services at CoreOS. How many of you have heard of CoreOS? Good. Nice. All right. So I'm with CoreOS. Basically, I take care of traveling around the world and teaching people about Kubernetes, about containers, about the CoreOS stack. Before joining CoreOS, I spent about 14 years at Rackspace. So I was at Rackspace when they founded OpenStack and worked closely with the teams who did that. So I've been around the OpenStack community for a long time. So that's why I wanted y'all to stay closer to me, because this is family. This is my 12th summit. First one was in Boston back in 2011.

For those of you who may not be familiar with CoreOS, we are members of the Kubernetes community. We do a lot of work around Kubernetes and containers. So how many of you have heard of etcd? Nice. All right. So etcd is a project that came out of CoreOS. etcd, for those who don't know, is the backend store, the distributed key-value store, that's used across all Kubernetes deployments. So if you've deployed a Kubernetes cluster, the state for that Kubernetes cluster itself is being stored in etcd. We also started the project called rkt, pronounced Rocket. How many have heard of rkt? Good. So rkt is a container runtime, kind of like containerd. So most people will use Docker containers, but there's also rkt containers. I said "docket" containers because I made a rhyme: Docker containers, rkt containers. Also Clair. So Clair is an image security scanning tool that we have for container images. And also flannel. So if any of you are using the flannel overlay network, those are all projects that were started by CoreOS, along with a bunch of others. If you're interested, go to our GitHub account and take a look. We have a ton of open source software available for you there.

But today, the topic is self-driving Kubernetes. Self-driving Kubernetes. You've got to let that roll off your tongue. Say it with me. Say self.
Driving Kubernetes. Okay. These people over here. Self-driving Kubernetes. I love it. See this guy. Oh, wait. He's with me. Okay. That's why he's so loud. Okay. What is self-driving Kubernetes? So this is an image from what's called Waymo now, the Google car, right? So this is what the Google car sees when it's driving, an autonomous vehicle driving the streets. Ideally, a human is not behind the steering wheel. But this is what the car sees. So it takes everything in its view, makes images and pictures of it, and uses these sensors to drive the car, right? But the key element, the only thing you need to take away from this slide, is that ideally the driver does not have to have their hands on the steering wheel, right? The car can drive itself. The driver does not have to have his hands or her hands on the steering wheel. Okay. What does that have to do with self-driving Kubernetes? Glad you all asked.

All right. So operators, how many of you are operators? You take care of systems. You take care of infrastructure. You wake up in the middle of the night when the phone goes off. Yes. We love you all dearly, all right? A lot of times the things you all have to take care of are numerous. So you've got things like security patches, right? Dirty COW comes out, or whatever the new exploit is. You've got to get up and you've got to patch all your servers, right? And make sure that they are not vulnerable to this attack. Okay. Got your hands on the wheel, all right? Deployments. Developers, how many developers in the house? Software developers? All right. Good deal. Software developers write this awesome code and we have to get it into production, right? So we have to deploy that code. Writing it is usually a joy, but sometimes getting it into production can be a pain. You definitely have to have your hands on the wheel. How about upgrades, all right? How many of you have upgraded an OpenStack cluster? A live production OpenStack cluster? Good times, right? You haven't lived until you've done that, right? How about any Kubernetes upgrades? All right? Been through that a couple of times? Good, right? You have to upgrade these servers. What if you want to scale? You've had much success, and now you want to scale these servers. You need to back up these servers. These are all things that our operators have to do, and sometimes our developers help out, but they all require our hands on the wheel. We have to get in, build tools, pull this stuff off.

What if, much like Google's self-driving car, we had self-driving infrastructure, meaning that as an operator, if I wanted to put my hand on the wheel, I could, but I could sit back relaxed and let the infrastructure do its thing? Well, if you'll play along with my imagination for just a minute, walk with me here, I think that for self-driving infrastructure you need three elements to make that happen: controllers, this concept of self-hosted, and automatic updates. We're going to dig into each of these. Let's talk about state. Let's start with a thermostat. What does a thermostat do? I'll wait. I'm patient. What does a thermostat do? Controls the temperature, right? Okay, so if you're in your house and it's a little chilly, a little cold, you can go adjust the thermostat, and the thermostat, through all its magic, will eventually make the temperature do what? Go up, right? You adjust it down, the temperature goes down. So the thermostat takes my desired state, how warm or cold I want the room to be, and makes it the actual state.
That's what the thermostat does for us, okay? So in Kubernetes, we have something kind of similar to that. We call them controllers, all right? So for those of you who may be into robotics, you think about a control loop, right? This is an infinite loop that just keeps listening and checking the state of the system, okay? And if the actual state of the system does not match my desired state, the control loop will take the necessary action to make my actual state match my desired state. Make sense? Cool? Okay. So this is what we have in Kubernetes. We have these controllers.

So this is a sample one. Don't check me on this YAML code, because it's just pseudocode; it's probably not right all the way. But this is to give you the concept. So on the right, I will describe, I will declare, what I want my system to look like. And in this example, I'm deploying a simple front end. And you see there's a line, five or six lines down, that says replicas. It says I want three replicas. So I'm deploying this front-end service, actually this front-end deployment; there's a service attached to it that's not in this YAML. And then I've got these three replicas across the bottom. So my desired state is to have three instances of this front-end application. And I tell Kubernetes that in this YAML. And then Kubernetes uses its controllers to make sure my desired state, I want three, equals my actual state: you've got three. Make sense? All right.
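For reference, a manifest in the spirit of that slide looks roughly like this. This is a sketch, not the actual slide; the names and the image are placeholders:

```yaml
# Illustrative front-end deployment: three replicas is the desired state.
apiVersion: apps/v1            # older clusters used extensions/v1beta1 here
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                  # desired state: three copies of the front end
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example.com/frontend:1.0   # placeholder image
        ports:
        - containerPort: 80
```

The deployment's controller keeps comparing that replicas field against the pods that actually exist and creates or deletes pods until the two match.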
So I got controllers. That's pretty cool. Okay. What else do I need if I want to get to self-driving infrastructure? Okay, I got controllers. I also need this concept of self-hosted. How many have heard of self-hosted Kubernetes? A couple? Okay. Let me walk you through it. So here's the cool thing about Kubernetes, right? If I have an application, I can deploy that application into Kubernetes. And I can tell Kubernetes how many replicas I want it to have, how I want it to scale, scale it up and scale it down. And Kubernetes just takes care of that for me. It just does it, right? It just makes sure the state is there. Wouldn't it be cool if I could dogfood, for lack of a better word, my own Kubernetes? What if I could use Kubernetes to manage the control plane of my Kubernetes deployment? Okay, so what am I talking about? There are several things in a Kubernetes setup here, from layer zero up to layer four. So layer zero, we've got this kubelet, right? That runs. And then on top of that, we've got etcd, right? Then we have this API server. Then in row three there, we have a scheduler, controller manager, and a proxy. And then row four is our add-ons. So you can do add-ons like DNS and other add-ons. Okay. So what if I actually ran the API server, row two, and row three, the scheduler, controller manager, and proxy, as pods within the cluster itself? Because once they're in the cluster as pods, if I need to scale one up, I use the Kubernetes mechanism for scaling to scale up my API server pods in the control plane. Or to recover from failures in my control plane: if it's in Kubernetes, I can use the Kubernetes features to recover from failures in the control plane. Boom. Right? Mind blown. Crazy, right? So this is self-hosted. This is the concept of self-hosted. We're actually doing this today. And the way it works is with a project called Bootkube. How many have heard of Bootkube? All right. Good, a couple. Bootkube. So here's what Bootkube does. You have a problem, a chicken-and-egg problem, when you do self-hosted.

So I want to run my control plane inside my cluster. But I need an API server to put any pods in the cluster. So how do I start to put stuff in the cluster before the API servers come up? That's where Bootkube comes in. Bootkube will spin up temporarily and temporarily play the role of your control plane. So it will spin up this API server for you, which lets you spin up a self-hosted API server while that temporary one is serving. And all the other control plane elements, you can spin those up and place them in the cluster. And then once all that stuff is running in the cluster, Bootkube has done its job. It will shut itself down and go away, never to be seen again. Okay. Make sense? I know, it's kind of crazy. But stay with me. Stay with me.

Okay. So I've got controllers now, which is cool. I've got this control loop to make sure my desired state equals my actual state. And I also have self-hosted. So you've got a question already, haven't you? The first part? The kube what? Yeah, kubeadm. kubeadm, I'm not familiar with that one. But Bootkube is a Kubernetes project, too. This isn't something that only CoreOS has written. This is an open source Kubernetes project. So we're not the only ones self-hosting. It's just on my slide. All right.

Okay. And then automatic updates. This is the next part, right? So this is a part that we pushed through with Tectonic, our particular product. Let me back up a little bit. So when CoreOS was started, the founders of the company wanted to solve a big problem. And the problem they selected was making the Internet more secure. And they figured the way they could make the Internet more secure was to make sure it was easy to update. A lot of the vulnerabilities that we run across on the Internet are simply because people don't patch their boxes. So how do we make that easier, where we can push those updates? So what if there were a channel that my Kubernetes cluster was listening to, and I could push updates to my Kubernetes cluster, either manually, if I choose to, or automatically? Some of the ops folks are like, whoa, wait a minute. If you're pushing stuff to my cluster automatically, what if it breaks? The cool thing about being in Kubernetes is we can use rollbacks, we can use canary deployments, all the good stuff we have in Kubernetes to make sure we don't stomp ourselves out with an upgrade. Interesting.
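To make that rollback point concrete, here is what it looks like with plain kubectl, assuming a deployment named frontend like the earlier example and a made-up image tag:

```sh
# Push a new image to the (hypothetical) frontend deployment.
kubectl set image deployment/frontend frontend=example.com/frontend:2.0

# Watch the rolling update; old pods are only removed as new ones become ready.
kubectl rollout status deployment/frontend

# If the new version misbehaves, go back to the previous revision.
kubectl rollout undo deployment/frontend
```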
That, my friends, if you put it all together, becomes self-driving infrastructure. I've got controllers. I've got self-hosted Kubernetes. And I've got automated updates. I put that all together. We packaged it at CoreOS in a product called Tectonic. But Tectonic has an installer that you all can get to from GitHub. I'll give you the links to that in just a bit. You can all get to that installer on GitHub, look at the code, and adapt it for yourself.

But why stop there? Self-driving infrastructure is cool. But why leave it there? What else can we do? What if we did self-driving operations altogether, not just the infrastructure, but all the other things that you have to do to take care of operations? What if we could make that self-driving? So there's a concept in the Kubernetes community called operators. How many of you have heard of operators, the software kind? Cool. So an operator represents human operational knowledge in software that allows you to reliably manage an application.

Said differently, an operator is software that takes all your know-how as a human operator and puts as much of it as possible into code, allowing that code to take the actions that you would take as a human operator. So these operators are real. There are a few built already. And they are based upon Kubernetes resources and controller concepts, all right? So this isn't something outside of the Kubernetes realm. This isn't some proprietary thing. This is based upon Kubernetes and the concepts Kubernetes has around resources and controllers. These operators are application-specific controllers. They know how to control a particular application. They know all the ins and outs, all the dirty secrets, all the little tricks of that application and how to control it. An operator extends the Kubernetes API; it's based upon the Kubernetes API and simply extends it through third-party resources. And you are able to create, configure, and manage instances of the application through these operators. The domain-specific concepts and knowledge are built into them.

So, for example, in Kubernetes, if we wanted to scale up an application, you'd use kubectl, however you prefer to pronounce it. You use kubectl and you do a scale-up. You tell it to scale up by changing the replica count here. So you see at the very bottom left, it says my desired count is three. And if you look at the pod to the right, it says I have one right now, and I want to get to three. So I tell it, through kubectl, that my desired is three. Kubernetes will look at that, compare the current state to your desired state, and do what it needs to do to make sure the two match. So in this case, it'll bring me up to three, right? That's a simple example of how we're able to use deployments, controllers, and whatnot in Kubernetes. But why not do something other than just scaling up?

So, real world, again: we talked about etcd earlier when we started. etcd is the back-end store for all Kubernetes clusters. At CoreOS, we've created an etcd operator. We have all those folks at CoreOS who know etcd inside and out because they wrote the code. They understand it deeply. They have now built an operator, taken a lot of that operational knowledge, and begun to port it into code that can be used by anyone. So now I can use this operator for etcd, and instead of doing a scale-up, I can say do an etcd backup. And this thing knows everything that has to happen for an etcd backup, how we maintain quorum, how we do the rollout. All that stuff is contained within the code, and the code executes it for you. We're able to create and destroy. We're able to recover a member. You can do rolling upgrades. All this is built into the etcd operator. If you want to actually see the code for the etcd operator, the link is at the bottom of the screen there. I'll leave that up for just a second for those who want to snap a pic. But it's not just etcd. Get them, get them. If you don't hurry up and get them, I'm going to jump in there with them and take a picture. All right, good deal. Y'all are good. I'll come back to this stuff later, too.

Prometheus. So, Prometheus is a popular monitoring tool that's used in the Kubernetes community and other communities. There's an operator for Prometheus. We can create and destroy, do configurations, do service-level targets for Prometheus monitoring, all built into code, built into the operators. Snap, snap, snap. Photo op, photo op. Good deal. All right.
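To give you a feel for the etcd operator: you declare the etcd cluster you want, and the operator reconciles it, the same way the deployment controller reconciles pods. Roughly like this; the exact apiVersion and field names vary between operator releases, so treat this as a sketch rather than the real schema:

```yaml
# Sketch of an etcd cluster declared through the etcd operator.
# apiVersion, kind, and fields are illustrative; check the operator repo for the real ones.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd
spec:
  size: 3            # desired number of etcd members; the operator adds or removes members to match
  version: "3.1.8"   # bumping this asks the operator to perform a rolling upgrade
```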
Not just that. There are several other operators out there. There's one for Rook, one for Elasticsearch. We actually use one for Tectonic, which we'll talk about in a bit. There's even one for Postgres.

So, self-driving Kubernetes. If you actually want to try this out, use it yourself, and see it live, there is an installer called Tectonic, the Tectonic installer. This is an installer that you can access, that you can actually use yourself, and it'll deploy this in what we call the Tectonic way. It'll install the Kubernetes cluster. It'll be secured by default with TLS and RBAC. You can automate the installation process; you'll be able to plug in your own scripts, CI/CD scripts, to automate it. You can deploy this on any infrastructure. Right now, we have production support for Amazon and bare metal, and we have pre-alpha support for things like OpenStack and Azure, and we're working on GCP. You can run Tectonic on any operating system. CoreOS has an operating system called Container Linux, so you can choose that, but you can also run it on different operating systems. It's customizable, and it's HA by default. Cool?

All right, so how does this work? Real quick, this is high level. So for an example, if you choose Container Linux and you're going to deploy Kubernetes on top of OpenStack in a self-driving infrastructure way, the first thing you need to do, if you do it our style, is take our operating system, which is Container Linux. The nice thing about Container Linux is that it is self-updating, auto-updating, just like we want the Kubernetes cluster to be. The operating system will auto-update for us. So you can use Container Linux. You upload that into Glance, and I just snagged the new Glance logo this morning. Any Glance contributors in here? All right, well, that's their new logo. There it is. You upload that in. Now, the installer is based upon Terraform, HashiCorp's Terraform. Anybody heard of Terraform? Used it? Good deal. So it's based upon Terraform, and we have a pinned version of it. So in our instructions, you'll grab the pinned version that we used.

And then the high-level steps are as simple as this. You don't need to grab all this; this is just for me to walk you through it. You're just going to clone down the repo. You're going to download and make the custom version of Terraform, and then just make sure that version of Terraform is the one running. Then you can pull down one of a couple of flavors, and this is an overloaded term. I know in the OpenStack community, flavor means one thing. I'm not talking about that type of flavor right now. I'm talking about a flavor of the Tectonic installer. You can either have a Nova flavor or a Neutron flavor. If you just do Nova, you have to handle the networking out of band. If you do Neutron, that means you're going to give me floating IPs and whatnot and handle all that. And then you've got the regular OpenRC stuff down there at the bottom. Then I'm just going to export my cluster name. There's this variable file, this Terraform variable file. I can go into this file and set all the customizations I need to set. So this is where I'm able to go into the installer, and if I don't like the way that CoreOS does it by default, I can go in and tweak it to my liking. And then I use terraform plan, which will basically do a dry run. It'll go through and run through the whole script, do a dry run, and make sure everything works the way I expect it to work. Here are the artifacts that are going to be built.
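In shell form, those steps look something like this. The repo path, file names, and flags below are illustrative; the installer's README has the exact commands and the pinned Terraform build to use:

```sh
# Rough shape of the flow; details are illustrative, follow the tectonic-installer README.
git clone https://github.com/coreos/tectonic-installer.git
cd tectonic-installer

# Fetch the pinned Terraform build the installer expects,
# and confirm that binary is the one on your PATH.
terraform version

# Source your OpenStack credentials (the usual OpenRC file) and name the cluster.
source ~/openrc.sh
export CLUSTER=my-tectonic-cluster

# Pick the OpenStack flavor of the installer (Nova-only networking vs. Neutron),
# then edit the Terraform variables file to customize the cluster.
${EDITOR:-vi} terraform.tfvars

# Dry run: shows the resources that would be created, without creating them.
terraform plan -var-file=terraform.tfvars
```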
And then you do an apply. And the apply runs out and actually creates the Kubernetes cluster on top of your OpenStack infrastructure. You can use the kubeconfig down there and kubectl cluster-info just to confirm that it's working. Cool? All right.

So Tectonic uses all those things that we talked about earlier. It uses auto-update capabilities. It uses backup capabilities. It has the ability to upgrade Kubernetes on the fly. So the cool thing is, you can log into your console, and there's a new update for the whole Kubernetes cluster. You can literally push a button and say yes, and it'll upgrade your Kubernetes cluster for you. It's doing that through a Tectonic operator and the etcd operators; those work together to pull that off. Not only can we self-update, auto-update, self-drive the Kubernetes layer, but if you are using Container Linux, you can also do those updates across Container Linux as well. The same concepts apply. You can update your operating system. So a new vulnerability comes out, a zero-day vulnerability hits; you can go ahead, push a button, get all that pushed out to your Container Linux machines, and get it all updated without having a fire drill.

So, self-driving Kubernetes on top of OpenStack. It's available for you to start playing around with right now. It's an alpha version, but it's an exciting project: finding a way to take what's been happening with autonomous vehicles and figuring out how we can apply some of that to the infrastructure that we're running. Early days, exciting days, but for those who like living on the edge, give it a shot. We'll see you on the road. I'll see you at the booth. If you want more information or to talk through this stuff, our booth is right across from the theater there. We are hiring. If you're interested in that, I've got a link to the careers page. If you want to connect with me, you can follow me at Tony on Topic, or you can email us at training@coreos.com. If you're going to be in the San Francisco area towards the end of this month, we have CoreOS Fest, which is our conference, our gathering, where we're going to talk all things Kubernetes. Cool talks, cool conference there. If you're interested in attending, go to the booth and you can get a discount code for that event towards the end of the month. And with that, I will open it up and try to answer any questions you might have. Come on up. Just step up to the mics.

So, can you use the etcd operator even if you're not using Tectonic, just with stock Kubernetes? Yes, you can, because all the operators are based upon Kubernetes controllers, so it's just an extended controller. It's a third-party resource, so the short answer is yes. You can use it with any Kubernetes cluster. Thank you.

Hi. I'm also interested in your auto-update solution for both Kubernetes and Linux in general, and that was the point in your talk where you started pointing towards Tectonic, so I didn't hear a lot of detail about how you all are solving that. For the operating system, or Kubernetes in particular? Well, in general. That was the point where we kind of got pointed at "well, Tectonic does this," and I'd like to hear about how. Yeah, so we actually use a Tectonic operator, so there's an operator that we wrote that's called Tectonic, and we use stable, alpha, and beta channels that each of our clusters is plugged into, listening to those channels.
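The same channel idea applies at the operating-system level I mentioned a minute ago. On Container Linux, the channel a machine follows is just a line in a small config file; a hedged sketch of what that looks like:

```ini
# /etc/coreos/update.conf on a Container Linux node (values illustrative)
GROUP=stable              # or beta / alpha for earlier access to releases
REBOOT_STRATEGY=etcd-lock # coordinate reboots so nodes don't all restart at once
```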
So you can either set it to the stable release or, if you want to be on the bleeding edge, the alpha release, and basically what we're doing is rolling updates in the cluster. So you're taking a new release, you're deploying it on new pods, checking it, making sure those pods are working well, and then you're shutting down the old pods and having the new pods run. I can go into way more detail than that, but hopefully that gets you what you were after. You bet.

Hi. Hey, so I don't think I saw it on the slide, but basically when you install using the self-hosted method, you can then just issue kubectl commands to scale the API server, the proxy, and stuff? Yeah, the control plane is treated just like any other pod in your cluster. So yes, you can use those same commands. Now, they run in a different namespace so that they don't get all cluttered up with your user pods, but yeah. So, just trying to understand: when you're issuing that kubectl command to scale the API server pod itself, which API server is it going to? I mean, at that time the Bootkube thing is gone. Yeah, so it's targeting itself. It's coming back to itself. Correct. Yeah. So now you're not running just one. If you have just one, you're going to be in trouble, right? But you're already going to be running multiple API servers, right? And they're getting load balanced for you behind a service, right? So you come back, you hit that API service, and one of the nodes behind it will answer and then begin that process of scaling out. But in the end, when the service is scaling, it's the same API server that got the call that is going to do the scaling? It is an instance, one particular instance, yes, of those API servers. Yeah. And does it only work when this is completely stateless? I mean, I'm just having a tough time; it's inception. Yeah, but if you have state in your service, how is this going to work? I mean, for example, with etcd? Yeah. So for Kubernetes, for the Kubernetes control plane, I'll ask a question: where is state stored? etcd. Yeah. All the state is in etcd, so it's not on the API servers. Right. It's all in etcd. So you only need, like, a special way to scale etcd, and everything else can be scaled using kubectl? Yeah. So the one thing you may have noticed is that I didn't include etcd there. You could self-host etcd, but we are not doing it yet. That's going to be our next step. But right now, etcd lives outside and is not self-hosted yet. Yep. Yep. Thanks for the question.

Any other questions? Well, I appreciate y'all sitting closer. I appreciate your attention. I know it's right before lunch, so that means a lot to me that y'all came to see me before lunch. I know you're pretty hungry. But come see me at the booth. If you're interested in chatting offline, please reach out to me. My contact information is there. Y'all have been a blast. Hope you enjoy the rest of the show. Take care.