Yep. All right, everyone, the trickle has slowed, it is 3:45, I've told my one joke, so I think it is time to get started. I'm Lucy, and today I'm going to be talking about how to manage CoreOS and Kubernetes with Puppet. These slides, if you want to follow along, have some helpful things to click and notes that you can reference. They're available at slides.myname.me; if you just go there, there will be a link to these slides on the homepage. Or you can go to /puppet-on-coreos.html, and the .html is important. Who would have thought? I see some people taking pictures, okay.

So in this talk, I'm going to cover each of the components of the stack and how they interact. There are three technologies, Puppet, CoreOS, and Kubernetes, and all of them have a relationship with each other, a kind of technological love triangle, if you will. So we'll go over the role that each of those plays. I'll explain why managing CoreOS and Kubernetes with Puppet is something that you would ever want to do, and what use cases it might be appropriate for. And then I'll go through the practical steps: first, how to manage CoreOS with Puppet, because that in itself is an interesting topic, and I'll demonstrate that. Then we will try to deploy Kubernetes to CoreOS using Puppet and see how it goes, and that will include a demo. And then hopefully we'll have time for questions.

Okay, a couple of caveats first. This is mostly a proof of concept, which means it's currently just working on a single node. It's a pretty small demo. What I'm really trying to demonstrate is that you can manage resources on CoreOS with Puppet and that you can deploy Kubernetes to CoreOS with Puppet, but it's not a production-ready demonstration that I'm going to be showing you. None of this is running an actual application.
All of my demonstration is also going to be on local VMs, not really interacting with any cloud providers, but this could probably be extended to whatever cloud provider is appropriate for your organization.

Okay, who am I? My name is Lucy Wyman. I'm a software engineer at Puppet, and I work on a product called Bolt, which is our open-source ad hoc task runner: basically SSH in a for loop. It is actually a really fun project if you want to check it out. It's very useful for many things, and I like to think that I bring a certain vivacity to our office, much like this person.

Okay. So what are the technologies here? How many people have heard of or used Puppet? Cool. Does anyone want to give me the one line on what it is? Yeah. Thank you. So Puppet is configuration management software, which means it's used to manage resources, and by resources I mean things like files, users, and services on remote machines, in an automated way. It's really nice if you have a lot of machines that you're managing, that have apps running on them, and you need to tell those machines what users to have, what files to have, what permissions those files should have, and so on. That's what Puppet's used for: managing those resources.

Container Linux is the next technology we're going to be interacting with. Container Linux is an operating system made by a company called CoreOS. Originally the operating system was also called CoreOS, and then the company realized they kept having to say "the company CoreOS" and "the operating system CoreOS," so they renamed it to Container Linux. However, I will probably keep calling it CoreOS because that's a little shorter, or Corio, for extra short. But Container Linux is basically what it says: a Linux distribution that is made specifically for running containers on.
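Stepping back to the description of Puppet above: the kinds of resources it manages (files, users, services) are declared in Puppet's own DSL. A minimal hypothetical sketch, with resource names chosen purely for illustration:

```puppet
# Illustrative Puppet manifest: declare the desired state of a few
# resources; the agent converges the machine to match it.
user { 'deploy':
  ensure => present,
  shell  => '/bin/bash',
}

file { '/etc/motd':
  ensure  => file,
  content => "This node is managed by Puppet.\n",
  mode    => '0644',
}

service { 'sshd':
  ensure => running,
  enable => true,
}
```

Each block states what should be true, not how to get there; if the service is stopped or the file drifts, the next agent run puts it back.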
So it's kind of a slimmed-down Linux operating system that comes with several container tools, Docker, rkt, etcd, already installed and somewhat configured. CoreOS also makes a number of tools, some of which I just mentioned, like etcd, Flannel, and rkt, all of which interact with the container ecosystem: rkt is CoreOS's container engine, etcd is the distributed key-value store that a lot of container technologies use, et cetera. But we're mostly just going to be interacting with the actual operating system.

And then Kubernetes is the last one. Who's heard of Kubernetes? Yeah. So does anyone want to give me the one line on what Kubernetes is? Any takers? "The future." The future, yeah. So Kubernetes is used to schedule resources, and in this case resources means lower-level things like CPU and memory. It's used to schedule those for your applications that are running in containers. So you can say: here are all of the resources I have, and I want to have this application running in this many containers on these nodes. And Kubernetes will go tell those machines which resources to allocate for which containers, and you can scale it really easily, et cetera.

Okay. So at first, if you are kind of familiar with the container ecosystem, managing Kubernetes and CoreOS with Puppet doesn't really make sense, and there are a couple of reasons for that. We like to treat containers in the container ecosystem as being immutable. Immutable means that we create the container with everything that we need already in it, and then nothing in the container should change. It shouldn't really have any state changes; it should just stay the same as when we brought it up. And we could easily destroy it and bring up a new one, and it would be exactly the same. This has two main benefits. The first is portability.
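The "this application in this many containers" idea described above is usually expressed as a Kubernetes Deployment. A minimal hypothetical example (app name, image, and resource numbers are all made up for illustration):

```yaml
# Illustrative Deployment: "run 3 replicas of this container somewhere
# in the cluster, reserving this much CPU and memory for each."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: nginx:1.13
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
```

Kubernetes then picks nodes with enough spare CPU and memory and keeps three copies running; scaling is just changing `replicas`.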
So if I have the exact same container running every time I bring it up from certain configuration files, then if I just give those configuration files to my friend, they have the exact same container running. There's no special "oh, it works on my machine, but not on this other person's, because they don't have this one file." Everyone has the file. It's coming from the configuration files, and there's no change in state that put it there. And by configuration files, I'm usually talking about Dockerfiles. There are now other methods of configuring your containers, but in general, let's pretend we're all using Docker, and I'm talking about Dockerfiles.

So containers are portable, and they're also very reliable. If we treat containers as immutable, then I can pass the same container that QA signed off on to production and know for sure that it's working. And if everything actually is immutable, then there's no change. Again, there's none of that "oh, but Steve has this one file on the system that's making everything work, and then in production it's gone, and now nothing is working." There's a certain reproducibility and stability when we treat containers as immutable, and that can make deploying to production a lot easier. It also means that we can treat containers as ephemeral objects, so if something does break, we just tear it down and bring up a new one at a version we know is working. Nothing has changed; it's just the configuration file, et cetera.

Okay, so we'd like to treat containers as immutable, and because CoreOS set out to be a container-specific operating system, it too, as an operating system, takes a lot of these immutable philosophies and integrates them into the operating system.
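The "configuration files" portability argument above rests on something like this. A minimal hypothetical Dockerfile (base image and file names are illustrative):

```dockerfile
# Everything the container needs is declared here, so anyone who
# builds from this file gets an identical container.
FROM nginx:1.13
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
```

Hand this file to a colleague, they run `docker build`, and there is no hidden state for "works on my machine" to hide in.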
So for example, you have atomic upgrades in CoreOS, where it will actually install a complete new operating system, with any changes that you've made, when you're upgrading. It installs it into another partition, and then once certain health checks verify that partition is healthy, it just flips a switch to the new partition, makes sure everything is working, and then destroys the old partition. So yeah, CoreOS also likes to treat itself as being immutable, kind of the same idea.

It's configured using Ignition, which is the newer configuration format, although at this point it's not that new; it used to be configured using cloud-config. It's all the same ideas, all the same benefits: it's more portable, it's more stable, and hopefully fewer things break in production because you've already tested all of them and nothing has changed between when you tested it and when you're deploying it.

So everything is managed in configuration files, and all of these configuration files basically are your configuration management. Because everything is immutable, there's not really a lifecycle to manage, which is what Puppet is especially good at. Puppet is good at managing resources on a system, but all of those resources are already being managed, either by Dockerfiles in the case of containers or Ignition files in the case of CoreOS, and all of those are probably under version control, so there's not really any need for versioning, which is another thing Puppet might provide you. So in this ecosystem, it doesn't really seem like there's a need for Puppet to manage your resources. Everything's already managed, everything's already versioned; there's not a problem to fix, necessarily. But you might notice that "immutable" is in quotes, because these things aren't actually immutable.
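For flavor, an Ignition config is a JSON document consumed once at first boot. A minimal hypothetical sketch (the hostname and spec version are illustrative; check the Ignition docs for the version your CoreOS release expects):

```json
{
  "ignition": { "version": "2.1.0" },
  "storage": {
    "files": [{
      "filesystem": "root",
      "path": "/etc/hostname",
      "mode": 420,
      "contents": { "source": "data:,coreos-agent" }
    }]
  }
}
```

Like a Dockerfile, it declares the machine's starting state up front, which is exactly why, on paper, there seems to be nothing left for Puppet to do.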
Like, if I add a file to my CoreOS system, it doesn't do that whole tear-the-operating-system-down-and-rebuild-it-with-this-new-file rigmarole; there's just a new file on the system. Similarly with containers: there's a lot of caching in containers, and other services might have been added. There are dependency updates. My container might be depending on the latest version of another container, and then when I go to update my container, it pulls in the latest version, which breaks things, et cetera.

Okay, so I skipped a little bit ahead, but why this might make sense is when immutable infrastructure doesn't work for you, and it's really a case-by-case thing. There's no "immutable infrastructure can never work," and there's no "immutable infrastructure will work for everyone," one-size-fits-all, right? There are a lot of cases where immutable infrastructure is really great and has provided a lot of stability for a lot of projects, but there are also times where it can be a little too brittle and too hard to make changes.

So immutable infrastructure might not work for you if you have a large container infrastructure that's really hard to bring down and bring back up. The example that I've heard used is an organization that had a new person join, and they needed to add that person's SSH key to several of the containers so that they could interact with the application. And they needed to bring down their entire infrastructure and bring it back up just to add this SSH key. They ended up having a lot of downtime, and then something went wrong with adding the key. It was just a really painful update for something that should be pretty simple, which is just adding a key. So changes can be expensive. Oftentimes applications rely a lot on cache, and every time you tear your container infrastructure down, you are probably losing a lot of that cache. And it can also be risky.
As I said, you do typically have dependencies, and while you can version all of those dependencies, there are system updates; there might be a security update to something, and you need to be watching for that in order to know that you need to update the version, et cetera. So managing dependencies and versioning can be really risky in an actually immutable infrastructure, whether you're using the latest version, which is probably more secure, or whether you're using pinned versions, which can also be a pain to keep track of.

Puppet can also be used to handle configuration drift. So again, if there's a service running in part of your infrastructure and somehow it gets stopped, or a new file is added and breaks things, then Puppet can be used to revert your infrastructure to the old state without needing to do this whole bring-everything-down, bring-it-back-up cycle; it can handle configuration drift in place. And you can also add or modify resources without restarting. Going back to that SSH key example, it would be a lot easier to just run Puppet and add a file to your CoreOS system than to have to add it to your configuration, restart your entire operating system, et cetera. And there is a talk that goes through a lot of the pain points of immutable infrastructure that one organization experienced, which I found really helpful: again, immutable infra totally works for a lot of people, but this was a good "here's why it didn't necessarily work for us."

Okay, so how do we actually manage CoreOS with Puppet? That's what I'm going to focus on for the first bit. What we're actually going to do is run the Puppet agent in a container running on the CoreOS system that it's managing. And the reason we do this is that it's kind of idiomatic to how CoreOS does things: you can think of Docker as the package manager for CoreOS.
So CoreOS doesn't come with an apt- or yum-style package manager, but in general, you want to run any tools that you're using in containers on the system. So that's what we're doing here. We could in theory manually install the Puppet agent on CoreOS, but I would need to download all of the files for the Puppet agent, know where all of them go, put them in the right place, et cetera. It's a lot easier to just have an agent running in a container, an agent that's up to date, and have that manage the underlying CoreOS system. So we'll do that. We mount all of the directories that we care about into the container so that the Puppet agent can make changes to those directories, i.e., make changes to the CoreOS system from within the container.

And you will probably want to have a networking expert on hand if you're taking this on, because I have found in setting this up that pretty much all of my problems have been networking problems, especially because these are running in VMs, although in cloud providers I'm sure it would be the same: you have to have your agent and your master talking to each other, you have to have your container talking to the underlying system, and all of that needs to be able to access the internet. It just gets kind of hairy and tangled, so you may not even need an expert, but someone who's not a networking novice like myself would probably be really handy. So yeah, read up on it if you are interested in taking this on.

Okay, magic. So let's do a quick demo. No, just a demo. I find this GIF very pleasing, so I'll let it run at least once. Oh, so funny. Okay. Yeah, it's not as funny without audio, that's true. Okay, so I have a virtual machine. This is just an Ubuntu box, 16.04, I think. This is my Puppet master, and then over here with the red prompt, I have a CoreOS machine.
I've called it coreos-agent because it will be getting a Puppet agent and we'll treat it as a Puppet node, but it doesn't actually have Puppet installed on it or anything at the moment. So the very first thing that I want to do, on the master, is try running puppet agent -t and hope that this works. I literally went through this demo this morning, and this was the very first thing that went wrong in that demo, and yeah, it was heartening for sure. Okay, so we know that our Puppet master is set up correctly; it's running Puppet Server and everything.

Now, in order to have the Puppet agent check in from my CoreOS machine, a little bit of a primer on Puppet. You have your Puppet master, which has all the configuration that you want. Another word I'm going to start using is "modules": there are modules with commonly used configurations that you can have installed. All of that is kept on your master, and then an agent, the node that you want to have configured, will check in to the master and say, hey, are there any configurations I should have? Are there any files or users or services or packages that I need? And then the master will send what's called the catalog to the agent, and it will run a set of steps to match the state that the master says it should be in. So that's the back and forth of the master and agent.

So the very first thing that I'm going to do, before installing any modules or anything, is just make sure that my agent can talk to the master. I'm going to be running just the Puppet agent container; it's literally puppet/puppet-agent. All it does is run puppet agent -t in a container and then exit. So, pretty simple. I'm going to do docker run, I'm going to publish ports 443 and 80, and I'm going to use --rm.
This is just telling Docker to destroy the container once the Puppet agent exits, because I don't really have any use for it after that. I'm also going to add --privileged, and this gives the container access to all of the devices on the host. If I didn't add this, I think it would only have access to the device it's running on, whereas if I add --privileged, then it has access to essentially everything on the host. And then I'm going to mount several volumes. This is the "mounting all of the directories we care about" part; this is just a good assortment, I thought, of directories. Sorry. Oh, mount root, sure. I think that's all that I wrote down. And then I'm going to be using the host's networking stack instead of the bridged one. And that is all of my flags. Then this is going to pull the puppet/puppet-agent container from Docker Hub. Let's see if it works. Yay.

Okay. So the agent checked in to the master and it sent a certificate signing request. So over here I can run puppet cert list, and it should pop up with the coreos-agent. We're going to sign that certificate because we know where it's from. I'm just going to do --all, because there's only one and it's shorter by like four characters. Okay, signed the certificate. And then I will just run that same command, so we'll run the Puppet agent again, now that it's authenticated with the master, and hopefully things will happen. Yay.

Some of this output might actually be red. If this were actually your first time running it (it's clearly not, because this is a contrived demo), we would see something a lot more like just this. The verbose output is because I've already installed the module and tried to apply it to this CoreOS machine. But you get the picture, and things are working. Yay. Okay. So to actually make Puppet useful, we have a number of modules.
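Put together, the agent run and certificate signing described above look roughly like this. This is a sketch reconstructed from the narration, not a copy of the demo's exact commands; in particular, the volume list is illustrative (the talk only says "a good assortment of directories," including root), and with --network host the -p flags are effectively redundant but are kept because the talk mentions both:

```shell
# On the CoreOS node: run the Puppet agent once, in a container.
docker run --rm --privileged \
  -p 443:443 -p 80:80 \
  --network host \
  -v /etc:/etc \
  -v /opt:/opt \
  -v /root:/root \
  puppet/puppet-agent

# On the master: list the pending certificate request and sign it.
puppet cert list
puppet cert sign --all

# Back on the node: re-run the same docker run command, now that the
# agent is authenticated with the master.
```

The first run only gets far enough to submit the certificate signing request; the second run, post-signing, actually fetches and applies a catalog.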
So I'm going to puppet module install; I'm just going to use the message-of-the-day module. There are a very large number of modules on what's called the Puppet Forge that do any number of things, but the MOTD module is just an easy "proving that things work" module. So: puppet module install puppetlabs-motd. And I should have done this earlier so I didn't need to download anything. Charter sauce. I actually did not think this far ahead.

So once this finishes, we will run the same command here, so we'll have the Puppet agent check in to the master. And actually I have skipped a step: on the master, in what's called a Puppet manifest, I will add the configuration that tells Puppet that I want to apply the message-of-the-day module to this specific node. And I'll show you what that looks like once this finishes.

Oh man. Good question. Sorry, say your question again so I can repeat it. How does Puppet, from within the container on CoreOS, know where the master is? Okay. So, let me go back here. My virtual machine setup is in this Git repo: it's on GitHub, Lucy Wyman, puppet-on-coreos-demo. It is hopefully all the way up to date, but it will have my Vagrantfile and such, and that's where I've configured all of the networking between all of these virtual machines. So, how it knows about the master: I'm using the host networking stack with that --network host flag, and in my /etc/hosts, I have where my Puppet master is; I've just assigned it a static IP. I've also used a Vagrant plugin, I think it's just vagrant-hosts, which automatically generates this, so I think even if the IP were more dynamic, it would probably be able to figure it out. But yeah, I just set a local IP for each virtual machine and then make sure that this entry is there, so it knows that puppet-master is this IP. And then I have some configuration files.
And again, this is all from the Vagrant files; this is just provisioning that I did when I brought the machine up. But in what's called my Puppet configuration file, I've told the Puppet agent what the server is: the server setting is the Puppet master's hostname. So that's how it knows to check in to whatever it thinks puppet-master is, and then /etc/hosts is how it knows the actual IP of that.

Yeah, I saw that I disconnected briefly there. I might have to abandon my message-of-the-day effort, but hopefully just running the agent has proven enough. Luckily I have already got the Kubernetes one working. If we have time at the end, I might see if I have any local modules and we can copy those up to the virtual machine. But let's move on.

Okay, so in order to deploy a Kubernetes cluster to CoreOS, I'm just going to use the Puppet kubernetes module. Luckily this was released the same day that I first gave this talk, and it would have been really helpful the first time. So we have a Puppet kubernetes module. It does not quite yet have CoreOS support: if you go to my branch of it, there's an add-coreos branch, and that's where I've been keeping my work. So if you're really interested in the nitty gritty, that's where to look, but hopefully that will be merged shortly and be part of the actual Puppet kubernetes module. So we're basically just going to use that to deploy Kubernetes to CoreOS.

And there are going to be a couple of manual steps required for now, mostly because it's hard to reload and restart services from within the container. I can't really call systemctl daemon-reload from inside it, and I'm not really sure what a workaround for that is yet. So we are going to have to reload and restart a service by hand and then run Puppet again. But other than that, it's mostly, hopefully, working. So without further ado, I'm just going to stop that. Okay. So, puppetlabs, puppet... I guess I can just look at this: code/environments.
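To recap the answer above about how the containerized agent finds the master, the two pieces of configuration described might look like this (the IP address is illustrative; the puppet.conf path is the Puppet 4+ default):

```ini
; /etc/hosts on the CoreOS node (entry generated by vagrant-hosts)
;   172.16.0.10  puppet-master

; /etc/puppetlabs/puppet/puppet.conf, mounted into the agent container
[main]
server = puppet-master
```

With --network host, the container shares the node's network stack, so the hosts entry and puppet.conf are all the agent needs to reach the master.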
Can I what? Oh, okay. So I'm going to go into what's called my production environment. Don't worry about it; this is just where I keep all of my Puppet configuration code. And as previously stated, I have manifests which tell Puppet what modules I want to apply to which nodes. And specifically, there's one called site.pp, which is kind of the default "here's where to look for manifests." And I'm going to tell it that on my node called coreos-agent, I want to add the module... classes, modules, words are hard. I'm going to be adding the kubernetes class. And then this is where I pass in any parameters that the module takes. I just so happen to know that it takes one called controller, and I do want this CoreOS machine to be a Kubernetes controller. And then I also want it to be a bootstrap controller: true. I think I need commas between these. Okay. So I'm going to save that.

And then, specific to the kubernetes module, we also need to generate some data that the module is going to use to make decisions about how to install things on our CoreOS system. If you go to the puppetlabs-kubernetes module's Git repo, it will have specific instructions for the Docker incantation that you run and all of the environment variables that you pass it. I am going to just copy and paste it from my instructions; again, this is literally the README from the puppet-on-coreos-demo repo that I showed earlier.

Okay. So I am going to run a container called puppet/kubetool. This is literally just a container that spins up and generates what's called a Hiera file. Hiera is Puppet's key-value store; it's about separating your data from your code. So it's going to generate the Hiera data that the kubernetes module needs in order to run things successfully. A couple of similar flags: we're going to remove the container once it's done, and I'm going to mount the current directory to /mnt in this container.
And then I set a bunch of environment variables. So I'm setting the operating system to CoreOS, the Kubernetes version, the container runtime, the CNI provider, the fully qualified domain name, and several other environment flags that are relevant. Most of these are just the same IP, and you want this to be the IP of the CoreOS machine, not the machine that you are currently on, because I'm not looking to make this Ubuntu machine my Kubernetes controller; I want to make my CoreOS machine the Kubernetes controller. So yeah, set all of these. Again, for your specific use case, I would definitely just look at the README of the puppetlabs-kubernetes module.

So we can see that this is doing a bunch of work, and at the end it will generate a Hiera data file called kubernetes.yaml. So that exists, and we can open it, and it's got some CA certificates, et cetera. And I'm actually not very good at Puppet, so I do need to add this manually, because for some reason it doesn't get added, and I don't know why: 1.9.3_coreos.0. This is the Kubernetes image tag that tells it what version of Kubernetes to get for CoreOS, maybe, I think. And we also just want to make sure that everything else is there: make sure that this fully qualified domain name is right, this IP looks right, et cetera. Generally, this goes pretty well; I haven't had anything break, but that was good to double-check.

Okay. And then we are going to... oh my gosh, I have 15 minutes; I did not think that was going to go that fast. So I'm going to move this file into my data directory, which, through the magic of Hiera, I've configured so that Hiera knows this is where my data is. So I'm going to move that into there. Going to look at my notes to see if I'm missing anything. Okay. And then I've already done a couple of setup steps that are in that README for the puppet-on-coreos demo.
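The site.pp addition described a moment ago, assigning the kubernetes class to the node, might look like this. A sketch based on the parameters named in the talk (controller and bootstrap controller); the module's full parameter list lives in its README:

```puppet
# manifests/site.pp in the production environment
node 'coreos-agent' {
  class { 'kubernetes':
    controller           => true,
    bootstrap_controller => true,
  }
}
```

This is the "which modules apply to which nodes" mapping: when coreos-agent checks in, the master compiles a catalog that includes the kubernetes class with these parameters.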
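And the kubetool invocation itself, sketched from the narration. The environment variable names here are approximate and the IPs/hostnames are made up; copy the exact incantation from the puppetlabs-kubernetes README rather than this:

```shell
# Generate the Hiera data (kubernetes.yaml) that the kubernetes
# module consumes. Variable names are illustrative, not authoritative.
docker run --rm -v $(pwd):/mnt \
  -e OS=coreos \
  -e VERSION=1.9.3 \
  -e CONTAINER_RUNTIME=docker \
  -e FQDN=coreos-agent \
  -e IP=172.16.0.11 \
  -e BOOTSTRAP_CONTROLLER_IP=172.16.0.11 \
  -e ETCD_IP=172.16.0.11 \
  puppet/kubetool

# The output lands in the mounted current directory; move it into the
# environment's Hiera data directory (path assumes Puppet defaults).
mv kubernetes.yaml /etc/puppetlabs/code/environments/production/data/
```

Note that every IP points at the CoreOS machine, since that is the node being made the controller, not the Ubuntu master you run this command from.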
For example, I manually had to install the kubelet binary onto CoreOS, because it wasn't there and the kubernetes module expects it to be there; for Debian and Red Hat, it just is there once you apt-get install Kubernetes. So I've done a couple of those steps already. If we ls /opt/bin, we can see that these two binaries are there; I put them there on purpose. And what else? Added Kubernetes. I'm going to start this etcd-member service, just for good measure. Okay.

And then I think it is time to run the agent again. Now that I'm doing it with the kubernetes module, there are just a couple of other directories that I want to mount to specific places, because within the container the Puppet agent is going to want to run kubectl, and so it needs to be in a place that's part of the path. I could update the puppet-agent container that's hosted on Docker Hub, but that's more of a pain than just mounting the directory to a different place. So I've done a couple of things like that.

And I'm going to expect this to fail, because I don't have the kubelet service running yet. I need Puppet to put the kubelet service configuration files in place first, and then I need to manually restart the kubelet service, do that daemon-reload, and then start kubelet. And then we will rerun this again, and it might work. But this first one will definitely fail, and I know that ahead of time. Yay. We are running a little low on time; do people have questions? No? Excellent.

Okay, so let's see what happened. So, yeah, this kubectl get nodes is what failed; that is what I expected to fail, and that is because, again, the kubelet service is not actually really running yet. So I'm going to see if I have already put this file in place, but I don't believe I have. Oh, I have. So the file that I wanted to put in place is this kubelet.service. And then I'm going to run systemctl daemon-reload, and, whoop.
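The manual dance described above, reload systemd so it sees the unit file Puppet just dropped, start the kubelet, verify, then re-run Puppet, looks roughly like this. The KUBECONFIG path is an assumption for illustration; the talk doesn't name where the config file actually lives:

```shell
# After the first (expected-to-fail) Puppet run has placed
# kubelet.service on disk:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
systemctl status kubelet

# Point kubectl at the real config file; if this isn't set, kubectl
# assumes the master is at localhost and everything "breaks."
export KUBECONFIG=/etc/kubernetes/kubelet.conf   # illustrative path

# Verify, then re-run the containerized Puppet agent.
kubectl get nodes
kubectl cluster-info
```

This is exactly the systemctl-from-inside-a-container limitation mentioned earlier: Puppet can lay down the unit file, but the reload and restart have to happen on the host.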
Don't know what that was. Restart kubelet. Cool. And let's see if everything is healthy. Yay. And kubectl cluster-info. Cool, that is all what I want. One last thing that I want to make sure I do: I don't actually know what this environment variable defaults to, but if it's not pointed at the actual configuration file, then things break. Usually when this isn't set correctly, it'll think that the master is running at localhost, so that's a good indication that you need to set this environment variable. But yeah, just good to double-check. Okay, and then let's try it again and see. And yeah, once this is finished, whether it succeeds or not, really, I will make sure that kubectl is working first. We'll see what happens.

All right, last poll for questions. Yeah. I heard one time that it takes people 16 seconds to think of a question, so I generally count to that much.

So I caught why we needed Kubernetes for this, and I caught why we needed Puppet for this. Why CoreOS, as opposed to any other container Linux? Yeah, there are a couple of reasons you might want to be using CoreOS. For the purposes of this talk, it's because I have a lot of friends who work at CoreOS who dared me to figure out if you could run Puppet on CoreOS. But in reality, if you're looking to start seeing whether container infrastructure is for you, and you have an existing Puppet infrastructure, I think that's when this could make a lot of sense. So: I have existing Puppet infra; I'm interested in running containers on CoreOS, because CoreOS is built for running containers on; I want to see if my application could run on containers. It would save me a lot of the space and overhead that VMs have; CoreOS just doesn't have that baggage or unnecessary resource use that other operating systems have. Did that answer your question?
It seems to me something like Alpine, or something lighter, could just as easily have been used here. But when you did mention that friends were involved, I kind of understood that logic wasn't necessarily the biggest part of that. Yeah, definitely. From that point of view, in comparison to Alpine, I would say that CoreOS comes with several tools that are really handy for managing your container infrastructure already: again, etcd, rkt, Flannel, all of that's kind of already in there. And especially if you're a novice to the container world, I think it might be a little more friendly to newcomers than Alpine. But you could totally probably do this with Alpine, question mark. I actually have no basis for that statement, but I'll say it anyway. Maybe in the next talk I give. Yeah.

I think this was covered the first time you gave the talk, but would mounting /var help you run those services? I don't remember. I think so. I think the actual problem is that Docker doesn't like running systemctl. What I thought the problem was is that it's trying to reload all of the services, including Docker, and since Docker is running the container, it can't restart itself. That's what I think the problem is, but other people seem to think that it's just that Docker isn't able to interact with systemctl in that way. So I'm not really sure what the actual underlying problem is, but I think restarting Docker from within Docker is kind of the crux of the issue. Yeah, that sounds problematic.

Last call for Lucy. Thank you, Lucy. Yay, thank you. Yay. And it didn't work. Oh, boo.