Well, hello everybody and welcome again to another OpenShift Commons. This time we're going to talk about one of the building blocks of containers and all this wonderful cloud-native stuff. My guest today is Mark Lamourine, a good friend of mine and a long-time Red Hatter, and he's going to go through the underpinnings of everything and give us an introduction to container hosts. So, Mark, take it away.

Good morning. I'm Mark Lamourine. Just so people know, I've been working at Red Hat for seven years, I've worked on OpenShift 1 and OpenShift 2, and I've been working on OpenStack as well. But one of the things that has intrigued me recently is container hosts, both CoreOS and Atomic Host. It occurred to me, working in and observing this environment, that the container hosts themselves — the system administration layer between the operating system and the containers and the orchestration — seemed to be getting less attention than I thought they deserved, and this presentation is a result of that. This is actually a condensed version of a presentation I gave at the LISA conference about a week ago, and that was a three-hour tutorial. So in half an hour we can just cover the pieces, but this talk contains references to the original material I had for that tutorial as well, and if people have questions or want to follow up, there are links and references here to that material. Right — let me go forward and get in the right window. The slides and code samples from that LISA presentation are here. I'll also make a copy of the slides I'm using available, and Diane's going to make the recording available as well.

I want to go back to the beginning. Again, I'm talking about container hosts, and I'll need to talk about what a container host is, but first I want to revisit our conventional view of a Linux host — of a server — and the fact that it is the way it is because of history. It was built up over time to serve our needs, and if you're old like me, you remember things like recompiling your kernel so that you could add a tape drive. You can remember getting your compiler on tape and building it three times with itself to verify that it still worked. When we first started with these, we didn't have packages; we were still learning, from the very beginning, and we were doing things like putting our software in /usr/local because we didn't have any way to keep it separate. We had kind of standard ways of building software: we'd grab a tarball from somewhere, and there would be a script in there called configure, and you'd run configure, make, make install. I remember a time when I thought shared libraries were evil, because I was used to old systems, Windows 95 and 98, that had Windows DLLs, and the biggest complaint Microsoft had for support was people editing the configuration files and messing with the DLLs — they literally created the registry to counter that. So I'm coming from the old days, but this is what we started with, and we created things to counter it. We created packages. We created package repositories, and we came up with a set of conventions for how to manage systems for our users. The conventional host that we're used to seeing is built up of control domains and different layers of software — and if this looks like the OSI layer model, that's no accident.
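For anyone who never lived through it, that classic tarball build looked something like this (a minimal sketch; the project name is just an example):

    # the classic build-from-source workflow, before package managers
    tar xzf someproject-1.0.tar.gz
    cd someproject-1.0
    ./configure --prefix=/usr/local   # generate Makefiles tuned to this system
    make                              # compile everything
    sudo make install                 # copy the results into /usr/local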
At the bottom you have the operating system layers, which are the basis for the work, and at the top you have your applications, and then there's some stuff in the middle. These layers impose a series of dependencies. Each layer depends on the layer beneath it to do its job, and each layer, from the bottom up, provides an interface that has to be stable so that the next layer up can trust it.

If we start at the bottom, the operating system is managed by an ops team. They install the kernel, they install the operating environment. It's generally managed by package management — most of us are going to be familiar with yum or apt — and these are layers that are fairly static. The people managing this tend to be very conservative. They get blamed if the machine goes down, for whatever reason, whether or not it's actually their fault. They tend to be pretty focused on stability and on conservative updates. At the top layer, we have our application developers. These are people who are moving fairly quickly, relatively quickly. They want to use the most up-to-date software. They want to use the most up-to-date libraries. They're constantly changing their business logic and their business data to meet new demands. They have a very dynamic update schedule. These people are really moving quickly, and they appreciate the stability from below, but there are times when there are issues.

Now, the middle layer — this is where things get interesting. In the ideal situation, this is controlled by the ops team. They maintain it in the same way that they maintain their other layers, because these are the pieces that the apps people use. If those pieces move too quickly, the apps layers can break, and the apps people will be upset. On the other hand, if they contain security flaws or something like that, then the operating system people will get blamed for it. They want to do the updates in lockstep with security updates, and these two pressures cause some conflict. This is more the reality of what you see in that middle layer: the ops people control some app libraries, and then you're going to find the apps people saying, I can't wait for whatever it is, I need the next version of my Ruby library. They start taking responsibility for those things, but they also take control. You get a mixed bag in this layer of pieces that are managed in system space and pieces that are maintained in user space. It means, too, that you've got new software management systems, because the apps people at the top are maintaining their code one way, the operating system people at the bottom are maintaining their code a different way, and then there's a conflict in the middle over who owns those middle pieces and who's responsible for keeping them up to date and keeping them secure and stable.

So, to come back: the things that characterize our traditional hosts are a fairly tight coupling between the OS and the app. And you ask some questions. The ops team owns the binaries and libraries, but why does the ops team care which version of Apache or which version of Ruby the developers are using? And so these groups vie for control — who's got what version of a library? The scripting libraries are provided by the ops team, but they generally provide only one version, or maybe two on systems that allow that. And so the users have created user-space library management systems like gem and pip and npm, which can be in conflict with the system ones.
And then when something goes wrong, it becomes difficult to say who's responsible for the layer that you're working with. One of the things about scripting libraries in user space is that it actually discourages interface stability. The users are able to make updates whenever they want, and the developers of those libraries also seem to be less concerned about interface stability; they just keep adding new features, because they're using the same mindset.

There are some alternatives to this user-space management. There are software collections, which are provided as packages and allow you to run different environments of, say, Ruby or Rails or various other tools: the ops people can install a set of packages for a specific version, and then the users can enter an environment for the appropriate version, as in the sketch below. But this is really just putting off the problem — it's actually a kind of combinatorial explosion. And in the case where you just install everything: one of the things about current hosts is that we generally install everything in system space, because some user is going to need it. But if you look at the system libraries for something like Python or Perl or Ruby, you're going to find that there are lots and lots of libraries there that the two or three applications running on a typical host probably never use, and so they never need to be there.

Another important characteristic of conventional hosts right now is that when you do updates, even with a package system, once you do the update it's very difficult to roll back. And so you pretty much have to roll forward very carefully, and we come up with these conventions of rolling forward only a small piece at a time: the ops team will roll forward their operating system and then wait to see if their services break, and when we're satisfied with that, after a week or so, we'll roll it forward to a slightly larger dev space where those people can do their work, and the final thing we'll do is roll it out to our production environments. The reason we have to be careful in this way is that there's no way to go back — there's no good, reliable way of rolling back the operating system layers.

So one potential solution for this is something that we're calling container hosts. I'm using this as a broad term. I'm going to cover two of the varieties of container hosts, which are both popular and well maintained, and I'm familiar with them. There are others that I'm less familiar with, and I can't speak to them. Just for background, there's a history of trying to create hosts that have different characteristics from the traditional hosts. The earliest ones were embedded systems — pSOS, Wind River, and there was a third one whose name I can't remember. The idea of these was that you would have a very small operating system that would run on some kind of very limited hardware; you'd build your application tightly bound to that environment, flash it onto the device, and it would run. You still commonly see these today in aircraft systems, in aircraft flight controls, and in other critical systems where the environment and the operating system have to be completely minimal and entirely reliable. They're also commonly used in real-time systems, again for flight controls and such. More recently, people have created what I call compact Linuxes: single-purpose Linuxes built for a particular environment.
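The software collections workflow looks roughly like this (a sketch, assuming the software collections repos are enabled; the collection name is illustrative):

    # ops installs a versioned collection alongside the system Ruby
    sudo yum install rh-ruby24
    # a user runs a single command inside that versioned environment
    scl enable rh-ruby24 -- ruby -v
    # or enters a whole shell with the collection's paths active
    scl enable rh-ruby24 bash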
A generic one is BusyBox, which is kind of a single binary that has everything built in. Android, by Google, is specifically designed as a phone operating system. It's not quite as reliability-critical as aircraft, but it is a tightly controlled environment, because they have security concerns and, to some degree, stability concerns. Chrome OS is a third one; it's designed specifically to run Linux for running a browser on very lightweight hardware. Those are the precursors to what I'm calling a container host. All of those are fairly difficult to program for. They require rebuilds to add applications. They tend to be completely read-only, with very small exceptions.

The first of what I'm calling container hosts is Container Linux, or CoreOS, which is only about three or four years old — it was introduced in October 2013. Project Atomic came along about six months later; Project Atomic is a different way of approaching the same problem. RancherOS was also introduced in 2014. I can't speak much to RancherOS — I haven't used it much — but from what I understand it's in the same class of things.

The container hosts have some specific characteristics, and I'm looking at bullets here so I'm just going to run down them, but the important things are these. They're minimal. You get atomic roll-forward and reliable rollback. They do have writable space, but /usr is actually read-only on most container hosts. They're meant to have minimal configuration, because they're not actually designed to have users log in — unlike previous servers, you do have a login, but it's not meant for particular users; users should access the host through other means, and anyone logging on is just getting a shell. They're designed to be clustered, meaning there's an integrated clustering mechanism — some built-in way of forming clusters and sharing resources. They have an integrated software-defined network, so that those resources can communicate with each other within the cluster without exposing their traffic to the outside. And one of the key features is that they have an integrated container runtime, and they're intended to run containers at scale. Now, the individual hosts are still individual hosts, but using the clustering and networking you can build applications out of individual containers. We'll talk about what a container runtime means in a minute.

The two I'm talking about have very different architectures, even though they have similar goals. Container Linux, CoreOS, is based on Chrome OS; it's designed to be embedded, frozen, into the system. It builds from source every time, so you get a clean build, but it means you have to build from source every time. I'll show how this works in a minute, but it does updates by having a read-only /usr partition, writing the new version into a second partition, and swapping to it. This is a diagram of the runtime architecture of Container Linux. As you can see, /etc, /var, and the root filesystem are in regular writable space.
The /usr partition — you only have one /usr partition active at a time, and that's a read-only partition with all of your software, all of your libraries. When you go to do an update, you download a new copy of the image, you place it into the other partition, and then you reboot into that new one. What that means is that it does require a reboot to update, so you've got to have applications that can tolerate that — hopefully applications spread out over many systems, so that you can reboot a single instance — but it also means that you can reboot directly back to the previous working version if there's a problem. The tools that CoreOS uses are something called update_engine and locksmithd. What update_engine does is check for updates, pull them down, and initiate a reboot; and what locksmithd does is form a cluster and take a mutual-exclusion lock when it goes to do an update, so that you don't get a wave of simultaneous reboots — you get a steady roll. If the system fails in any way after the reboot, it's fairly trivial to boot back to the old version.

Atomic Host is the second one, the one I know most about, and it turns out to be the one that has the most tools in the area I'm talking about — tools for sysadmins to manage these systems. Atomic Host is based on RPMs. It uses a thing called rpm-ostree, which is a more complex management system than CoreOS's image swapping, but it provides some very valuable controls that a simpler mechanism doesn't. OSTree is integrated right at the boot loader, so the GRUB2 boot loader can recognize that it's booting an OSTree and manage that properly; that code is upstreamed and maintained, so this isn't a custom GRUB2 you have to worry about. Instead of merely mounting /usr back and forth, it uses a more complex set of layerings. It uses LVM, udev, and the device mapper to provide all the mappings, effectively hiding the complexity from the users and the operators. The swapping patterns are well known, though, so while those things are hidden, they're not really magic — they're well-known techniques that are used in other places as well. One of the characteristics of Atomic Host is that rather than downloading a whole image, it only downloads the diffs, and I'll show you how that works here.

On an Atomic host, again, you've got /etc and /var, which are read-write, and you've got a real store of content — the OSTree objects — which is in the center of the diagram. That store contains all of the files that are actually used by the system, and the two /usr trees use hard links into that hashed data structure wherever they're the same. If you look at /bin/bash from both sides, you'll see that they link to the same hashed object, so when the two versions use the same object, they actually use the same object — there's only one copy there. And the same goes for directories: if you look at the second hash, the third element, as long as both sides are using the same hash there's only one actual copy of the target. If you look at the second line of both usr 0 and usr 1, you can see that the hashes are different; in that case each one uses its own version of vi, so there are two copies of it, and each side is linked correctly to its own. And if you look at the last line, what you see is that when you do an update and a specific hash is no longer referenced — neither bootable version references that file — then rpm-ostree removes those files.
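If you want to poke at this structure on a live Atomic host, a couple of commands make it visible (a sketch; the deployment hashes and link counts you see will of course differ):

    # show the booted deployment and the rollback deployment
    rpm-ostree status
    # the content-addressed object store that both /usr trees hard-link into
    ls /ostree/repo/objects/
    # the link count in the second column reflects the hard-link sharing
    ls -li /usr/bin/bash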
And so you're never keeping multiple copies; as much as possible, the elements are reused. That covers the operating system itself. I don't know, Diane, if we have any questions at this point, but if I don't hear any I'm going to keep going. (Keep going — you're doing fine time-wise.) Okay.

The next piece I wanted to talk about is clustering and networking. I'm going to run down this also. Currently, the clustering used by OpenShift and by various other container clustering systems is called etcd. It's a key-value store, kind of like LDAP and kind of like something like Sleepycat's Berkeley DB, but it's networked and it's clustered, and unlike other databases it has fairly high latency, so you can't easily use it the way you would use messaging, although some of that is done; the goal really is fast reads and good consistency. etcd itself is just a database — it's just a clustered database — and it's actually the thing that forms the cluster in any of these systems. When you see an OpenShift cluster, or a Kubernetes cluster, there really isn't a Kubernetes cluster: there's really an etcd cluster that Kubernetes or OpenShift is using for its database, and that's what forms the actual cluster.

The second piece is a software-defined network, and there are a couple of those. I detail Flannel, which is an older, simpler, flat networking system that CoreOS created when they were just getting started. More recently, people have switched to using Calico or one of the other slightly more modern containerized software-defined networks. The big benefit of something like Calico is isolation: you can build dynamically isolated networks, where Flannel is one big flat network space.

I walk through some of this for etcd in the tutorial material, so I won't go into too much detail here. It's a key-value store. There are tools for managing it, but you don't have to use them — in fact, you can actually tell the tool to tell you what curl commands it uses to get its answers. One of the interesting things about etcd is that while it has the put/get mechanism that any key-value store is going to have, it also has a watch mechanism. The watch mechanism is kind of special, because you can tell a client to watch a particular key, and that watch will block until the key changes, and then it will hand you back the new value. So you can use this in place of polling — you can also do proper polling — but you can use it for things like waits on changes and queuing.

Flannel, specifically, is a software-defined network. It was created by CoreOS. It's still supported, although it's less and less the recommended SDN. You can run it on a public or private network, and Flannel encapsulates the traffic, so that if you're running with only a single interface, the traffic running between the container hosts on behalf of the containers is not directly exposed to the carrier. Again, orchestration systems are moving toward Calico or OpenStack Kuryr or other cloud-provided networks, for more isolation and control. And again, Calico is newer — I haven't worked with it yet, so I don't have a lot of details on it, but I expect to update the demonstrations I have, and I'll get the samples updated as well soon.

I do want to cover something that's going to be a bit of a side step, because one of the things I've found is that people have a series of misconceptions about what containers are, and those misconceptions sometimes incorrectly inform how they use them.
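A minimal sketch of that put/get/watch interaction, using the v2-era etcdctl syntax (the key name is illustrative):

    # basic etcd key-value operations
    etcdctl set /demo/message "hello"     # put a value
    etcdctl get /demo/message             # read it back
    etcdctl watch /demo/message           # blocks until the key changes, then prints the new value
    etcdctl --debug get /demo/message     # also prints the curl command it issues under the hood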
Just so people know, the key word in the first four lines here is the word "just". A lot of people will say containers are just chroot jails. They're not just that — there's significantly more to them, and they shouldn't be dismissed. They're not just Solaris containers. Yes, there's a history there, but there are significant differences. It's not just LXC. LXC is a precursor to Docker and to other container systems that was essentially kind of an erector set for containers: you could build containers, but you had to be really dedicated and you had to really know what you were doing. Probably the biggest advantage Docker brought when it came along was that it removed that erector-set mentality. It made it possible for people to worry more about the software inside than about the construction. I've heard people say that containers are just another packaging format — there's a repo out there, it's just software, we know how to do that, what's the big deal? I'll show in a little bit why container images are not just packaging.

One really important thing: containers are not going to replace VMs. There's software that's well suited to containerization, and there's some that's really not, and VMs will continue to have a place. The flip side of that is that it's really important, over time, not to just try to stuff the contents of a VM into a container. It can be done — people are doing it now, successfully and profitably — but I suspect they're going to find fairly quickly that they're not really getting the advantages of containers that they're looking for. And finally, containers aren't just Docker. Docker was the first container system that made it possible for typical people to create containers and, again, to focus on the contents and behavior rather than on the construction of the container itself. But there have been several follow-ons. Rocket (rkt) is one that I'm particularly intrigued by. And a number of people have gotten together to form the Open Container Initiative, which is seeking to define standards for containers — both for the runtime and for the images — that make it possible for other container runtimes to use those images and to provide different sets of characteristics.

So if those are the things that containers are not, what is a container? I think people confuse three things about containers. They confuse the container instance, which is execution; the container image, which is a software package plus some stuff; and the container runtime, which is software management. They may also confuse in the container image repository, although that's less common. Just to run down: a container is not a virtual machine — "container" is really not even a great term. What it is is a process with blinders on. A fairly new set of techniques called kernel namespaces — and by new, I mean within the last four or five years — allows a process to see the host system differently from other processes. What a namespace does is provide this new view. Containers run in a set of kernel namespaces, so that when they make queries of the operating system, they see a different view from what the rest of the system sees. As for limitations, the processes can be limited by kernel capabilities, which are also a relatively new addition to the Linux kernel.
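You can see the "process with blinders on" idea without any container runtime at all, just with the unshare tool from util-linux (a minimal sketch):

    # run a shell in fresh PID and mount namespaces
    sudo unshare --pid --mount --fork --mount-proc /bin/bash
    ps ax     # inside: only this bash and ps are visible, not the host's processes
    exit
    lsns      # back on the host: list the namespaces currently in use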
Again, in the last four or five years — in internet time that's not all that long, or it's a long time, depending on your view. Capabilities allow very fine-grained control of what a process is allowed to do when it accesses the kernel. And there's SELinux, which allows labeling and control of access to file resources and process resources.

A container image, again, is more than a package, because while it does contain the base binaries and scripting languages that an ordinary package would have, it also has code which allows the container to start a version of itself. If you download your typical Apache package, whether a Debian package or an RPM, there is configuration there, but it stops as a static thing: you install the files, and then you stop, and it's up to the user to figure out how to run it. There may be some scripts in there for startup, but they're not specific. So the second piece that a container image has that no traditional package has is metadata which lets you say: how should this thing be run? What are the variables that are unresolved? It also contains — or can contain — a security hash, so that you can be sure the contents are unmodified, and a signature, so that you know it came from the source that claims to have created it. The link at the bottom is to the OCI image specification; version 1 was released, I think, in October — maybe late September, early October. So we now have an official version 1 of what a container image looks like, and something the various runtimes can converge on, so that we can exchange containers between runtimes and expect them to work in predictable ways.

So the container runtime is the environment in which something works — it's more what it does than what it is. The container runtime is the piece on the operating system that takes an image, takes some other information, and initializes the environment in which that container will run. It creates the namespaces — the views the process will see. It unpacks the image layers into a file space and makes that available as a unit. It can mount remote file systems on behalf of the container when it's going to run. There's a gap between the parts that the container image creator finishes and what the consumer wants to use, and the container runtime fills that gap: it establishes the file systems and the rest of the environment. It sets or removes kernel capabilities. It creates that initial process, much like an ordinary process fork — there's an additional set of steps which initializes the namespaces before it initializes that first process. And then, finally, it runs the command that the user wants the container to run.

In addition, the container runtime provides some access tools. Here I use the Docker examples, but there are equivalents in pretty much any runtime: you can run a command which will execute an additional process within that set of namespaces, so that you can do debugging, and you can take a look at the running container to find out what IP addresses it has, what resources it's using, how it's connected to others. And again, the link at the bottom is to the runtime specification from the OCI. (Went the wrong way that time.) So now I've gone through what containers are and covered some background, and now I'm actually getting to the piece that matters — maybe it's been a long time coming.
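In Docker's terms, those access operations look something like this (container and image names are illustrative):

    # start a container, then use the runtime's access tools on it
    docker run -d --name web nginx
    docker exec -it web /bin/sh            # run an extra process inside its namespaces
    docker inspect --format '{{.NetworkSettings.IPAddress}}' web   # query its state
    docker logs web                        # read its output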
We've got the container hosts, we've got the container runtime, and we've got system administrators who have to make this stuff work. Everybody tends to focus on the containers or on the visible pieces, the orchestration, but there are still tools that you need, because you no longer have the tools on these container hosts that you're used to having. The container host comes with minimal tools. All it's meant to do is boot and attach to a network. So when it boots, it needs to know who it is; it needs to know how to reach other systems, with an IP address and some routing information. It needs to have time sync — it's really, really critical that these have time sync, because many of the operations they do are time-critical. And it needs some kind of authentication mechanism to allow somebody in from the outside. Traditionally that's an implanted SSH key, and then you'd use a system like LDAP authentication or Kerberos, or even possibly — I don't know if it's done yet — an OAuth mechanism like Google or GitHub.

So if you've got these machines, and the software you want isn't there, what do you do? Well, you install your software as containers. Now, they're going to be special containers, because, as I said, when you run a container it has blinders on — it can't see the system. Well, it turns out that because containers are processes in namespaces, all processes on a conventional host that's capable of running namespaces are running in namespaces too: if you log on to your ordinary Linux box now, your process is running in the system namespaces by default. Containers are running in their own namespaces. But you can run containers in the system namespaces, and it's important to remember — people think you're letting the container out. You're not, really; you're letting the system into the container. There are also some cases where you can install additional stuff, which I'll go into in a minute.

There are several kinds of special containers for doing this. The first kind is what's called a super privileged container. If you look at the docker run back here, it spends a lot of time punching holes. You see the -e HOST=/host, and the --ipc=host, --net=host, and --pid=host, and it's running --privileged. What those options do is say that the namespaces to use are the host namespaces: the inter-process communication, memory management and such use the host IPC namespace; the network namespace is the host network namespace, so this container will inherit the network interfaces that the host has; and the PID namespace is the host namespace, so this container will be able to see all of the processes running on the host as if they were running in the container. And you can see it mounts the system's /var/run, /var/log, and several other paths into the container, which makes it so that the process running in the container has access to all those resources. There are specific containers built to do this, so that you can put the tools you want inside. Specifically, there's a container image called fedora-tools, used on both CoreOS and Atomic, that has traceroute and various other network tools — the pieces you need to do system administration. Now, you're installing this as a container, but it's really meant for running individual commands.
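Put together, a super privileged container invocation looks roughly like this (a sketch of the pattern; treat the image name and exact mount list as illustrative):

    # a super privileged container: share the host's namespaces and filesystems
    # --privileged keeps kernel capabilities; --ipc/--net/--pid=host reuse the
    # host namespaces; the -v mounts expose the host filesystem inside
    sudo docker run -it --name tools --privileged \
        --ipc=host --net=host --pid=host \
        -e HOST=/host -v /:/host -v /run:/run -v /var/log:/var/log \
        fedora/tools /bin/bash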
You log in, you get a shell, you run your commands, you exit, and the container processes go away. The image is still there — you can reuse it or delete it. What super privileged containers allow you to do is run individual commands as a system administrator, to examine the state of the system and the processes running, using tools that aren't normally installed.

There's a second kind of special container, which is just getting started — in my opinion, they're still working their way up. This is called a system container. What a system container does that a super privileged container doesn't is run system services on the host, as if you had installed them on the host itself. This is really common for network services. If you wanted to run a DNS name server, you probably don't want to run that in a container by itself; you want to run it using the system — you want people to be able to query the whole system. So you can run your name services, you can run an LDAP authentication service, you can run NTP so that it's available to other things. But to run something as a system service, you want it to start when the system starts — you want it to come up as part of the system boot. So these containers have some special metadata, outlined in the links there. I'm not going to go into a lot of detail, but they're specifically designed to run as system services. The first bullet says they're not dependent on the Docker daemon: these start up using runc and systemd, so that they can run before the Docker daemon is started, because the Docker daemon depends on some local networking that may not be established yet. There are some special commands that can be used to install these. It can be done with ordinary Docker commands, or with ordinary container commands, but there's a pattern that's emerged, and the atomic CLI was created to manage that pattern — see the sketch below.

As an example of one of these, there's a system container called Cockpit. Cockpit is a very neat web-based system monitoring tool. It runs as a process on the box; it's very small, but it offers a web interface and an API for interacting with the system, and it uses the system's authentication to grant control. I only have it here so you can go take a look at it — it's an example of something you could run on every container host to give you some visibility into the system that you wouldn't otherwise have. Diane, are there any questions yet? I'm going very fast. (Well, I think we're both pretty happy here right now. This is really shedding a lot of light on a lot of terms that we've all bandied about and never always had the depth on, so this is great.) Okay. I'm nearly at the end.

Something that's very new, then, is the Fedora Modularity project. They have an actual bootable image of their own called Boltron, which is not like either of the two that I've described, but it's intended to be a true container host in the sense that it really uses nothing else. And they're creating a third kind of system-container-like construct that they're calling a module. In the Modularity model, you can build up any kind of host that we would normally use out of these modules and containers. You can even use it for things like desktops — they're talking about the idea of a containerized desktop, where non-graphical services would use a module, for shells and for shell tools, and there's a graphical version called a Flatpak.
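The install pattern for a system container with the atomic CLI looks roughly like this (the image path is illustrative — check the Project Atomic docs for a current system-container image):

    # install a system container; it gets a systemd unit and can start at boot
    sudo atomic install --system --name=cockpit cockpit/ws
    sudo systemctl start cockpit          # managed like any other system service
    sudo atomic containers list           # show the installed containers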
There are already several applications. I know Máirín Duffy uses Inkscape as a Flatpak — a graphical, containerized version of Inkscape that has no dependencies on the underlying operating system. You don't get any kind of conflict if it needs a shared library or something that hasn't been installed by the ops team; all of that's self-contained, and you can update it completely independently of the operating system underneath. There's a set of modules; I've provided the links for the Modularity project, for Boltron, and for the set of modularity containers that have been created so far. The goal of Modularity is very ambitious. The idea is to completely decouple the OS and app update schedules, so that you can do something that approaches continuous OS update: every time there's a change, instead of waiting for a new distribution to be composed, you grab that change and use it, confident that it will not have detrimental effects on the applications you already have running. And the same goes for your applications: when you need to upgrade one, you just update it, and your applications are no longer coupled to each other. You no longer get conflicting shared libraries, or situations like that, that make it difficult to maintain the applications you want. This is very young; it's very experimental — I'm not even sure they would call themselves beta yet — and it's not something that I've had the time to try, but I definitely want to try it out and see. And regardless of whether Modularity and Boltron themselves become widely adopted, I think they're going to start establishing patterns that others are going to adopt. I suspect this is going to be influential whether it takes off or not.

We've gotten to the CLI tools. I've talked about the different containers, but I haven't actually talked about the tools themselves. On Container Linux there really is only one tool. It's called toolbox, and I'll sketch a session with it below. What toolbox does is download a copy of Fedora latest, and you can install packages inside it, because it looks like an operating system, and delete them when you're done. But that's about as far as CoreOS goes in assisting the administrator in managing the host. They really are concentrating on both ends, and their model really is: build the box, install Kubernetes, go, don't do anything else. So you can install a traceroute inside the toolbox, and then, because it's in the system namespaces, you can manage the host with it.

Atomic has noticed a bunch of patterns in creating these tools. They noticed the patterns where you would say, I want to start a container with traceroute in it, but I want to use all of these system namespaces. So they created the atomic command, originally just as a shortcut for some of those long container invocations, because they noticed there was a constant pattern. Over time they realized there was a whole set of logical operations they wanted to be able to do, related to how container hosts work, and they've gradually incorporated a bunch of them. These are just a set of samples of the things you can do with the atomic CLI. Again, I've provided the links for it. The atomic CLI is available, and you don't even need to run it on an Atomic host or a container host — you can install the atomic CLI tools on an ordinary Fedora or RHEL system, and I know people even want them on kind of the Raspberry Pi stuff.
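A toolbox session on Container Linux looks roughly like this (a sketch; the first invocation pulls down the Fedora image):

    # enter the toolbox on a Container Linux host
    /usr/bin/toolbox
    dnf install -y traceroute tcpdump    # install tools inside the toolbox environment
    traceroute 8.8.8.8                   # the toolbox shares host namespaces, so this
                                         # operates on the host's network
    exit                                 # leave; the host itself is unchanged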
I haven't seen a port of the atomic CLI to CoreOS yet, but I would love to see that, just to keep the playing field even. What they did discover was that there were two sets of operations that people wanted to do. As you can see from these, there's atomic install and atomic run — that's kind of what I was showing with the install of a system container, and then atomic run is the runtime invocation of that service. The bottom two are the host commands. (Sorry — I have another meeting notification popping up; let me turn that off.) In any case, the atomic host commands are for system maintenance rather than for creating individual processes. The atomic host commands do the upgrade, rollback, and status of the running system. Again, it's kind of a wrapper for rpm-ostree: most of those commands can be done directly as rpm-ostree commands, but they found over time that there was a pattern to how they were doing it, and that the pattern could be encapsulated. So when you're working on a system with atomic, and especially on the Project Atomic hosts, the atomic CLI is kind of your go-to command for managing the system — I'll show the update cycle below.

Again, I think this part is probably obvious to people, but one thing to be aware of is that you can deploy these pretty much anywhere you would deploy an ordinary host. You can PXE-boot them. You can boot them in cloud providers. You can configure them with cloud-init. There are images for CoreOS in all of the major commercial cloud providers. I haven't yet seen Atomic hosts — whether Fedora, CentOS, or RHEL — offered as standard images from the vendor in those areas, but it's simple enough to upload a single image, boot your machines from it, and then roll it forward from your cloud. You can also load them in a personal VM through Vagrant, and in the links I've provided a couple of different examples, one for CoreOS and one for Atomic.

As for customizing: you can customize Container Linux — again, because it's based on Chromium and Chrome OS, you can download the tools and rebuild it. When you do that, you build the image, and then there are a couple of different ways to turn it into development or production images, which have slightly different characteristics; the production images have a signature that the development ones won't. Currently, building requires a manual patch, and I've listed the pull request for it. It's fairly straightforward to apply, but it hasn't been applied upstream. Customizing Atomic Host is more complex, because you're managing not just an image but a whole tree, and there are actually two steps to it. You're creating an OSTree repo, which is very like a Git repo where you can do commits — it's actually much more like a Git repo than an RPM repo. There's a set of operations on the OSTree repo that are really out of scope for this talk, but if you're going to be doing customizations, you really need to look into the build process and then the maintenance process. Those are detailed in other places.

If you're updating CoreOS, again, update_engine and locksmithd control that: it just downloads a new copy of the image, places it in the unused /usr partition, and then reboots. If you're updating Atomic — we've already looked at this briefly — you do an atomic host upgrade, and it will tell you what RPMs it's updating, what the differences are between your current version and your new one. You reboot to go to the new one once you've done the upgrade.
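The whole update cycle on an Atomic host fits in a few commands:

    # the Atomic Host update cycle
    sudo atomic host status        # show the booted and alternate deployments
    sudo atomic host upgrade       # download the diffs and stage the new tree
    sudo systemctl reboot          # boot into the new deployment
    sudo atomic host rollback      # if unhappy: re-stage the previous one, then reboot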
If you're unhappy, you do an atomic host rollback and then reboot, and for each of those operations it will tell you which packages are changing. Just to wrap up: container hosts are good for container orchestration — that's what they're designed for. They're small, single-service hosts. They're fairly small: the CoreOS image is about 200 MB, and the Atomic image is currently around 400 MB. The tradeoff there is that with CoreOS you have to download that 200 MB every single time you update, whereas with Atomic you load the entire image onto the box once and then your updates are much smaller — your long-term network traffic is going to be much lower with Atomic than with CoreOS. They're not so great for complex and monolithic apps, and that's true of containers in general: you really want to decompose your applications so that they fit into containers. It does mean that sysadmins need to adapt and developers need to adapt. People need to start building system containers, and then you need to plan how you're going to do updates, and then automate, and then let the automation do its job. That's what I've got for today. Thanks, everybody. Thanks, Diane, for the opportunity to talk. It was a bit of a whirlwind, but I hope there was something there for everybody.

There definitely was, Mark. I think the interesting thing is that, for the first time for me, it really clarified the difference between CoreOS and Atomic in a real succinct way. Thank you very much for that, plus all the talk about the tools. You are opinionated, and I love that about you. It was also really good because most of the time, when I interact with folks, I'm interacting with developers who want the simplest tools. To get the sysadmin point of view on container hosts is really very helpful — to understand, once it's day two, as we were talking about before, how you really have to start maintaining and using all of this. It was really good.

That's really the gap I'm trying to fill, because it seems like people do rush straight to the shiny objects. And while I have an opinion, I'm perfectly happy to be swayed if people have other opinions or other pieces.

I think it's very useful, and really, I think it gave us the underpinnings of what I started out saying at the beginning: the underpinnings of what's running all of our wonderful microservices and cloud-native apps. And you're using the full hour, huh? I knew you would. That's why I said do it in half an hour — I knew you'd take an hour no matter what I said. (I really tried.) I know you did. I'm going to make a joke here, because you can tell you're a sysadmin type: you don't use any cute little animals, other than the GNU at the beginning, and you're using a font that's like Times Roman. It's a wonderful thing. (I use the defaults. I'm an Emacs guy.) You could call the drawings retro — you were using, God, I don't even remember the name of it — Dia, at one point. (I still like Dia.) So thank you very much for doing this. I think we're going to end up having some follow-ons on this topic, but I wanted to get it out there. So thank you very much for taking the time today, and those of you who listened in, thank you for your patience and your questions — which I think were all answered in the talk as soon as it got going. We'll do this again very soon. So thanks again, Mark. (Thanks again, Diane.)