Hi, I'm Sven. I've been working on Docker-based Linux operating systems for a while, and I thought I'd tell you a little bit about them. First question, of course: how many of you have not used Docker? One, two, really? Excellent. OK, I'm going to assume that you know how to use Docker.

So, years ago when Docker started, I began playing around with a thing called boot2docker, which had started as a prototype of Solomon's. I basically took it, added installers, added a management tool, and so on, and eventually got hired by Docker to continue working on that and on documentation and things. Then last year I joined Rancher Labs to do the same with RancherOS.

Now, historically speaking, there are a number of Linux micro distributions. By that I mean ISOs or installable distributions of Linux that are substantially reduced. boot2docker, for instance, I used to try to keep below 32MB, because I liked to boot it from an old SD card I had from a camera from 2001. boot2docker was kind of the first, though I think CoreOS actually happened before it. Basically, what Steve did was take Tiny Core Linux, which is another, more fully-featured micro distribution, and rip the guts out of it to make a tiny little live CD that you could run, and I think install, so that we could give people on Windows and on OS X a way to learn about Docker and containers and have access to Linux tools. Since then, that project was taken and converted into a thing called Docker Toolbox, where we added more things. And then Docker Machine came along, and libmachine has become a very important way to create virtual machines and hosts out there without needing to care about the details of AWS, Azure, Packet, DigitalOcean, all of those.
You basically just run docker-machine with whatever driver and its parameters and say, give me a machine — which is, in my opinion, an awesome legacy to have had, and I'm really glad that people like Garth and Nathan have made libmachine what it is.

Then last year, or the year before, Docker started working towards unikernel things, and the unikernel team they brought in created a new distribution called Moby, which is used for Docker for Mac and Docker for Windows. It's based on Alpine Linux, which is another small, general-purpose distro, and then they've added things on top to link it all together.

After that, there's CoreOS, which I think is in many ways one of the most important developments, where the guys at CoreOS basically took ChromeOS — itself derived from Gentoo — and turned it into a very small, systemd-based container OS, and then did a whole series of really quite cool things. Unfortunately, I'm not a fan of systemd, but that's one of those weird little things. And then there's Red Hat with what's called Project Atomic, where they're re-spinning Fedora, RHEL, and CentOS to be smaller and more container-centric.

The idea behind all of these operating systems is that you don't install your application workloads directly into the file system of your host operating system. What you do instead is leverage the container concept: you take the container file system and run it in its own little space, and with Docker containers you're obviously using Linux kernel namespaces and all of those sorts of things to constrain those applications. Then along comes Clear Linux, where Intel said: that's really cool, but because you're sharing the kernel, there are ways to break out of those containers. Clear Linux adds lightweight KVM virtualization, so your containers actually end up inside a micro virtual machine.
And to me, that's one of those security trade-offs: if you trust your container's contents because you developed it, and you're not worried about people breaking in or breaking out, you don't need that. But if you have a front-end web server and you know that at some stage somebody's going to attack it, you can throw it inside a KVM, and that just adds to the difficulty of breaking in.

And then there's RancherOS, where Darren and the guys who started it threw out everything and said: what we need is a Linux kernel and a tiny little Go application that starts containers, and that's it. One of the cool things they did was choose cloud-config files to drive that init process, which then starts each of the services — and each of those services is a container. I'm going to go through and show a few examples. I've just kind of covered this: the idea behind RancherOS — and, in my opinion, the direction all the other micro OSes are heading — is that instead of creating a general-purpose Linux distribution, we create a micro distribution that does almost nothing and then delegate all of the services we want to containers.

One of the things that puzzles me is how often people want to be able to SSH into their machines. It's really useful for debugging, but as far as configuration management goes, it then becomes almost impossible to know what state your computer is in. So I'm trying to get to a point where there is no general way to get into your operating system, because it shouldn't matter. What that of course means is that we need to develop really good logging tools and ways of inspecting these hosts if something goes wrong — but if nothing goes wrong, we don't care; we just want the thing to run.

OK, so RancherOS at the moment is a 4.9.15 kernel (sadly, 4.9.16 came out last night), a tiny little bootstrap file system, and the Golang init.
So the bootstrap file system — there are a few things in it, and I'll show what they are — exists basically because we're not rewriting everything in Go yet. Everything is driven from cloud-init, where our cloud-init is not the same as Ubuntu's or the standard one: it adds Docker Compose service definitions so that we can specify which services to start. Our bootstrap file system is BusyBox, ros (which is our Golang tool), iptables, xz, and the config files. Probably the easiest way to show that, assuming my Wi-Fi works, is... there you go. Almost nothing in there. We've got BusyBox; we've got iptables because we haven't rewritten it yet; ros; xz to unarchive our init, I think — in fact, I'm not sure why we still have that there, it might be historical; a series of config files; and lastly a tarball containing the containers that we've saved off, or exported, which we're just going to load in and then start.

Probably the most important file here is this guy, because it's the cloud-init file that makes RancherOS do its thing. We start off here with a set of images — that is, the services. Can you see my cursor at all? All right, that'll be fun later. Anyway, if you know what Docker Compose files look like, you'll see these are relatively familiar. We've done a few little things like the io.rancher.os labels, which tell you where a service runs, what order it needs to run in, and that sort of thing.

OK, so that's that lovely little thing: system services. These are the containers that we start up by default, and they give us networking, time, access to devices, ACPI, that sort of thing. And let's see... there they are. If only the format was nicer. What you can see here is basically a syslog daemon, ACPI, udev, network, NTP, and a console. The console is a really simple little Buildroot BusyBox console with SSH running, which is why I can get in here. And that's it — this is the grand total of the operating system.
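To give a feel for what one of those service definitions looks like: it's Compose syntax plus the io.rancher.os labels. This is a sketch — the service name, image, and tag here are illustrative, not the exact contents of the shipped config:

```yaml
#cloud-config
rancher:
  services:
    # illustrative system service; image name and tag are placeholders
    ntp:
      image: rancher/os-ntp:v0.x
      labels:
        io.rancher.os.scope: system    # run under System Docker, not user Docker
        io.rancher.os.after: network   # ordering hint: start after networking is up
      net: host
      privileged: true
      restart: always
```

The labels are how the init process decides which engine a container belongs to and what order to bring things up in.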
Sadly, ps aux looks a little more hideous, because there are an awful lot of internal little commands running. But that's it — and that's all you really need for an operating system. In fact, it's probably more than you need, but paring things down is one of those fun, fraught-with-danger jobs. I removed some things in the last release, and one person out of a hundred finally had a problem, so we've got to figure out how to put some of that tooling back.

OK, so, user consoles. As I was saying, I've got a little BusyBox one by default. I'm just going to... our Wi-Fi is really slow... ros. So, I can actually choose to give RancherOS different personalities. I'm a Debian user, generally — although, as you can see, I'm on Windows at the moment — so I can switch to a Debian console. This is useful because it means I can now actually do my development in this console. Well, unfortunately, SSH is running in the console at the moment, so it restarts. There we go. And now I have the ability to install stuff, which means I can install make, git, golang and so on, grab my source, and rebuild — a wonderful test for a micro OS.

The other thing I can do, because it's all containerized, is change the version of Docker that I'm running. So we can run ros engine list, and there we go: I can run 1.10, 1.11, 1.12, 1.13, or 17.03, depending on what I need, what I need to be compatible with, or what my real system is running. And the same sort of thing applies with ros engine switch. I hit enter... and off it goes and does it. This is the beauty of being fully containerized: I can swap out, swap in, and upgrade each of these components at any time. So now I can go... hello, and there we go. For those of us who've been using Docker for a while, there's a whole stack of new commands, most of which I don't remember.

OK, and then there are miscellaneous system services.
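The console and engine switching above boils down to a handful of ros subcommands. A sketch of the session — the console name and engine version string are illustrative, so check ros console list and ros engine list on your own release for the real options:

```sh
# list the console personalities RancherOS can offer
sudo ros console list
# switch personality; this restarts the console container,
# so an SSH session into it will drop and need to reconnect
sudo ros console switch debian

# list the Docker engine versions this release knows how to run
sudo ros engine list
# swap the user Docker engine in place (version string is illustrative)
sudo ros engine switch docker-17.03.1-ce
```

Because both the console and the engine are just containers managed by System Docker, a "switch" is a container swap rather than a package upgrade.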
The core of this is that there's a stack of service YAML files that let us run whatever we need. One thing I completely forgot to mention is that RancherOS is actually running two container engines. There's the base one — System Docker, as we call it — which is what gets started by our ros init, and that then starts each of the system services. So I can run sudo system-docker ps, and that's where all our system services are. But then I also have the user Docker, which is where we expect most services to run, because the whole point is that you have a workload you want to run in Docker. These services here — ZFS, open-vm-tools, the kernel headers, for example — are still system services, because in a lot of cases we need them before there's a user Docker running.

And because these beasties are, again, YAML files, we can go off and make our own — and in fact we will. OK, I think I've kind of covered that. So, if we want to create our own services — which, in the end, if you're running RancherOS without something like Rancher server, you will — you can make these files yourself. Let's see... under rancher, a test.yaml. Does anybody remember how to write Compose files? Please tell me if you notice that I've typed something. Have we lost Wi-Fi by any chance? Because I'm typing and not much is happening. All right, just lag. Fine.

So what I've done here is make a simple little Docker Compose file, and then I should be able to run ros service enable. All right, I might just start my phone as a hotspot and switch over to it. OK. So I can go and enable that. What I should be getting is a service. I'm about to blame my computer if it gets any better. So: I made this Docker Compose file, out there in the cloud. Nope. Maybe I can get there. Let's see if it worked.
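The file typed in the demo is just a v1-style Compose service definition. A sketch from memory — the service name, image, and port are placeholders:

```yaml
# test.yaml — a minimal hand-written RancherOS user service (illustrative)
hello:
  image: nginx
  ports:
    - "80:80"
  restart: always
```

Roughly, sudo ros service enable with the path to the file registers it, and ros service up then starts it in the user Docker, since nothing marks it as a system-scoped service.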
So there we have our service, and I should now be able to enable it — I mean, run it. What's my IP address... who's going to help me kill my computer? Hope that's the public one. Please be the public one. There we go: I have nginx. That struck me as a lot harder than it should be.

OK, so I've made a really dumb, simple Compose file. Obviously in real life you'll be setting up all the other things you need to get your non-trivial application running, and all the dependencies between different services. Now, I don't really want to be sharing files like this one by one. So instead, what you can do is create a RancherOS service repository, and what that looks like is incredibly simple. I've got one here that I play with, and the important file is an index.yml that lists all... this is like working from home... voila. Basically, your index.yml lists the names of the services, and for the purposes of the demo I've made a simple nginx service; then, in a directory like that, there's effectively the same thing as what we just made by hand.

The trick then is to add this repository into our configuration. And now I should be able to run ros service... wow, clearly there's something wrong with my computer. Who's hacking my Windows box? All right. Is there any point? OK, that's boring. Anyway, once I have this extra repository, I can list the services and actually see an nginx service, and enable it — and it's going to fail to start unless I also run ros service stop on the old one. Now I can also go docker ps... let's see if we're here... docker ps -a. So there's that guy. And now I should be able to run ros service up nginx, and naturally enough it looks exactly the same, because I'm not much of a web programmer anymore. ros service logs nginx — so I can go and see the request that was made. Cool.
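The repository being flipped through is just static files served over HTTP: an index listing the service names, a Compose file per service, and then a couple of config keys to register the repo and include a service. The schema below is from memory and the URL is a placeholder, so treat the whole thing as a sketch:

```yaml
# index.yml at the root of the service repository — lists what's on offer
services:
  - nginx

# and in the machine's RancherOS config, roughly:
# rancher:
#   repositories:
#     demo:
#       url: https://example.com/os-services   # placeholder URL
#   services_include:
#     nginx: true                              # pull and enable the service
```

Once the repository is registered, the service shows up in ros service listings alongside the built-in ones.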
Now, the last awesome thing about this is that instead of doing all of this manually, I can just define a cloud-config file and create a new server with it. And to do that — knowing that this is a config, that's what this machine's internal config looks like — I should find... that's interesting. That's a cool bug. Wow. OK, I'm just going to fake it. From memory, what I should do here is... you'd like that... OK. So basically, I should be able to give my server on AWS, or on Packet in this case, or DigitalOcean, or whatever, a cloud-config that says: add my custom repository, include the service called nginx, and start it up. And that's it. When I hit create, I should get a new server that has RancherOS installed and is running my service. That's it. In most ways, to me, that's the really cool thing about RancherOS: we've basically said we don't care about the underlying OS, we just want to run this service, and that's it.

And that pretty much concludes the talk. Let's see... yep. All right, has anybody got any questions? Thank you. Yeah, anybody have any questions about RancherOS? [Audience question about SELinux.] Excellent. Well, it's built in now with the latest kernel. It's in permissive mode, because we have to build our own policies — our OS is so different. So if you know SELinux, yes. If you're hoping that it'll just work like it does on Red Hat, no. And I think that's a common thread: all of the main container OSes have SELinux built in, for which we say thank you, Red Hat.

Thank you, Sven.