That's good. Someone gets to see my big head staring at the screen. OK, so how many people attended my previous talk on container security? Excellent. So this is the exact opposite. This is totally turning the security off. How many people here have played with Project Atomic, or Atomic Host? OK, if you're following the development of the operating system, what we're trying to do in the Atomic project is build a new way of using operating systems. And the goal here is to make an operating system where the only software you ever install on it is containers. In order to do that, we need to allow containers to have applications in them that might manage the host, or might manage other containers. So in that case, what we need to do is build a container platform, a way to run containers such that the container can break out of its containment. So I've been working in container technology for many, many years. Obviously, I've been the SELinux guy at Red Hat for 100 years. I've also been using namespaces all the way back to RHEL 5 and Fedora 5, which started using namespaces. So namespaces, and SELinux, and other types of confinement, capabilities, things like that, are all things that we're using to define this thing called a container. So for many, many years, we've been using this technology. But most people really didn't understand it until Docker came along. And the really interesting thing about Docker is not so much the containment. All that Docker is really doing is taking advantage of stuff that the Linux kernel provides. The really cool thing they did is the new packaging format. So what we're looking for, as we move forward, is that I believe people should start packaging high-level applications in the form of containers. Your low-level tools should still be built as RPMs and stuff like that. But as you get to higher-level application suites, you're going to package those up in containers.
And we want to basically make containers a new way of shipping software. But if we say we're going to do that, we want to build platforms that are really optimized for running containers. But again, we need more advanced applications. If the only way I'm going to ship software is inside of a container, and I have a host system that all it can do is run containers, then I have to handle these advanced system-management-type tools. So actually, this is a new way of doing containers. I guess in China they jumped the gun on container technology. So obviously, no one in this room should tell DC Comics that I'm stealing their logos, OK? So instead of just standard containers, we're talking super containers. So on Atomic Host, we don't have yum install. We want to make Atomic Host minimal. So everybody that plays with Atomic Host, the first time they get on there, they say, this is awesome, Atomic Host, it's a minimal install. But what I really need is this one package installed. I need strace. I need ping. I need man pages. Everybody gets on there and they get instantaneously frustrated because it's a minimal install. So everybody wants a minimal install plus just their additional packages. Our goal is to keep it as minimal as possible, and if you need extra stuff, you're going to install containers. So how do I administer a machine without strace, gdb, traceroute? Really, as an admin now, you've got to start thinking about the world differently. So this is a container platform. This machine, Project Atomic, is not your desktop machine. This is a pure server play, a pure cloud image environment. It's not something that you want to run a desktop on, although I think someone's building an Atomic-based desktop platform. But customers want to install their favorite tools on Atomic Host.
So the rule on the Atomic Host team right now is: if you want to get a piece of software into Atomic Host, you have to prove that you can't do it inside a container. We put some stuff in Atomic Host that we want to get out, because we made mistakes. For instance, Kubernetes. We want to get Kubernetes the hell out of Atomic Host and make that run as a container. My end goal with Atomic Host is that the only thing you're going to get is the kernel, systemd, journald, and Docker, OK? Getting rid of the SSH daemon is a little bit difficult. And then there are some other packages. Right now, for instance, rsyslog: if you want to run rsyslog, that comes in the form of a container. So really what we want to do is get that thing down as minimal as possible. So another problem is that I want to ship an application that will manage the host. What happens if I want to manage the host operating system, or have an application manage other containers? So we introduced, I think back before Christmas this year, the concept of super-privileged containers. Now, there is no such thing as a super-privileged container; I'm going to explain the concept. Anybody in here played with Docker at some point? So you know about --privileged. --privileged basically says, turn off all the security. It literally says, root in the container is root on the system. But what it doesn't do is get rid of namespaces. With the super-privileged container, we're going to get rid of the namespaces too, or the concept of them. So it's really just a concept: a way to run certain types of containers. SPCs will manipulate content on the host. SPCs can be used to manipulate other containers. The first step to running super-privileged containers: turn off the security. OK. So we go through that stuff. The first time I did this talk, that line actually ended up on Twitter.
Some people did go out and tweet it. So for a privileged container, you need to turn off the security. I'm going to go through a whole bunch of commands here. But basically, in Docker, you run with --privileged. That turns off SELinux, turns off the capability checks. And as I said, if you went to my previous talk, we talked about read-only kernel file systems, we talked about SELinux, we talked about capabilities, we talked about seccomp in the future. All that stuff, except the namespaces, gets turned off as soon as you do --privileged. But we still have the problem that you have namespace separation. So even though I run docker with --privileged, if I do a ps command, I only see my processes. I don't see all the other processes on the system. I'm still in my network namespace, so I still don't have access to the real network on the host. I don't have access to shared memory from host processes. I don't have other access to the host. So what Red Hat's been doing, working with Docker, is contributing all sorts of things to turn off namespaces. So you can do a docker run now with --net=host. That says to share the network namespace with the host operating system; really, it says don't create a network namespace. docker run --ipc=host says share the host IPC namespace. For instance, /dev/shm, message queues, and all that stuff is shared. docker run --pid=host means don't give me a PID namespace. So as soon as I do that one, I get to see all the processes on the system. So what I can do with these commands is get to the point where the only thing I'm using inside of my container is the mount namespace. So I have my own version of /usr, which I want, because if I'm going to ship my own software onto the system, I want to have my own operating system inside the container.
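The flags described here can be sketched as Docker invocations (the `fedora` image is just a placeholder, and these need a Docker host to actually run):

```shell
# Turn off SELinux and capability confinement, but keep all namespaces:
docker run -t -i --privileged fedora /bin/sh

# Additionally drop individual namespaces, one flag at a time:
docker run -t -i --privileged --net=host fedora /bin/sh   # use the host network
docker run -t -i --privileged --ipc=host fedora /bin/sh   # share host IPC (/dev/shm, message queues)
docker run -t -i --privileged --pid=host fedora /bin/sh   # see every process on the host
```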
But I need to get access to the host file systems. So we want to mount the host file systems into the container. Now, you don't have to do all of these to do super-privileged containers. You can do partial ones, but we're going to build up to the final command. If I just do a docker run -v /run:/run, that means bind mount the host's /run into the container. As soon as I do that, combined with --privileged, I'd be able to interact with systemd. I'd be able to interact with D-Bus. I'd be able to interact with Docker. So if I run a container like this, I can actually start and stop Docker containers, because anybody that can talk to /run/docker.sock can get access to Docker. If I run docker run -v /dev:/dev, that means share /dev into my container. One of the first super-privileged containers we built was libvirt. Everybody knows what libvirt is: it's the tool we use to launch virtual machines. We wanted to run libvirt on Atomic Host, but we didn't want to have to add all of libvirt into it. We wanted to run libvirt in a container, but it needed /dev so that it could go out and create device nodes and things like that. Actually, libvirt also required --pid=host and a few other additional features. But now you can run libvirt. And as a matter of fact, if you look at the Kolla project, K-O-L-L-A, that's an effort to containerize all of OpenStack inside of Docker containers, and a few of those require super-privileged containers. Sharing /dev allows a container process to communicate with the device system; I already covered that. So the granddaddy of them all is actually down here: docker run -v /:/host. That basically says mount the entire root file system into the container.
And since any mount points are under /, that's basically bind-mounting everything on the host into the container. And we've basically said we're going to standardize, within Project Atomic, that the way you do this is /host. Not only that, but we're also setting an environment variable: we say you should set $HOST in the container to point to /host. Then you can build your scripts so that if $HOST isn't set, they run on the native machine, and if $HOST is set, then they change; basically, they go into the subdirectory. I'm going to show you that in a minute. I'm cutting off the bottom of the screen, and even standing up here, I can't look down and see what that is. But I'm sure it's real interesting. OK. So it's best right now if I do a demo and show you all what happens when you do all this stuff. OK. So here's a standard Fedora container. I don't even have a ps command. But one of the interesting things inside containers is that we actually lie to SELinux inside of the container, telling it that SELinux is disabled even though SELinux is enabled. The reason we do that is to stop applications from trying to do SELinux activity. It's not for security; it's basically to stop tools from saying, oh, I'm on an SELinux-enabled system, I should try to do something, because it's going to be blocked. Obviously, I would never run a system with SELinux disabled. If I go over here on the host, you see it's enforcing. It's just lying inside the container. And if I do a cat of /proc/self/attr/current, that shows you the SELinux label. So yeah, I don't have a ps command, but that shows you the SELinux label. So this is a fully confined container. So now I'm going to do the same thing. Yep, a question: do you still use LXC, or does no one anymore? Yes. Do you still use an LXC label?
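Putting all of the pieces together, a full super-privileged container invocation looks roughly like this (a sketch; the image name is a placeholder, and it needs a Docker host to run):

```shell
docker run -t -i \
    --privileged \
    --net=host --ipc=host --pid=host \
    -v /run:/run \
    -v /dev:/dev \
    -v /:/host \
    -e HOST=/host \
    fedora /bin/sh

# Inside, a script can adapt to where it is running:
#   PREFIX=${HOST:-}     # empty on a native machine, /host inside an SPC
#   cat "$PREFIX/etc/os-release"
```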
LXC, well, let's do a little history of LXC. LXC means different things to different people. LXC, in your definition, means the LXC toolset developed by IBM; we have old LXC containers. LXC is also shorthand for Linux containers, so we use it to refer to those Linux container kernel features. And it could also mean libvirt-lxc, which is the implementation in libvirt that uses those Linux container features. So yeah. If I had a wayback machine, and I might eventually go back, I would call that label container_t, and we'd have all that crap together. But that name, this policy, was developed for libvirt running the old LXC, for the sandbox tools. OK. So now we're going to go into a container, and I'm going to do the same thing: cat /proc/self/attr/current. And you can see it's now running as spc_t. spc_t is basically an unconfined process. But I'm still in a container environment. So even though I'm privileged, I can't show it because I don't have ps in here, but believe me, there's a lot less: there are only two processes in there. So I basically still have no access to the host system. So now we start to add --pid=host, --net=host, and --ipc=host. And if I do that and look at the processes, all of a sudden I start to see lots and lots of processes on the system. I don't have commands like ip inside the container, but basically I'm now using the host network. So I've turned off all of the namespaces: the network namespace, the IPC namespace, and the PID namespace. So I now have full access to the host's namespaces, but I still don't have access to other parts of the operating system. But I can keep going further: -v /run:/run, -v /dev:/dev, and -v /:/host.
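The label check from the demo, in short (the output values are illustrative, not exact):

```shell
# In a normal confined container, the process runs with a confined label:
cat /proc/self/attr/current    # e.g. system_u:system_r:svirt_lxc_net_t:s0:c1,c2

# In a super-privileged container, it runs unconfined as spc_t:
cat /proc/self/attr/current    # e.g. system_u:system_r:spc_t:s0
```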
And so here I am inside of the container. Still no commands, but I can do a chroot /host. And suddenly I've got the ps command. Why? Because I'm on the host. If you read the Project Atomic blog, I actually blogged about the security problem of allowing ordinary users to talk to the Docker socket. But basically this just shows I totally broke out of containment. I'm now fully unconfined_t on the host and I can do anything I want. So it's kind of interesting. All this stuff is the idea of a super-privileged container. If I went back into the container and looked at /dev, I'd see those device nodes. And if I looked at /host/dev, I'd see the same ones. But I'd also notice that suddenly SELinux started to work. Because I'm in a super-privileged container, it now tells SELinux you are enabled, so you can start to see SELinux stuff. You can start to do relabeling and fix labels and anything you want. So let's go back to the presentation. So who wants to type all that stuff every single time you run a container? I certainly don't. One of the fundamental problems with Docker is that as we get to these more advanced environments, you start to get more and more complex command lines. So one of the things we've looked at, and I guess I'll continue the presentation rather than jumping ahead. When we looked at it, we wanted to build some new tooling to make this easier. But one of the things we talked about, and I don't know if there's a Fedora tools image out there (there probably should be, especially after what I'm about to say), is that when you get onto an Atomic Host, as I talked about earlier with strace and things like that, you really need access to these tools: strace, gdb, and the rest. We need sosreport. We don't even have man pages.
So you can't even say man docker to figure out how to run the Docker commands while you're on the host. But what we've added is a tools image. A tools image is actually a great big container image that brings all this stuff in. Really, it's sort of the admin shell; that's the way I would think about it. So we package up a whole big ton of what you would expect, and we bring it down to the host, and then you can go into the container in super-privileged mode. And all of a sudden, you get your man pages; you've got your other content. In order to do this, we wanted to introduce a new command. We didn't want to just continue working with the Docker command; we wanted to introduce a brand new command called atomic. And I'm going to explain some of the goals of atomic. But basically what you do is atomic run rhel-tools shell. And all of a sudden, it's going to basically do a docker pull, pull down the rhel-tools container, and you're going to be in super-privileged mode. I'll show you that in a second. So it allows you to run containers in super-privileged mode. And to run the rhel-tools image, if you do atomic run --spc rhel-tools /bin/sh, it will run that. When we first started out with this, it was nothing more than basically a huge alias that says, hey, if I execute this command, then put all this super-privileged stuff onto the container command line. I think we do some other weird stuff too, like making sure the local time inside the container matches the local time on the host. We also add some naming stuff. Something we also do by default with the atomic shell is: we pull the container down to the system, and then we leave that container around. So you can do a yum update inside this container, a yum install inside this container, add content. And every time you run the atomic command, you're going to re-enter that container.
So that container becomes sort of a permanent place where you can continue to do your updates on the system. We also wanted atomic to be the only command that you would need to execute to do management of the Atomic Host. Obviously, you can use Docker as well, but we wanted to also wrap our rpm-ostree commands. So if you play with Project Atomic, you have to do things like move to the next version and reboot the machine: atomic host upgrade, atomic host... I forget the name, which shows how often I use it. Basically, these are the tools to manage OSTree, to manage your host operating system, to move it up and back between versions. So you can do atomic host upgrade, atomic host rollback, and atomic host status to basically switch back and forth between different versions of the Atomic Host. So: the premise is, my application is nicely rolled into a container. How do I tell you how to use it or run it? This is one of the fundamental things we found wrong with Docker images. We want the Docker container image, or now it's called the Open Container image format, to be the default way that everybody ships applications. The problem right now is everybody has to ship an image plus a description on a page somewhere that describes how to install the image, how to run the image, what the command line is, what it needs. So you have to go to two different sites to get information about how to run an application. If I build a big application like FreeIPA or a big application like OpenStack, you have to go out to these random sites and say, oh, you want to download this, this, and this. The developer of the tool cannot build something into the container image that basically instructs how to run his application. Another way to look at it is how we play with RPM: right now, Docker does not have anything like RPM's post-install scripts. So we wanted to build the concept, a way to do a post-install, or install, of an application.
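The host-management subcommands just mentioned, as a quick reference (these only do anything on an Atomic Host):

```shell
atomic host upgrade    # pull down and deploy the newest OSTree version
atomic host rollback   # flip back to the previously booted version
atomic host status     # list the deployed versions and which one is booted
```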
Say my application runs mostly confined, but needs one additional privilege. FreeIPA was building an NTP daemon container, and it ran totally locked down; the only thing it needed was SYS_TIME, because ntpd has to change the system time. It could run with everything else locked down, but it needed to do that. If you just put out an ntpd container and people download it and run it, it's going to blow up out of the box. Any time you do a plain docker run on it, it's going to blow up unless you run with --cap-add SYS_TIME. So as a developer, I wanted to somehow specify in the container the way you run this application. So we worked a long time (this patch took ridiculously long to get in) on a way to add image metadata to the JSON file associated with the Open Container image. We got Docker to finally add a LABEL patch, and developers can add content to the image's JSON data. One of the fields we've added is the label RUN. So what you can do is basically put into your image's metadata: docker run -d --cap-add SYS_TIME, and then we put the keyword IMAGE. What the atomic command does when it sees IMAGE like this is substitute the image that you run. So if you say atomic run ntpd, it will change the IMAGE word there to say ntpd. We've added a few extra fields since then, and actually now it's $IMAGE, but you get the idea. So now, if the ntpd container is built with this and you run it with the atomic run command, it will pull the image down, go into the JSON file of the image, figure out what the RUN label command is, and then execute that RUN label command. This container will work perfectly fine with plain Docker too: you can go in, pull the image, look at the label, and it will tell you how to run it, but you would have to run it manually.
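A minimal sketch of the substitution the atomic command does, in plain shell with no Docker required. The label string mirrors the ntpd example above; in a real image it would come out of the image's JSON via docker inspect:

```shell
#!/bin/sh
# RUN label as it would appear in the image metadata:
LABEL_RUN='docker run -d --cap-add SYS_TIME IMAGE'
IMAGE=ntpd

# `atomic run ntpd` conceptually replaces the IMAGE keyword
# with the real image name before executing the command:
CMD=$(printf '%s\n' "$LABEL_RUN" | sed "s/IMAGE/$IMAGE/")
printf '%s\n' "$CMD"    # docker run -d --cap-add SYS_TIME ntpd
```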
All we've done is wrap the ability to look inside the container to figure out how it's going to run. So let's look at containers differently. A container image is a new way of shipping applications. We want to look at images as a software delivery mechanism, like I talked about earlier. I packaged up my JBoss application in a Docker image, moved it to a repository, and then what? At that point, that's where everybody stops. It's like, okay, here's my image; go to my website and it'll have 15 pages of description of how to install it. Anybody ever look at installing OpenStack? It is a freaking disaster. It's page after page after page of things you have to do. Yeah, well, there have been like 10 different installation procedures for it, and Packstack has gotten easier. But if you look at Kolla now and what Kolla's going to become, it's going to be a single command. What I want it to be is atomic install openstack. And then it comes up and says: how many instances of Glance do you need? How many of this do you need? And it'll just go out to Kubernetes and configure the whole thing. Now, I don't want to build that; I just like to come up with ideas. I want someone else to do it. Wouldn't it be great to do that? Yeah, yes. I'm all about Tom Sawyer and the fence. Okay, so how does the customer install it? How do I configure it? Where does the install script go? We want to embed the installation procedure within the container image. So we also added a label called INSTALL. If you do an atomic install, it will read this line, and it can do a docker run with --privileged and a few of the SPC options and actually run an install script from inside the container that knows about /host. It'll actually put out the systemd unit file to set it up to run the container, go out and write all sorts of configuration data, prompt the user for things like that.
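A sketch of what such a Dockerfile might look like. The label names follow the talk's examples, and install.sh/uninstall.sh are hypothetical scripts shipped in the image:

```dockerfile
FROM rhel7
# How to run the application (IMAGE is substituted by the atomic command):
LABEL RUN="docker run -d --cap-add SYS_TIME IMAGE"
# Install/uninstall need privilege and access to the host at /host:
LABEL INSTALL="docker run --rm --privileged -v /:/host IMAGE /usr/bin/install.sh"
LABEL UNINSTALL="docker run --rm --privileged -v /:/host IMAGE /usr/bin/uninstall.sh"
ADD install.sh uninstall.sh /usr/bin/
```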
So I'm not building any high-level tooling here. All we're saying is: this is the command that I would otherwise instruct the user to type, and I can embed my installation script right into the container image. And then obviously I need an uninstall for when I remove the application. This does not mean the application has to be an SPC application. It just means that the install and uninstall require privilege, just like when you run an RPM install of Apache it goes out and does privileged stuff on the system, but when you run Apache itself it doesn't require privilege, or Firefox or something like that. So the idea is we can build in super-privileged container activity for install and uninstall, and you don't necessarily need super-privileged container functionality for running the application. We also came out with the concept of meta container images. So imagine something really complex like FreeIPA, where it has 10 different services running. We want to get to this microservices environment where FreeIPA is made up of Kerberos, LDAP, a certificate manager, and probably two or three other applications. We really don't want to go to an install page again. Right now FreeIPA has a full install script. So do I throw out the install script and rewrite it to use containers, or can I build it directly into the container? I envision that we will have an atomic install freeipa, and all that's in the FreeIPA image is the install procedure, right? It comes down and it's going to prompt the user: how do I do certain things? How do I install it? And then the install procedure can go out and figure out: is Kubernetes available? Okay, I need a replica server; Kerberos slave servers are running on these hosts; I need all that stuff. And then, I've already mentioned OpenStack.
So there's been a big effort to make all of OpenStack inside of Kolla have these labels so that we can start to automate the installation. Right now Kolla is actually made up of 10 different containers: Glance, Nova, libvirt, whatever all those tools are. But we want to get to the point where, instead of having to go out and get the Packstack script, you don't have to pull down anything onto the machine; all you have to say is atomic install openstack. And it's going to download the meta container image, and that meta container image is going to start to prompt you to install stuff. I'm not going to be covering Nulecule, but Nulecule's goal is to get to a new way of defining these applications, in an atomic manner, inside the image. So what might end up inside these meta containers is a Nulecule specification. So imagine you've got Nulecule, which my team is now trying to rewrite in Go as a static binary. So Nulecule becomes, it's not RHEL, it's not Fedora, it's not anything; it's a Nulecule application. It comes down with the definition of how to install an application, and then that thing goes out and pulls down additional containers onto the system, and tells Kubernetes to go out and pull down containers onto the system. So you start to build it out. Again, you're building an application, a distributed application suite, here, and we're trying to make it as simple as humanly possible for an administrator to do this. How do you pronounce that OpenStack product? Kolla, K-O-L-L-A. Yeah, so in Boston we would say cholera, okay. We would say tonic. Only old people in Boston would say tonic; we've switched to soda. People don't understand: in Boston, when a word ends in E-R, you make it sound like A, and when it ends in A, you make it sound like E-R. So Docker becomes docka, and soda becomes soder.
So now I'm gonna demonstrate Project Atomic, and I don't think I'm prepared for this, but yeah, I think that's it. Questions? Let's see if I can do this. Okay, so hopefully I'm not gonna screw it up. So here we have, one of the things I can do here is actually create my container with all sorts of labels in it. This is a standard Dockerfile; there's no special package that we need yet. We also use labels for identifying things. There's no information in a container right now to tell you what to name the application; it's not built into anything in the JSON file. So now we've added the ability for us to say: it's Apache, it's version 1.0, it's release whatever. Hey, the vendor is Red Hat; here's the license. Well, we can basically put any string we want in here. We have a document that we're working on with CoreOS and other people to sort of standardize some of these basic labels, our primary labels. So what this is theoretically gonna do, if this works, and I didn't test it beforehand, is install an Apache server onto the system, and it should set it up with a systemd unit file. Let's see. So I do a docker build and it goes through the stages. Oh, as this builds, anybody have any questions? We'll jump ahead to the question section. Nothing? You guys all think this is awesome? So yeah, I was talking to you about the issues I ran into with Docker build, where in the one case, the issue is they finally set the ID in the build phase. Yeah. There's no --privileged for the build phase. Right. Yeah. Docker right now refuses to screw around with Docker build anymore. There is a new tool called Docker RAM that's in the experimental stage right now. This is why you don't do these live. All right, we're gonna leave this as an example.
Yeah, it probably needs to be changed to dnf, I don't know. No, I don't think dnf is going to work either. Is it a third-party thing? Yeah, I think this is related to the network. Anyways, trust me, it works. And as a matter of fact, if you go out and do an atomic pull of rhel-tools, you'll see all this stuff. You can go back and play with the atomic commands; it's really kind of cool. And what we're looking to do is add additional commands. What I talked about earlier was going to be atomic scan: atomic scan will pull down a scanning container, run it on the system, and actually scan all the containers running on the system. So we want to start to build that out, but I can actually show you some others. So if I look at atomic: right now we have atomic host, which I explained. atomic info basically shows you the labels. atomic install, I explained that. atomic images lists all the images. One of the interesting things here: when we get to scanning, we want to be able to mount a Docker container without running it. Right now Docker has no way of doing that. So what we've done is we've built tooling to be able to mount up images without actually running containers, so that we can examine the images to see what kind of content they have. The others are all obvious. We also added upload. Upload is probably going to get changed to push, because it does pretty much what docker push does, except it also allows you to push to a Satellite server, and it allows you to push to Pulp servers. So we want to be able to use something other than the Docker registry for distributing containers. atomic verify is kind of an interesting tool, in that it looks at the labels. Imagine you have a container on your system that's MongoDB based on top of RHEL 7, or MongoDB based on top of Fedora.
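For example, examining an image's content without running it might look like this (a sketch; rhel7/rhel-tools is just an example image name):

```shell
atomic mount rhel7/rhel-tools /mnt   # mount the image's file system read-only
ls /mnt/usr/bin                      # inspect its content without running it
atomic umount /mnt
```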
If Fedora starts taking advantage of these labels, and I think we have to take advantage of these labels, we want to start incrementing the releases. So we'd have Fedora 23 release 1, Fedora 23 release 2, Fedora 23 release 3. Now I build an application on top of Fedora 23 release 2, and I have a whole application suite on my machine. What atomic verify will do is look at the application I have installed on my host and say: oh, you've got an application based on Fedora 23.1; I notice that there's a Fedora 23.7 available, and you might want to rebuild the Docker container. It doesn't necessarily tell you about security vulnerabilities; it just tells you that Fedora has, like, five new versions of the base image your application is built on, so you might want to think about rebuilding it. So it'll tell you that. atomic scan is the one that's going to come in and actually do a security scan, to look at the container to see if you're susceptible to any CVEs, or you have bad configuration, your /etc/passwd files. So those are all the commands that are available for atomic. So, I jumped over the Docker build problem. Yeah, Docker build right now does not support any type of --privileged activity. So if you're inside of a container build and you're trying to do special things on the file system; I don't know if this is a good workaround, but you can build a container using docker commit. So you can build SPCs that way. Well, let me give you a couple of SPCs that right now don't work well; I'm going to work on fixing them. What happens if you have an SPC that's going to load a kernel module? We can do that right now with an SPC. But what happens if that kernel module is required to configure the network that Docker needs in order to run? Obviously I'd have to install that container before Docker is run.
Docker, the corporation, wants the Docker daemon to be central to the existence of containers. I think that's a colossal mistake. So what we're trying to do is work with Docker to break apart the pieces that Docker containers are made of. Two of the announcements earlier this summer were around the open container format, so that Docker no longer controls the container format. That was mainly to get CoreOS and Docker, and anybody else, to agree. We don't want to end up with RPM and deb all over again: two different formats that everybody in the industry has to fight about. We want everybody to consolidate on one container format so we don't have that distraction. And if that's going to happen, it can't be under the control of one company. So the open container format effort now sits under the Linux Foundation; I think Red Hat has a couple of people on it, Docker has a couple of people on it, CoreOS has a couple of people on it. They're specifying what the container image format is. Another thing Docker spun off is a tool called runc. runc is a little tool that allows you to run containers. But what they didn't split off is the thing I call graphc, the graph driver. When I install a container image, it's stored on something like OverlayFS, or device mapper, or Btrfs. The component that handles where that content gets installed is called the graph driver. So right now there's no way to take a container image and mount it up so runc can run it, other than going through the Docker daemon. But we're working really hard right now on a proof of concept of the graph driver as a separate program.
If we can get the graph driver out of Docker, then we could build a systemd unit file that executes the graph driver to mount the container, and then uses runc to run it, and do a super privileged container without ever having to touch the Docker daemon. We could do it really early in the boot process. So that's one of our goals going forward: to get graphc into the open. And I think we have a good chance, whether or not Docker likes the idea as much as I do. Then what we slowly want to do is get the Docker daemon to the point where all these low-level libraries and low-level function calls are separated out into their own processes. Now, we're also trying to get a lot of systemd controls into Docker. And one of the big problems, to my mind, is that Docker doesn't like systemd. I'm actually going to the systemd conference to give a talk called Docker versus systemd. I'm going to present both sides; I'm in the middle of the two sides trying to keep the peace. They don't get along very well. [Audience] That's one thing I ran into: Docker is just, you know, one process per container. [Dan] Right, one service per container. I hate when people say one process; I bet your container runs five processes. The idea of microservices is one service per container, and that should be everybody's goal. But the problem is everybody's not going to get there very quickly. And for everybody running multi-service containers right now, the problem is there's not a good tool for doing that. There's a tool called supervisord that's been out there for running multiple services in a container. I hate it; it's Python. There is a great tool for this job: it's called systemd, because that's exactly its job.
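To make the goal concrete, here is a hypothetical sketch of the kind of unit file a standalone graph driver would enable. Note the `graphc` binary does not exist yet; it, the paths, and the runc invocation are all assumptions about how the pieces could fit together:

```ini
# /etc/systemd/system/myapp-container.service
# Hypothetical: "graphc" mounts the image's layers, runc runs the container,
# so the service is a direct child of systemd -- no Docker daemon involved.
[Unit]
Description=My containerized service (runc, no daemon)
After=local-fs.target

[Service]
ExecStartPre=/usr/bin/graphc mount myapp /var/lib/containers/myapp
ExecStart=/usr/bin/runc run --bundle /var/lib/containers/myapp myapp
ExecStopPost=/usr/bin/graphc unmount /var/lib/containers/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because ExecStart is the container itself, systemd's resource controls and dependency ordering would apply directly to it, and it could start before the Docker daemon, which is exactly the kernel-module SPC case described earlier.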
The problem is that a bunch of distributions did not adopt systemd early enough, before Docker became Docker. So now they have the excuse that systemd isn't everywhere. I'm saying it's going to be everywhere; well, it isn't everywhere yet. So what I've been trying to do is get Docker to adopt systemd, or at least adopt the functions needed to make systemd work well inside a container. And there's a considerable battle going on there; I'm trying really hard not to blog too heavily about that fight, so I probably shouldn't say much more. [Audience] I've worked on desktop technologies in Fedora, and of course on VDI, and there's lots of interest in using VDI with containers; there are already commercial companies in that space. Is anybody using Docker yet for desktop environment systems? [Dan] I know there's an atomic desktop effort going on. Have you looked at what Alex Larsson is doing with xdg-app? [Audience] I was just going to ask you about that; the sandboxed app stuff. [Dan] Alex and I actually worked together originally, and he went off to work on that. That spec does not use Docker; it does not use the open container format. He's using a different format; he's actually building it on OSTree. So it's kind of a strange world. xdg-app is a container format being developed to run desktop applications in individual containers: sandboxed applications. Take a look at it; it's kind of neat. Sometime in the future you might be running Firefox and OpenOffice and other tools all inside of containers. In my glorious vision of the world I'd be running Android apps inside of my browser. I don't know if that's really possible, but it would be really cool.
[Audience] Chrome's sandboxing is probably the best-known example of that in an app, and the latest Firefox has similar abilities. [Dan] What xdg-app will allow you to do is basically this: when you click Open inside Firefox, instead of Firefox actually going out and opening files on the disk, it sends a signal to the desktop saying, hey, I need a file. The desktop then opens the file browser, you go select a file, and the desktop hands that file into Firefox. So Firefox has no access to anything other than content inside its container, plus whatever the desktop sticks into the container for it. When you want to save content, you go through the file browser again; think about applications working like that. [Audience] I'm very interested in xdg-app because of the desktop apps, but also the other use cases, you know, virtual desktops, which have to carry multiple processes. [Dan] Right, and it's not just multiple processes. About running systemd inside a container: there are about five fundamental things that are broken. If you look at my patch for running systemd under docker run, it basically fixes those five fundamental things, so you can run systemd inside Docker right now. [Audience] What kinds of things? And is Docker accepting your patch? [Dan] Docker basically wants me to do it a totally different way. They want a command line that looks like an SPC-style command, but there are some fundamental problems with doing it that way. Docker right now is trying to implement everything required to run systemd as individual command-line options. For instance, to run systemd inside a container, /run has to be a tmpfs.
You have to have a container_uuid environment variable inside the container, specifying the UUID of the container, so systemd will create its machine-id file. You need to have /var/log/journal/UUID mounted into the container, because you want to see the container's journal from outside. And you want to register machines, so you have to have machinectl register the machine, so that journald and the journal tooling can find it. So there are about five things that have to be done. The problem is it's chicken-and-egg: the container UUID is created by Docker, but systemd needs that value inside the container in order to set everything up, and right now Docker is setting everything else up. It's that kind of thing. [Audience] The main question, though, is: say Docker just isn't going to take this from your company; will you keep working with them? [Dan] In some sense our response is: if they won't take it, then we'll slowly work to make Docker less central than it's expected to be. That's one reason for doing all this. The funny thing is, libcontainer, which they spun out, is an open source project now; it's still controlled by Docker, but there's more outside involvement, and Docker needed that. One of the other big problems with Docker, even today, is that it's client-server. If you manage a systemd service whose unit file runs docker run, which is what most people do right now, that docker run process is not the parent of the container. The container is a direct child of the Docker daemon; only the client program is a direct child of systemd, and it just communicates with the daemon. So if you go into the systemd unit file and say, I want to put resource limits on this service, they land on the client, not on the container. It doesn't work: systemd's child is the Docker client, not the container run by the Docker daemon.
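The requirements listed above can be sketched as a single docker run invocation. This is an illustrative reconstruction from the talk, not a guaranteed-working recipe: flag availability varies by Docker version, and the UUID value and image name are placeholders:

```shell
docker run -d --name mysystemd \
    --tmpfs /run --tmpfs /tmp \
    -e container=docker \
    -e container_uuid=0123456789abcdef0123456789abcdef \
    -v /var/log/journal:/var/log/journal \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --stop-signal=SIGRTMIN+3 \
    my-systemd-image /usr/lib/systemd/systemd

# --tmpfs /run          : systemd requires /run to be a tmpfs
# container_uuid        : lets systemd create /etc/machine-id; the UUID really
#                         comes from Docker, which is the chicken-and-egg problem
# /var/log/journal bind : makes the container's journal visible on the host
# SIGRTMIN+3            : systemd's clean-shutdown signal
# machinectl registration (so the journal tools can find the machine)
# still has to happen on the host side and is not shown here
```

The point of a dedicated systemd mode in Docker would be to collapse all of these individual options into one coherent behavior.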
The Docker daemon doesn't pass those settings down to the container. There are lots of problems with using a client-server application this way, and we've been working on these problems for a year and a half now. And I'm showing too much frustration. So the atomic tool is basically our answer. Right now it's available in Fedora, RHEL, and CentOS; you can use the atomic tool everywhere. It's not only for Project Atomic or Atomic Host: you can yum install atomic anywhere, and it'll run just fine on Fedora. Anything else? That's good. Make sure you go out and get the coloring books if you haven't gotten one. Thanks for coming. Oh, I think I've signed up to do SPC training on Saturday, so if you want to come and play, and you can get connectivity, you can actually try to set some up. If you want to play with Atomic, you can do it there. [Audience] Do you sign the coloring books? [Dan] I will sign the coloring books; you'll need a sharpie. I already marked up someone's copy earlier, so if you want a collector's item, that's probably worth at least $0.15. I'm glad you think it's worth a lot. Thanks for coming.