called the Sunlight Foundation. I do government transparency and accountability — who's paying who how much, who's voting which way, stuff like that. I've been doing Debian stuff for a while now, since about 2009. I'm on the FTP team and whatever else; I've got other stuff that I'm doing, but that's not really as important. Oh, great, it loaded. Cool. So that is a giant beautiful picture of the Sunlight logo. And yeah, this is just the generic intro to any of my slide decks, so I apologize, but if anyone's interested in Sunlight, feel free to talk to me. I will gladly tell you all about how awesome Sunlight is and how much fun I have working there. Right, so Docker. What is Docker? This is sort of the existential question. No one quite knows, right? Everyone is using Docker for all these different things, and it's kind of super confusing, and that's really disappointing. Basically, Docker is a process-level isolation framework. That's all it is. It uses the Linux kernel's namespacing to isolate a process — a single process — and tracks the processes inside the container using cgroups. Also, I forgot to mention this, because I dove right in: this talk is going to be on the short side, because I'm hoping that we're going to have a bit of discussion about Docker's role in Debian, in what ways we as the Docker maintenance team can help Debian, and in what ways that can flow back and forth. Also, I do not work for Docker. I'm just a Debian hacker; I do this for fun. So I will cover some of the cons that maybe people don't talk about as much. Right. So Docker provides a whole bunch of tools used to manage and wrap these processes. For instance, docker run, which lets you run code inside a container. And remember, it's for a single process. It's not a virtual machine. You just spawn it up and it wraps a process and keeps it semi-isolated so that it runs properly.
Or stuff to pull images, if you have images in a central location on the index. So if you docker pull paultag/postgres, you get my particular Postgres flavor. Docker is higher level than LXC, but lower level than something like Ansible, Vagrant or Fabric. So Docker provides these primitives to work on the system — things that let you run processes in a sane and normal way — but it's not there to solve all the configuration management problems, and there's definitely configuration management to do once Docker is installed on your system. So one technique that I use is that all my containers are read-only, and any data that changes lives outside the container. For instance with Postgres, the live Postgres data is volume-mounted — which is sort of like a bind mount out of the container — and it lives on the host system in /srv. Then I can just snapshot that and keep it backed up. And I often use Ansible to provision the stuff that's in /srv. I won't provision anything inside the container, because it's only running a single process — you can't SSH in there — but using something like Ansible, Vagrant or Fabric, you can coordinate a whole bunch of Docker containers to do some stuff that's pretty powerful. Originally Docker wrapped LXC — just to give you a sense of where it sits in the stack — but that ended up getting reimplemented in plain Go, so it no longer uses the LXC back end by default. Can you still turn that on? You can, but don't: it'll probably end up breaking stuff in a nasty way, with a whole bunch of incompatibilities after a couple of versions. So yeah, basically Docker sits slightly above LXC, but not quite at the level of Vagrant or anything like that. Docker is currently in jessie — Docker 1.0, which upstream assured me was stable, but they've never released security or patch fixes for it.
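As a rough sketch of that read-only-container pattern — the image name and the paths here are illustrative guesses, not exact commands from the talk:

```shell
# Pull the image from the index (image name is hypothetical).
docker pull paultag/postgres

# Run a single Postgres process, with its mutable data volume-mounted
# from /srv on the host (-v host_dir:container_dir). Everything the
# daemon writes lands in /srv, where it can be snapshotted and backed up;
# the container itself stays ephemeral and replaceable.
docker run -d --name postgres \
    -v /srv/docker/postgres:/var/lib/postgresql \
    paultag/postgres
```

The point of the split is that you can trash and recreate the container at will without touching the data under /srv.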
So we're probably going to upload 1.2.0 pretty soon, because there was a bug with Golang 1.3 that affected us until then. I'm waiting on a package in the NEW queue, ironically. Yeah, that's embarrassing. But Docker is not a virtual machine, and I cannot beat this point home enough. It should be a single process. If you start stretching that, weird stuff is going to start to happen and you're going to be in for kind of a bad time. Some people use supervisord or whatever else to manage a whole bunch of different stuff, and if you're careful, that's fine — if you know what you're doing, fine. But in general, if you're just trying to Dockerize something: single process per container. So I have a Postgres container and then a web app container, and they're linked so that they can talk to each other. That's usually the architecture of standard, by-the-book deployments. It's not one process for the entire application. So it's not like I docker run — I don't know, what do people run these days? Etherpad, whatever; that might be kind of outdated. Yeah, that was the question. So the question was: single process, question mark? And the answer is yes, a single — PID. Sorry, no, that's actually wrong. The Docker instance should be starting a single PID, but that can spawn other things perfectly fine. If, for instance, you have uWSGI and it has a whole bunch of workers, spawning up a whole bunch of workers is totally reasonable, if that's how it operates. But not something like Etherpad where you're putting the database and the application in the same container. One container per logical process, roughly. Right. And Docker is not perfect isolation from the host. The goal is to isolate processes, not prevent exploits. And the docker group is root-equivalent.
So if you're part of the docker group and you can launch Docker containers, it is trivial to get root on the host, because you can just start a new container with the root of the filesystem mounted inside the container, which you can chroot into and then be root. So don't think of this as a one-size-fits-all security system. This is just providing basic wrapping around the process to make sure it's running in an environment you can nail down. Basically, this lets my unstable server run Postgres from stable, while the web apps that I'm tightly controlling — because they're all running on Python 3.4 — are in unstable containers. So I can have different environments for different daemons, which is kind of neat. So: why? Which is kind of a bigger question. Why are we spending all of our time wrapping all this stuff in Docker containers? And that's a good question. Basically, it lets you abstract the idea of a process and not care about the host environment too much. So when I test something locally on my machine and deploy it to one of my VPSes, I can be pretty sure that that process is going to run in roughly the same way. Obviously, there might be differences in the kernel — if there are, okay, fine, that's going to cause some problems. But basically, it lets you contain and abstract these processes, and it lets me trivially move stuff between servers, or test on my local machine and reproduce the environment in a very lightweight way. Contrast that with a virtual machine, where you have the overhead of an entire operating system — you're virtualizing the whole OS and all of the daemons included with it — and that's not necessary in a lot of situations. So yeah, essentially, it doesn't matter what host the container runs on. That's a typo on the slide — yeah, "container."
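The root-equivalence point can be made concrete with a one-liner along these lines (a sketch of the class of attack being described, not a command from the slides):

```shell
# Anyone in the docker group can do this: bind-mount the host's entire
# filesystem into a throwaway container, then chroot into it. The shell
# you land in is effectively a root shell on the host.
docker run --rm -it -v /:/mnt debian chroot /mnt /bin/bash
```

This is why docker-group membership should be treated exactly like sudo access.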
I definitely wrote these slides kind of late, so I apologize. So essentially, this means I can deploy my stuff on whatever host I'm given, because I'm cheap and I really don't like paying for servers. If someone decides they want to give me access to a Fedora host, I can host my stuff inside stable containers and not stress about it too much. There's a little bit you have to worry about, but essentially this lets me run stable Postgres alongside Python 3.4, play around with code, isolate it, move it around, that sort of thing. The comparison that upstream makes a lot — it's where the name comes from — is the ISO shipping containers that you see on trucks constantly: the big metal things. They're super cool; hipsters are living in them now. Basically, you can just put stuff in them, pack them full of whatever, and seal them up. It doesn't matter if you put one on a boat or on a truck; it's just going somewhere, and you don't really care. The comparison is that these are ISO containers for processes and computers, and the Docker host itself is like a big cargo ship full of ISO containers. So you can create hosts that run all of your code without really caring what's inside, because the containers all look the same to the host: they all have the same docker run interface, they all have the same docker pull interface. Inside the container, you're concerned with how you pack it, but the host doesn't care. And that ends up being pretty important. And for super complex, hard-to-set-up software, this can remove a lot of the complexity of the initial setup, because sometimes setting up processes like these can be extremely difficult, as I'm sure everyone here knows.
And so if you have some weird historic way of setting up an application, or it requires some weird configuration files that are mostly kind of standard, you can just make sure that's all in place. In fact, at work I've Dockerized a whole bunch of scrapers. A large part of my day job is scraping terrible government websites that have, like, three or four body tags. I'm not joking. We're all laughing, but this is my life. And the sites are all complicated — one of them times you out every five minutes; even if you're a human browsing it, it kicks you back to the main page. It's terrible. So a lot of the scraper infrastructure is kind of gnarly, and setting up the actual scrapers can be a bit of a pain. Making sure that those scrapers run the same way in development and production is super handy. And while it's easy for me to get them going — because I wrote a huge chunk of the code — it's not as accessible for other people. So one of the things I did recently, for all of the scraper infrastructure: I'm currently working on a daemon that will run the scrapers inside Docker containers. Essentially I've packaged up the particular scrapers that I have — state scrapers, which are state legislative scrapers — and nightly it'll do something like docker run paultag/scrapers us-state-alabama, and that'll go off to Alabama, scrape all the data down, and insert it into Postgres. This lets us build continuously from git: as soon as I push, it rebuilds the image, and that image gets used in the actual run later in the day. So it doesn't require mucking around with — I don't know if anyone here uses Bamboo, some non-free Atlassian thing, which is what we're using, which I guess we're still using — which essentially makes you rebuild an AMI, one of the Amazon images, every time you update the environment. Which is horrendous.
And it has a 30-minute timeout — you feel like Indiana Jones, with this wall that's coming down — and after the 30 minutes, it shuts the machine down because it thinks it's idle. So you have to make the change within 30 minutes; you're trying to get right under the wire, and then it shuts down, and you go, "God, do it again." Before that, we used Jenkins, which was good enough, but kind of a pain too — and with everything running in the same environment, it can be a bit of a pain. So by Dockerizing all this, essentially I can give anyone this scraper, and if they're interested in having the data, they can just docker run this command and everything just kind of works. It's like open gov in a box, which is pretty awesome. So I've been working on trying to Dockerize more and more of the periodic jobs that get run, and so far the results are really, really promising. I hope we're going to continue to develop Docker to the point where that becomes a better use case, because I think it's a really good one. Now for the fun part: my opinions. Docker can let you get away with murder. You can do some pretty gnarly stuff, and people do some pretty gnarly stuff. So I'm just going to brain-dump a couple of the things that I care about. For instance, I only run my Docker containers from systemd unit files — well, I actually use upstart on a couple of machines too. Essentially, they look like this; here's the unit file for one of them. Basically, it declares that this is for my nginx container. Right there, we do docker start nginx if the container already exists; otherwise, it does the setup of the actual Docker container: it volume-mounts things out of /srv, with the image, which is paultag/nginx, and the binary that it's running, /usr/sbin/nginx, with a couple of flags. The stop command is docker stop -t 5 nginx, which means terminate after five seconds.
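A unit file in the spirit of the one just described might look like this — the names, paths, and ports are reconstructed guesses, not the actual file from the slides:

```ini
[Unit]
Description=nginx (in a Docker container)
After=docker.service
Requires=docker.service

[Service]
# Create the container on first start: volume-mount the site config
# from the host's /srv, and run a single nginx process in the foreground.
# Image name and mount points are illustrative.
ExecStart=/usr/bin/docker run --name nginx \
    -p 80:80 \
    -v /srv/docker/nginx:/etc/nginx/sites-enabled \
    paultag/nginx /usr/sbin/nginx -g "daemon off;"
# -t 5: ask nginx to stop, kill it if it hasn't exited after five seconds.
ExecStop=/usr/bin/docker stop -t 5 nginx

[Install]
WantedBy=multi-user.target
```

The payoff is exactly what's described next: the containerized daemon is managed like any other system service.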
That's kind of a lot, and it's kind of ugly — I understand that, but that's okay. This basically lets nginx-in-Docker be treated like any other system-level process. It means that nginx inside Docker is treated identically to nearly everything else that I do, because I just do sudo service nginx restart. And what does it matter? It's just launching commands, and the commands happen to isolate it in Docker. And basically the same thing here for upstart — slightly cleaner, actually, which is awesome. Basically: start on filesystem and started docker, source a file to do some work, and launch essentially the same thing, same config. These are nearly identical. So I really don't like deploying Docker anywhere unless there's a startup script in place. I want all of my machines to be able to hard shut down in the middle of whatever they're doing, have all of my Docker containers disappear into the transient ether, and, when the machine starts up, have it come back up in a state in which I can use it. Having unit files and job files like this really saves you a lot. As for whether or not systemd will replace Docker, I have no idea — I'm sure the systemd people like to think that. So sudo service docker restart cycles the Docker containers. Are there any questions on that part in particular? Because I feel like I moved a little fast there. Asheesh, yes. "How did you put nginx into that Docker instance, and what is running in there? Debian something?" Yeah, totally — thanks, Asheesh. So essentially, let's see if I have my Dockerfiles around. Yes. Right, fonts — unfortunately, XFCE terminal does not let me use Ctrl-plus, which is disappointing. That should be pretty good. Okay, this is gigantic, but we should be able to do something. That's a little too big; this is a little better. I'm just going to go a little smaller. Sorry. All right, this will do.
So essentially, you declare what base image you start from. This can be any arbitrary image; I'm starting from the Debian unstable image. The Debian unstable image is maintained by Tianon upstream. It's roughly similar to what you get from a debootstrap. There are some differences; the differences are documented in the creation script that's also shipped with Docker itself, if you want to create the images yourself. The actual modifications — we've talked about moving them into a deb before, but nothing has really come of that yet. If anyone's interested in making sure the differences in the Debian Docker image are better documented, I'm sure Docker upstream — Tianon in particular — and myself would love to talk to you about how to make that possible. Thumbs up — so I haven't said anything entirely wrong. Great. So I'm saying FROM the Debian unstable image. The first part is the name of the image; the second part is a tag. Tags are very similar to git tags, except you're encouraged to change them often. A tag essentially points to a given layer, and you can use them for nearly whatever you want. MAINTAINER: a useless bit of metadata, not really important here. RUN means: when you're creating this image, run the following command — which is apt-get update and then apt-get install -y nginx, which actually installs nginx into the container we're currently building in. And then rm -rf all of the nginx site config, because that gets volume-mounted in from my filesystem — so when I configure a new app, I just drop a file into the host's /srv/docker/nginx, kick the container, and it sees it in its /etc/nginx/sites-enabled. And then CMD, which is the default command that's run if no arguments are given. There's also ENTRYPOINT. ENTRYPOINT is sort of like CMD, except a bit stricter, and it always gets put before the command arguments. Confusing these two can get... confusing.
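Putting the pieces just described together, the Dockerfile would be roughly like this — a reconstruction, with the maintainer line and nginx flags as placeholder assumptions:

```dockerfile
# Base everything on the upstream-maintained Debian unstable image.
FROM debian:unstable
MAINTAINER You <you@example.com>

# Executed at image-build time: install nginx into the image being built.
RUN apt-get update && apt-get install -y nginx

# Drop the shipped site configs; the real ones get volume-mounted from
# the host's /srv/docker/nginx into /etc/nginx/sites-enabled at run time.
RUN rm -rf /etc/nginx/sites-enabled/*

# Default command when a container starts with no arguments: a single
# foreground nginx process (no daemonizing inside the container).
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
```

Built with something like `docker build -t paultag/nginx .`, this yields the image the unit file then runs.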
So essentially it's this declarative style — sort of declarative — and it's powerful enough to do basically what you want. You can actually do this stuff manually in a container and then tag the resulting container, but it's generally good practice to use Dockerfiles so that you can create what are known as automated builds. They used to be called trusted builds, but that name was terrible. Automated builds are builds that are done on the Docker index routinely. A quick question from someone who hasn't played with it yet: "Does that mean that if your machine crashes and it builds again, it'll detect 'okay, I need to build this image,' so you suddenly get a new version of packages from unstable and all hell breaks loose? Or is it fixed at some point somehow?" Yeah, so there are two different concepts here that I think I've failed to clearly delineate. There's the concept of an image and the concept of a container. A container is an instance of an image — a container is always started from an image. This declarative style of building things builds an image; the resulting image here is called paultag/nginx. When I run it with docker run paultag/nginx, the container is assigned a sort of pseudo-random name built from words — like "feisty_turing" or "angry_stallman". Stallman was added recently; it's amazing. So the individual containers are given these opaque names, and you start an image and are given a container that's from that image. If my machine were to shut down and everything were to start up again, the containers would still be using the same image — unless I'd rebuilt it in the meantime, in which case that's probably expected behavior. So, yeah. Any other questions about the stuff so far? Otherwise I'll just fill the time with me singing or something. Wait — there we go.
"So how do you deal with security updates?" Yes, security updates — that's great. So essentially, best practice here is to continuously rebuild your images, and the Docker index has support for this. You can give it a git repo; it'll watch it for changes via post-commit hooks, and when you change something, it rebuilds the image and puts it up on the index — at which point you should pull it and kick your containers. If you don't use something like that and you're building locally, you can have a cron job that rebuilds the images and kicks the containers that are currently active. The idea is that by using something declarative like this, every time the Debian unstable image updates, it has the latest security fixes — so when we rerun this and re-tag the image locally, we get the security updates as well. Essentially, containers should be — in my opinion — always read-only and ephemeral. You shouldn't be making any changes inside the container; if you're writing anything, it should be mounted from the host. So at any point I can just trash all the containers, start them up again, and they have the latest version, with minor interruption. It's sort of the difference between immutable and mutable. You can think of virtual machines as mutable: you can update them, you can change their state. Docker containers really should be immutable — when you replace them, it should be an atomic replace. So, Lisp versus Python: who's ready? Any questions about this so far? Okay, cool, while we continue talking. Basically, the only reason I gave this talk is to use the Unicode heart to see if any of the software would crash. It didn't, which was a huge disappointment. So hopefully this turns into more of a discussion pretty soon. So, again, another strong opinion of mine: you should really only use container linking.
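The local-cron variant of that rebuild-and-kick workflow could be sketched like this — all names and paths are hypothetical, and the last step assumes a unit file like the one shown earlier is in place:

```shell
# Nightly cron job, run as root (illustrative sketch):

# 1. Refresh the base layer so the rebuild picks up security updates.
docker pull debian:unstable

# 2. Rebuild the image from its Dockerfile; re-tag over the old name.
docker build -t paultag/nginx /srv/docker/nginx-build

# 3. Kick the running container: stop it, remove it, and let the
#    service script recreate it from the freshly built image.
docker stop -t 5 nginx && docker rm nginx
service nginx start
```

Because the container is read-only and ephemeral, this replacement is safe to do blindly: no state lives inside it.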
SkyDock used to be something that I preferred, but it ended up being really buggy and ate up all of the free memory on my system, and the OOM killer killed nearly everything, which was not fun. It ended up taking about two gigabytes. That was kind of a bad day. So I generally use container linking. All Docker containers, when they're spawned, are given a private IP address on the docker0 interface, so they can all talk to each other behind docker0, and when you bind to a port in the container, it's bound to a container-local IP. Container linking basically rewrites /etc/hosts — which is a bit of a hack, but it works. It essentially rewrites /etc/hosts to point at another container's IP address, so it has the other container's 172.17-whatever IP address, and this lets two containers talk to each other. So my Postgres container is up, but it's not bound to my public IP; it's bound to its container IP. Other containers talk to it using container linking, which means my web apps know about Postgres, so they can connect to postgres://postgres:postgres@postgres. The Docker API — I have so many things to say about it. It's not great. More and more stuff has been duct-taped to it as time has gone on. To correctly tell it which ports you want to map, I think you need to define it in two places — the host config and the run config — which you need to pass during two different POSTs. It's kind of a pain; same with mounting volumes. The API that Docker exposes is very much an implementation detail, more than a public-facing thing that you should be playing around with. I've written plenty of Docker API clients; they're not fun. So if I can dissuade you in any way, I really want to. And if you really want to play with one: wear a helmet. That's seriously good advice.
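The linking setup being described looks something like this — container and image names are made up for illustration:

```shell
# Start Postgres; it binds only to its private docker0-side address,
# so it is never exposed on the host's public IP.
docker run -d --name postgres paultag/postgres

# --link postgres:postgres writes a "postgres" entry into the web app
# container's /etc/hosts, pointing at the Postgres container's private
# IP. Inside the app, a connection string like
#   postgres://postgres:postgres@postgres/postgres
# then resolves to the linked container.
docker run -d --name webapp --link postgres:postgres paultag/webapp
```

The hostname in the connection string works purely because of the /etc/hosts rewrite; nothing else on the network can see the database.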
In this API, for a while, "ID" was spelled three different ways: there was "Id", "id", and all-uppercase "ID". Docker images are super cheap. They're all built on each other, so essentially you have different layers in the image, and every time you perform an action, you're building on all the layers below it. So when I say FROM debian:unstable, it's basing all of your changes on the Debian unstable layer. If you only make a couple of minimal changes, it's really cheap, and the more layers you add, it's still not really that bad. If you extend from debian:unstable in a couple of places, it's not actually duplicating that material on disk; it's all in that one place, that one layer. So you should definitely use images for as much as you can — having good images is a huge improvement over trying to do this stuff raw. I think Asheesh has a question back there again. "How are they cheap? Is it using copy-on-write? Is it using AUFS? Is it using a custom block layer?" Yes! Great — thanks, Asheesh. So they are written to the filesystem and mounted on top of each other in a variety of fun ways. You can use devicemapper, you can use AUFS, or you can use btrfs. devicemapper should not be used under any circumstances — I don't know why it's still in the tree; it's pretty bad. I used it on my... what's that? Well, compared to — yes, please repeat the comment: "AUFS is not great, but it is much better than devicemapper." And so AUFS is what I'm using until btrfs becomes a bit more stable. I want to switch to it, but I haven't had the chance to move my VPS to btrfs. So right now, the most stable back end, in my opinion, is AUFS. Yes, it is deprecated, and there are plenty of distributions that don't ship the AUFS module anymore, like Arch, and that turns out to be a problem. But whatever you do, avoid devicemapper.
And so essentially it uses copy-on-write for everything, including the containers, and everything is mounted on top of everything else using a variety of different methods. So yes, it's definitely, definitely cheap. And yes — basically, ensure that you can hard-reboot your machine, kill all the offline containers, start everything back up, and have it work. What's actually — oh, Russ, sorry. Yes. "Hi. So there's a question on IRC: if everything is layered on top of a base layer, what happens when you upgrade the base layer? Does everything on top of it break?" Yes — no, this is great. So every time you create a new image, it's given a new hash, a new layer ID; you're recreating the image from something new. So the immutability principle holds: you'll still have the old layers you're based on, but they're basically unreferenced — like git commits that are hanging out, not referenced by anything. And they're given a super descriptive name in the docker images output, which is "<none>". These are layers that are sitting around that have kind of been moved on from. So if you're FROM debian:unstable and Debian unstable updates, then in a couple of weeks you'll have an image based on layer IDs that aren't referenced by debian:unstable anymore — which is why people like to continuously rebuild these things. Hopefully that answers the question. Okay, right. So yeah: be sure you can start everything back up and have everything just work. The easiest way of doing this is treating containers as ephemeral, read-only process wrappers. So now for some of the most interesting stuff — that was just a small overview of Docker for anyone who doesn't know it. Now, this is the good part. Docker is totally installable by running sudo apt-get install docker.io. All of you should do that, because it's great.
Upstream — Tianon in particular — has a super stripped-down Debian image, which is really good to base stuff on. It's super lightweight and it's pullable from stock Docker. If you're interested in the changes from debootstrap, again, they're documented in a shell script — right there: /usr/share/docker.io/contrib/mkimage-debootstrap.sh, which I think might be the deprecated version; I can't remember. But yeah, if you're doing a lot with Docker, feel free to check out what that's doing and make your own image. For Debian development — because I feel like this is going to start coming up — don't use the Docker image from the index. Just don't dput stuff that you've built with that image. If you're really trying to use Docker to package stuff, build the base image yourself. I think that's pretty sane advice. Just like with pbuilder or sbuild, you wouldn't trust a chroot that you wget'd; don't trust a Docker image that you just pulled from the internet. Which brings me to another fun point: a Docker-backed pbuilder, something like that — someone should totally do that. Having a back end that's as flexible as Docker would be really interesting. Something with a pbuilder-like interface that uses Docker containers on the back end is something I've been interested in for a long time. You could even tag images with build-deps installed so you don't have that warm-up time every time, and all sorts of crazy stuff. If anyone's interested in doing that, I'd love to talk with you about how to do it. Essentially, I want to turn this BoF into: what can Docker do for Debian, and what can Debian do for Docker? Because that's what I'm interested in. I see a lot of potential, and I'm hoping other people do too. And a quick overview of some future plans before we start a bit more discussion. Nightly builds: check, ish. We have nightly builds going to PPAs; I need to set up a build cluster to get nightly builds for Debian.
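One hedged guess at what such a pbuilder-like Docker workflow could look like — nothing like this is shown in the talk, and the image name, mount points, and build commands are all assumptions:

```shell
# Build the base image yourself from your own debootstrap-derived
# Dockerfile, rather than pulling an untrusted image from the index.
docker build -t local/sid-builder .

# Build the package in a throwaway container, with the unpacked source
# tree mounted in. Installing build-deps here is the slow "warm-up" step
# that a pre-baked, tagged image could skip.
docker run --rm \
    -v "$PWD":/build -w /build \
    local/sid-builder \
    sh -c 'apt-get update && apt-get -y build-dep . && dpkg-buildpackage -us -uc'
```

Like pbuilder, the container guarantees a clean, reproducible build environment; unlike pbuilder, the warmed-up environment can be snapshotted as an image layer.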
These are mostly useful for myself and other people interested in testing nightlies and making sure the packaging works continuously. That's something I've been interested in, and it's mostly kind of working — props to Tianon. Backports: we have a lot of stuff backported in a PPA. We need to upload that to Debian pretty soon, but it involves backporting Go, which means that we need to commit to maintaining Go in stable. So, as you can probably guess, I'm not super on top of that. I would love to see more Debian people push for content-based IDs for layers. The layers I was talking about aren't actually given IDs based on the content of the layer; they're just IDs. If we had content-based IDs, then we could do better stuff, like verifying the integrity of an image, or signing images — which would be really cool, so that we could GPG-sign an image and then assert that it is the image we think we have, or set up a Docker daemon somewhere that only runs images that are PGP-signed, which would be awesome — basically limit it to only stuff I've signed. Potentially a trusted Debian image, somehow; I'm not sure what that would look like, or what the logistics would be. For now, I think just decentralizing this and pushing it to all the people probably makes sense. Docker 1.2.0 was released this week, and I plan to upload it to unstable as soon as the markdown-to-man converter is through NEW, so that should be really soon now. Okay, right. So: who's ready to flame? Yes, Brian. "I've kind of been following Docker upstream development, and I've noticed the version numbers — like nine months ago it was 0.2, 0.3, 0.4, just jumping — and you're already at 1.2, and we're talking about a jessie freeze maybe this year. How do you plan to maintain that going forward, or keep up with upstream? Do you have any thoughts there?" Yeah — I don't think there's a good answer for that.
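The content-addressing idea can be illustrated with a toy sketch (this is not Docker's actual scheme, just the principle): derive each layer's ID from a hash of its content, so identical content always yields the identical, independently verifiable ID.

```shell
# Toy content-based layer ID: hash the layer's content and take a
# short prefix. Same content -> same ID; different content -> different ID.
# This is what makes verification and signing meaningful: the ID itself
# attests to the bytes it names.
layer_id() { printf '%s' "$1" | sha256sum | cut -c1-12; }

a=$(layer_id "apt-get install -y nginx")
b=$(layer_id "apt-get install -y nginx")
c=$(layer_id "apt-get install -y postgresql")

[ "$a" = "$b" ]   # identical content produces the identical ID
[ "$a" != "$c" ]  # different content produces a different ID
```

With random IDs, by contrast, nothing ties the ID to the bytes, so there is nothing stable to sign or verify.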
The 1.0 release was supposed to be something a little more stable and more maintained. It hasn't turned out that way: 1.2 is much more stable and much better supported than 1.1 right now. I can't imagine that'll stop being true in the future, but I'm hoping that if we can sync Ubuntu and Debian on a particular version, the collective user base will be enough to pressure upstream into updates, which I think would be worthwhile. Docker upstream is super friendly, and they're all really awesome — I love them all dearly. I poke fun at them plenty, and I've definitely poked fun at them in this talk, and I'm sure I'm going to hear about it, but they're definitely amazing and they definitely want good things for the world. So if there was a use case in which this made sense — a stable release of Debian and a couple of versions of Ubuntu, maybe — then I think we could probably pull off some support. Yeah, it's a fair point, but Docker 1.2 outclasses 1.0 in nearly every way, so it's definitely not worth keeping ourselves on a "stable" version that's not better in any way. So — oh, fun. Flame. "You said it's not suitable to prevent exploits. Is that basically the design of Docker, as in the tool, or is it rather the underlying interfaces provided by the kernel that aren't sufficient to run, say, student submissions when assessing student work?" Sure — alias sudo='docker run --rm debian ...' — let's see, get a volume mounted in here, so we're probably going to need -v — sorry, I'm trying to live-exploit Docker in front of you — chroot /mnt. "Well, I guess I wasn't quite clear. If something is running inside a Docker container, or inside Docker —" And now I'm really root on the host. "Yeah, but you'd be — you screwed it up by calling Docker. If I'm calling Docker in a sensible way, how easy is it to exploit from code inside a well-prepared Docker container?" Oh, I see. I see. Yeah.
If you change the user off root in the Docker container, there's much less of an attack surface, and yes, if you're not a user with Docker permissions, it's a lot harder to do this. It definitely provides some level of isolation. It's just that the kernel namespacing stuff, I don't think, was meant to provide bulletproof security; it was meant to provide rough isolation, and I think it does that pretty well. And if you keep users as non-root, it's much harder to exploit this. So yeah, you're right: this particular exploit works because I can run Docker, and the Docker group is root-equivalent. But yeah, otherwise you should be fine. Just a quick comment on that: if you are running developers' code on production systems, you probably want to use SELinux in combination with Docker. Yeah, that's good advice. Another hand up somewhere back there. With OpenShift, they use SELinux to isolate the containers from other things. Awesome. Yeah, SELinux definitely sounds like it could be a solution. A whole bunch of hands up over there. As somebody who helped maintain SELinux for a while: please don't trust it as your single source of security. I don't recommend it. It's a great thing as part of a defense-in-depth strategy, but if it's the only thing standing between you and remote root, you're going to have a bad day. So, all software is terrible. Right. Have you experimented at all with the various privilege-isolation, system-call-limitation, and similar privilege-separation features in systemd? Because you're using unit files to run Docker, have you tried adding that stuff in? I've not, and that's a great idea. That'll be awesome. Yeah. Oh, that was your question? Oh, great. All right, who else has ideas on how to break Debian with Docker? You in the back? The AUFS backend for Docker has a 42-layer limit. Ah, well, that's fun. Well, you obviously have 127 now. Yeah. Yeah. So, I'm so confused. Yeah.
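[To make the non-root point concrete, here is a minimal sketch of a Dockerfile that drops root before the container's single process starts; the user name and command are invented for illustration:]

```shell
# Write a minimal Dockerfile that switches to an unprivileged user.
# With USER set, code in the container no longer runs as root, which
# shrinks the attack surface discussed above.
cat > /tmp/Dockerfile.nonroot <<'EOF'
FROM debian:stable
RUN useradd --system --no-create-home appuser
USER appuser
CMD ["/usr/bin/myapp"]
EOF
grep '^USER' /tmp/Dockerfile.nonroot
```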
So I guess it hurts; don't poke it. Yeah. Right, trying to attract more flames. Would it be reasonable to expect all of the Debian infrastructure to have a docker run command, so people can run it on their own machines easily and develop on it? So, I've been playing around with dockerizing dak. Yeah, right. I haven't spent too much time on it, but it's definitely a goal of mine to be able to docker run three containers and have a working dak all set up, one that lets you dput packages in source form into a directory and end up with an apt-gettable Deb directory somewhere else. So it's something I'm definitely interested in: dockerizing more of Debian's infrastructure so that people can run it and test it locally, and having the steps it takes to set it up captured in a Dockerfile would be perfect. That's exactly what I love Docker for. Having something like that, where you can make some changes, do a docker build of the current directory you're working in, and then test it without having to worry about setting things up on the host, would be key. That would be awesome. I'd love to play with that. Just to make the flame temperature increase: it seems like Docker, by promoting a world of process-based isolation, decreases the importance of things like Debian policy, which is all about having programs be co-installable and not step on each other's toes. And this seems sort of consistent with the way that the, I don't know, San Francisco Bay Area web development community operates, of which I am now a part, where we just install some sort of base operating system and then pour files all over the system. But I guess I'm supposed to ask a question, so... Please form your flame in the form of a question. Yeah, but the question is really: should Debian take more seriously the idea that things like policy may be less important over the next two to fifteen years, and alter Debian packaging accordingly?
So there are several pieces to what policy does for you. What I would say is that there's a set of problems Debian has tried to deal with for many years, a bunch of the things that are in policy, which, as you say, are about being able to install a bunch of stuff that, prior to Debian putting a lot of work into it, would have naturally conflicted with each other; given that work, they now don't conflict, thanks to things like alternatives and diversions and all that kind of thing. I think that stuff is still going to be useful in a lot of cases. It's possible it will not be useful inside the little Docker containers you're using to run production infrastructure, and I think we would all be happy to see that happen: those are often workarounds for problems, and they're not as good as just having the one thing installed. For example, one of the things I want to use Docker for is to set up a test MIT KDC and a Heimdal KDC so that I can test Kerberos code against both of them. Right now the packages conflict for a bunch of reasons, and you can kind of fix that with alternatives, except you can't really fix it with alternatives, because the kadmin syntax is completely different, and then you get into a big argument. So there are parts of policy like that which will become less important. But I think that even when you put everything inside Docker, having all of the binaries in /var/tmp is still not useful when something goes wrong and you want to find the command that went wrong and you didn't think to look in /var/tmp for it. So I think there's still some role for "I installed this thing, now where the hell did all the bits of it go?", and for "I want to configure this thing, and I would like all the configuration files to be in the configuration file directory and not scattered off in root's home directory." That part of policy I don't think really changes.
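[Coming back to the dockerized-dak idea from a moment ago, the shape I have in mind is roughly the following. Every image name and path here is hypothetical; nothing like this is published, and the docker invocations are shown as comments since they would need a daemon and images that don't exist:]

```shell
# Hypothetical shape of a dockerized dak: source packages go into an
# incoming directory, containers process them, and an apt-gettable
# archive comes out the other side.
incoming=$(mktemp -d)    # where dput would drop source packages
archive=$(mktemp -d)     # would become the apt-gettable repository
# The three containers might then be wired up something like:
#   docker run -v "$incoming":/incoming -v "$archive":/archive dak-import
#   docker run -v "$archive":/archive dak-sign
#   docker run -v "$archive":/archive -p 80:80 dak-serve
echo "incoming=$incoming archive=$archive"
```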
So what Paul gave us was a bunch of recommendations on top of what the Docker documentation, if you can call it that, describes. Isn't that something that would be useful as part of a Debian Docker policy, as in: how do you dockerize applications for Debian? In that case, you can still have alternatives and diversions and everything else that actually allows packages to coexist inside that Debian unstable base image, and you still need that to build your base images, or any images, for Docker, but you could have some sane recommendations on how to lay things out with Docker on top of that. Yeah, interesting; I hadn't really thought about that too much. If people would be happy with documenting best practices for Debian with Docker, I'd be happy to spend time and effort on it. I don't know that me dictating that sort of thing is the best idea, but if other people want to try to form coherent thoughts around this, that would be a lot of fun. Oh, come on, you've got more than that. Within the next 20 minutes, can we dockerize Subsurface? I've got five minutes left. You've got five minutes left. But there's a man here, I've got one minute left, who's going to be upset about the fact that Subsurface isn't linked statically. Run sudo apt-get install subsurface; that should be good. Okay. We solve all of our static-linking problems that way. Last comment? Yeah, sure. You said that you're running your Docker with AUFS. Yes. Do you have some problems with the stability of AUFS itself? I have not. Most of my problems have been with non-AUFS backends; as a matter of fact, I had to get AUFS to run on Linode because the kernel was built without AUFS. I actually have a blog post where I go from Xen's grub to grub 0.9 to grub 2.0 to the Debian kernel, because the old Xen grub doesn't support XZ compression, which is great. Yeah, it is. So if anyone wants to get AUFS working on Linode, there's a post somewhere. All right.
I think I'm out of time, but we can keep talking afterwards. Cool.