Right, so this is the virtualization and containerization BoF. It's up to you what you want to talk about. We at HPE are interested in a bit of both. We used to run a public cloud; we don't anymore, so there was a bit of virtualization there. Part of my day job is helping out with support for the virtualization team, the team that supports the private cloud product: we turned our public cloud into a private cloud. We're also just starting to get serious about containerization. It's pretty popular; a lot of our customers are asking about Docker and are already doing things with Docker, and we are starting to look at that more seriously. So, what's everyone else doing with Docker and/or virtualization? What's more interesting, virtualization or containers? Containers. Containers, anyone else for containers? But not Docker. But not Docker, that might be hard. Yes, I would say a little bit about containerization, but avoid the buzzwords that are just everywhere here. What's good and what's not so good. Oh yes, another fifty buzzwords for my bingo card. Okay, no jargon. So, I've actually got a follow-up question to your talk this morning, and maybe this is a starting point for the conversation. One of the questions I am wondering about in the context of containerization is how to handle security updates, and it sounds like there's no good solution in the community so far, in particular when it comes to tooling. I guess the first step of the answer is to provide or use good build tools that actually integrate into building containers, and then use CD to actually push out security updates in a timely fashion. But I think if we as Debian provide support for containers by default, then this is a question that should be answered, and we should have a solution for users at some level. What do you mean by containers by default? Just a few more words about that.
Basically what you mentioned this morning: that Debian does not lag behind when it comes to containers and provides some sort of support for them. Okay. It's something I've been looking at for a bit, because I unfortunately have to maintain a bit of Docker-based infrastructure. Basically the main issue is that you cannot really parse a Dockerfile and figure out what is installed in it. So I've been looking at tools where I can literally declare a manifest: these are my apt sources, these are the packages I want installed, those are my apt pins. Based on that, at the end of the build I get a list of installed packages and versions, and then I can just parse the manifest, parse that list, and check whether the thing needs to be rebuilt after the repository changes. But unfortunately I am pretty much convinced that this involves making people not use Docker for container builds, because Docker will just tell you: oh yeah, slap some stuff on top of a Debian-based container and run a bunch of random commands to install things. And that's pretty much non-parsable in practice. I guess I'm mostly rephrasing what you just said, but inspection, some way to inventory what you have, is probably an important first step to make this work; or it needs to be part of the tooling that is available to users, so that they can integrate it into their infrastructure. So it sounds like there's quite a bit of overlap between this and the cloud images. I think it's a step towards maturity when you use Docker for a while and then suddenly realize you can't really download random stuff off the internet and put it into production. You need to say: okay, when we do a CI/CD system, we build a base image; how do we do security updates? I think everyone goes through that after the initial excitement has worn off.
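The manifest-and-rebuild idea described above is mostly bookkeeping, and can be sketched roughly as below. All function, package, and version names are hypothetical; a real tool would take the built list from something like dpkg-query inside the image and the current versions from the repository's Packages index:

```go
package main

import "fmt"

// needsRebuild reports whether a container image built from a manifest is
// stale: true if any package recorded at build time now has a different
// version in the repository, or if a manifest entry was never installed.
func needsRebuild(manifest []string, built, repo map[string]string) bool {
	for _, pkg := range manifest {
		builtVer, ok := built[pkg]
		if !ok {
			return true // manifest grew; package missing from the build
		}
		if repoVer, ok := repo[pkg]; ok && repoVer != builtVer {
			return true // an update (e.g. a security fix) is available
		}
	}
	return false
}

func main() {
	manifest := []string{"openssl", "curl"}
	built := map[string]string{"openssl": "1.0.1t-1", "curl": "7.38.0-4"}
	repo := map[string]string{"openssl": "1.0.1t-1+deb8u2", "curl": "7.38.0-4"}
	fmt.Println(needsRebuild(manifest, built, repo)) // prints true: openssl changed
}
```

The point of the declarative manifest is exactly that this comparison becomes possible at all, which a pile of RUN commands in a Dockerfile does not allow.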
Since you mention that we need CI, continuous integration, continuous deployment and so on, let's avoid a bit of acronym overload. Something I haven't seen many people working on is how to fetch updates for my containers securely. Docker has been working on container signing with Notary, but it's not really convincing right now. For people who don't know Notary: it uses a specification called The Update Framework to sign container updates. The main issue is that they consider every Docker image a separate repository, and at that point they say: okay, the first time you download the image, you trust that you got the right keys, and after that, updates are signed. It's as if I told you: every time you install a new package you can get something wrong, but if you got it right the first time, I guarantee that all your upgrades after that will be right. And that's really not what you would expect, because APT gives you a much higher level of security than that. APT guarantees that if your Debian install is good and you didn't add any shoddy keys to the trusted list, then it's all going to be good. Beyond what Notary does, I haven't seen anything going on with container signing, which is fairly problematic in some ways. So, has anyone here done anything with regard to actually securely deploying container updates? Crickets. Anything from IRC? No, it seems not. I believe rkt does something... I don't know. I believe rkt does something secure; I haven't looked into it too deeply, but they seem to have considered the issue of security more seriously and more upfront than Docker has. That's how it seems to me, anyway. Perhaps that's something we can look into. Fine. So, Sam Hartman: Docker is the only container system...
Well, okay, Docker is one of the container systems I haven't looked at; I've looked at many of the others. The kernel side is all the same technology, right? The thing I'm running into is trying to actually deploy containers that I think might possibly be secure, really trying to lock down the capability bounding set. And I've found that, particularly if I'm doing a container on top of a real OS (and I apologize, you may just say: oh Sam, I covered this in my talk this morning, you were not there, so you lose), systemd really wants a lot of capabilities in a guest. It particularly seems to want CAP_SYS_ADMIN, for example: it gets very grumpy about its inability to mount sysfs, even if you've tried to provide it a reasonable /sys and that sort of thing; it insists on managing that itself. Are there techniques that actually work effectively for getting a container running with a really restricted bounding set, actually pretty locked down? Does anyone have any thoughts on that? systemd? Yeah, here it is. So, I don't really have a good answer for running systemd in containers; I pretty much have the same problem. However, if you don't mind starting your daemon directly, either through the init script or by calling it yourself, you can reduce the capabilities you need enormously. On the other hand, it means that your container build is much more custom, and you cannot just say it's Debian with a bunch of services. And yeah, yuck, that's the right reaction, I guess. Given that, why do people prefer containers to... well, I guess there are situations where you don't care about security, okay, fine. About the init system thing: it seems almost an article of faith for Docker that you do not run an init system, you just run a single daemon in your container. A lot of people disagree with that.
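For illustration, here is a sketch of the bookkeeping behind shrinking a bounding set: given an allowlist of capabilities to keep, compute which ones a runtime would have to drop. The capability numbers are the ones from linux/capability.h; the actual dropping would be done with prctl(PR_CAPBSET_DROP, n) for each returned number, which this sketch deliberately does not perform, and the function names are invented:

```go
package main

import "fmt"

// A small subset of capability numbers from linux/capability.h.
var capNum = map[string]uint{
	"CAP_CHOWN":            0,
	"CAP_KILL":             5,
	"CAP_SETGID":           6,
	"CAP_SETUID":           7,
	"CAP_NET_BIND_SERVICE": 10,
	"CAP_NET_ADMIN":        12,
	"CAP_SYS_CHROOT":       18,
	"CAP_SYS_ADMIN":        21,
}

// toDrop returns the capability numbers a runtime would have to drop, via
// prctl(PR_CAPBSET_DROP, n), so that only the `keep` allowlist remains in
// the bounding set.
func toDrop(keep []string) []uint {
	keepSet := map[string]bool{}
	for _, c := range keep {
		keepSet[c] = true
	}
	var drop []uint
	for name, n := range capNum {
		if !keepSet[name] {
			drop = append(drop, n)
		}
	}
	return drop
}

func main() {
	keep := []string{"CAP_NET_BIND_SERVICE", "CAP_SETUID", "CAP_SETGID"}
	fmt.Println(len(toDrop(keep))) // prints 5: the other listed capabilities get dropped
}
```

The complaint in the discussion is precisely that an allowlist without CAP_SYS_ADMIN tends to break systemd inside the container.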
It's the subject of a lot of emails, whether that's a good idea or not. But okay, fine, let's just say I wanted to do that. What happens when, three years from now, my daemon depends on systemd services? It depends on socket activation. It depends on being able to do cgroup manipulation for its sub-children, that sort of thing. It depends on being able to create scopes. That's all great for that one daemon today, but what happens as we look at the forward evolution of our technologies? Yeah, I think it's a nice idea, running exactly one daemon, but I don't think it's practical in a lot of cases. But it's not even about running... oh, sorry. As far as I understood, it's not even about running many daemons versus one daemon; it's that the one daemon we want to run might at some point start to rely on actually having systemd functionality. And it's true that some software is actually starting to do that now. So we have both sides: being able to just say, oh yeah, we can install a bunch of daemon packages in the container and configure them; and on the other hand, knowing that if we can support that, we will also be able to support packages or daemons in three or five years. I wish I had a better answer. That's a long answer to my own question, sorry. So I guess the question is: if we assume that in a couple of years we need all these capabilities, why do we assume that they have to be provided from within the container? The container management thing, the container technology, could also provide those same features to us. Sure. So if the container runtime provided something systemd-like, people would be able to use that. (The microphone would be better.) Oh, yeah, sorry. Well, if the container management thing could provide those features, that would probably work.
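Since socket activation keeps coming up: the protocol itself is small. Below is a minimal sketch of the receiving daemon's bookkeeping, based on the sd_listen_fds convention that the manager passes sockets as file descriptors starting at 3 and advertises them through the LISTEN_PID and LISTEN_FDS environment variables. The environment is passed in explicitly here so the logic stays testable; a real daemon would read os.Environ and wrap the fds in listeners.

```go
package main

import (
	"fmt"
	"strconv"
)

// listenFds returns the file descriptor numbers handed over by a
// socket-activating manager, or nil if the variables are absent,
// malformed, or addressed to a different process.
func listenFds(myPid int, env map[string]string) []int {
	pid, err := strconv.Atoi(env["LISTEN_PID"])
	if err != nil || pid != myPid {
		return nil // not for us
	}
	n, err := strconv.Atoi(env["LISTEN_FDS"])
	if err != nil || n <= 0 {
		return nil
	}
	const sdListenFdsStart = 3 // first passed fd, by convention
	fds := make([]int, n)
	for i := range fds {
		fds[i] = sdListenFdsStart + i
	}
	return fds
}

func main() {
	env := map[string]string{"LISTEN_PID": "1234", "LISTEN_FDS": "2"}
	fmt.Println(listenFds(1234, env)) // prints [3 4]
}
```

This is exactly the kind of interface a container manager could offer to a single-daemon container without a full init system inside.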
I guess the question is how to implement that. It could be that the functionality would be duplicated, which would obviously be bad. It could be some sort of pass-through mode. It could be that systemd just knows how to work with the container, and it's something that systemd from outside the container provides and then passes through, or whatever. I guess that's an open technical question, because we don't know what the future will bring, and depending on what it brings, this will have to be taken into account, right? Going back a bit to the start of the conversation, I'm actually interested in why you would want a sort of full operating system in the container. That's something I never quite understood. The only reason I can see is that it makes some things easier, but that sounds like something you would only want during early development or experimentation. Okay, so first of all, I will note that when you talk about having the container management system provide those facilities, you need to be very careful that you didn't just widen your security attack surface, because if I'm willing to widen my security attack surface, I can give the container CAP_SYS_ADMIN or whatever else it needs, and it'll work today. There are two reasons why I want those services in the container. First, I would like to use containers as lightweight virtualization instead of running VMs, for reasonably trusted services. Containers provide better sharing of file system and memory resources than VMs do; I'd like to take advantage of that and still get security isolation between services.
But long-term, the argument is basically: say my web server or some other service starts using socket activation, or services start wanting to run sub-services in cgroups; say my web server forks off some application services and wants to run them in their own cgroup. And I want to handle the cgroup management the same way inside the container as I do outside it. Then I'm going to want something that looks a lot like systemd scopes. Thanks, Sam. Yeah, I can see use cases for both those scenarios: one where there's a very hard separation between the guest container and the host, and another where the boundary is a bit more porous and some systemd bits and pieces can get through. In fact, if you start doing that, why would you use Docker at all? Couldn't you have all the features of Docker integrated into systemd, as part of the operating system in general, rather than relying on Docker to provide everything? Well, yeah, okay. It's a nice example; it's very concrete, so I was using it as an example. I'm more looking at systemd's container support and/or LXC; they seem to have more security-related knobs that I can tune. Okay, yeah, good point, I'll put that down. From IRC: containers widen your attack surface to begin with, because you share the kernel with the host, and so you would need to patch your container manager if you are actually worried about that. As I said, that's from IRC, not from me. Yeah, that seems to be what people are doing: for security reasons, running Docker inside a VM, which seems to defeat the purpose of the whole... Wait, what? Containers inside a VM? Sorry, running Docker inside a VM and then running containers inside Docker, to avoid the security issues. It seems a bit silly, but I guess that's where...
That's where Clear Containers come in, where we're trying to use some of the hardware isolation features of virtualization to provide more security for containers, without having to do full hardware virtualization. So, I've been looking during DebConf at the sandbox that is used by Sandstorm. Sandstorm doesn't claim to be doing containers; it's a framework for running web applications in isolation, and so on. But it turns out that internally they are doing very tiny containers, and they have pretty strong security claims; you can talk to Asheesh about that. For instance, they did a survey of all kernel vulnerabilities over the last 18 months, and they concluded that 95% of those were completely irrelevant inside their sandbox, because they involved syscalls that were forbidden using seccomp, or sysfs, which was not mounted, and so on. The question is basically how hard it is to get things running inside that sandbox. It seems it's not too hard for the applications they have actually packaged; I talked to them. But it's not completely obvious whether we can reasonably expect people to use that kind of very strong sandbox for general-purpose container... I cannot speak English anymore. For general-purpose containers, dammit. Yeah, I think that's another step on the journey to maturity, after you realize that you need to build your own containers: perhaps Docker is too restrictive in terms of what it does for security and in other areas, and it's easier to step outside Docker for a bit and start implementing the containers with the low-level tools. Sam: we have to provide the tooling. I will repeat that with the mic. Sam said: if we are stepping out of the Docker ecosystem, we have to provide Debian users with the tooling to run containers safely. And it probably won't happen for stretch, given where the freeze is, but...
There's been a lot of work that's gone into LXC for Debian. For example, there seems to be a reasonable seccomp policy, and they do a reasonably good job of setting up the devices cgroup controller. You do need to have a fair bit in your bounding set for most OSes. Although, interestingly, I will also note that LXC has this really awesome mode that I can't ever get to work, but it seems like it would be great if it did, where you don't even run the container as root: you have no capabilities. I don't quite understand how that's supposed to work with any modern distro, because, again, you get screwed by systemd. But it seems like that worked for someone once, in a majorly cool way. You are rapidly getting out of my depth here in systemd. Are there any comments from this side of the room? I don't think so; everyone's over on the other side. Oh, up the back. LXC does non-root with systemd, but maybe I'm not aware of something. Oh, please tell me how. It's... I don't know; talk to me later and we can discuss it. It seems to be working. Okay, one possible solution I can think of is actually running a systemd user session as a regular user inside the restricted container, but that won't get you far if you need to do fancy things with the kernel, probably. So, since I have the mic and there seems to be nobody else interested: one of the things I really don't like about Docker is that it pushes forward a culture of bundling everything. Containers are not a take-it-or-leave-it technology; there are different levels. You have cgroups, and then you have namespaces, and you can pick the ones that fit your needs to do what you want. And in that respect, systemd is doing a lot better. I would really like to see services in Debian coming with namespace isolation turned on by default. If they don't need network support, then set PrivateNetwork=yes, for example.
So, this is not containers in the sense that we treat them in this BoF, I guess, but it's still a step forward in getting people accustomed to the fact that there is no single network namespace in your system, there is no single file system namespace in your system. So it's a kind of in-between solution. On that point: I've actually tried to get the maintainers of some packages I use to accept patches that do that kind of thing. The problem is that quite a few people seem not to be too fond of the idea of having a service that behaves differently depending on whether it's running under systemd or not, or even of having anything at all to do with systemd. Which makes me quite sad in the end, because we decided to ship systemd, so it's difficult to see people just pretending this thing never happened. On the other hand, those are systemd-specific features to some extent; but they are also, as you mentioned, extremely valuable in terms of what you can do for security, for people who might not be aware of those features yet. Basically, you can do things such as making the entire file system read-only, restricting the application's access to the network or to raw devices and so on, or preventing it from accessing specific directories like /var/log. Many services we ship today could probably ship with read-only directories and read-write directories, some subset of this. But I think a big part of the problem is also social: it's convincing all the DDs to care about this. We had a roadmap BoF this morning; this might be an area where getting that sort of thing onto the Debian roadmap is a good idea. I will note, as a maintainer who has done some of that, that it does generate more bugs for you: people whine that when they changed their configuration they didn't update the systemd configuration, and it doesn't work anymore.
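As a concrete illustration of the directives just mentioned, a hardening drop-in for a networkless service might look like the sketch below. The service name and paths are hypothetical; each option is a real systemd directive, but the right subset depends entirely on what the service actually needs:

```ini
# Hypothetical drop-in: /etc/systemd/system/example-daemon.service.d/hardening.conf
[Service]
PrivateNetwork=yes        ; own, empty network namespace
PrivateTmp=yes            ; private /tmp and /var/tmp
PrivateDevices=yes        ; minimal /dev, no raw device access
ProtectSystem=full        ; /usr, /boot and /etc read-only
ProtectHome=yes           ; hide /home, /root and /run/user
ReadWriteDirectories=/var/lib/example-daemon
CapabilityBoundingSet=    ; empty value: drop all capabilities
```

This is the take-what-you-need model mentioned above: each line flips on one namespace or restriction independently, without pulling in a whole container runtime.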
Yes, that's true. There is also some work to be done educating users about what this entails. I agree to some extent that it's bad to break user expectations, but the user expectations also include my system being secure. It seems to me there's a move towards sandboxing... Sorry. I just said sorry. Oh, okay. No, I was trying to say that there seems to be a move towards sandboxing system services more. I know that on Mac OS all the App Store apps run inside a sandbox, and I think they're trying to roll that out to other system services. Does anyone know if Red Hat or Ubuntu or other distros are sandboxing their internal services? Yes; Debian is kind of behind the world in this regard. Ubuntu uses AppArmor by default, and Red Hat has put a lot of effort, at least for Enterprise Linux, behind SELinux, and does a bunch of stuff with that. And I do know a lot of people who deploy production Red Hat systems with SELinux in enforcing mode. So yeah, we're kind of behind on that front. Yeah, I guess there's a bit of overlap between the security policy tools and putting things in containers. Okay, we're just about exhausted talking about Docker or not Docker. So, thanks. One question from me, actually, to all of us: what do we expect to happen next? Do we believe that in five years we'll say, oh yeah, everybody uses Docker and nothing else anymore? Or will we look back in five years and say, well, Docker was like CVS: it was great when it was invented, but not so good anymore now. Basically, what do we expect might happen? It's a totally open question, of course. I like the direction systemd is going in terms of sandboxing services without running Docker. So, more integration of containers, LXC, inside the system itself, rather than outsourcing that to another app like Docker.
That's what I think is going to happen, anyway. So, something I wanted to mention, since it's somewhat on topic: most of the container-related tools are written in Go, except for basically systemd-nspawn and LXC, as far as I'm aware. And currently we have no good story about shipping security updates for Go packages. It's simple: currently in jessie there are no Go packages, if I'm not mistaken. But for stretch, for instance, we need to have this solved, and the main issue is that Go basically uses static linking. There is some experimental dynamic linking, but basically nobody uses it for real, and I doubt we want to try to get that into the archive half a year before the freeze. So that's a big issue for stretch, because if we cannot solve the problem of how we handle security updates for Go packages, it means we will have no Go packages, which means we will not have Docker, we will not have rkt, we will not have any of the other fancy container tools, except for systemd-nspawn and LXC. I guess that's more of a call for help, slash, contribution. Yeah, but even then, we currently don't have the tooling to, once we have done one library update, update all the packages that ship binaries and transitively depend on it. Yeah. I'm not sure anyone has tested the experimental dynamic linking stuff on all platforms; I think there are some horrible bugs hiding in there. So, what you describe about Go is essentially the same problem Haskell has, because it also does static linking, and as far as I know the Haskell team has some tools in place that schedule binNMUs for packages after a library has been updated. So it seems there is common ground, and there could be a language-agnostic solution to some extent. Haskell and Go do the same thing; I think dh-golang does the same: it uses virtual provides, basically with a hash of what the library version was when it was built.
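The virtual-provides trick just mentioned can be sketched as follows. The hashing scheme, truncation, and naming are simplified stand-ins, not dh-golang's actual implementation; the point is only that a hash of the library source at build time lets a rebuild scheduler spot statically linked binaries that embed an outdated library:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// abiProvides derives a virtual-package string from a library's import path
// and the source that was present at build time. If the library changes,
// the hash changes, so dependents built against the old source can be
// found and scheduled for a binNMU-style rebuild.
func abiProvides(importPath string, sources []byte) string {
	sum := sha256.Sum256(sources)
	return fmt.Sprintf("golang-%s-dev (= %x)", importPath, sum[:4])
}

func main() {
	old := abiProvides("github.com/example/lib", []byte("func F() int { return 1 }"))
	updated := abiProvides("github.com/example/lib", []byte("func F() int { return 2 }"))
	fmt.Println(old != updated) // prints true: dependents of the old hash need a rebuild
}
```
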
This might be changing tack in some ways; a lot of this is going over my head, so maybe coming from a sort of new perspective: Docker is very easy to use, and containers, at least from my perspective, took off with Docker. You just run docker build and go: you have your Dockerfile, we run Docker in production with Kubernetes, and it's very easy to use, and it's much harder to mess things up as badly as you can mess up Vagrant. Getting into the mindset of using machines that you can kill and bring up again, the mindset of containers, is very easy for someone like me. So I think that needs to be thought about: it's taking over the world from that angle, more than from the security angle of having everything right, because LXC is quite hard to use, as far as I'm concerned. Yeah, I have to admit I've never managed to get LXC working by hand myself, and Docker is way, way easier. Yeah, it's kind of two use cases; it sounds like the development versus production stuff. But I use Docker all the time for just messing around too, and I'm not too concerned about security in that case. Yeah, me too. Hmm, which one, Sam, sorry? No, obviously the other one. Oh, sorry. Okay, that's true. I'm going to get that working anyway. Okay, well, if there aren't any more things to say: thank you for coming, everyone, and you can have a ten-minute early mark, perhaps. So, thanks.