But it's not wrong to have a panel on build packs and containers. And as you can see, that's the flow of things before you get into that space. The best thing about having a panel, mostly, for the people out there, is that we can just put everyone on a big bench up here. So we've got some good people behind it. It's not too bad. A lot of people are doing that, and that's what we do. Go ahead and introduce yourselves. Well, do you know who you are? Phil, do you want to set up Phil, Andrew, Colin, and Julian?

I'm Phil Whelan, software architect at ActiveState. I'm Andrew Clay Shafer, and I'm Watters' lackey. I'm his bannerman in the game of thrones. I'm Colin Humphreys, CEO of CloudCredo. I'm also Watters' lackey. I'm Julian Friedman, I work for IBM. I'm not Watters' lackey. I am my manager Alex Tarpinian's lackey.

So, we have original contributors to Cloud Foundry, and Andrew here is one of the guys that created Puppet. We have someone who actually created the tool that allows us to run build packs locally. What's it called? CloudRocker. Nice. What's Rocker? It still begins with F on the weekends. And Julian used to work on Watson, and is now on the Garden team. I'm still on the Garden team. I'm proud. Garden, too.

So, a container is what Garden runs in Cloud Foundry. I think actually this is a really difficult question, right? The obvious answer is a container packages a unit of deployment that can be run anywhere. There are two things going on that people conflate. Maybe three things, actually. There's the packaging of the root file system, there's some metadata about that file system, entry points, and maybe some other stuff. And then there's the actual process isolation, the kernel features, and there's some argument about which PID gets to start first. I'd add a little on top: a container is a process, and then the rest of it is kind of up for grabs.
So, in the parlance that LXC started with, a Linux container is a process that runs under cgroups and namespaces. It's a process that gets to pretend it's alone on the computer. That's a lonely process. If you think about LXD, I would say I'm asking it to start a container, but it's actually going to use what's effectively a VM to do it. That's still a container. As far as I'm concerned, as a user, it's a container, except they're now using completely different technology to actually deploy it. What's the different technology? Well, it's actually basically a VM. LXC is? LXD. We're not talking about those guys, right? I've only looked at that kind of thing once.

So I think there's actually, I don't know how quickly we want to veer off of this, but there's a future where all the things that people are excited about with regard to packaging and isolation are actually even more true of unikernels than of the process-containment, OS-level virtualization that everyone's excited about right now. If it follows the arc that these features took to get into the mainline kernel, then in about six, seven years everyone's going to be really excited about unikernels. That's why I'm pushing on these two things being slightly different, right? Because I actually think the technology, and I work on the technology, but the technology is slightly irrelevant, right? So we kind of need two words, right? We've got containers, a unit of deployment that I can deploy with various technologies. Absolutely. And then we've got, unfortunately, Linux containers, which is an unfortunately overlapping term. There's a namespace collision. In the Docker world, they have Docker images and Docker containers, where the container is the running thing and the images are the portable thing, but obviously that's got skewed and everyone gets excited about Docker containers being portable. Which is because they've used this shipping analogy, right?
The analogy of a container, that word container, is the container on a ship: it means I can put it on my ship and take it over the Atlantic in a reasonable way. But I think one thing that's missing, or at least unspoken, is that this adoption curve and the exuberance around this technology are only possible because everyone's already running the Linux kernel. It's not like you're actually adopting new technology. These are features that are in the kernels you're maybe already running, and you just have more accessible interfaces to those features. Although I've heard of people, for example, using Solaris to implement containers, right? They use the Docker API with Solaris on the back end. You can run the Linux images. There's nothing obviously that makes that not a container, right? There's nothing that says that's not a container. This is why the definitions are, like, people conflating the two things. There's the packaging of the deployment artifact, and then there's the process isolation. Right.

Yeah, I mean, this panel title is about build packs and containers. So there are kind of two things. There's one where, when we think about containers, currently it's the Docker image, where you build the whole thing and you put it in there. And the second one is build packs, where you've basically got a base container and you use the build pack to take the code and build it from there. But I think the spectrum could be a lot wider. I mean, on one end of the spectrum, there's obviously the raw code, but after that you could have compiled code that you put in, and then maybe a build pack or something that runs that. And then after that you've got the build pack which can install it, and then you've got the Docker image. But I think newer technology like Kubernetes at Google takes it a little bit further. Google said that they rarely run a single container, and that's why they have pods.
And so Kubernetes has this concept of a pod, so when you deploy a container, you don't deploy a container. You deploy a pod. And an application might even be made up of several pods. So actually, limiting ourselves to just talking about whether it should be code or whether it should be build packs probably doesn't take it far enough. Should we be supporting deploying multiple pods to Cloud Foundry? I think we should have a high degree of freedom, to be honest. I think it's quite...

So the current Cloud Foundry model: you have your code, and you give your code to the PaaS. It stages that for you by combining the build pack with your code. That outputs a droplet, and then staging is done. The droplet is then put down onto a single-layered file system and run inside a container, and those containers can be scaled. So we're giving our code as the unit of currency, and Cloud Foundry takes care of everything else. And in that, when you say container, you mean the process isolation of the kernel, not the file system, right? That's the way you're using the word, right? A Linux container. Yeah.

So there's a bunch of things, and I think everyone's got the right idea about aspects of it. And the thing that you're making your decisions on is kind of a localized perspective: do I really want to get this into production? Do I want maximum flexibility? Or am I thinking about day two, day three? How do I patch GHOST? How do I patch Heartbleed? If you have a bunch of things that you don't know what they are, and no way to inventory what they are, then at the point where there's a vulnerability, you have to audit your full infrastructure and figure out what to do. So there's value in being able to separate all these different layers, from the operating system to the stacks that you're going to provide over time to whatever your little code bits are.
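The code-to-buildpack-to-droplet flow described here can be sketched as a minimal pipeline. The function names and the droplet format are made up for illustration; the point is that staging is a deterministic combination of code plus build pack, and the droplet is the immutable artifact that gets laid onto a rootfs and run.

```python
# Illustrative sketch of the Cloud Foundry staging flow described above:
# stage(code, buildpack) -> droplet; run(droplet, rootfs) -> container.
import hashlib

def stage(app_code: bytes, buildpack: bytes) -> bytes:
    """Combine code with a build pack; the droplet is the immutable output."""
    return b"droplet:" + hashlib.sha256(app_code + buildpack).digest()

def run(droplet: bytes, rootfs: str) -> dict:
    """'Untar' the droplet onto a single-layer rootfs and start the process."""
    return {"rootfs": rootfs, "droplet": droplet, "state": "RUNNING"}

droplet = stage(b"my-app-source", b"ruby-buildpack-v1")
instance = run(droplet, rootfs="cflinuxfs.tar")
```

Because the same inputs always yield the same droplet, scaling just means calling `run` again with the same artifact, which is exactly the "code as the unit of currency" model the panel describes.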
And having that separation gives you the power, with something like Cloud Foundry, to roll a patch across your full infrastructure basically as soon as you have it, and the rolling updates across everything will work. So I actually don't think anything's missing so much as one thing would be nice in Cloud Foundry. This is my vision, and then you guys can say what you think is wrong about it. I really want to be able to deploy the container as a thing that I can move as an artifact. It's nice to have the build pack for these operational considerations, and the auditing and control from an operational perspective. But I would really like to be able to eject. You can't eject a droplet. You can technically, if you dig it out, but it's not built into the API. I think that's just been an oversight in the API. I mean, we very soon will have the ability to move droplets out of Cloud Foundry.

So as I just said, the droplet is the thing that's created when your code is combined with the build pack. And that is theoretically portable between different Cloud Foundries, and certainly inside a single Cloud Foundry between different spaces. So you want to take that droplet and move it from dev to stage to QA to prod and not restage your code. Exactly. There's not this mode. Sorry? You don't want to restage your code each time, because if you've tested that droplet and you know that it is good, you don't want to rerun a staging procedure that might fail to grab dependencies from the internet, or might fail in some other way, or might have something time-dependent in rebuilding a new droplet for production. So your staging one worked and your production one then fails. It's just like a Docker layer. We happen to implement it by shipping a tarball of a directory, but it doesn't matter. It could as easily be a Docker layer.
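The "promote the droplet, don't restage" idea can be sketched as moving an artifact, identified by digest, between spaces. The `spaces` registry and `promote` function are hypothetical, not a real Cloud Foundry API; they just show that the bits you tested in dev are byte-for-byte the bits that reach prod.

```python
# Illustrative sketch: promote the exact staged artifact from dev to QA to
# prod by content digest, so no restaging step can introduce a difference.
import hashlib

def digest(droplet: bytes) -> str:
    return hashlib.sha256(droplet).hexdigest()

spaces: dict[str, dict[str, bytes]] = {"dev": {}, "qa": {}, "prod": {}}

def promote(droplet: bytes, src: str, dst: str) -> str:
    d = digest(droplet)
    assert spaces[src].get(d) == droplet, "artifact must exist in source space"
    spaces[dst][d] = droplet          # copy the artifact; no restaging
    return d

droplet = b"droplet-bits"
spaces["dev"][digest(droplet)] = droplet
promote(droplet, "dev", "qa")
promote(droplet, "qa", "prod")
```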
It's just an implementation detail, but it's an important one, because layered file systems in Docker don't work amazingly well. Right. Layered file systems, period. Yeah, exactly. I'd rather see the current setup, where we have a file system image that is used and then we untar a droplet onto it, because it is reliable, it is good, and it's clear exactly what's going on. You don't have 50 layers that may or may not have permissions issues across them. I think you could use layering to do that step. Maybe three layers. Just fine. It'd be technically better. The point is it doesn't matter. The technology isn't the important thing. The important thing is what abstraction the user sees.

And Dr. Nic was asking the question, what's missing? I actually think the actual answer right now is that not enough is missing. Containers give way too much power. Actually, it's great to implement some of this on containers, because as an architecture, for us, that's a really great way of operating things. But it's actually answering the wrong question. Because the right question is what's the best abstraction for the user, not how do we implement that abstraction. You could ask, should we do it on bare metal or on something like AWS or an IaaS? And I would answer on an IaaS, but it doesn't matter. You should never care. It's the wrong question. You should use an IaaS; it's a better operational way of delivering the user benefit. But now let's talk about what user experience we want. And actually, we give people too much power with, hey, run anything anywhere. If you go down that path, you end up being effectively a more effective IaaS. But we're trying to build a platform. My problem isn't what format it's in. Currently, I think the opacity is the big problem. I have no idea what's in this container I've been handed. But still, with build packs in Cloud Foundry, you don't really know what the build pack has done.
Even the simple build packs have like 10 different runtimes that they might use; if there's a vulnerability, I need to restage all of them. At least it makes it way harder to start doing really-hard-to-support things. So with a build pack, I can start shipping with all sorts of dependencies on a particular OS. That's a bad idea. That's something we wouldn't normally do. What containers have done is make it really cheap to do the wrong thing. I was having a conversation yesterday about this. We were saying it's kind of like driving a really big SUV, and what's happened is we've made gas prices much cheaper. But it's still probably best if we get to better transport. I definitely agree that containers, this metaphor of a container, mean that it is theoretically more portable and faster to fire up. But the things that are in there are usually so horrific you don't want to look. It means you can just go into that container and wreak havoc. And then we've just got lots of versions of havoc. The tests pass, ship it.

But I'm not saying that. I think it should be a freedom thing for the users. There are times when the right thing to do is to ship a container, and there are times when the right thing to do is to ship some code. But we just need to radiate out the information about the costs of the two approaches and when each approach is better. As you quite rightly said, when you've got an SSL vulnerability with a completely unknown container estate, you've got a big problem. If you can just actually change the build pack and say, everything restage now, you've got much less of a problem. Just beating the analogy to death: there are times when you should use bare metal. There are times when you actually genuinely need that level of control, and you should just go use bare metal. But most times you should use an IaaS. And there are times when you really genuinely need the control of a container. And that's great.
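The operational argument here, that a known build pack estate turns "audit everything" into "restage these apps", can be sketched as a simple inventory query. The records and field names are hypothetical; the point is that the platform knows which build pack and version produced each droplet.

```python
# Illustrative sketch: because the platform records which build pack built
# each app, a vulnerable runtime maps to a concrete restage list instead of
# a full-infrastructure audit.
apps = [
    {"name": "billing",  "buildpack": "ruby",   "version": "1.2"},
    {"name": "frontend", "buildpack": "nodejs", "version": "0.9"},
    {"name": "reports",  "buildpack": "ruby",   "version": "1.2"},
]

def needs_restage(apps, buildpack, bad_versions):
    """Return the names of apps staged with a vulnerable build pack version."""
    return [a["name"] for a in apps
            if a["buildpack"] == buildpack and a["version"] in bad_versions]

to_restage = needs_restage(apps, "ruby", {"1.2"})
```

With an opaque container estate there is no equivalent query, which is exactly the "completely unknown container estate" problem described above.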
But a good percentage of the time, I don't want to guess a percentage, but a good percentage of the time, it would have been better if you didn't. It would have been better if you'd just shipped code and we'd figured out how to make that work. And I think that's the point of Cloud Foundry.

I think it's sort of a question of where you are in your operational pipeline. At the front end, doing experiments, trying to play with new technology before you've codified those build packs, it'd be great to just finger-paint whatever you want, throw it in there, and see if it works. But then as you move into a more hardened, operationalized, audited system, I really want to have those layers accounted for and have control over them. It's kind of like where do you build it, rather than how do you build it. If you've got control over building it in Cloud Foundry, it's a good process and you can see how it works. I've actually made the statement that a sufficiently sophisticated Docker build pipeline looks suspiciously like build packs. You can use build packs to build Docker images.

I'll say something slightly controversial, because it's supposed to be a panel. I think it's the other way around. The problem we're having right now, the reason we're having difficulties, is that what should be the case is that it's so much easier to push applications on the platform, on the layer above, that most of the time you'd never think of using a Docker container. But because the Docker container experience is so good and the UX is so good, people are actually finding it easier to push a Docker container than to figure out, in some cases at least, how to do a CF push. And the problem we need to fix is to make it so much easier, so much quicker, to get it working. My personal opinion is that the CF push itself is actually quite easy. It's getting all the stuff set up beforehand.
The commitment that an individual standalone developer has to make on that onboarding experience, to get Cloud Foundry set up by themselves? Forget about it. You're going to do it by force of will. Whereas Docker gets installed and I have a container running in minutes. That's the difference. The first problem is that it's difficult for users to get hold of a Cloud Foundry account and get started. And the second one is that it's a much bigger shift from "I used to run these commands in Bash to set up my stuff, and now I can just copy and paste that into a Dockerfile and it runs", versus "I used to run these commands in Bash, and now my world is just CF push." That's just a bigger paradigm shift. But I think we're going to see a world in which people start doing that. They'll write a Dockerfile, they'll use it in a Docker container. Then they'll think, okay, I've got one Docker container. Now I need to change it into a distributed system across multiple hosts. I need to add in routing and logging and various other things, all the things you need for a distributed system. And then they're going to think, I appear to have just built a really, really crap Cloud Foundry. And then they'll come over to us.

So I think some of this might be that we need to find some abstractions that let people do some of these things. Maybe build packs aren't quite expressive enough right now. Maybe the ways we let people say things like, I do need this binary dependency, I do need this thing on my image, right? If, instead of the way of specifying that being to bake it once in a Dockerfile and ship it to us, there were a way they could tell us, I do need these extra things, I have these escape hatches, but without throwing the whole abstraction away. Right. There's no reason that couldn't be part of some sort of manifest, some of this stuff, right? Build packs were sort of designed before Dockerfiles, right? And so they're kind of shell scripts, and we do this stuff.
If you actually combined some of the better ideas of Dockerfiles with what we already have with build packs, maybe you could start to solve some of these things. Yeah, so maybe actually people can say... Right. A much nicer developer experience than installing Ruby again. I agree. That's a very basic, ridiculous example. But let's say most of what a Dockerfile does is available to you when you push your app, but you don't get to have the FROM command, right? You're going to go from our root FS. But you can say, I do want you to do this, this, this step, which we can cache. And there's a relatively limited set, but a few.

The thing is, there is a halfway point between these two. We don't have to say, if your application does need something a little bit more, like a dependency, then CF push just ships the whole bits and we'll run it, which is basically... We do have limited elements of that at the moment. In the Java build pack you can specify some environment variables that will make it do subtly different things and pick your runtime environment, those kinds of things. So there is an element of that. I also think there is some kind of fear, uncertainty, and doubt around build packs; people think they're these kind of hallowed objects. You are just running three shell scripts, and you can do equally horrific things in a build pack as you can do in a Dockerfile. Absolutely. Nothing stops you, like, calling out to a Dockerfile from your build pack and doing all manner of horrific things. You can do nasty things in any tooling. It just seems that, because the build packs are kind of there already for you, people aren't doing quite such bad things. So one thing we could do, as well as build packs, to kind of complement them, is more stacks. So maybe you have a Java stack, and then there is less for the build pack to do on top of that.
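The "Dockerfile without FROM" idea floated here can be sketched as a manifest validator: users may request a limited set of cacheable steps, but the base image is always the platform's root FS. The directive names and the `validate` function are invented for illustration; nothing like this exists in Cloud Foundry as described.

```python
# Illustrative sketch: a constrained, declarative escape hatch where extra
# steps are allowed but the base image is fixed by the platform.
ALLOWED = {"run", "env", "copy"}      # deliberately no "from"

def validate(manifest: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Reject forbidden directives; prepend the platform's fixed base."""
    for directive, _ in manifest:
        if directive == "from":
            raise ValueError("base image is fixed by the platform")
        if directive not in ALLOWED:
            raise ValueError(f"unsupported directive: {directive}")
    return [("from", "platform-rootfs")] + manifest

plan = validate([("run", "apt-get install -y libxml2"),
                 ("env", "JAVA_OPTS=-Xmx512m")])
```

Because every allowed step is declared rather than baked into an opaque image, the platform can still cache the steps and, crucially, still knows what is in the resulting container.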
I've actually thought before that if I were running an internal cloud for an organization, I might be tempted not to let people push their own build packs. I might say, here are your four build packs, right? And then I'm going to run with those and really optimize those. So I don't know that I want every single build pack. That's the point I was going to make in response to what you just said. There's this process where, yeah, you could do horrible things in build packs, yeah, you could do horrible things in Docker, you could always do horrible things in Bash, whatever. But putting that actual build pack through its own governance cycle to move into production, that's part of this as well. So you're able to say, okay, this is what we are going to support. This is what we operationally are going to support. And it's not bring-your-own-build-pack, it's X, Y, and Z. Absolutely. But that's why having that information matters: knowing that Cloud Foundry is taking your source code, combining it with a build pack, producing a droplet, and then putting it onto a file system, and knowing that those are essentially your primitives, how do you put governance around those primitives and work with them?

I guess an interesting question is whether or not they are sufficiently granular and we need to break those down. Is the build pack too big? The bottom layer? My impression right now is that they're a good size, that we could do more with them, but in some ways it's not always exposed to be controlled and manipulated. If we had a little more decomposition of that pipeline that gave you the ability to change those things, eject the droplets, use those as the artifacts, cache them, whatever, then it becomes a much more powerful system, more flexible, more usable. But we still haven't solved the problem that it takes you four hours to get your first Cloud Foundry set up. I've got a solution. What's that? I've got a solution, but I'll tell you later.
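The governance model described here, a curated set of supported build packs rather than bring-your-own, amounts to a one-line admission check. The policy set and function below are hypothetical, not a Cloud Foundry API; they just make the "here are your four build packs" idea concrete.

```python
# Illustrative sketch: an operator-curated allowlist of build packs for a
# production environment; anything else is rejected at push time.
APPROVED = {"java-buildpack", "ruby-buildpack", "go-buildpack", "node-buildpack"}

def admit(requested_buildpack: str) -> bool:
    """Return True only for build packs that passed the governance cycle."""
    return requested_buildpack in APPROVED

assert admit("java-buildpack")
assert not admit("my-forked-experimental-buildpack")
```

The real work is of course the governance cycle that decides what goes into `APPROVED`; the check itself is trivial once the platform knows its primitives.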
Well, the current workflow with build packs is either you take the build pack that is out there and it works for you, or it doesn't, but it almost does, and then you fork it and tweak it, and then you've got to support that fork forever. Or you say, this Java build pack is far too complex for me. I just want to install this, install this, install this, and I make a simple build pack. But then the other extreme... Okay. This is my build pack. Yeah. You're supposed to be moderating. I stopped doing that. We could have a world where everyone wrote their own build pack for every application, and you'd essentially have what we have with Dockerfiles, where everyone writes their own thing. The difference with Dockerfiles is that you can build on top of other people's Dockerfiles. So it would be nice if we could do that. Yeah.

Yeah, that happened early in the development of Docker. Someone took all the Heroku build packs and made base Docker images from them, so you could actually... Taking the Dockerfile and making a build pack out of it is what he's talking about. Okay. Yeah, I guess you could do that. I mean, you could write a little thing that parses it and tries to figure it out; you could do that. I did actually try that once, but there's generally no separation line where you can say, this is a layer. It's just a big mess, and you can't easily dissect it into layers, which is what you want to do with Docker. Well, it's what Docker's chosen to do. But if you're always going to run those things together, then what's the point of having each one be separate as a layer, especially when you run into all the problems of layered file systems with 47 layers of this stuff? We don't really need them with Cloud Foundry. You essentially only need... About the layers: it seems to be a different layered file system chosen depending on which distro you're using.
So if you're using Red Hat, you'll get a different layered file system, and then the different set of bugs that come with it. I've had lots of issues with AUFS and permissions, where different layers... You do an ls on a directory and it will say you have access to a file, and then you try to access the file and you get permission denied, because the layers have different permissions across them. Which is fixed in the latest version if you pass dirperm1, which is some ludicrous parameter. There's all these things, right? All these different file systems. They all have these quirks, and trying to make them work, this is a really complicated thing you're trying to do on top of file systems. My major problem is the bloat, because you can install the world and then delete the world, and install the world and delete the world, in each layer, and then you've got like two copies of the world you're carrying around.

It can be a bit worse than that. So for example, as Docker builds up this graph, it has this one cache directory with all the stuff in it, and that's fine on your local machine. If you're in a multi-tenant environment, every time someone says "I want this Dockerfile" you have to install it into the cache, and you need to get rid of it from the cache, and there are no real tools to do that at the moment. There's no docker purge yet, as far as I know. You just have to sort of script this, and every now and again purge the ones that aren't used and figure out whether you can get rid of them. It's a very difficult feature. It's an extremely difficult feature. It doesn't sound like it should be, but it's really difficult, because a lot of the magic of Docker, what it's doing, is caching. What's the difference between a Docker container and a VMDK or some other VM image? It's the fact that it's much faster. I could package up my VM image and boot it on a VM, but a container is really fast because it's just stored the diff.
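The "install the world, delete the world" bloat complaint can be made concrete with a little arithmetic: in a layered image, files deleted in a later layer still occupy space in the earlier layer, so the bytes shipped exceed the bytes visible. The sizes and the size-zero "deletion" convention below are made up purely for illustration.

```python
# Illustrative sketch: shipped size (every layer's contents) vs. live size
# (what the final merged filesystem actually exposes).
def shipped_size(layers):
    """Bytes actually transferred: every layer's contents, deleted or not."""
    return sum(sum(sizes.values()) for sizes in layers)

def live_size(layers):
    """Bytes visible in the final view (later layers shadow earlier ones)."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return sum(s for s in merged.values() if s > 0)

layers = [
    {"/usr/share/world": 500},        # layer 1: install the world
    {"/usr/share/world": 0},          # layer 2: "delete" it (size 0 = gone)
    {"/app/bin": 10},                 # layer 3: the actual app
]
```

Here 510 units get shipped to deliver 10 units of live content, which is the dead weight the speaker is carrying around.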
Well, also because it doesn't need to boot. It just starts a process. But if you get rid of the layer, then it's slow. You have to re-download the whole thing. So you actually don't want to get rid of certain layers. How do you know which layers are the ones people are actually going to use later on? So it's actually a really hard problem to manage all these layers. This is why I really like the Cloud Foundry approach to this. Layered file systems don't really work too well, so let's just have the OS as a layer and the app as a layer. Let's untar the app on top, and it works really well. We know we've just built the app, because we combined it with the build pack and made the droplet. We untar the droplet and it works well, so we don't really need to get involved. The file system is just an implementation detail. You want your app to start fast; we fork a process, and that's the container. You want it to be portable and you want it to start fast. Yes, we have that. We're in a better situation than most other people doing containers, like Docker. A lot of the magic of Cloud Foundry, I'm going to say magic and just apologize straight away, but a lot of the magic of what we're doing is that we're using constraints. We're saying you don't get all the power, and then we can do a much better job of running your apps. It's about contracts and promises, and if you keep your promises, I'll keep mine. Right, and actually...