It's technically a minute to go, but I can tell people are excited to learn about Garden... and I'm keen to get started before anyone realises that this is a garden project update talk and tries to escape. Let's get going.

I re-watched... never re-watch your old talks! I re-watched the Garden update from last year, and it starts with me moaning for about five minutes about how I couldn't think of a funny title for a garden update talk. I can say that I've at least solved that problem this time: this is a talk about how our garden grew. It's the Garden project update talk. I'm Dr Julz. I work for IBM, I was sort of the first dojo participant if that counts, and I am the project lead for Garden and Auto-scaler. This is how to escape: I really love that they have a law that you have to tell people how to escape from presentations in Boston. They've clearly also re-watched my talk.

So this talk is in three parts. We're going to talk about what this Garden thing is, some cool stuff we've done this year, and stuff we're planning next, which is probably not surprising for a project update track.

So what is Garden? Well, let's go back to the beginning. I'm going to tell you a story; all good talks should involve a story. In the beginning, there were JAR files, what I call ghetto containers. They were write once, run anywhere, and everything was good. But it wasn't quite good: there were some problems with them, and they didn't quite work. So we said, we need some other stuff to make these containers work. We need isolation, so that the dependencies are isolated and your things actually run anywhere. We need it to look like a VM, all that stuff. So we invented these namespace things that you might have heard of, and these cgroups things that you might have heard of, and then they were wrapped in this thing called LXC, which was a wrapper for those kernel features.
And then we created this Warden thing, which was a wrapper for that. And then we got rid of the LXC thing. Warden became Garden because we rewrote it in Go (Go plus Warden: Garden). We split it into an API and an implementation, which was actually a good move: you had the Garden API, and then you had how we actually implemented Garden. And then: Docker. A disturbance in the force.

So what did Docker have that these things didn't have? What was this whole Docker thing that suddenly happened and changed everything? I think Docker did two things which were really different from what we were doing before. One was encapsulation: this idea of containers as opposed to Linux containers, the idea that I could take this shippable unit and move it around and it would look the same in all these other places. That was really, really new and interesting. And also marketing... no. But I had to say it, right? And also user experience. They genuinely figured out a UX for these low-level namespaces and cgroups and Warden and LXC things that was really good to use. So good that people were like, you know what, I could manage these containers myself, I don't need all this stuff. Right? So it was pretty good.

And it was so good that people started saying, this thing is a standard; if you don't use Docker, you're not standard. I thought it was such a compelling argument that this good-looking fellow with a British accent, in about 2015, suggested that we might want to run Cloud Foundry on Docker and have a Docker backend for Garden. Did we do that? No. Why? For a few reasons. First, it was too big: this Docker thing was too big, too complicated, and we didn't want to maintain the whole thing. Second, it had too many opinions: we would have been fighting against something that was a whole container engine with its own ideas. And thirdly, it wasn't really a standard at that point, so we didn't have much control over it. So we didn't do it. What happened next?
OCI, runC, standards. Yay, standards! Actually, that was really good. We created some standards in the industry around containers: the OCI standards. There's a standard for container runtimes, a standard for shippable images, and a reference implementation that was pulled out of Docker called runC, which, in terms of these things, is not that bad a name, but that's only because of how bad everything else is. Wait till we get to containerd later. We got to share some code.

We were like, this is great. This runC thing is exactly what we want: it's small, it's not opinionated, we don't have to maintain all this stuff ourselves, and we should go and use it. That started what I call the year of glue, when we asked: how can we use all this stuff together? How can we get rid of some of the custom stuff that we've been doing and get this thing in there?

What were the results of that? They were pretty good. I won't say how long it took me to delete the frowning face from that slide; I was expecting more happiness about that image. There are like 400 rounded black rectangles in the keynote deck for this one. What we ended up with was really quite nice. Because we had split the Garden API from the implementation, the Garden API stayed the same, and underneath we could just manage standard bundles and use the standard tool to run them. That takes you to about the start of last year.

So, glue and standards are great, but what's Garden for now? If we're just wrapping this runC thing, what's the Garden team for? What does it do? Are we an empty hat? Well, I think we do three things. We're glue, because actually that's not a bad thing: gluing the rest of the system to the low-level stuff is a job that someone needs to do, and keeping it abstracted away is not a bad idea at all.
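To make the "standard bundles, standard tool" idea concrete: a runC bundle is just a directory containing a root filesystem and a `config.json` described by the OCI runtime spec. The sketch below is illustrative, not Garden's actual code; the field names follow the OCI runtime spec, but the values are made up.

```python
import json

# A minimal sketch of an OCI runtime-spec config, the "standard bundle"
# format that runC consumes. Field names follow the OCI runtime spec;
# the values are illustrative, not what Garden actually emits.
def minimal_oci_config(command):
    return {
        "ociVersion": "1.0.0",
        "process": {
            "args": command,   # the process to run inside the container
            "cwd": "/",
        },
        "root": {
            "path": "rootfs",  # directory holding the root filesystem
        },
    }

config = minimal_oci_config(["/bin/sh"])
print(json.dumps(config, indent=2))
```

Because the bundle format is standardised, anything that writes this layout can hand containers to runC, which is exactly what let Garden delete its custom runtime code.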
The second thing is we make sure there are secure defaults, because unlike the upstream technologies we have to worry about multi-tenant workloads, which are much more difficult and much more problematic. So we like the security to be there out of the box, preconfigured, without options that you have to remember to flip to make it secure. It's secure, and you can't really disable the security. And thirdly, we're about exploiting container technology in the rest of the stack. This stuff is cool, and we're hiding it; if we're going to hide it, we don't want to hide it too well. We want to make sure it actually gets used above what Garden does.

So that's Garden: it's glue, it's secure defaults, it's exploiting container tech. And now, in an effort to prove that I'm not lying, cool new stuff for each of those bullet points in turn. We've got glue, which is the new sidecars work; defaults, which is the rootless work; and container tech, which is OCI buildpacks.

Let's go: glue. Glue is all about garden peas. What are garden peas? To understand garden peas, you have to understand pods, and pods are this concept that Kubernetes popularised. They are a collection of containers that share some namespaces. Their container images are shippable images; they're encapsulated and isolated, but not completely. They share some stuff, like, for example, the network. Why would Cloud Foundry want something like that? It turns out we actually have quite a few use cases for things like that: health checks that we want to run in the container but that shouldn't affect, or be affected by, the container's memory limit; the new Envoy proxies that I think you might have heard about, which sit in your container and proxy requests and do SSL for you; CF SSH; other stuff.

The important thing is why it's called that. The reason it's called garden peas is that pods are a collection of Docker containers, and Docker's mascot is a whale. That's why they're called pods.
Because a pod is a collection. Does anyone see where I'm going here? A pod is a collection of whales. What is a pod a collection of in a garden? In a garden, a pod is, of course, a collection of peas. So a pod contains peas, a pod contains peas, a pod contains peas. Garden peas. You may groan now. Literally the most important part of the feature.

There is an important difference between peas in Garden and this pod concept, which is that you don't have to think about it. Cloud Foundry does all this stuff for you; this is just low-level tech, and we're gluing the Cloud Foundry experience to it. That's garden peas.

Next I want to talk about secure defaults, and that's called rootless mode. Out of the box, Garden turns on all the security levers it can turn on for your containers. But there's a bit of a problem. When you see a door that's really, really secure, what do you start thinking, if you're awful like me? You start thinking about the wall, right? When you've really secured the door, the wall starts to look attractive. And we turn on a lot of things; in fact we've secured the crap out of the containers, but our actual Garden server is starting to look a bit risky, because to do all this stuff we have to run as root. So even though all the containers are very secure, we're running as this highly privileged user, and at this point that starts to sound like a problem. How do we fix that? We're going to secure that as well: we're going to secure the Garden server and run it as an unprivileged user instead. There are way more details about that in an awesome talk straight after this one by William Martin.

Third, exploiting container tech: OCI buildpacks.
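Before moving on, the peas and rootless ideas can both be sketched in OCI config terms: a pea is a process whose config points some namespaces at an existing container rather than creating fresh ones, and rootless mode rests on mapping container root to an unprivileged host user. The fragment below is a hypothetical illustration; the field names come from the OCI runtime spec's Linux section, but the PID and uid ranges are made up.

```python
# Sketch of two OCI runtime-spec Linux config fragments:
#  - a "pea" joining an existing app container's network namespace,
#  - uid mappings of the kind rootless containers rely on.
# The PID (4242) and host uid range here are invented for illustration.
def pea_linux_config(app_container_pid):
    return {
        "namespaces": [
            # Join the app container's existing netns instead of
            # creating a new one; this is what "sharing the network" means.
            {"type": "network", "path": f"/proc/{app_container_pid}/ns/net"},
            # These namespaces stay private to the pea:
            {"type": "pid"},
            {"type": "mount"},
        ],
        # Rootless idea: uid 0 inside the container maps to an
        # unprivileged host uid, so container "root" has no host power.
        "uidMappings": [
            {"containerID": 0, "hostID": 100000, "size": 65536},
        ],
    }

fragment = pea_linux_config(4242)
```

The point of the sketch is that "sharing a namespace" is just a `path` in the config, which is why the pea concept is cheap to glue underneath the existing Cloud Foundry experience.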
So normally in Cloud Foundry, when you run stuff, you take this droplet object, which is a tar file, and you stream it into the container and untar it inside the container. That sucks: it's CPU intensive, it doesn't cache very well, and you still have to pay for all the container technology while you're untarring this stuff, so you've got to have disk quotas and layered file systems anyway. It turns out the Docker community in particular have a great solution: layered file systems and container images, where you can have some shared layers and some different layers, and a format for describing them. That's the other one of those OCI standards we talked about earlier, the image standard. And if you think about it, the top layer, which is just a tar file, looks an awful lot like a droplet, and the bottom layers look an awful lot like a root file system. So why don't we just do that? Convert the droplet into a layer, have the root file system be a layer, build that all up, do it in a mount, and you don't have to pay for all the CPU stuff. That's OCI buildpacks.

That would seem like a lot of cool stuff, but one more thing: it also all works on Windows now, which is pretty cool. Because it's all based on standards now, because it's all glued to some low-level technologies, the exact same code now powers the Windows Cloud Foundry stuff; it just has a plugin called winC instead of runC. Maybe that's pronounced "wink"; if it isn't already, it should be. So that's the cool new stuff. It's quite good glue, actually.

What's next? Two things I want to talk about quickly. The first is containerd; the second is CPU metrics and sharing, which is largely me saying that what we currently have sucks and then hoping someone has a better idea.

So, containerd. If you think about Garden, it's been a long history of deleting bits of ourselves, moving up the stack and reusing code, and that's good. What do we do next? There's this containerd thing, and we'd like to use it. Why do we want to pull this containerd project in? It's a bit more code from the community that we can use: we're already using runC, and containerd is more of the Docker daemon that's been pulled out into an unopinionated piece of code. It's really nice if we can share that, and it opens up some nice deployment scenarios. The thing I really like is that none of the rest of Cloud Foundry needs to care: you get to use the nice operator tooling that you're used to in CFCR and CFAR, because it's going to use the same container tech, but Cloud Foundry just works and we can swap that piece out. That's pretty good.

CPU metrics. We've known that CPU metrics in Cloud Foundry suck for ages. I thought I was going a bit mad, because I kept saying they suck and people kept pointing out that no one seemed to be complaining, so why would we do anything about it? Now everyone seems to have noticed they suck at the same time: within the last month, approximately everyone in this room has told me that they suck. I said, I know. What's the problem with CPU metrics? The problem, basically, is that there's no really great way of doing CPU metrics given how we do CPU in Cloud Foundry at the moment. The way we do CPU sharing, it's really hard to know what your CPU is a percentage of, because the amount your app can have changes all the time depending on what lands on the host. All of the possible solutions to this are bad, but we did really well: we picked the worst.

There are some rough solutions that I think might work. One thing we should probably do is stop lying by using percentages when we don't have something that's actually a percentage. We might just have to use a different number, like absolute CPU usage: how many milliseconds you've used. Maybe you could even bill on that, I don't know, but at least you'd definitely know what you're getting. And I think the reason it's so bad at the moment might also be
because the actual sharing is bad: the reason we can't do good metrics is that we don't have a really great model for sharing the CPU in the first place. At the moment we do sharing by saying, if your app is 64 MB and this person's app is 64 MB, you both get the same amount of CPU, and if another person's app comes along, you all get a third of the CPU. That's great, but it means the amount of CPU you have changes all the time, and you as a user can't predict why or when that would happen, which isn't great. So maybe we can do something like a CPU maximum that actually allows you to have burst capacity, or something like that. At the moment we're just very open to feedback about what might make sense for people. The first step in solving a problem is admitting you have a problem, and we have a problem with CPU metrics. So we'd really appreciate people's ideas, thoughts and experiences, and even just people telling us that this is a problem for you and something we should work on.

So that wraps it up. What is Garden? It's glue, it's secure defaults, and it's exploiting container technology. We've done some pretty cool stuff this year, between thinking of really funny names for things, some really big security advancements, and some nice performance improvements in the platform. And I think we have a couple of pretty cool things coming that are going to deliver even more deletion, and also some nice benefits for people. With that, thank you very much. We have tons of time for questions. Any questions?
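The CPU-sharing problem described above can be sketched numerically. Assuming, as the talk implies, that an app's CPU weight is proportional to its memory limit, the fraction of the host an app is entitled to changes whenever another app lands on the cell, which is exactly why a percentage metric is unstable and an absolute metric is not. This is an illustrative model, not Garden's actual accounting code.

```python
# Sketch of memory-proportional CPU sharing: an app's weight is its
# memory limit, and its entitlement is weight / sum(weights) on that
# host. The denominator changes whenever apps land on or leave the
# cell, which is why "percentage" metrics are hard to interpret.
def cpu_fraction(app_memory_mb, all_memory_mb):
    return app_memory_mb / sum(all_memory_mb)

# Two identical 64 MB apps on a host: each is entitled to half the CPU.
two_apps = cpu_fraction(64, [64, 64])        # 0.5
# A third 64 MB app lands: everyone drops to a third, through no
# action of their own.
three_apps = cpu_fraction(64, [64, 64, 64])  # ~0.333

# An absolute metric sidesteps the moving denominator: cgroups report
# cumulative usage in nanoseconds, which converts to milliseconds
# without any reference to the other tenants on the host.
def cpu_millis(usage_ns):
    return usage_ns / 1_000_000

print(two_apps, three_apps, cpu_millis(1_500_000_000))
```

The sketch makes the talk's suggestion concrete: reporting milliseconds used is stable over time, whereas the percentage an app "should" get is a function of its neighbours.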
Yeah, so the question is: with the OCI buildpack stuff, is this going to affect the user interface? Are users going to get new features from it? The first part of OCI buildpacks we just did as a performance improvement and a simplification. But there are some nice things you can imagine happening, given that we're creating OCI images from the droplets: nicer workflows around converting code into containers, and also interoperability with other systems.

So, if you were at the talk earlier about the worst-named thing I've ever done, which is saying a lot, a Kubernetes backend for Cloud Foundry which we called Cube, with a C, which is impossible to say out loud: the Cube-with-a-C stuff is actually based on the OCI buildpack work. What we do is use the OCI buildpack stuff to create a registry, and the registry serves images, standard container images based on your droplets, and that's what we give to Kubernetes. What that means is you can do a stage, convert your code into a droplet, and then actually get a container image, and you can run that anywhere, for example on Kubernetes. So that's one example of a piece of interoperability that opens up. At the moment we've just done the performance and simplicity part, but you can imagine there are some possibilities we could look at next.

The current status is that the MVP is done, and it's totally transparent: at the moment we just create the OCI image on the fly in Garden. In the future it would be very nice to pull that up, so that the decision about how to create the image happens up in Cloud Controller and becomes a kind of policy decision instead of a scheduling decision. We haven't worked on that yet, although, like I say, that's exactly how the Cube-with-a-C stuff works. If we did that refactor, then that's what allows us to just give Diego images, and in the Cube stuff it means we can just give Kubernetes images: we don't need machinery for untarring stuff and all that kind of thing. You just have an image, we create a deployment object in Kubernetes with that image, and it just works, even though we created that image on the fly from a droplet.

If anyone does have anything else, you really should have come to the office hours that were just before this. We are always open to pull requests, especially for Cube. One thing I didn't mention in this talk, because it feels like inside baseball, and I went to my first baseball game on Sunday after I'd written the talk, is that we merged the two teams. It used to be a separate GrootFS team and a Garden team, and we've now merged those back together. I'm not sure why anyone would care, but we did: the GrootFS repo is now just part of Garden. It's always nice for the deletion picture.

Anyone else, for anything else? Or would people like to get coffee? Coffee. Awesome.
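The droplet-as-a-layer idea discussed in the Q&A can be sketched as an OCI image manifest: the shared root filesystem is the bottom layer and the droplet tarball is the top layer. The media types and field names below follow the OCI image spec, but the digests and sizes are placeholders, not real values, and this is not the actual registry code.

```python
# Sketch of an OCI image manifest that treats a CF droplet as the top
# layer over a shared rootfs layer. Digests and sizes are placeholders.
def droplet_image_manifest(rootfs_digest, droplet_digest):
    def layer(digest):
        return {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": digest,
            "size": 0,  # placeholder; real manifests carry the blob size
        }
    return {
        "schemaVersion": 2,
        "config": {
            "mediaType": "application/vnd.oci.image.config.v1+json",
            "digest": "sha256:" + "0" * 64,  # placeholder config blob
            "size": 0,
        },
        # Layers are ordered bottom-to-top in the OCI image spec:
        "layers": [
            layer(rootfs_digest),   # shared root filesystem, cacheable
            layer(droplet_digest),  # the staged droplet, unique per app
        ],
    }

manifest = droplet_image_manifest("sha256:aaaa", "sha256:bbbb")
```

Because the rootfs layer is shared across apps, a registry serving these manifests only stores the per-app droplet layer once, which is what makes handing the same artifact to Diego or to Kubernetes cheap.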