Everyone ready to get started? I'm probably going to use all 35 minutes of time, because I packed this with way too much stuff. Because when you start to dig into the problem, there are a lot of container build tools. This is probably not going to be an exhaustive list of container build tools, and it will be a survey. The other thing I want to make clear, and what's different from the abstract, is that I won't be doing any demos. One, demos are too risky, and two, there's just not going to be enough time. You'll see why there's not going to be enough time.

So I'm Michael Ducy. I do community and evangelism at Sysdig. To give you a little bit of an agenda: we're going to talk about how containers should be used. We'll talk briefly about the problems with the Dockerfile paradigm. We'll go into specific tools, and this is the part where we'll do the survey. Then I'll summarize each of the tools and give you some opinion about what I think of them. And then, if we have time, we'll take questions. If we don't have time to take questions in the room, we can always have questions in the hallway. You can also stop by the Sysdig booth, and I'll be there as well.

A little bit about me, for the five of you who don't know who I am. I spent the last four and a half years at Chef, so automation is something I've been doing for a while. Before that, I was working at automation companies as well, focusing on cloud, automation, performance, and capacity planning. If I had to identify myself as a dev or an ops, I'm more focused on the ops side. Ask me about goats, or just go to the website goatcan.do. And I'm a Trident, a Maroon, and a Buckeye. Does anyone even know what those are? "I-O!" There we go, one person knows what that is. Bonus points if you can figure out what the other ones are.

So let's start with a review of what a container is, because I think it's important to talk about what a container actually is so we understand what we should actually be putting inside our container images. There's an excellent talk that Jess Frazelle did at DevOps Days Minneapolis, and you can get to the slides there. These slides are already posted on SlideShare, so if you search for me on SlideShare you can find them if you need these links.

So what is a container? A container is a collection of Linux concepts that are used to create, basically, an isolated process running on a Linux operating system. It's not a mini-VM, even though everyone tends to treat them like mini-VMs. They're a combination of namespaces, cgroups, in some cases seccomp, and also Linux Security Modules. And if you think about the difference between containers, zones, jails, and VMs (once again, thanks to Jess for letting me borrow this slide), she wrote a very good blog post about this, and in it she talks about the differences between containers, jails, VMs, and zones. And the thing is that containers aren't real. That's the number one thing you have to understand: a container is not a real object. It's not a first-class concept. It's a collection of different things available in the Linux kernel that allow you to run processes a certain way. So it's not a high-level object where you can just go and create "a container." What actually happens when you create a container is that the runtime goes and abstracts all of these things away from you: cgroups, namespaces, seccomp, and so forth.
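To make that concrete, here is a rough, purely illustrative sketch of the kind of plumbing a runtime hides from you. Nothing here is specific to any particular runtime; the ./rootfs directory, the memory limit, and the TARGET_PID variable are placeholders, and real runtimes do much more (seccomp profiles, capabilities, LSM policies, and so on).

    # Hypothetical sketch: an "isolated process" assembled from raw kernel features.
    # Start a shell in fresh PID, mount, UTS, IPC, and network namespaces,
    # rooted at a placeholder filesystem:
    sudo unshare --fork --pid --mount --uts --ipc --net chroot ./rootfs /bin/sh

    # Resource limits come from cgroups, managed separately (cgroup v1 shown):
    sudo mkdir /sys/fs/cgroup/memory/demo
    echo 268435456 | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    echo "$TARGET_PID" | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs  # PID of the shell above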
The runtimes make containers much more accessible and give you this idea of containers. From a user perspective it feels like a first-class concept, but it's not a first-class concept. That's really important to understand as we dive into this.

Going back a little bit further into what a container is: I think this is the most excellent diagram you can use to explain to somebody what a container is and what should be in a container. It's from the Kubernetes documentation; you can find it if you search for "what is Kubernetes." Why I remember all these Google searches, I have no idea, but I do sometimes. Basically, the concept of a container is that you have your application and libraries in a self-contained object, and all that self-contained object needs to run is access to the kernel. It essentially boils it down to that. I would say the only thing wrong on this slide is that it shouldn't say "uses OS-level virtualization."

So let's go now into problems with the Dockerfile paradigm. Now that we know what a container is and how we should be thinking about containers, let's talk about the de facto standard for building containers: Dockerfiles. The problem with Dockerfiles is that builds aren't deterministic or reproducible. You can take a Dockerfile and run it several times, and while you feel like you're getting the same result, you may not necessarily get the same result every single time you run it. Can someone think of why they're not reproducible and deterministic? Well, timestamps are one thing. Upstream packages might change: you build an image one time, then there are security updates, and of course you want to pull those security updates in, but you no longer have the same object at the end of the build once those upstream packages change.

The other thing about Dockerfiles, and an article I read about Ansible Container (which I'm going to touch on) also touches on this, is that you use bash to program things, which is good and which is bad. The thing that frustrated me the most back in 2014, when Docker was getting really popular, was: fucking damn it, we're back on fucking bash, we're automating everything with bash. Really? We've had this great history of config management and deterministic languages around config management, things like Puppet and CFEngine and Chef and even Ansible, and we've fallen back to automating with bash. I might still be a little bit bitter about that. I'm bitter about a lot of things; you'll find that out about me.

And what this does is make it easy, very easy, to turn a container image into a VM. If you look at a Dockerfile, you'll see this immediately in the very first line of the majority of Dockerfiles out there. What's that very first line in a Dockerfile? FROM. And what are you going FROM? Ubuntu. Operating system images, right? So you're pulling the concept of operating system images immediately into how you build, and what you get is an object that's built on top of an operating system. And so the Dockerfile paradigm basically lets you do the wrong thing from the start, right?
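Just to make that concrete, here's a minimal, hypothetical example of the pattern being described; the application artifact and the package name are placeholders. The FROM line drags in a full OS userland, and the apt-get line fetches whatever is upstream at the moment you run the build, which is one reason two builds of the same file aren't guaranteed to produce the same image.

    # Sketch of the typical Dockerfile paradigm; app.jar and the JRE package are illustrative.
    cat > Dockerfile <<'EOF'
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y openjdk-8-jre-headless
    COPY app.jar /opt/app/app.jar
    CMD ["java", "-jar", "/opt/app/app.jar"]
    EOF
    docker build -t myapp:latest .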
You have a lack of visibility into what's really in the final image, and you see this in things like container scanners: you have to have a container scanner to actually know what libraries you pulled in when you decided to install the software with an apt-get install or a dnf install, right? Because you don't know the whole dependency tree of that package you're putting into the container, everything that's going to get pulled down from that dependency tree. I've talked about this before, if you've seen talks I did when I worked at Chef, but it's a bottom-up approach versus an application-down approach. What you really want is to be able to look at an application artifact and determine from that artifact everything the application needs to run (a JAR, a WAR, Node.js source code, whatever that artifact might be), and then build that dependency tree from the application on down, not by layering operating system layers on top of each other, right?

And we see this if you look at some of the real-world numbers about container usage out there. This is one of a couple of different surveys, and you see different numbers, but it's somewhere around seven to ten containers per host, call it eight and a half to average it out, right? And why is that? Well, one, because we're treating containers as VMs, so we're not able to take advantage of all the resources we actually have on a system. The other thing is that we're running containers on VMs, which may not necessarily be the best paradigm to get the best usage out of the system, but that's another topic for another rant of mine.

The other way to look at it is this. This is, I don't even know how you say it, Anchore? Anchore is a company; they have a booth, you can go there and talk to them. I took this from a blog post they did, and this is relatively new; they wrote it in September 2017. Look at the base OS image sizes. With Fedora, just to get started, if you want to run your application using FROM fedora:latest, you've just bought yourself 220 megabytes that you have to carry forward, right? And you see this in enterprises. How many people would consider themselves working for an enterprise? All right, so you're probably using RHEL, right? And those images are probably even bigger, no? Well, I see one guy shaking his head. I've been in some banks where they basically take their VM image, convert it into a container image, and that's the FROM going forward. So you get a 4.7 gigabyte image that you get to carry forward. You laugh, but it's true, right? And you have the same problem with a lot of these distributions.

If we look at official images, grouped by the operating system they use, you can see that official images that use Debian tend to hover around 400, and I'd probably say closer to 475 megs, right? So you pull down an official image and you're now committed to carrying roughly 470 megs. Of course, layering helps with some of that. And then this is public image size. Those last two charts were more focused on what Docker provides for you to build from; these are public images that other people have built and pushed to Docker Hub, publicly available on the internet.
And you can see that, once again, you laugh, but on average you're looking at about 870 megabytes for a container image. So going back to that diagram that we saw, or let me just move forward through the slides, because I bring that diagram back up. I hate it when you forget your own slides, right? So what can we infer from this? Any ideas? Use Alpine, yes. Use BusyBox as well, or optimally use scratch if you can, and build off of that. And if you just rewrote your entire application stack and statically compiled everything, you'd be fine. What's funny? So if we think about this, what we really need is just the app and the libraries. We don't need all that other stuff we keep carrying forward from the last generation of how we ran systems and operating systems, right? And what you can infer is that nobody knows how to package their application in a container, right?

So let's talk about some of these build tools. There are a lot of build tools out there; I actually deleted a few off of here. We're going to talk about what I'd consider traditional build tools: Buildah, NixOS containers, Ansible Container, Smith, and Distroless, which I don't really consider traditional, which is why I couldn't really categorize it, but it's kind of an interesting tool out of Docker, I'm sorry, out of Google. And then what I call source-to-container tools, which in my opinion are what we really want to try and get to: tools where I can give the tool a source code repository, give it instructions for how the software should get built, and it will magically spit out a container with the application and all of the application's dependencies in it, right?

So let's first talk about Buildah. It's an interesting name; every time I say it, everyone's like, "what?" Okay, so Buildah is a project from Project Atomic, and you can get the source there. It's interesting in that it will create both OCI and Docker image formats for you. And one important thing when you're looking at container build tools, and I'll try to highlight this as I go through them for the ones where I know it, is whether it requires a container runtime to build containers. Buildah doesn't. This is important because you may not want a Docker daemon running on all of the build nodes that are actually building the containers, or locally on your workstation, or you may want to build a container image for another operating system, say, maybe Windows. Not having that tight coupling to the container runtime is important, and it allows for some interesting uses. (I actually stole that from a text I got from Bridget last night.)

With Buildah, though, some of the paradigms they talk about using are still very, very tightly coupled to the operating system. The examples they give are basically: you say buildah from the image that you want to base your new image on. The next thing you do is mount the root filesystem. Then you go into that root filesystem and do the work that you need, maybe a dnf install, maybe changing files, other things like that. And then you commit that, and it becomes a new layer. Does that sound horrible to anyone?
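In rough command-line terms, that first workflow looks something like the sketch below. The base image and package are illustrative, and the exact invocations may differ from the Buildah docs, but notice two things: there is no Docker daemon anywhere in the flow, and nothing about it is declarative.

    # Hedged sketch of the "start from an image, mount it, mutate it, commit it" flow.
    # Run as root, or wrap in 'buildah unshare' for rootless use.
    ctr=$(buildah from fedora)                    # working container from a base image
    mnt=$(buildah mount "$ctr")                   # mount its root filesystem on the host
    dnf install -y --installroot "$mnt" httpd     # mutate the filesystem directly
    buildah config --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
    buildah umount "$ctr"
    buildah commit "$ctr" my-httpd                # commit the result as a new image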
The other example they give is building from scratch. You create an empty container, and of course if you try to run it, it's not going to do anything, because there's nothing in it. Then, if you want to get bash into it, you mount up the root filesystem, run a dnf install with the install root set to your container's root, and it will go and install the libraries or whatever you told it to install, which also sounds pretty bad. And it sounds pretty bad because of this idea of not being deterministic, right? How do you go and recreate that build process? Some people would say, "well, I would recreate that with a Makefile," which could be fine, but once again you're running commands and you don't necessarily know what you're putting into that container.

So NixOS containers are interesting. Who uses, anyone use NixOS? One, two, three, a few of you. It's a little bit of a niche operating system. The NixOS container support is interesting: it builds containers based upon NixOS. And one of the interesting things about NixOS is that you can have multiple versions of a package installed. I don't remember the whole tree exactly. What does the path look like? It's something like /nix/store, then a hash, then the name of the package and the version of the package, right? So if you wanted multiple versions of Postgres installed, multiple versions of Apache, multiple versions of a library, it's very easy to do that, whereas traditionally with RPM and Debian packaging it's been hard to do. Of course, with newer versions of RPM you can do that.

It has imperative and declarative approaches. Imperative is essentially that example we just saw with Buildah, where you don't necessarily know what result you're going to get. Declarative means you've declared the final state and you know exactly what you're going to have inside of the container. Now, with the declarative approach, the container can be auto-rebuilt when the host OS updates. So if you do an update to the host (I believe the command is nixos-rebuild), you have a config file that lists out all of the containers you want, and it will go and automatically rebuild those containers with the latest versions of the software you need inside of them. Which sounds interesting, but you also don't want a tight coupling to your host operating system, right? You want to be a little more independent from your host operating system. And as I said about Nix's packaging approach, this is interesting because you could have multiple containers running on the same node, built the same way, with different versions of software in each one, while they still pull from the same source repository.

But there's this nice little note in the Nix documentation, and I'll just give everyone a second to read it. How familiar are you with Nix, Boris? Not very; sorry, he raised his hand earlier and said he'd used it, but he's not very familiar with it. What I believe they are doing, and this is a little bit of speculation, is recreating their own container runtime environment, and they haven't isolated things properly, whether with cgroups or namespaces or seccomp and so forth.
And so it's basically a chroot jail, and hopefully any sysadmin who's done this for more than a year knows that if you run a chroot jail as root, it's very, very easy to break out of that chroot jail. So that's my reaction to that.

And here's Ansible Container. I find Ansible Container interesting, not so much because the tool itself has caught on, but because the paradigm has caught on. A little backstory: we created something at Chef in 2014 called Chef Container. You could actually use Chef to build containers, and if you had cookbooks and recipes, you could use those same cookbooks and recipes to build a container. The mistake we made is that we kept chef-client running inside of the container, and we were treating containers like VMs: if the recipe was updated or needed to rerun, Chef could always go and rerun and change what's supposed to be an immutable object. But the paradigm of using cookbooks and recipes, a declarative language, has caught on in other ways. Puppet has something similar, called the image build module, I believe, that allows you to create container images based upon Puppet code.

So Ansible Container is mostly a declarative approach. I say mostly declarative because one thing configuration management systems allow you to do is run arbitrary bash, or arbitrary commands, and as soon as you run an arbitrary command, you don't necessarily know how the underlying system has been modified, you don't necessarily know what you're going to get, and therefore it's not 100% declarative. Well, as long as your playbooks are declarative, then this will be declarative as well. It's interesting that they've essentially recreated Docker Compose: you create a container.yml, you can have multiple definitions of the containers you want built, you can link them together much like you can in Docker Compose, and you can pass variables between one another; the variables get automatically populated when you go to build the containers as well. And it allows you to take advantage of Ansible expertise and playbooks you already have in your environment. How many people use Ansible in here? A fair number of you. So if you have a commitment you've already made, and expertise you've already invested in, and you don't want to throw that out the window, you can take advantage of it.

The other interesting thing, which I'll talk about at the end: does everyone know what the Open Service Broker is? It's something in the cloud-native world that basically gives you a well-defined API to request services, not only from cloud providers; you can also go and write your own definitions for the OSB API. And Ansible has recently come out with something called the Ansible Service Broker that basically lets you have Ansible Container build containers behind the Open Service Broker, and it integrates very easily with things like Kubernetes and Cloud Foundry. So if you want to make it very easy for your developers to spin up a Postgres database, you use OSB to spin up that database, and Ansible Container basically does all of the orchestration behind the scenes for you.

Smith. Smith is one that I actually think has a very interesting approach, and it has potential, and I'll talk about that at the end. Smith came out of Oracle, and it focuses on this idea of building microcontainers. Go ahead and read those three principles.
So: a microcontainer contains only the process to be run and its direct dependencies. It has files with no user ownership or special permissions beyond the executable bit. The root filesystem should be read-only; anything you actually need to write should go in a directory called /write, any unique config should be in /read, and ephemeral files such as PID files should be written to /run. Now, this sounds very close to what we actually want, right? When we go back and look at that diagram, it's close to what we want from a container build tool.

It's mostly declarative, and I say mostly declarative because if you look at what they're doing under the hood to actually go and build the containers, it gets pretty nasty. It's an interesting concept and an interesting idea. What you can do is use yum packages or Docker base images as your source of binaries. So with Smith you can download a base image, open it up, and pull out the binaries that you need, right? Then you put that into a YAML definition and use that to actually go and build the container. There's a great how-to on building a tiny httpd container, and in it the author points out that the container you get out of this is 3% of the size of the Docker Hub image, right? So you can eliminate a lot of what you don't actually need.

Another reason why microcontainers, this idea of a small area of concern for containers, is important is that it cuts down on the number of vulnerabilities you have to manage. If you have only your application and the application's direct dependencies (well, and its transitive dependencies), you cut down on a lot of the software you have to manage inside of every container. And if you have many container images to manage, it starts to become untenable when you have things inside the container that you don't even need, like maybe bash. And if you're asking, "well, how do I troubleshoot my container if I don't have bash?", well, use Sysdig. But they've taken a very good approach. You can also use yum packages, and that's why I say it's only mostly declarative, because once again, when you use yum packages you don't necessarily know, when it goes and downloads that one package, all of the transitive dependencies it's going to download as well.

So what do you want in a container? I've been using this diagram for a long time; it's stolen from my time at Chef. Thank you, Mark. He's not paying attention; he's angrily tweeting at me right now. What you want to eliminate as much as possible is the percentage of the container that's considered operating system. Think of this as the weight, the overall size, of the container: you really want 100% of your container to be just the application and the libraries you need, and you want to eliminate everything else as much as possible.

And this is where Distroless comes in. Distroless is essentially declarative builds leveraging an open source tool from Google called Bazel. Bazel is an interesting build tool that will build any piece of software for you; there are lots of build rules, and there's a gentleman right here who can tell you all about Bazel. So, Distroless images, and this is directly from the GitHub page:
Distroless images contain only your application and its runtime dependencies. They do not contain package managers, shells, or any other programs you would expect to find in a standard Linux distribution. They provide stripped-down base images for popular languages: Java, Python, Go, C, Node, and .NET. So basically all you need to do is point to your source code artifact, which might have been built by Bazel as well, and that gets included in the resulting container image you end up creating. And so you have your JAR, the JDK, and the JDK's libraries, and you're good to go, right? The challenge is: hopefully it's the version of Java that you need, or the version of Python that you need, and so forth, right?

There's another tool by Red Hat, more out of the OpenShift team, called Source-to-Image (S2I). It's a toolkit and workflow for building reproducible Docker images from source code. One image is used for the build, and it can also be used for the run if you really want. In the case of an interpreted language like Python, PHP, Node, or Ruby, where you do a bundle install or an npm install and pull down all your dependencies, the container that ends up getting created is usually sufficient to ship to production and actually run your code. In the case of something like C or another compiled language, you don't necessarily want all of the build tools, so Source-to-Image allows you to have one build image and then a separate image that you use to actually run the application. And, any Red Hat people in the room, please correct me, this is just what I've inferred, but Source-to-Image is essentially what's behind the scenes in OpenShift, right?

Some interesting things about this: build environments can be tightly versioned. That build container can have everything you need to build that piece of software, it's immutable, and you know exactly what versions of the build tools are in it. If your application changes and needs a different build tool, you build a new image, and if you have to go back and build an old version, you already have that versioned build environment. Also interesting: because you're running the build inside of a container, you get the isolation benefits that containers give you from a security perspective, so you can isolate the build environment for your builds as well.

This is essentially what it looks like: the developer pushes code to GitHub, which kicks off a build inside the build image, or rather it pulls the build image to do the build. It can do one of two things here: it can commit the code into the image and you use that as the application image, or you can have a separate image that's used to actually run the application, and that separate image just expects the built source to be pushed into it somehow, so maybe you specify a JAR that gets copied in, or something like that.

Now, this one is really new. BuildKit is something from Docker, actually from the Moby project, that they started talking about back in May, and they showed off some early concepts at DockerCon EU at the Moby contributor summit.
BuildKit is basically a generic tool for converting source code to build artifacts in an efficient, expressive, and repeatable manner. There's a concept of front-ends, which is a horrible name, because "front-end" means something to everyone in this room, and probably not "a definition of how your software gets built." But that's what it is here: the definition of how your software gets built. See, when I get snarky I lose track of where my hand is on the clicker. So BuildKit takes this human-readable definition and translates it into a low-level build definition (LLB), and that low-level build definition does things like dependency graphs and so forth. It does build caching and a whole bunch of other things. You can also write these low-level definitions directly in Go if you want to. And then you have exporters that allow you to export these build artifacts in a variety of formats, beyond just container formats. So if you wanted to export an AMI, you could build an exporter to create an AMI for you, or a traditional operating system image. It focuses more on this idea of: how can we create generic primitives for a build system? Because remember, containers aren't anything special. Containers are just another artifact that we're pushing through a build system. An artifact is an artifact is an artifact; always remember that. A VM image is an artifact, right? And there's a very good blog post that talks about this.

And this is kind of what it looks like. Right now the only front-end that exists is the Dockerfile one, a definition of how to build from a Dockerfile. And then you have a lot of things that the LLB provides for you. Also caching: you can cache the dependencies that you pull down in that whole dependency tree, and you can keep that cache as an image so you don't have to pull it down every single time. They do some other interesting things around how you actually update those dependencies if one of them changes as well. And then the idea is that you have a whole variety of exporters available to you.

And then we'll talk about Habitat. Habitat is an open source project by Chef; it's been around for about 18 months. I'm slightly biased because I was a platform advocate for Habitat for about a year, about 14 months, before I left Chef, so this is probably the tool I know the most about. And this is actually why I did this talk: to try and sneak Habitat in and put it in front of you all. So, mission accomplished, I guess, even though I don't work there anymore.

Habitat is focused on: how do I build my software? How do I deploy that software on a variety of different platforms? And then how do I actually run that software, through what's called the Habitat Supervisor? What's interesting is that it's a consistent process for packaging all your apps across your architecture; it's the same process whether you're running on Linux or on Windows. And they provide scaffolding for key languages: there's a common way to build Node.js applications, a common way to build Ruby applications, a common way to build Go and Java, and if you're using Gradle or Maven or something like that, there's a common paradigm that's repeated over and over again. So why should you go and recreate that wheel? Use one of the scaffoldings that they provide.
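To give a sense of what that looks like, here's a rough sketch of a plan that uses the Node.js scaffolding. The origin, name, and version are placeholders, and the exact syntax and commands may differ from the current Habitat docs.

    # habitat/plan.sh (minimal sketch; myorigin and the version are placeholders)
    pkg_name=my-node-app
    pkg_origin=myorigin
    pkg_version="0.1.0"
    pkg_scaffolding="core/scaffolding-node"   # the scaffolding works out dependencies and build steps

    # Then, roughly, from the project root:
    #   hab pkg build .                          # produces a .hart package under results/
    #   hab pkg export docker results/*.hart     # exports that artifact as a container image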
So what you end up getting is this build artifact, and then you can take that build artifact, run a hab export, and choose one of these export targets. And this sounds a lot like BuildKit: you take this artifact, and basically what Habitat will do is walk the dependency tree, figure out everything you need inside of that package, package it up, and spit it out in whatever format you specified. And then the other thing is that there's a built-in Supervisor inside of that container image (let's say it's a container image) that does things like service discovery. It'll generate templated configurations for you. It'll help you with clustering topology: if you bring up multiple containers and maybe you're running a database where one needs to be the master and one needs to be the replica, it will automatically figure out, through a gossip protocol, who should be the leader and who should be the follower. There are health check APIs, and a whole lot more. And there's also a build service as well. Basically, you write a plan, you put that plan with your source code, you commit to GitHub, and the build service will see there's a change in GitHub and automatically build that package for you. Optionally, you can also have a container image built using the exporter and pushed to either Amazon's container registry or Docker Hub.

So, in summary, as I'm coming up on my time, believe it or not: these sessions are way too short. That should be feedback that you give, that these sessions are way too short. And also lunch. Like, really.

So, Buildah. The problem with Buildah is you're still coupling yourself to a lot of operating system paradigms. When you're building containers, you should not be thinking that you're building an operating system, and when you use operating system tools to build containers, you're using that old paradigm, which may no longer be applicable. NixOS containers: well, it's a niche OS, and we saw that nice insecure container model. You also have to eat NixOS for your host OS as well, and you may not necessarily want to do that. Ansible Container is actually pretty great if you're committed to Ansible, and I think the Ansible Service Broker, and what they're doing around that for stateful services for things like Cloud Foundry and even Kubernetes and OpenShift, is actually a really interesting concept. Smith, I think, is on the right path with this idea of microcontainers. Where Smith needs help, and this is an open source community, so what I would recommend if this interests you, is finding a better way to pull apart these container images and get the pieces out that you need, to package them up in this microcontainer-type format. Distroless is the right approach, completely removing the operating system, but it's very, very language-specific, at least right now. And Bazel, in my perspective, is not the most approachable tool, and the real-world examples are minimal. If you look at some of the Bazel build definitions, you'll see what I mean. Let's just leave it at that. BuildKit is very interesting. It's very, very new, though, and it's very early stage, but the goal of BuildKit is to solve software builds in general. You can also do versioned build environments with BuildKit, but it's still too early.
Examples are really sparse, and front-ends for the languages and things you actually want to build are still non-existent. So I'm not being negative here; I'm showing you opportunity, as open source contributors, for where you can go add value to these projects to get them to where we need them to be. That's how I spin it, at least. Source-to-Image is interesting: it gives you versioned, secure build environments, and it actually has a library of build and run images available, but they're all built upon that operating system paradigm, of course, because they're coming from Red Hat. And then Habitat is very interesting. It describes software builds easily, in bash. You have export formats for multiple platforms, which I think is really, really powerful. It does the right thing in determining what a build artifact needs to run, and it uses that to create the deployable artifact; that's how the container is created. The challenge is that you get the Supervisor whether you want the Supervisor or not, and it doesn't always fit with the Kubernetes paradigms. What the Habitat team did to fix that is they created a Kubernetes operator that you can load in, and the operator talks to the Supervisor, so you get the benefits of the Supervisor but still operate in a more Kubernetes-native fashion. And the software libraries, this is another place where you can add value to this project, the software libraries provided out of the box are not always well maintained. A good example: MongoDB is about eight releases behind on the minor version and several releases behind on the major version.

So, summary. Container build tools... stools. Just let that soak in. Container build tools still have a long way to go. Each tool has pluses and minuses, and those minuses are opportunities where we as an open source community can contribute. Some tools sacrifice best practice for approachability, in the case of maybe a tool like Buildah, and some tools make things overly complex. So what do we need? What's my vision of a container packaging tool? A buildpack-type model for building from source code, that idea of scaffolding that Habitat has; a declarative container build manifest generated from that build of the software; and then an exporter to create an image with only the apps and the dependencies that you choose and need.

So, that's four minutes over, but I'd like to thank you. Here are the slides; I'll also tweet them out, so you can follow me on Twitter. And of course, I was told I had to put this in at the end: Sysdig is hiring. And since we're over time, I'll take questions in the hallway. Thank you.