All right, welcome, welcome everyone. Welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm Taylor Dolasal, a senior developer advocate at HashiCorp, where I focus on all things infrastructure, application delivery, and developer experience. Every week, we bring a new set of presenters to showcase how to work with Cloud Native technologies. They will build things, they will break things, and they will answer your questions. Join us Wednesdays at 11 AM Eastern Time. This week, we have Peter and Martin here to talk with us about building, analyzing, optimizing, and securing containerized apps. This is an official live stream of the CNCF, and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat that will be in violation of that Code of Conduct. Basically, please be respectful to all of your fellow participants and presenters. With that, I'd love to hand it over to Peter and Martin to kick off today's presentation. Take it away. Well, hello. Thanks very much for inviting us on. I'm Martin Wimpress. I work at Slim AI. I'm a senior developer advocate and community manager, and I'm joined by my colleague, Peter. Hi, Peter. Hi, I'm Peter. I work on the growth side at Slim AI. Thanks for having us on and excited to be here. Yeah, so I think we'll just dive into the introduction of what we're going to be doing today. So, as highlighted in the recent CNCF Software Supply Chain Best Practices white paper, tools like Docker Slim can be used to limit the number of files in container images and thus reduce the attack surface. So while our talk today applies to any OCI-compliant container paradigm, we'll be focusing on Docker images, mostly because they're familiar and most prevalent out there. But what we're specifically going to cover is Dockerfile best practices, just at a surface level. We can dive into that in more detail on another occasion. We have done live streams and blog posts and stuff on this in the past. We'll be using Docker Slim to analyze the container layer construction. We'll be doing some security scanning of container images, generating a software bill of materials for the container images. And then we'll be using Docker Slim to minify the container and analyze what's changed, making comparisons to the security and SBOM analysis we did earlier. And we'll be doing that by exploring and diffing container images. So that's what we've got on the docket. Is there anything you want to add to that, Pete? No, I think that's really good. I think what people will see, I think one of the advantages of this is that you can run just the code that you need in production. And we have some highlights of potential security vulnerabilities that you might see in Kubernetes, examples of great talks people have done in the past on that. And also some of the drawbacks of slimming, right? No great technology is without trade-offs. And so we'll be highlighting some of those things as well. So excited to take it away, Martin. Okay, right, so I'll just, I'll be in the terminal for much of this, but towards the end, we'll be moving to a pretty web app. So there we go, that's my terminal. So we're gonna start with the Dockerfile. So we're just gonna use this so we can see some syntax highlighting. So this is a pretty good start from a sort of best practices point of view. So I'm just gonna talk through some of the things that exist in this Dockerfile that are good things to do.
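For anyone following along, the Dockerfile being walked through here is roughly this shape. This is a reconstruction from the description rather than the exact file, so treat the Python tag, the user name, the file names, and the port as assumptions:

    FROM python:3.10-slim-bullseye            # pinned, slim base image (exact tag assumed)
    WORKDIR /app
    RUN useradd --create-home appuser          # non-privileged user (name assumed)
    COPY --chown=appuser:appuser requirements.txt app.py ./   # only the assets we need
    RUN pip install --no-cache-dir -r requirements.txt \
        && apt-get clean && rm -rf /var/lib/apt/lists/*        # boilerplate clean-up
    USER appuser
    EXPOSE 5000                                # port assumed (Flask default)
    ENTRYPOINT ["python", "app.py"]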
First of all, we've specifically versioned the base image that we're pulling from. And we've even chosen to use a slim base image to keep the size of things down. We've defined our working directory. Then we copy in our assets for our app, and our app is very simple. I'll give you a quick look at that in a moment. We also change the ownership, with the chown option, in that COPY command. And we only specifically pull in the assets that we require so that we don't accidentally leak any tokens or secrets that might be in the app directory. And then we're using some best practice here. The pip install is not caching any data inside the container during the RUN command. And then we've just got some standard boilerplate here. This image is based on Debian. And although we're not doing any apt operations, that's a bit of boilerplate that I always have at the end of those scripts to clean up and make sure we're not leaving any cruft behind. Then we're choosing to run our app using a non-privileged user. We're exposing the port that the app works on. And that's also useful for the sort of discovery process that Docker Slim does. And then we've defined a tight entry point to our application as well. So these are all good things. And there's a couple of links at the top of that, if you can see them, to a blog post that I published earlier in the year, which covers this in a bit more detail. And also, because this is a Python app, a link to an article from Python Speed, where they specialize in Python inside containers. So Pete, anything you wanna dig into on that one? No, I guess just a note for the audience that this is a Flask app, which is one that we've done on our Twitch stream in the past, so it might be familiar. I see Yannick is in chat, so hello, French guy. And you may have seen this example before, but it's a pretty simple basic app. But for us, the app isn't super interesting in that we can do this with a Node container, or we could do this with several different containers. And we're just using this as a very basic example because a lot of people are familiar with the framework. Yeah, yeah, it's just got two endpoints, root and hello, and that's all it does. Now, the app, like Peter says, the app isn't important, but the two endpoints will be; we'll use those as examples a bit later on. So that's the app, that's the Dockerfile. I suppose the other thing to point out is a lot of people sort of advocate for starting with things like Alpine if you want to make slim, minimized containers. We've chosen not to do that here. There could be different reasons for doing that. If you are a developer that's specifically knowledgeable around Ubuntu or Debian, you may choose to use a base image that you're familiar with to help with developer momentum, or it may be that you are aware that using something like Alpine could actually introduce some unexpected behavior in your application or your language ecosystem, which is actually something that can happen more often than not with Python apps. So we are not using Alpine on this occasion. Now, you can still slim Alpine images, as it happens. And what we're gonna be demonstrating here is a reductive process for creating minified containers, as opposed to an additive process that you would go through with something like Alpine as your starting point. So let's have a look. What have we got here now? Well, one of the things we can do is lint our Dockerfile with Docker Slim. So let's just lint that Dockerfile.
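That lint step, for reference, is just the Docker Slim lint command pointed at the Dockerfile, roughly like this (the exact name of the report file it writes may differ):

    docker-slim lint --target Dockerfile      # writes a JSON lint report alongside, which gets catted in a moment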
We'll get some output there and it will generate a report file for us, which is just here. So if I cat the report file, we can see that it's analyzed 24 things that we haven't fallen foul of, but we have actually got one finding, and that's that we have no .dockerignore file. There is room for improvement in the way that we've structured our container here. So we can do better there, and maybe next time we'll make sure we have a .dockerignore file. I think somebody in the chat already called us out. So one thing, and I know that we mentioned this in the blog post that you talked about. So exeggit.io points out that we should copy app in after the pip install in the Dockerfile, which is an improvement for the layer caching, right? So as we're making changes to our application, those are gonna be more frequent. We're not gonna change the underlying dependencies as much. I know we talked about that in the blog post. So good catch and slight improvement there as well. Yeah, right then. So with that, let's move on to the very basics. We'll build our Docker image here. So let's think now, we'll call it prod-fat. So this is our fat container. So there's nothing here that anyone wouldn't expect to see. So docker images and there we have, so our base image is 122 megabytes and by adding our app and dependencies, we've now got a container that's 133 megabytes. So that's not too shabby. And I will run my app very quickly. You don't need to see what it displays in the browser. What I will do is, oops, what have I done here? Oh, I've missed a letter off there. So I will just go over here very quickly. And, as you can now see, I am poking at my application. And I can also hit the other URL, which is this one. So there we go, that's the entirety of what that app does. And there it is running, which comes as no surprise. Now, what we're going to do is we're going to simulate something that we see quite regularly. And that's container images that make their way into production that have still got development tooling inside them. So we're going to sort of create a simulation of that. So I have another Dockerfile here. I'll just show the two Dockerfiles. I have Dockerfile and Dockerfile.dev. So Dockerfile.dev, I know there are better ways of doing this, but this is purely for illustrative purposes. This Dockerfile does an apt update and then installs the three essential developer tools: the fish shell, Git, and the world's best editor, nano. So there we go. Our developers are now super happy because they have the ultimate in iterative development environments locally on their machine. So we will build that dev container for the purposes of comparison. And we'll call that dev-fat on the tag there. So we'll build that. We don't need to run it, but what we will do is we will have a look at what that's done to the size of the container. As you can see, it's doing quite a bit more. Oh, goodness, I can't type for toffee today. Right, there we go. So now that dev container is 271 megabytes in size. So that's a container we'll be using for some comparison processes a little bit later on. Now, what we're gonna quickly go through here is scanning Docker containers, or containers generally, for security vulnerabilities and also to generate SBOMs. Now, we could use docker scan to do that. You've probably seen that. We're going to use a couple of different tools. We're going to use Grype. So Grype is a vulnerability scanner for container images and also for file systems.
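For reference, the commands in this part of the demo look roughly like the following; the slim-demo image name and the port mapping are assumptions used throughout this walkthrough:

    docker build -t slim-demo:prod-fat .                   # the production image
    docker build -f Dockerfile.dev -t slim-demo:dev-fat .  # the dev image with fish, git and nano
    docker run --rm -p 5000:5000 slim-demo:prod-fat        # run the Flask app (port assumed)
    grype slim-demo:prod-fat                               # vulnerability scan with Grype
    trivy image slim-demo:prod-fat                         # same idea with Trivy, shown shortly
    syft slim-demo:prod-fat                                # SBOM generation with Syft, shown a bit later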
So we're going to scan our prod-fat container and we're gonna see what this generates. So here's our vulnerability list. It tells us that 114 packages have been catalogued and there are 67 vulnerabilities. Most of these are low or negligible, with some mediums and criticals hiding away in there. What have we got here? This looks like an interesting one. libgcrypt and libgnutls. Okay, so there's a bunch of CVEs in here. Different statuses. Now, you should run these types of scans and analysis before you slim a container. So you get an overall picture of what was used to construct that container. The slimming process removes some metadata from the container and that limits the effectiveness of these scanning tools. So we do the scanning now on the full-fat containers and then we'll store, well, we're not going to do that, but you should store and publish this metadata alongside those images. So you've got a record of what was used to compose them. So that's our security scan done there. Let's just take a look and see what that looks like if we do the same on our dev container. So what have we got here? 185 packages, 120 vulnerabilities. So that's another 71 packages included in that image and nearly double the number of vulnerabilities. So just by including some dev tools, we've increased the potential attack surface inside our container. Perl is installed now, yes. So we can do the same thing with Trivy, which is another vulnerability scanner. That's going to tell us much the same sort of thing. If we just look at the sort of edited highlights here, oops, yes, that's correct. So we can see the package here, we can see there were 65: 59 low, two medium and so on, and we can do the same for our dev container and, unsurprisingly, it finds more stuff as well. So you can use those two tools to get CVE analysis from your containers. And then the other thing we're going to do is we're going to use a tool called Syft, which is a tool and library for generating a software bill of materials for container images and file systems. So this is another open source project from the same camp as Grype. So let's take a look at this. We'll do the production container and this will build us a nice big list. And you can see here the difference between those things that are debs that were installed as part of the distro versus those things that are Python packages that were pulled in via pip when we constructed the container in the Dockerfile. So that's looked at 114 packages. And this tells us the packages that are included and their versions. So this is a really easy way to generate that. And then we can do the same thing against the dev container and, to the surprise of no one, there is more stuff in it. One of the interesting things in here is, as a result of installing those dev tools, we now have the OpenSSH client installed. So what we've done is we've put a very interesting tool into our container image that any would-be hijacker would be delighted to find inside that container, to use for island hopping or things of that nature. We've seen this recently. There was a talk at, which event was it, Pete? Was it Cloud Days? Container Days, and a similar talk at Cloud Native Rejekts right before KubeCon. So yeah. It was a slightly sort of staged example, but it made a very good point about how changed defaults affected the behavior of a Kubernetes cluster, and then you have an application that effectively has a bug in it. It was unsanitized user input.
And from that they were able to create a reverse shell, because inside the container image in the cluster there were shells, there was curl, and just enough tooling to create reverse shells and provide the tooling to somebody to actually poke at the APIs of Kubernetes and then disrupt its operation. So what we're going to be looking at in just a bit, when we minify the container, is removing that unnecessary tooling that exists inside the container images to reduce attack surfaces. So in that same situation, yes, your application may still have that bug inside it which means that somebody can overflow the application. But then none of the tooling exists in the container for them to actually island hop or go any further with that exploit. So that's the benefit here. Yeah, Martin, we've got a couple of questions from the chat so I should take them in a couple of orders. So just as a reminder, as you mentioned, Grype and Syft, they're supported by the company Anchore. So they're great open source tools. We have a question about open source projects for beginners who are still learning. So I think if you're just getting started with containers, kind of creating containers that have a simple application in them is just a great way to get started, if you do the sort of Docker hello world type of examples in whatever programming language that you use. Or you can actually just take any hello world tutorial. So right now I'm learning Rust. There's a great Rust container out there. I don't really know how to use Rust so I have a hello world app. I can containerize that in a pretty simple Dockerfile and then I can run some of these tools on them. I can use Docker Slim to try slimming that container. I can use these open source tools that we just showed from Anchore, Syft and Grype, to generate an SBOM and do a vulnerability scan. I could use the Docker scan vulnerability scanner to see sort of what's inside them and what might be vulnerable there. At Slim AI, we have a free web platform where you can actually pull that container. You can look at just the Rust container and see what's inside of that, or you can actually pull your own application from Docker Hub or Amazon ECR. So if you're new and you wanna get started with some of this stuff, just creating a very simple hello world container is a great way to get started, and do that in the application language that you're most familiar with. So, are you rusty if you are more or less familiar with the Rust language? I believe you are. I believe Rustacean is what they call you, they give you that tag as soon as you do the hello world example, which I finished this morning. So yeah, we're all rusty. Someone else is asking, what exactly happens during the slimming process? I think we'll get into that, right Martin? So you're gonna show slimming with Docker Slim. So that might be a good segue into the next thing. So great questions, please keep the questions coming. Indeed. So I just need to pick up my terminal again. There we go. There we go. Right then. So what we're going to do now is we're going to use Docker Slim. It has a feature called xray. We're going to use that to sort of generate some analysis on the layer construction. And we're gonna take a little look at that. So docker-slim and then xray. We're not going to export all the artifacts just yet. We're going to scan our production container and we're just gonna output all of that into a text file so that we can go and look at it.
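That xray run is roughly the following, with the output redirected to a plain text file so it can be read in an editor; the image name is the one assumed throughout this demo:

    docker-slim xray --target slim-demo:prod-fat > xray-report.txt   # layer-by-layer analysis as text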
And I don't have visual studio code. So it will have to be the ever wonderful nano. So as you can see, this is quite verbose output, but there are some things. If you look through here, you can start to find some interesting strings. So one such string is this one, which is info equals layer dot start. So if we look through the file, we can see the start of each layer in the construction here. So we can see details about how each of the layers was put together. So we can see the instructions that were run here and the object count and things of that nature, the size of things that were added. And this one's particularly interesting. There's a lot that went on here. So we can see inside here and see quite a lot of information. We can also do things like find the exposed ports. So the information's in here. Now, I do realize that this is a not particularly human-friendly way to sort of visualize that data. So what you can do is this. You can run this xray command and you can export the artifacts. So we'll run that again here. And what that has created is this file here, data artifacts dot tar. And what you can now do is you can go to portal dot slim dot dev and you can upload that tar file and it will generate you a technicolor web view of that analysis. And it turns all of that sort of machine readable data into consumable information. Now I'm not gonna go to the website and do that just now because the process of switching between web and terminal is a bit tricky for me today. So what we'll do is we'll save all of the visibility stuff with the web app until a bit later and we'll look at all of the container exploration analysis and diffing features all together a bit later on. And hopefully that will help answer some of the questions about what exactly happens when a container is slimmed. So that's- The question and the point that we got before from exeggitio who pointed out that, you know, the layer construction would actually be a little better in the Docker file if, you know, we installed the dependencies before we copied the app over. You know, that's a good place to see that in this sort of layer construction from the Docker Slim X-Ray export. Also you can see it in the web platform at Slim AI. So, Sam Franky asks, it looks like a good command of Linux helps a lot for Docker, Martin, do you have a comment on that? Looks like a good command of Linux helps a lot. Not necessarily. I mean, I'm comfortable, you know, in a terminal, but I mean, the Docker commands and the Docker Slim commands, I think Docker Slim only has four main primitives that you need to learn. It's build, lint, X-Ray and profile, you know. So, you know, that's not too complicated to learn. And once you start to get familiar with the tools, you know, they come off the fingers quite quickly. But I think, you know, we're, there's enough complexity, but enough simplicity in a few of the basic primitives for Docker and Docker Slim. So, we're now going to take a look at slimming, you know, the container. So, there's a couple of things to point out here. A lot of people have sort of been treating container sizes like a vanity metric. And that's really not the case. So, the size of a container can be used as an indicator as to the quality of that container and how well maintained it is. It's not the be all and end all, but it's an indicator of container quality. 
And that's certainly something that we're looking at working on: trying to take some of these indicative metrics around containers and turn that into sort of a health report, you know, using more than just container size, but also that security scan, the software bill of materials and various other things. So, we'll slim the container and we'll do this. I'll explain what each of these parameters or arguments do. So, we're calling Docker Slim. We're asking it to build a new container. We're going to call this container slim-demo with the tag prod-slim and it's going to be using our prod-fat container as the source. So, we're going to run this and see what happens. A bunch of stuff comes out on the screen here. I'm going to highlight a few things. So, the first thing you can see is waiting for the HTTP probe to finish. So, what's happened is that Docker Slim has injected an observer inside the container. It runs the container and the observer analyzes everything that that container hit, touched or used in order for that application to run. And it's very much worth pointing out that this is a very key bit of information up here. So, in the HTTP probe commands, there is a count of one. It did precisely one thing. It did a single GET request on the root of the application. And I understand that is probably not sufficient for most people and we're going to get into some of the other things that you can do, because I appreciate our application is probably not as interesting or complex as yours, but we'll show you how to build out how that probe works. So, the other interesting metric here is it has minified our container by nearly six times. The original was 133 megabytes and our optimized container is now just 23. So, let's just confirm that with docker images. And indeed, here is our prod-slim container, 23 megabytes versus 133. So, we should probably run that prod-slim container. There it is. You can see the terminal now, but I'm just heading over here. I'm now going to poke at my, there we go. You can see me hitting the root of the app and then I'm going to the hello endpoint. There we go. One thing to point out as well is that you used the Python slim container as the base image, which is again the sort of minimum container from the Python community that has nothing to do with Slim or Docker Slim. That's the Python community's image that's kind of the bare minimum of Python tools that you would have. So, that's why that's like 120 megabytes and then we're reducing that down into 25 megabytes. If you were to use Python latest as your base image, which is kind of the most generic one, that might be a gigabyte and you would still sort of be able to slim down to this type of magnitude. So, yeah. So, we've just talked there about it doing a single thing. It will do a GET request on the root of the app and then it will attempt to crawl any URLs that it finds as a result of that process, which of course for a RESTful API, it may or may not find anything at that point. So, we're going to look at a couple of additional commands we can use. There's one which is called, I will just put this here, the HTTP probe command. So, what we can do is, if you've got a simple app like mine, you can add additional probes to that set of HTTP probes. So, here is a, I'll just call this probe one for the purposes of this. Oops, no, I put that in the wrong place here. Let's just call this P1, that's easier. So, we're adding this here. So, my app has another endpoint under slash hello.
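The build commands being run here are roughly these; the image names are the ones used in this demo, and the probe flag value is a sketch, so check docker-slim's help for the exact syntax you need:

    docker-slim build --target slim-demo:prod-fat --tag slim-demo:prod-slim                          # default probe: a GET on /
    docker-slim build --target slim-demo:prod-fat --tag slim-demo:prod-slim --http-probe-cmd /hello  # add an extra probe for /hello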
So, what this is doing is adding this additional endpoint to the probe. So, if we run this process, we're slimming that same container, but this time, we will see that in fact two probes ran as part of the discovery process. Here we can see that the count was two and it was GET on hello, GET on the root. There was a count and we can see that they were both successful. Now, this is important. If I just did the probe on the root of the app, there is a possibility that that doesn't exercise all of the code paths and pull in all of the dependencies of the application, and therefore the resulting container won't be fully functional. But by making sure I've hit all of the endpoints, I've exercised all of those code paths and made sure that any dependencies for the whole application get taken into account in the minified container that it creates. But I appreciate, go on. I was just gonna say, kind of getting back to Avinash's question of like what happens during slimming. I think that's what you're kind of explaining right now, which is the container runs. And again, I think we should say that there are a lot of different approaches to slimming and we're showing sort of the Docker Slim approach to slimming, which is to try to automate it and really understand everything that's in the container. I think slimming at a high level is this notion that you should only ship to production the things that you absolutely need to make your application run. And making that perfect is pretty difficult to do, right? So, multi-stage builds do this to a certain extent; distroless images and buildpacks are a certain approach to tackling a similar problem. Docker Slim, really the way it works is to run the application, stimulate that application in certain ways so that it can see everything that's running, and it's a mixture of static and dynamic analysis. And then what it's gonna do is rebuild a functionally equivalent version of the container, which is what Martin is just doing right now. And with these flags that Martin's showing, they're just different ways to sort of better run the container so that Docker Slim can be smarter about what it keeps and what it can take out. Right. And if you've got a more complex application than this and you have many endpoints and maybe it responds to more than just GET requests, then you can group together lots of different HTTP probes in a single file. So I will show you the very simple one that I have. In fact, let's use this for colorization. So I have a file here, probe.json, and it has a couple of commands in here. Protocol is HTTP, the method is GET, it hits the root, and again, HTTP GET and it hits hello. Now it might be that you have many of these and you want some of these to be POST, you need some of them to be HTTPS. You can also say the crawling should start on this particular URL and it will go and mine everything south of that URL. So there's a whole bunch of different methods and capabilities that exist here and you can build that out, or you can also generate a file which can have lots of curl commands, for example, inside it and it will run those curl commands during the observation stage. And maybe instead of curl commands, you have integration tests. So you can actually, and this is the best way to actually make sure your application is properly stimulated, call your integration tests whilst the container is being observed for minification. But I will run this with my very simple set of commands. Bernard asks, and we get this question a ton.
So I think it's good to address. Can you get into a situation where you execute all the endpoints but, to paraphrase a little bit, is it possible that you miss something? And I think at certain stages it's a little more art than science. You certainly don't need to create a new probe for every single endpoint with every single variable across your entire app. But a common thing for people who are just brand new to Docker Slim is they just do Docker Slim build and they do that on their container and suddenly it stops working. And that's what all of these flags are for, and if you go to the Docker Slim GitHub repo you can find a full list of, I don't know, there might be 300 different flags that the contributors have built into the project over time that really help you sort of tune what that minification recipe looks like. So if your app is really, really complicated you might need more flags. If it's really, really simple it might work better out of the box. But the goal of Docker Slim is to be able to really minify any app. It's just that your mileage may vary depending on how complex your app is and at what points different code gets executed. And something else that we're working on is to take that rich set of tooling that exists within Docker Slim. Hello, it stopped screen sharing. Let me try that again. Oh, this is difficult. Oh, okay. It doesn't want to share my... Oh, there we go, there we go. Had a good hard think about it. All right. The other thing is that there is that rich palette of tooling in order to craft a Docker Slim command that will work for your application, but we're trying to simplify that and turn that into a much simpler process for actually building out the means by which you can probe your container. So I have this here. We're doing another build and this time we're going to use that probe.json there to build it out, and we will see that it will run that in order to execute those probes. And what we'll be looking at a little bit later is the impact of this minification process. So we're going to do one other thing now. We created that dev container earlier that had additional tooling inside it. So we're going to go and build a slim version of that container, because this demonstrates a capability, sort of a user story, that we often find people find valuable. So I'm building a slim container from our dev Dockerfile, which included those additional dev tools, and I've also included the extra HTTP probe command there just for completeness. And if we do docker images here you'll notice that our prod-slim and dev-slim containers are exactly the same size. Now, the reason why this is important is because one of the advantages of Docker Slim being a reductive process, and one of the reasons why Docker Slim was created, is you have developers who have already got a familiar set of development tooling and a workflow, let's say based around Debian in this case, and they want to create smaller containers because they want to reduce the attack surface and all of those other good things. They want to decrease the time it takes to deploy into production, and smaller containers are faster to deploy, faster to start. And somebody might suggest that you could start with Alpine, but then you have to relearn a whole bunch of new platforms and tools, and developers are already universally time-poor, and asking them to learn another thing and another thing and another thing and change the way that they work is harder than introducing something that just augments what they're already doing.
So what we have here is the idea that you can have fat containers for developers that are fully instrumented with all of the dev tools and all of the creature comforts that those developers want in order to iterate fast, locally, but then you can put those containers through your CI/CD platform, through Docker Slim, to create the small containers that have got all of that tooling removed, so that they can then be pushed into production with that reduced attack surface and those efficiency benefits. So that's sort of the user story that we see quite a lot with people that are using Docker Slim. Is there anything you want to add on that, Pete? No, I think that's a great point and really illustrates what we've been talking about. We've got a question, can Docker Slim work directly with tar image files? We use Bazel to build images resulting in file artifacts rather than having images in our local Docker registry. Yeah, because what you could do is you could have your Dockerfile use FROM scratch, import your tar file and then do the Docker Slim process once you've built that initial image from your tar file. I think that would work. One caveat question we get a lot is, does it require Docker at all? And the answer to that is yes. So Docker Slim is a binary that you download and it's on your local machine, or you can put it on whatever machines you like. A lot of people built it into their CI pipelines and stuff like that. It can run as a container as well, but it does rely on the Docker daemon to do its thing and run the image and understand it. Now what you get is an OCI-compliant image, so you can run that on whatever you want, but there is that requirement. So for people that are just allergic to Docker for whatever reason, it probably won't work. But if you have a different runtime, that's fine. If you have a different development process, it's usually fine. And there is a containerized version of Docker Slim that you can run as a sidecar as well. Question: are there restrictions on the stacks that it supports? I see a lot of languages listed. I do a lot of .NET. Well, our .NET expert is in the chat. He can tell you about his experiences with .NET. So I will let him answer that question because I am not a .NET person. The goal of Docker Slim is to work with any language. So any container, any language. I'll say it works really, really well right out of the box with kind of web server style applications, APIs, websites. So Node.js containers, Python, Flask, Django, stuff like that works super, super well. Obviously, it works really, really well with Go. It's written in Go. It definitely works with other languages, but the levels of complexity and the sort of results you get can vary a little bit. Data science containers work, but they take a little bit of doing. They're also humongous, so some reduction in those is usually a pretty big benefit. But we've done them in R and with kind of Jupyter style data science containers. But that's a little bit of a further out there use case. So yeah, give it a shot. Let us know what you think. We have a Discord for Docker Slim and we have one for Slim AI. So if you have questions or if you want to give it a try yourself, please drop into those and let us know. We're always looking for good examples and ways to improve it. Right then, let me share my screen again. Let's pick up this. Okay, we'll just wait nicely. So here we go.
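Before moving over to the web platform, here is roughly what that probe file and the dev-slim build looked like in the terminal. The JSON field names follow the --http-probe-cmd-file format as best as recalled, so treat the exact keys as assumptions and check the docker-slim documentation:

    probe.json:
    {
      "commands": [
        { "protocol": "http", "method": "GET", "resource": "/" },
        { "protocol": "http", "method": "GET", "resource": "/hello" }
      ]
    }

    docker-slim build --target slim-demo:dev-fat --tag slim-demo:dev-slim --http-probe-cmd-file probe.json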
So here I am logged into the Slim Developer Platform, and what we're going to do is, we talked about exploring and diffing containers and I talked about, you know, you can upload your X-Ray reports. Well, that's right here on the homepage. Once you're logged in, it even tells you the command here to, you know, generate your X-Ray report and then you can just upload it here. Now, I'm not gonna show you this because we're gonna go into more detail in a couple of other areas. So what you'll see at the top here is connectors. What I decided to do is I pushed my fat and slim container images to Docker Hub. You can use a number of different registries and we'll take a look here. We look at mine. You'll see that I have mine connected to Docker Hub at the moment, but you could use GCR, ECR or Quay. So there we go. That's that there. And you just, you know, I'm not gonna go through this whole process, but you click connect, you plumb in some information, you hit save and you're connected. So I pushed my fat container and my slim container to Docker Hub earlier. And what we're going to do now is, then I added them to my favorites. So I will show you how I did that. They're already there, but you can click on this add and then I can see what's inside my connected repositories. Here's our slim-demo and here are our two container images, and I connected those up and added them to my favorites earlier. So if we go back to the favorites here, here is the slim-demo prod-slim and here is the slim-demo prod-fat. This is the fat container. So let's start with the fat container and we'll click this here to analyze and view this container, and we'll pull out a few things that are worth looking at at a sort of high level. So you can see here all of the layers that we used to construct this container, but if we just look at the overview very quickly, we can see, you know, the user, we can see what it's built upon, we can see what ports it exposes, the size, the working directories and all of that good stuff. And then down here, we can also see the shells that are included. This is an important bit of information. Again, when we're talking about that whole thing of containers being hijacked through, you know, bugs in software. And we can also see the files that have got special permissions and certificate bundles and a whole bunch of stuff. So at a high level, you can easily see, you know, what was inside that X-Ray report that was sort of difficult to scan as a human. But if we now look across at the file explorer here, we can see this is a view of the container with all of the layers applied. But if we step through this, we can look at layer zero and we can actually expose what commands were used to generate that layer. Now, this is obviously coming from the base image. We ingested a base image, but I think the first five layers of this container are actually what we inherited from the base image. So you can even analyze, you know, what actually happened, you know, in the construction process before you, you know, ingested it as a base image. So here we can, well, that's all going to be one thing. So let's do this. We can see here that one file was added, but it gets interesting when we get along here. I think layer seven is where we install our pip requirements, and we can see that 832 files get added and they're all, you know, sitting around here. Oops, I didn't mean to click on that, but you can click on anything.
And it shows you everything that's in here and you can, you know, filter this down to, well, I know Flask was something that got installed, so I can just look at the things that were added as part of Flask. So this is a nice way to look at the sort of the whole image. This is the fat container, remember? And it also fully fleshes out the Dockerfile, including all of the full verbose steps that we inherited from the base image. You can see precisely how this container was put together. So this gives you a sort of degree of visibility that you might not ordinarily have. Is there anything else you want to add on that, Pete, before I? No, I think so. Are we going to look at a diff, is that? Yeah, so we will look at a diff. I was just going to very quickly look at the slim container, because there's a couple of things in the overview that are worth pointing out here because of the... So the first is the slim container is a single layer container. It has been reconstructed with just the raw ingredients that it requires. And in the overview here, we can see there are no shells inside this container, and that there is only the temp directory with any sticky bits applied. So you can immediately see a lot has changed and you can go through the file explorer process. Now, obviously everything's in a single layer at this point, so it's more interesting to, as Pete says, go and take a look at a diff between the slim and fat containers. So we'll just do that now. We'll compare them. We'll select the fat container and the slim container, and then compare. So we're looking at a file system diff here from fat to slim. We can see what was removed. So we can see that in the slim container, the requirements.txt file has been removed because, well, the minified container just simply doesn't need that to operate. We can see all of this user space that's just now absent. There's obviously tons of stuff that's been thrown away, but we might want to actually sort of dig into this. So let's think about some of those scans we did earlier. I think we knew that libgnutls was one of the things that had a vulnerability. Well, if I filter this list, we can see, well, that library has been completely removed in the slimming process. So if we are asked, are we shipping this version of libgnutls in our production images? Our bill of materials says, well, yes, we do. But then this tool enables you to go and look at the slim container and actually come back with the answer: no, we are not. The slimming process is removing this. And similarly, we know that SSH was added. That's probably a bad way to search for that. Is it that? I can't even remember what the libraries are. No, a bad example. You get the idea, you know, when we search for things, we can see that the shells have been removed. So you can look at that software bill of materials. And similarly, we know we included Flask, but we can also see that some of Flask has been removed because it's simply not used by our application. Yeah, and I think, you know, kind of to Avinash's question before about, like, well, what's happening during this process? And, you know, just some of the other questions about, like, well, are there risks that this gets removed or that gets removed or something that I need? You know, I think that was part of the idea behind Slim AI and the platform and why we're building this: to help sort of debug this slimming process.
No matter what containers you're using, if you're using Docker Slim, if you're using some other type of container approach, just to give you some more insight into that. You know, if people want to see more of the Slim platform, feel free to go check it out. It's free, you know, to sign up. And we do a lot more in-depth stuff on the Slim platform on our Twitch stream at twitch.tv slash Slim DevOps. So that's a good place to check it out if you want to see more there. So. And I think that is sort of the main, sort of, you know, pieces that we wanted to show there. So, you know, what we've gone through is looking at some best practices with the Dockerfile. It looks like we've still got some learning to do there. Thank you for the hot tips earlier on. But starting with that process, starting with what is a decent looking container, doing some security analysis, some SBOM generation, and then slimming those containers and analyzing what happened as a result of that slimming process. And then how does that relate to the software bill of materials that we generated? You know, if we are asked the question, are we shipping library XYZ in our production containers, we can trivially go and find out the answer to that question. Well, that's awesome. Thank you, Martin. And that was super cool. If people in the chat have any questions or want to see anything else again, you know, you can find us at SlimDevOps on Twitter or on Twitch and we do a lot of these demos and show more of the platform and do some Docker Slim examples. And yeah, if you have questions, just reach out. Taylor, any questions on your side? Yeah, I think kind of just more of a, you know, fun kind of exercise. I know that typically when I've taken a look at Docker Slim, they've said that if you use a compiled language, you're going to see a lot more reduction in terms of size. Just, you know, a curiosity question. What is the biggest delta you've seen in terms of like file size reduction? Oh, that's a great question. I think the biggest one I've seen is when we worked with R containers. Yeah, yeah, interesting. That was a 1.2 gigabyte container down to 200 megabytes. Wow, incredible. I wonder if Kyle Quest, the creator, has any other examples. I know I've seen it on the order of like 60 or 70x, but yeah, you definitely get some pretty small ones, so. It's interesting to take a look at the space around, you know, Cloud Native Buildpacks and Docker Slim and kind of, you know, I wonder if we'll get to that point where we'll be able to kind of chain all of these things together. And, you know, just have like, oh yeah, it's just a kilobyte, you know, to ship around. So that'll be fun. Yeah, yeah, we actually, so we talked a bunch with the buildpacks folks at KubeCon. And, you know, you can run Docker Slim on buildpacks-built containers. You don't get as much out of them because they're sort of optimized already. But with the Slim platform, you know, you can like look inside, and it's very interesting to me to see how those containers are built because they tend to be built a little bit different than, like, if you were just running a Dockerfile. And so yeah, we're talking a lot with them. We actually just published a blog post about working with buildpacks. I think they're super cool. Probably do an example on Twitch with them. So yeah, that's a cool technology. And I think it's complementary to the things that we're thinking about doing. That's exciting.
Are there any kinds of, I know it's, you know, obviously it's always best to kind of focus on the workflow over the tools themselves. But are there any other tools or practices or anything like that that you, that you might add to the pipeline or just kind of find interesting on the horizon at all? Oh, we've lost Pete. So yeah, we're obviously looking to improve these all of the time. And we've got a number of things in the works at the moment. One is integrating a GitOps workflow into this sort of minification process and operating minification over a group of containers via Docker compose. So you can, more common is that you have several containers that operate a microservice in harmony and you want to be able to have a new revision of one of those containers in that collection and be able to minify it, but then test and validate it against the other containers that it relies upon. So we're, that's, I think we put a blog post up about that last week. So yeah, we're, you know, that's one change that we've got, which I'm quite looking forward to because I love that GitOps workflow, you know, being able to tag and mark things in a familiar way, but operate that on a whole collection, I think is gonna be cool. Absolutely agree. No, I can't wait to see that. I always really like to see the GitOps workflows. It's nice that, you know, at this last, was it GitOps con and KubeCon? We kind of got that formal definition of what is GitOps, you know, so we can kind of rally around that definition. So definitely one of my favorite workflows, too. It's nice to be able to have that rather than, you know, we do things a little differently around here at every company you work at. Right. Yeah. And, you know, we've been working with PaymentWorks, who are sort of a design partner, and they have been helping fleshing out, you know, how that whole GitOps workflow should function. And they're already making good use of it, but that's a feature that's behind closed doors at the moment that will be unveiling very soon. But we do have a white paper out that explains exactly what we've been working on and what's coming along soon. So you can find that on the SlimAI website and also you'll find links there to our YouTube channel where we did a lengthy interview with four of their dev team about that whole process because they were moving from monolithic VMs to a microservice architecture at the same time as moving to, you know, this GitOps workflow all powered by the stuff that we're building. So that was a fascinating conversation. That's awesome. That's incredible. Yeah, I'm excited to see all the efforts made on those fronts. It's gonna be a lot of fun. Awesome. Awesome. Well, I don't see any more questions on that front, but definitely want to thank y'all for watching today. Gentlemen, if you have any parting words or words of wisdom to impart on the audience, I would love to turn it over to you before I close things out here. I guess I'll just say, you know, we're a pretty friendly bunch. We like talking about containers and stuff like that. So if you come to slim.ai, you'll find links at the bottom of the page to our Discord channels and our Twitch stream and all of that. So please just come chat with us, hang out, let us know what you think, let us know, you know, if we can make improvements. And yeah, it's just, it's been fun to be in this space. And so looking forward to all the cool things that we're gonna do with CNCF and all the great projects that CNCF is doing. So, Martin, what about you? 
Yep, all of those things. And tomorrow we'll be on our Twitch channel, twitch.tv slash SlimDevOps. That will be at 3pm Eastern Time, 8pm GMT. We're going to run through a version of what we've done today. We're going to expand a bit more on that HTTP probe stuff. We had somebody in our community run into an issue where their container wasn't being fully stimulated. So we're going to circle back and try and find a more comprehensive set of probes to actually automate that stimulation of their container. So if you want to have a recap on this or dig into things in a bit more detail or have questions about what we've done today that we didn't get time to cover, then come over and see us tomorrow. And at the Slim AI website, you can find a link to our Discord and you can always, you know, message us in there and ask your questions and we'll follow up with you. Perfect. Well, thanks so much. I think, yeah, all I have to say is minify, minify, minify. Now I've got all the tools to do that. Awesome, thanks Taylor, I appreciate it. Awesome. Thank you everyone for joining the latest episode of Cloud Native Live. It was great to hear from Peter and Martin around building, analyzing, optimizing and securing containerized apps. We really liked the interaction and questions from the audience. It was a really lively bunch today. So thank you all for making it out. You know, for getting through all of the packets and everything like that. It gets a little congested from time to time. So we bring you the latest Cloud Native code every Wednesday at 11 a.m. Eastern. But next week we're gonna be off due to the American Thanksgiving holiday. We're gonna kick off again on December 1st with Jason Morgan talking about Service Mesh 101, an introduction with Linkerd. Thank you for joining us today and we will see you soon. Thanks everybody. Thanks everybody. That's awesome. Thank you.