Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie, and I am a CNCF ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technology. They will build things, they will break things, and they will answer all of your questions. So join us every Wednesday to watch live. This week we have Martin Wimpress here to talk with us about comparing different minification techniques and their vulnerability assessments. And some other great news from the KubeCon + CloudNativeCon sphere: the North America CFP has been extended, so everyone still has time to submit their sessions. So go ahead and get that done. This is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants as well as presenters. With that done, I'll hand it over to Martin to kick off today's presentation. Hello. It's great to be back on CNCF Live. I think we were here about five months ago. Today we have a slightly different topic: we're going to be talking about container images, and looking specifically at the complexity of container image management versus the size of the container images versus the security of the container images. We're going to be using a number of terms today, particularly slim, minify and optimize. Those terms are used interchangeably to describe the act of reducing the size of a container image, and we'll be covering a number of different ways that you can reduce the size of a container image. 
And what's really important here is we believe that container size is not a vanity metric, but an indicator of container quality. Throughout this presentation we'll be showing you hard facts about why that is actually true. So if you can just flip to my shared screen view. Thank you very much. Oh, there we go. Hello. So we have an article that we've posted in our community forum. Whoever has the rights to paste links, if you could share the link to that so anyone can jump in here. So the first question is: why should you slim container images? Very clearly, you should only ship into production what your application requires. And I see Gibran says, is there a presentation visual? So yeah, hopefully you'll get the link in chat in just a moment to this article, which you can review after the fact. Oh, they only see a blue screen. I do at least see the article itself. It looks like people that are watching aren't able to see my screen. Yeah, there's a bit of a delay, so someone might tell us in a moment that it is working. But let's see, if it's not working then obviously we can try a reshare. Yeah, sure. I can reshare. I will stop sharing and reshare that screen. There we go, let's do that. Okay, let me know if you can now see the screen that I'm sharing. Okay, well, there are some issues on the attendee side, but I think we can keep going a little bit. Okay, let's just see. Oh, there we go, is that better? Anyone asking, the delay should not be that much. Well, I will keep talking for the moment and we'll see if, oh dear, that's disappointing. Not visible. How odd. Yeah, I see everything fine as well. Sometimes with streaming it can help the quality if viewers close the tabs they don't need, at least if they have a lot of tabs open, closing all of the others. That might not help in this case, but it's worth a shot. 
And then our host is letting us know: please try watching in Chrome as well, that might help. Because from our side, we can't see an issue at the moment. So we might have to go into a podcast mode, but we can still talk through the topics at the same time. Okay, let me just make certain that everything here looks to be the correct settings. That all looks fine. Like I say, I can see what's being shared in the event, but everyone's saying it's not visible. You may have to see my screen later on, unfortunately. But for the moment, I can explain what's going on here. So we believe you should only ship into production what your app requires. Slim container images are faster to deploy because they're smaller in size, and they're faster to start because they contain fewer files. Slim container images can be less expensive to store and transfer as well. And the big benefit that we're going to touch on here is that slim containers reduce your attack surface. There's a link to the container report in the article; it's called What We Discovered When We Analyzed the Top 100 Public Container Images. And we definitely saw a trend of dev, test, QA and infrastructure tooling being left inside production containers. The issue here is, if you do have an unfortunate vulnerability in your own application and that gets exploited, then leaving those shells and interpreters, tools and utilities in your container effectively instruments that container so that the would-be attacker can now use your infrastructure against you to disrupt your operations. So I wonder if it's worth me trying to share the screen in a different way. I don't know, maybe I could share a window. Let me try that. Let me try changing the way I'm sharing. 
We have now confirmed, and I pulled up the attendee view as well, that the attendees cannot even see the speaker. So we are fully in podcast mode, essentially: no view of you or the screen share available. We are checking on this in the background. So essentially, hang on tight, everyone. And if there's anything that you can cover podcast-style to kick us off, that would be lovely, obviously, but at the same time we can hang on tight. Sure. So I'll continue with some of the objectives that we're going to cover today. We're going to be using science to determine the best approach to containerizing an application. We'll be focusing specifically on developer momentum and developer experience, lowering complexity, optimizing for size and reducing the attack surface. And it's important to understand that reducing the attack surface is more than just the vulnerability count; we'll get into all of that, and, providing we can share screens, we'll give some practical examples of how we're actually doing this. The tools that we're going to be using today are the Slim.AI Docker Desktop extension, because we can use that to introspect the container images and do some deep exploration and analysis of what's in them. We'll be using DockerSlim to minify, or optimize, the containers, and we'll also be using Syft. Well, not so much using it, but we'll use data from Syft to catalog the number of packages that our containers have, and then Trivy to do some security analysis. And then we'll be doing some manual diving into the containers in order to actually see what happens when we optimize a container: what residual vulnerabilities are left behind, if any at all. So we'll be going through that whole process. Now, what are we going to be looking at? We're going to be looking at container construction in the first instance. 
And in order to do that we have a very simple Python Flask app that implements an even simpler RESTful API. The application is kind of irrelevant; it's for illustrative purposes only and its function is unimportant. As it happens, it's exactly the same application we used when we were here last time, when we talked about building, analyzing and optimizing containerized images. So it's a Python Flask app, it's super simple, but it's good as an illustration. And at this point, if I could share my screen, I could show the application. It's literally a few lines of Python. Can the link to the article be pinned so people can refer to it? If the link to the article can be shared publicly, there is indeed a link to the Python file so you can inspect it. But it's a RESTful API with two endpoints: root, and also /hello for a hello-world RESTful interface. So it seems, unfortunately, that everyone watching the live stream is not seeing any visuals, but if the link can be shared to the article then I can talk through where we are in the article and explain what's going on. This will get tricky when we get towards how we do the deeper analysis of the containers, because that is a very visual process. Yes, Cloud Native Live is usually very visual, so the podcast format is very challenging for sure. But let's get the link shared to the chat. There appears to be an issue, possibly on the YouTube side, currently, but we are still working on it for sure; let's get that link shared and then we can talk through it. Okay, so I see that you're still working on it. So I'm now going to talk about the techniques that we're going to use to containerize our application. What we've done is we've actually containerized it in several different ways. The first thing we did, as it's a Python application, is we went to the official Python Docker Hub images and we're using the Python 3.9 slim-bullseye image. 
So if you're not familiar, that means it's a Python-oriented image from the Python project. It's Python 3.9, it's what they consider to be their minified or slim image, and it's built against Debian bullseye, which is Debian 11. That base image is 129 megabytes. Also from the official Python project is a Python 3.9 image that's built against Alpine 3.15, so we're also going to use a container image built against that; that base image is 58 megabytes. So, you know, about half the size. We're then going to use an Ubuntu 20.04 base image, which is 73 megabytes but by default doesn't ship with Python at all, so that will require some additional steps. And because Ubuntu 22.04 is available now, we're also going to use that. The 20.04 image, so from two years ago, is 73 megabytes, and the 22.04 image is 78 megabytes. On the day of release of Ubuntu 22.04 we did a deep dive into why there was a five megabyte difference between the two releases of Ubuntu, and we've got a link in the article, which is now being shared, to a live stream we did where we actually dug into that live, and used exactly the same tools we're going to be using today in order to determine exactly why it was five megabytes larger. And then finally, we're going to use the Distroless Python 3 image. That's going to be a multi-stage build, and that image weighs in at 54 megabytes. So we've got a spread of sizes there, with and without Python included, using a number of different techniques to containerize our application. And then what we'll be doing later on is evaluating what the developer experience and the security profile look like for all of those. Now, given that I'm not seeing anything in live chat anymore, does it mean the live stream is working correctly? It appears that it is not working correctly at the moment. Okay. 
We're trying to fix it in the background, obviously, but the article has been shared with people. So if there's anything that we can run through there, audio-only podcast style, I think that would be really great. But yes, we're trying to get the video feed working as well. Okay. Let me just check something here. One moment, just bear with me. I'm just doing my own checking in the background. I'll come back to that. So we have a question here from Rolf: is this the Cloud Native Live comparing different minification techniques and their vulnerability assessments? Yes, essentially. We're going to be looking at precisely that, and also the impact of starting with different base images on the minification process and on the outcome of the security profile of the containers. So you're in the right place, Rolf. You're among friends. Okay. And I'm just looking. Yeah. Okay. I understand now what people are saying about a blue screen: they're not talking about the screen I am sharing, they're talking about the whole screen. So now I understand what people are looking at. But you will see that the host shared in the chat a link to the article. So I would ask those of you that are following along to drop down to the title that says Dockerfile complexity. And what we're going to do now is actually look at about half a dozen different Dockerfiles that containerize the same application using those different base images. These will be the tags of the images that we create later. So we're going to start with the Python one. I'm opening this on my screen; I know that you can't see it, but maybe you can follow along as well. In this Dockerfile, what we're looking at is a very simple container image. We use the Python 3.9 slim-bullseye image. All of this adheres to best practice. We establish our working directory. We copy our requirements file in. 
We run pip, without caching any data, to install the pip requirements for our app. We then copy in our application itself. We tell the app what user it should run under, what port it should run under, and we define a well-defined entry point. So it's a very simple Dockerfile that adheres to best practices. In terms of complexity, it's simple and it adheres to best practice. So now what we'll do is take a look at the Alpine-based image. Again, this is using a Python image published by the Python project; it's 3.9 and it's Alpine 3.15. And it's almost identical. If you're able to flip between those, you'll see that really the only difference is that there is no user defined in the Alpine image. We don't define a non-privileged user, because by default in Alpine images there are no users or system accounts created. Now, it is entirely possible to add those; I've elected not to do that for simplicity, and to show some of the differences in these techniques. Consequently, you could say that this container image does not adhere to best practices as it's currently defined, because it doesn't run the application as an unprivileged user or system account. And like I say, if we added some extra steps to this Dockerfile, we could do that; I've just chosen not to for the purposes of simplicity and highlighting the differences in the techniques. So next we have two images which are one character different from one another. These are the images built on Ubuntu 20.04 and 22.04. Now, I'm somebody that's familiar with Ubuntu: I used to work at Canonical, and in fact I was the engineering director for Ubuntu while I worked there. So I'm very familiar with Ubuntu. But this does introduce some additional complexity, in that we have this RUN statement which does our apt-get update. 
Then apt installs basically the Python requirements: the minimal Python interpreter and pip, and we also need the certificates bundle. And we run all of this in one concatenated line that also includes some cleanup of the apt cache. That's sort of standard fare for Ubuntu. The rest of this looks a lot like the Python project's Debian-based image: we define our app, we use an unprivileged user, it's all the same stuff. And again, this adheres to best practice, with some additional complexity. We then have a version that runs against Ubuntu 22.04, and the only thing that changes is the version of Ubuntu; we'll get into looking at that a bit later. Now, for completeness, I've also created a couple of versions, again based on Ubuntu 20.04 and 22.04, that add one additional parameter to the apt-get line, and that's to say: don't install recommended packages. So the complexity of this Dockerfile is slightly elevated, because we're using some additional knowledge about how apt works. For anyone familiar with Debian or Ubuntu, this should be familiar. The benefit of using this we will see later on. These images we will tag with norec, in order to distinguish between a full Ubuntu image and one where we've tried to create a slim image. But all of these Ubuntu images adhere to best practices. And then lastly, we have a Distroless image. This is where we need a bit more knowledge, and this is a more complex Dockerfile for several reasons. First of all, it's a multi-stage Dockerfile. And also, we need to know something about how Distroless works with Python, so I'm going to talk about what that is. Now, there are some things in common with Alpine here, in that there are no user accounts or system accounts defined. Again, we could have created those with additional steps, but I've elected not to for the purposes of simplicity. 
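For those following along audio-only, the Ubuntu variant just described looks roughly like this. It's a sketch reconstructed from the description, not the exact file from the article; the image tag, user name and port are assumptions.

```dockerfile
# Sketch: Ubuntu-based image with --no-install-recommends (the "norec" variant)
FROM ubuntu:22.04

# One concatenated RUN: update, install only what we need, clean the apt cache
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-minimal python3-pip ca-certificates && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY app.py .

# Run as an unprivileged system account, per best practice
RUN useradd -r appuser
USER appuser
EXPOSE 5000
ENTRYPOINT ["python3", "app.py"]
```

Dropping `--no-install-recommends` from the install line gives the full-fat Ubuntu variant; swapping the `FROM` line to `ubuntu:20.04` gives the other release being compared.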
But what we do is we create a build stage where we install our app and all of its requirements via pip, and then we create the production container. That's where we use the Distroless Python 3 image, and then we copy from our build image the app itself, and then those Python packages that were installed via pip. The benefit here is we can create a container that is smaller, and we'll look at that in a moment, when compared to other Debian-based images. But the trade-off is I really need to understand how those Distroless containers are constructed. And for that, I'm going to switch over to Docker Desktop. In fact, I'm going to have to reshare. For the purposes of those few people that can see my screen, I'm going to reshare my screen so that you can see what I'm looking at here. So if you're not familiar, Docker Desktop recently added support for extensions, and Slim.AI was among the first in that program. What I'm going to look at is the Distroless container, and I'm going to analyze it, because I had a hunch that this was based on Debian, but it was like, well, what version of Debian is it based on? So here I'm in the file explorer, and I can go to /etc and have a look at issue.net, and then I can look at the file content. I think the screen isn't sharing. I think I'm the only one who can see it, but now I see it. Now we can see. And if anyone joined late: due to technical difficulties we are essentially doing a podcast style for now, trying to figure out the issues in the background, but a recording of this will be available after the session, and I, for example, can see the screen. So the recording will have all of the screen share and all of the graphics and our lovely faces in it as well, if anything is missing. I will add to that that we stream traditionally every Thursday with Slim.AI. So for those of you that are watching, we will rerun this whole tutorial tomorrow on twitch.tv/slimdevops. 
So for those of you that can follow along, reading the article to my voice, that's great. But for those of you that want to come back and see the full video presentation, we'll do that on twitch.tv/slimdevops tomorrow. And if you follow that channel, you'll see the notifications and the schedule for when we're doing those things. So if I go back here, what we can learn from the Distroless container is that it's based on Debian 11, which has the code name bullseye. So that was useful to know. And then, if I look not in the bin directory but in the usr/bin directory, I can see that it's using Python 3.9. So these were important clues that I needed in order to understand how to create my build container, with a container from the Python project that was compatible with the Distroless container that I was using for my production container. But this all adds a degree of complexity. Hang on a moment, let's go here. Because I now need to know a bunch of different things. I need to have some deeper insight into how Python works. I need to set my PYTHONPATH to the correct version of Python. I need to know how to copy the Python site-packages into the appropriate location for the Distroless container. So the complexity of maintaining that container image goes up. So that's a review of those Dockerfiles and how they differ from one another. Now, I've pre-built all of these Docker images except for one. So what I'm going to do is build just one of those container images, to illustrate how we do this. We're going to build the version of the container image that is based on the official Python project image that's Debian-based. And if we then go back to the presentation, we can now take a look at the sizes of these respective containers once they've been built with our application inside them. So what do we find here? 
First of all, we have a clear winner when we are looking at image size alone, and that's Alpine. Alpine as a base image, with our app and all of its dependencies inside, is 62 megabytes, and Distroless follows at 71 megabytes, which is excellent. Then, when we look at the version based on Debian, it's 139 megabytes, so we're looking at twice the size. And if we look at an unoptimized Ubuntu image, we're looking at nearly 500 megabytes, well, just over 400 actually. But when we use those techniques to reduce the number of recommended packages that get installed, those sizes come down to 120 and 136 megabytes. So if we're looking at image size alone, Alpine looks like a clear winner, and it's certainly great for Go and Rust. But our application here is Python, and this is where an additional piece of complexity gets introduced when dealing with Alpine images and Python, and also some other languages as well. And that's that Alpine can result in significantly slower builds and also introduce runtime bugs when you're using Python on Alpine. I'm not going to explain all of that now; we have a link to an article published by PythonSpeed, which is titled Using Alpine can make Python Docker builds 50x slower. So I encourage you to read that and learn more on that topic. But what we learn here is that when we elect to use Alpine, we have to do it knowingly. We have to choose to use Alpine when we know we're not going to introduce build-time performance issues or runtime compatibility issues. And as I said, we were able to use the Slim.AI Docker Desktop extension to discover that the Distroless container is based on Debian 11 and Python 3.9. We've already illustrated why those two bits of information are important in order to create a container that works: for example, I was using a different version of Distroless to start with, and it was incompatible and my container didn't work. 
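To make the multi-stage Distroless approach concrete for audio-only listeners, a minimal sketch might look like the following. The exact image tags and site-packages paths are assumptions based on the Debian 11 and Python 3.9 clues discussed above; the real Dockerfile is linked in the article.

```dockerfile
# Build stage: same Debian release and Python version as the Distroless image
FROM python:3.9-slim-bullseye AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Production stage: Distroless Python 3 on Debian 11
FROM gcr.io/distroless/python3-debian11
WORKDIR /app
COPY --from=build /app/app.py .
# Copy the pip-installed site-packages from the build stage and point
# PYTHONPATH at them, matching the interpreter version discovered earlier
COPY --from=build /usr/local/lib/python3.9/site-packages /app/site-packages
ENV PYTHONPATH=/app/site-packages
EXPOSE 5000
ENTRYPOINT ["python3", "app.py"]
```

The build stage must match the Distroless image's Debian release and Python minor version, which is exactly why the /etc/issue.net and usr/bin inspection above mattered.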
But we're going to touch on some other reasons why this is important later on as well. It also highlights why good use and understanding of apt can significantly reduce the Ubuntu image sizes, even when compared to the official Python slim images that are built with Debian. So the next thing to do is look at some package analysis, and this is very, very quick. We're going to be using Syft for that, so I'm just going to run this command here. Syft is a tool that builds a software bill of materials. It can work on just file systems, but we're using a container image here. And that just took a few seconds to run. I know you can't see my screen right now, unless you're watching the recording after the fact. But if we return to the article, we learn a couple of things. What we're looking at here is, well, there must be a correlation between the number of packages and the size of container images, and certainly that's borne out. What we see is that Distroless has the fewest distro packages installed. But what's common among all of these is that they all have precisely 11 Python packages installed. And this is expected, because pip should be deterministic: those Python packages should be the same versions, installed in the same way, across all of these platforms. So this in itself is interesting, and certainly you can inspect those reports in order to understand exactly which packages are in your container image. But this gets more interesting when we move on to vulnerability analysis, which is the next thing that we're going to look at. And we're going to be using Trivy to do our vulnerability analysis. Other security scanners are available; this is the one that we're using primarily for today's example. So I've just switched over. We're going to run this again. And this is scanning my container, and it produces a very nice report indeed. So you can see that on my screen. 
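The commands being run on screen are along these lines; the image tag is a placeholder for whichever of the builds you are inspecting, and both tools need to be installed locally.

```shell
# Hypothetical image tag; substitute one of your own builds
IMAGE=flask-api:python3.9-slim-bullseye

# Generate a software bill of materials (distro packages plus Python packages)
syft "$IMAGE" -o table

# Scan the same image for known vulnerabilities, summarized by severity
trivy image "$IMAGE"
```

Running both against each of the six images gives the package-count and vulnerability tables discussed next.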
But what we'll do is look at summary information from the vulnerability analysis when run against each of those containers in turn. Now, what we learn here is there is no denying that the Alpine results are excellent: it has zero vulnerabilities. So this is a container with our app and everything the app requires in order to operate installed, and zero vulnerabilities. But then we see some interesting information. Distroless, which is a small image, actually has three critical vulnerabilities and seven high vulnerabilities. And the official Python image, which is built on Debian, has two critical vulnerabilities and 15 high vulnerabilities. So that's quite the jump from a clean bill of health with Alpine. But then when you look at the Ubuntu-based images, even those that have not been optimized in any way by sensible use of apt, they all have zero critical vulnerabilities. And only one of them, the large fat image based on 20.04, has a single high vulnerability. Now, as it happens, when I looked into it, that high vulnerability is a false positive. You can see in the Ubuntu Security Notices that that vulnerability has in fact been mitigated, and Trivy has picked it up as a false positive. And this is actually something that Grype and Snyk, which are two other security scanning tools, both confirm. So in actual fact, irrespective of the number of packages or the size of the image, when using Ubuntu there are zero critical and zero high vulnerabilities in all of those container images. Which then begs the question: how is that possible? Why is this? Ubuntu is derived from Debian, so why is it so very different? Again, I used to work for Canonical, so I have some insight here, and I'm going to squash this down very quickly: Ubuntu is a commercially backed Linux distro with a full-time security team that has SLAs to mitigate vulnerabilities for their customers. 
And there's a link to the documentation that describes exactly what the commitment to security within Ubuntu is. That commitment includes mitigating all critical and high vulnerabilities for the supported lifetime of the distribution. When you compare that with Debian, which is a community project, and while many contributors to Debian, including myself, do fix security issues in Debian, it simply cannot provide the same level of commitment to security as the commercially backed Linux distribution vendors, such as Canonical, Red Hat and SUSE, do. So this is a very important point when you're choosing your base image. It's not just about choosing something that is smaller; it's about choosing something that has a better security stance by default. And I think these numbers illustrate that point, which maybe not many people are aware of in terms of what the security benefits might be. So this then presents a couple of what-ifs. What if I could have the low complexity of maintaining Ubuntu-based containers with the security profile of Alpine? We've just seen no vulnerabilities as a result of using Alpine. What if I can make container images that are smaller than Alpine? So let's try that. What we're going to do here is use DockerSlim to minify the container images, but we could equally use the Slim.AI developer platform to do the same thing. For the purposes of this demonstration, I'm just going to copy this one line here, go back to my terminal, and minify this container image. Now, we're not going to talk about the internals of how this works, because that was the topic we covered when we were last here with the CNCF. So we'll just push forward, and we have a minified container image. So I'm now going to switch back to the presentation, which has a table of the results. 
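The one-liner being copied here is roughly the following; the image tag is a placeholder. DockerSlim observes what the application actually uses at runtime (the HTTP probe exercises the API's endpoints) and builds a minified variant of the image.

```shell
# Minify the fat image; by default the result is tagged with a .slim suffix
docker-slim build --http-probe flask-api:ubuntu-22.04-norec
```

The same command works unchanged against any of the six fat images built earlier, which is what produces the results table discussed next.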
So what we see here is that it doesn't matter which of the fat containers we started with, whether it was Alpine or Distroless or a Debian base or an Ubuntu base: all of them see significant size reductions. I know some of you watching live can't see my screen, so I'll quickly run down it here. In terms of size, all of the images are between 20 and 26 megabytes once they've been minified using DockerSlim. At the lower end, that means on Distroless we're looking at about a 3x reduction in size, and the same for Alpine. For the Ubuntu images, we're looking at between a five and a 17 times reduction in size. And for that Debian image, we're looking at a six times reduction in size. But now we've really leveled the playing field in terms of the size of the images, and we've also gone beyond what we get by using just Alpine alone. We've significantly reduced the size of our container. So the next question is: well, if we now have far fewer files inside our container image, has that removed vulnerabilities? We're going to pick a particular container image at this point: the one that was built with Ubuntu 22.04, that didn't install the recommended packages, and that we have now minified with DockerSlim. We already know that that container image is free of critical and high vulnerabilities. So what we're going to do is look at just the medium vulnerabilities that are still present in that container image. And we have that report from Trivy in the article here. We can see there are seven, I think, from memory. Most of those seven vulnerabilities relate to a package called e2fsprogs. And if you're familiar with Linux file systems, you'll know that that's all of the utilities that manipulate the extended file system on Linux. 
So what we're able to do is actually use the Docker Desktop extension from Slim.AI to go and look inside that container to see: are there any residual files left behind from those vulnerable packages that we should be concerned about? We're going to use the Docker Desktop extension to open that container image, and that will just take a moment. Here it is. We're looking at the all-layers view here, and we're going to use the search criteria to actually peek inside the container to see if any of the vulnerable bits are left behind. So for example, I know that the ext2 file system utilities start with e2, so we're going to go and search for that, and then we'll flatten the results to make it easy to see. Oops, bother, just bear with me a moment, I've just clicked a button I shouldn't have. Let's go back. There we go. So we'll flatten the results, and we'll put our search criteria in, and it comes back with nothing. So the ext2 utilities are absent from the container. And we can do the same thing by looking at this library, which is related to the same vulnerability. We do the same here, hit return, and it comes back with nothing. Now, I'm not going to do this for everything, but you do have links to all of the assets in order to reproduce this for yourself, so you can actually demonstrate it. But I did the same searches for the other utilities as well. Most of the remaining vulnerabilities were attributed to SQLite, and again, when we looked at the SQLite libraries, they're entirely absent, because our application doesn't require them. And in fact, as a result of the SQLite libraries being removed, that also removed two of the low-risk vulnerabilities that were highlighted. So we go down this list and we can easily determine that actually none of these components are left inside our container image. 
So now we have a container image that is free of critical, high, and medium vulnerabilities, leaving only the low vulnerabilities. Moving on, what's interesting here is that my app requires Python, so we looked to see whether there are any vulnerabilities associated with Python itself, and in fact there are several. When we look at this vulnerability analysis, we can see that they're all actually the same CVE relating to the same issue, to do with mailcap. So we'll craft this search term, and before we go and look at the minified container, we're going to verify that this search term is the right way to identify that this vulnerability exists in the fat container. So I'm going back to my fat version of this container, searching for mailcap, and flattening the results for simplicity. We can see it's a pure Python module, and we can also see what's called the compiled bytecode. So mailcap does exist in the fat container, and therefore the fat container does indeed carry this particular vulnerability. But if we now switch back to the slim container, do the same search, and flatten the results, we'll see that it comes back with nothing. So even though this is a pure Python module that's part of the Python distribution, because our application doesn't use it, it's completely removed from our slim container image. In fact, I was able to do this for all of the remaining low-risk vulnerabilities, and I've linked the CVEs in the document here. All of the remaining low-risk vulnerabilities were also removed by optimizing the container with DockerSlim or Slim.AI, and it was trivial to verify this using the Slim.AI Docker Desktop extension in just a couple of minutes. So yes, we can have our cake and eat it too.
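The per-severity findings walked through above come from Trivy, and reproducing the scan is one command per image. A sketch, with placeholder image names, assuming Trivy is installed locally:

```shell
# Show only the medium-severity findings for the fat image.
trivy image --severity MEDIUM myapp:jammy-no-rec

# Re-scan the minified image to confirm which findings remain.
trivy image --severity MEDIUM,LOW myapp:slim
```

Note that Trivy reports against the package database recorded in the image, which is exactly why the manual file-level check described here is worth doing: a finding can persist in the report even after the vulnerable files themselves are gone.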
And what's important to point out here is that there was a high-risk vulnerability, CVE-2021-3399, which exists in glibc, the fundamental library that just about all applications are linked against. That remains in both the Python-based image, that is, the Debian-based official image from the Python project, and also the distroless containers. Unlike Ubuntu, where that CVE has already been mitigated, and unlike Alpine, where the vulnerability never existed by virtue of the fact that Alpine uses musl. So this highlights, again, the importance of choosing the right base image in order to achieve the most appropriate security stance for your container images. So let's move on to some conclusions. Firstly, I'm not going to suggest that you should never use Alpine or distroless; particularly if you're working with Go or Rust, they are good choices, especially the distroless static-debian base image. But to recap: slimming container images is a vital process in the software supply chain. It can reduce the attack surface, it can increase deployment velocity and startup performance, and it requires no changes to your existing container build processes, unlike having to learn multi-stage builds with distroless or switching to Alpine, which may well be unfamiliar. Vulnerability scanners can produce false positives, so there's an argument to be made for using multiple vulnerability scanners and cross-checking the results. It's also trivial to inspect your container images to verify whether vulnerable components are present or not, so don't rely just on the results of the scanners; do actually go and check. And while Alpine is small and secure and has a low-complexity Dockerfile, it introduces high complexity in other respects, with considerations around language ecosystems, performance, and compatibility. And Debian is not equal to Ubuntu with regards to security profile.
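To illustrate the multi-stage builds mentioned above, a distroless image for a Go service looks roughly like this. It's a hypothetical sketch; the module path, binary name, and base image tag are illustrative:

```dockerfile
# Build stage: compile a static Go binary (paths are illustrative).
FROM golang:1.19 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: distroless static base with no shell, no package
# manager, and no libc (the binary is statically linked).
FROM gcr.io/distroless/static-debian11
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

This is the extra build complexity the recap refers to: two stages and a copy between them, in exchange for a final image that carries nothing but the binary.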
So what's the outcome of all of this? I would choose to use the jammy no-recommends Dockerfile. This is based on Ubuntu 22.04, which came out just last month and by default has a better security posture than 20.04, and we did a live stream talking about being a tech-debt warrior and moving to newer releases as fast as possible. Here's why Ubuntu makes sense for the purposes of this evaluation: I work in a team familiar with Ubuntu, so we can trade a slight increase in Dockerfile complexity, which is careful use of apt in order to install Python in a minimal way. What we gain from that is a Dockerfile that adheres to best practice; the developer experience of a familiar and well-documented platform; no runtime, build-performance, or compatibility considerations that can create friction; and a container image that is free of critical and high-risk vulnerabilities. Then, by introducing either Slim.AI or DockerSlim into the CI/CD pipeline, we also gain a container image that is entirely free of known vulnerabilities, three times smaller than an Alpine-generated image and just six megabytes larger than a slimmed Alpine image. We get faster deployment velocity and faster container startup performance, and, perhaps a topic for another CNCF live stream in the future, we also get automatically generated AppArmor and seccomp profiles for these containers as well. So in conclusion, caring about image size demonstrates that you care about the quality of the containers you deploy into production. Introducing slimming into your container production workflow can considerably reduce image complexity and reduce the attack surface while maintaining development velocity and adhering to best practices. I'm sorry many of you have not been able to see the visuals as we've been doing this, but as Annie described, the video will be made available and we will stream this presentation tomorrow on twitch.tv/slimdevops.
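The "no-recommends" Dockerfile described above might look roughly like this. It's a sketch in which the application file is a placeholder; the key detail is `--no-install-recommends`, which stops apt from pulling in optional packages:

```dockerfile
FROM ubuntu:22.04

# --no-install-recommends keeps apt's optional "Recommends" packages
# out of the image, and clearing the apt lists afterwards keeps the
# package index out of the layers.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*

# Application file is a placeholder.
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

That one apt flag is the "slight increase in Dockerfile complexity" traded for the security and familiarity gains listed above.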
I don't know what questions might be outstanding. No worries, that was perfectly put. So everyone who wants to see a new version of this, you can tune in tomorrow to the Slim.AI Twitch. Also, the recording of this, with the video intact, the shared screen, and all of the example visuals, will be available on the CNCF, the Cloud Native Computing Foundation, YouTube channel immediately after the stream ends. So do not worry, the visuals will be available then. And there was one audience question as well. They apologized that, not being able to see the shared screen, they might have misunderstood something, but they said it sounds like your solution takes as input a container image and produces a new container image. That's exactly correct. So you'll have to watch the video recording after the fact to see what I'm about to share on my screen, but I'm going to compare two container images: the traditional fat container and the one that was slimmed using DockerSlim. The first thing to explain is that the slim image DockerSlim or Slim.AI creates is a single-layer container image that includes precisely and only what the application requires in order to function. So yes, a whole new container image is generated, and it's as if it were a from-scratch image using only those assets from the original image that are actually required for the app to run. What I'll do, if I go here, is use the Docker Desktop extension's compare feature. So I'm going to compare this... not this one, sorry, the one that we've been referring to, the jammy no-recommends image, with the slim... oh, I did it wrong, that's not what I meant to do. Let's try that again. We're going to use this image and compare it with this one. So this is comparing the fat container with the slim container. First of all, it's very visible, everything that got jettisoned from the container image.
So you can trawl through that to see precisely what happened in the minification process; it's a very useful feature in that regard. But we can also see a quick overview. We can see the size differentials, and we can also see here, look, all of the shells have been removed from the minified container. All of these binaries that are setgid or setuid are entirely absent from the slim container. And we can see that the binary count is down tenfold, from about 2,400 to 240. So you can see the impact this is going to have, not just on the security profile, but on the performance of this container. Last week when we were at KubeCon, we were speaking to a number of organizations who saw this minification technique as a way to make the mess go away. They were working with customers who were supplying container images to them, and they were operating the infrastructure for those customers. And whilst those container images may well have been functional, they absolutely did not adhere to best practice: they would have dozens and dozens of layers, for example, and be multi-gigabyte in size. So we have customers like this using this technology to clean up containers and make them production-ready, regardless of how those containers were created in the first instance. Great, perfect. And then Gary continued: it seems pretty clear that you do not do any application flow-path minification, you just do static minification. No, we do static and dynamic minification. It doesn't do any source code analysis; it actually traces the application whilst it's executing. So DockerSlim will run the container and the app in the container.
It can introspect automatically, but there are mechanisms to make that observation step more accurate, for example, instructing the container to run your integration tests to get the deepest execution path across your application. And there's syscall interception as well. Those work in concert, because once all of that deep application knowledge has been created, that's also what enables us to create the AppArmor and seccomp profiles to go along with those container images. Perfect. Then I think it's about time to wrap up. If there are any more questions... well, maybe it's time to wrap up; we don't have time for many more questions. And of course, there's one more immediately when I say that: what input or configuration does it use for dynamic analysis that affects the flow path? So dynamic analysis is part of the default operation, but what you can use to guide how the observation happens is, if you've got a microservice, for example, you could export a JSON object of your API endpoints, what methods they use, and what the success and error conditions look like. You can pass that into the observation piece so it can hit all of those endpoints and fully exercise your microservice. Similarly, if you've got a web application, you can provide a list of all of the URL endpoints your application has in order to exercise it completely. And even then you may bump into situations where not all of your static assets get loaded through your integration tests. We saw an example recently with a static website with an image carousel, where not all of the images in the carousel got loaded. So you can even stipulate paths within the container image that should always be included, or excluded for that matter. But there's a whole gamut of options to tune the behavior of the observation. Great, but yeah, apologies again for the technical difficulties; now it is time to wrap up.
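As a rough illustration of the tuning options just described, a docker-slim invocation that guides the observation step might look like this. It's a sketch: the image name, endpoint paths, and include path are placeholders, and exact flag names vary by docker-slim version, so check `docker-slim build --help` before relying on them.

```shell
# Guide the dynamic analysis with explicit HTTP probe endpoints,
# and force-include static assets the probes might miss
# (e.g. carousel images never loaded by the integration tests).
# All names and paths below are placeholders.
docker-slim build --target myapp:latest \
  --http-probe-cmd /healthz \
  --http-probe-cmd /api/v1/items \
  --include-path /app/static \
  --tag myapp:slim
```

The better the probes cover the application's real execution paths, the more confident you can be that the slimmed image keeps everything the app needs.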
But at the same time, now that we wrap up, the recording will be available on the CNCF YouTube channel, so you can see it with the visuals there, or you can tune in tomorrow to Slim.AI's Twitch to see a new version of today's session and ask more questions there. There's also a lot of info in the article that was linked during the stream, where you can find a host of follow-up materials. But as always, thank you everyone for joining the latest episode of Cloud Native Live. It was really great to have Martin here talking about comparing different minification techniques and their vulnerability assessments. Next week, we will have a really great session on enabling automatic setups in your on-prem cloud. Thank you everyone for joining in today, and see you next week.