Cool. So hello, everyone, and welcome to KubeDay Japan. We'll start the session now. "Secure and Debuggable: Debugging Slim, Scratch, and Distroless Kubernetes Containers" — a lot of terminology over here. We'll try to simplify all of it and see how you can debug these containers. As for the expected background, as Kyle mentioned, some container fundamentals are needed. You should know what Kubernetes is — the basics: namespaces, the fundamentals, the architecture, nodes, and so on. We'll cover the rest.

So this is about me. I'm working as Director of Technical Evangelism at Civo. I'm a CNCF ambassador, very active on Twitter, and I also have a YouTube channel, so you can follow me everywhere; I might even reply to you during the session. And I'm the DockerSlim guy. So keep going. Okay, next slide. This is pretty much one of the best ways to describe the problem with minimal container images: you want to use them, but a lot of the time you can't, because you need to debug them. Up until pretty much now, Kubernetes 1.25, it was one of those things on the wish list. You could do it the hard way, but in terms of mass adoption it wasn't possible, because to use smaller containers you still need that debuggability. And that's what we're going to talk about — and apparently there's a lot of interest in it. So let's keep going.

Let's get started with the general debugging techniques for Kubernetes — the things everybody has been doing — just as a quick preview. Whenever there are failures, we go through the events and the logs; there's also host- and node-level debugging, where we SSH into the node, use privileged containers, or use embedded debugging tooling. And there's kubectl exec, which is pretty common: if you want to get inside the container, you use kubectl exec, go inside, and poke around to see what went wrong.
And then there's some sidecar stuff as well. So yeah, if you do kubectl describe, you can see the events — for example, if a pod is pending due to insufficient resources, you'll get all of that. And if you have a CrashLoopBackOff, you can do kubectl logs, and sometimes you can also attach the --previous flag so you get the logs of the previous container to see what failed. It's useful. And then there's kubectl exec: an interactive session into the pod with bash, or sh, or whatever is inside that particular container, so you can attach and run commands with the debugging utilities.

So let's actually see that in action. I have some files here; we'll apply the pending pod one. I've taken this from Robusta — they have some good examples. You can see the pod is already Pending. I can describe it, and we can see it isn't scheduled because it didn't match the pod's node affinity. So describing sometimes helps: it gives you useful information so you can fix the problem. Another example I have is a crash loop. It's already in CrashLoopBackOff; you can see the pod that crash looped, and I can see the logs for it: an environment variable is undefined. So these are the standard, very simple, first-level basics you go through whenever you debug containers: describe, logs, exec, the events, and so on. That gives you a gist of how you debug. Now, over time, what has happened is that we've been creating containers, but we want to move to slimmer images — images which contain less — so that we have fewer security vulnerabilities and a smaller image size.
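As a rough sketch, the first-level triage flow described above looks like this (the pod names are illustrative, and the commands assume a running cluster):

```shell
kubectl describe pod pending-pod        # events explain why it is Pending (e.g. node affinity)
kubectl logs crashing-pod               # logs of the current container
kubectl logs crashing-pod --previous    # logs of the previously terminated container
kubectl exec -it some-pod -- sh         # interactive shell, only if the image ships one
```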
So with that come some concepts: scratch, distroless, and slim. What are these? A scratch image is basically an empty image — it has nothing in it. It is used to run binaries that have nothing linked to them; I mean, there are some gotchas that we'll cover later on. Mostly that's Go or Rust, statically compiled — not everything can run on scratch. It's for a compiled binary with no dependencies; and if it does have dependencies, you have to know all the linked dependencies and add them manually, because scratch doesn't even have things like the CA certificates or /etc/passwd. You have to assemble all of that in a multi-stage build before you can run it. Yeah, that's where it gets complicated.

Coming to distroless: distroless is not completely empty. It has some of the base directories that help you create and run containers — things like /etc/passwd and the CA certificates — plus your app and its runtime dependencies. Both scratch and distroless have no shell and no package manager. That's where it becomes tricky, right? These are good to have, but only if you have everything sorted — if you know your container won't fail. If your container does fail, then, since there's no shell or package manager, it becomes difficult to debug, and that's what we'll be discussing and showing you demos for. So the gotchas: extra dependencies are super complicated — whenever a scratch image has extra dependencies, you have to know them beforehand just to run it; there's no debugging tooling, no shell, and not always a static binary. Here are the three apps that we have. You can see there's a big image whose size is one gigabyte — pretty big for a simple hello-world app.
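A minimal multi-stage build targeting scratch might look like the following sketch (the paths and module layout are illustrative); note the statically linked build and the manual copy of the CA certificates, which scratch does not provide:

```dockerfile
# Build stage: compile a fully static Go binary (CGO disabled).
FROM golang:1.19 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: an empty image, so every dependency must be copied in by hand.
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```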
Then you have the distroless app, which is 166 MB, and then you have the slim Node app. Okay. So you have a lot of options in terms of how you can get to a minimal container image for your application; some of them require a lot more work than others. Distroless is probably one of the best-known ways to get there. There are several flavors of distroless, starting with the most basic, static, which gives you mostly a directory layout and a couple of things on top of that, including the zoneinfo — that's pretty much the biggest chunk of that image. Then the next level is the base image, which includes the static version plus a few extra basic OS libraries. And then you have application-specific distroless variants. They're much bigger: they include the base image plus the additional system libraries and the application runtime. I'll talk about the gotchas later on.

So let's take a look at the distroless app. All of the apps use the same Kubernetes manifest, but the Dockerfile for each is a little different. One of the interesting things about this Dockerfile is what you end up with: you might have a multi-stage build where, in the final stage, you copy stuff into your deployable image. In most cases you end up copying whole directories, because otherwise you're not really sure what you're doing. That's okay — it's better than using a fat image — but you still leave a lot of attack surface in the image when you do that. Let's keep going. Here's an example for compiled languages, because with Go the first thing that comes to mind is static binaries — a single binary. You can easily create a scratch-based application image and that's pretty much it.
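A hedged sketch of the distroless variant of that Dockerfile — the file names are illustrative, and it shows the whole-directory copy pattern just mentioned:

```dockerfile
# Build stage: npm is available here, so dependencies get installed.
FROM node:16 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: distroless includes the Node runtime but no shell
# or package manager; the entrypoint is the node binary itself.
FROM gcr.io/distroless/nodejs:16
COPY --from=build /app /app
WORKDIR /app
CMD ["server.js"]
```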
In reality, for real-world applications, you actually have extra dependencies — certificates, user information — and in some cases the binary isn't even compiled statically. So, let's see. Yeah, let's keep going. In this case, this is a sample app for DockerSlim, one of the third-party apps, and it's not compiled statically. So we end up pulling in stuff like the libpthread shared object, and so on. A lot gets pulled in, and that's where it gets tricky when you try to do it the hard way and build those minimal container images by hand. Let's keep going.

With a slim image, the nice thing is that you have a regular Dockerfile. This is probably the most basic Node application Dockerfile: it's not multi-stage, it's simple, it derives from your Node base image. You copy the package manifest, install the dependencies, copy the app, and run it. That's all you'd have for a regular app — and then the slimming takes that and produces a much smaller image, kind of like building from scratch, except it does it for you. So let's keep going. Now we've discussed what a slim image is, what distroless is, what a scratch image is, and some of the tooling. But I mentioned one particular point: it's hard to debug distroless, slim, and scratch images due to the lack of a shell and debugging tools inside them. So we cannot use the standard stuff — kubectl exec, kubectl cp, embedded debugging tools — because it isn't there, and then it becomes difficult. kubectl cp requires tar to be present in the container, or it will fail to copy files from a directory. exec needs a shell to be there, plus debugging utilities — netstat, whatever you need. None of these are there.
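The plain single-stage Node Dockerfile described for the slim workflow might look like this sketch (file names and port are illustrative); DockerSlim then minifies the resulting image, for example with `docker-slim build my-node-app`:

```dockerfile
# Ordinary, non-multi-stage Node Dockerfile: this is the input to the slimming step.
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```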
And if you try any of that on a distroless or scratch image, you'll get an error, because the shell is not there, so it won't run. You won't be able to do the standard, easy debugging you're used to. So that's where the concept of ephemeral containers comes in. Ephemeral containers are containers that help you debug pods where there is no way to debug directly using exec. What happens is: you use kubectl debug to create those ephemeral containers, and it attaches to the live pod state itself. In the pod's spec section an ephemeralContainers entry will be added, and the container statuses will also be auto-updated with an ephemeral container status. That's the very good thing: it attaches a container to the same pod and shares the namespaces, so you have the same process view — you'll be able to see the processes of the distroless (or whatever) container itself, and you can get a view of its files as well, which we'll show you.

Some things to be aware of: an ephemeral container doesn't have readiness or liveness probes, resources are not allowed on it, and you can also create it using the ephemeral containers API — Kyle will show a demo of that as well. Question for you: why not use regular containers? Why can't you just add a regular container to your pod? Again, adding a regular container won't help, because you'd have to modify the pod, and the mounting becomes an issue — the process ID sharing, the namespace sharing, all these things won't happen. That's where ephemeral containers help: they don't touch the existing pod and don't restart it. Your application stays there, and if anything goes wrong, you'll still be able to debug it. So here's a simple example of kubectl debug; let's say there's an nginx pod running.
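A minimal sketch of that flow: running something like `kubectl debug -it nginx --image=busybox --target=nginx -- sh` (the pod and container names here are illustrative) results in a spec fragment roughly like this being added to the pod:

```yaml
# Roughly what appears under the pod's spec after kubectl debug runs.
ephemeralContainers:
- name: debugger-abc12        # kubectl generates a name along these lines
  image: busybox
  stdin: true
  tty: true
  targetContainerName: nginx  # share the target container's process namespace
```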
So you're debugging that, using the busybox image, and you're getting a shell. What it does is add a new container with the busybox image to that nginx pod. If you do kubectl get pod -o yaml, you'll be able to see the two sections that matter here: whenever you do kubectl debug, the ephemeralContainers section is added automatically to that pod's spec, and the ephemeral container statuses will keep getting updated. This is just pod-level debugging; there's also a same-pod copy for advanced debugging — one command, which we'll discuss later. Some of it uses ephemeral containers and some doesn't, because you don't always need ephemeral containers: for example, when you create a pod copy, you can use a regular container, because that's the point where you get to decide.

Again, this is a simple kubectl debug demo, but it's creating a debug sidecar container with the busybox image and attaching it to the target container, app. You can also see the processes here: process 1 is the Node application, while the current process for our ephemeral container is 13. And one thing you can see is the difference in the files — it doesn't look the same. Your filesystem won't be exactly the same: when you kubectl exec into the pod, you see the exact filesystem there, but in an ephemeral container you won't see the target's filesystem directly. Yeah, and that's why you need to go into the proc filesystem and navigate to the PID you're trying to debug, and its root section. And this is odd — if you haven't done it before, it's odd and not great as a debugging experience — and we'll have a demo to address that. You can also use ephemeral containers without the kubectl debug command.
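The procfs navigation described above, from inside the ephemeral container, can be sketched like this (assuming process namespace sharing is in effect and the target app runs as PID 1; the file path is hypothetical):

```shell
ps aux                          # target processes are visible, e.g. the app as PID 1
ls /proc/1/root/                # the target container's root filesystem
cat /proc/1/root/app/server.js  # illustrative path into the target's files
```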
So you can use the internal APIs, and that's very interesting, because we have a demo that sets the security context. If you have to set the security context, or if you need any additional mounts — which you cannot do by default using kubectl debug — you have to use the API calls for that. Some of the gotchas: you can't remove ephemeral containers, and not all container properties are available, as we discussed — process namespace sharing, the security context, and mounting volumes are not there out of the box when you use kubectl debug.

So ephemeral containers — as Kyle mentioned, before them, slim, scratch, and distroless were a wish-list item, and there has been a lot of work and effort in getting ephemeral container support natively into Kubernetes, which happened very recently: it went GA in 1.25. And all the local clusters — Rancher Desktop or Docker Desktop — should work, and even the cloud providers — Civo, GKE, EKS — all have 1.23 onwards, so you should be able to run the same kubectl debug demos on those clusters as well. Okay, so let's say you want to use kubectl debug to debug your minimal container image application. What do you do? You have a few options. You can create your own debugging image, or you can use a couple of existing debugging images. The most popular one is netshoot — a well-known system- and network-level debugging image with lots of great tools. Then there's also a set of debugging images by Lightrun called KoolKits; they support several runtimes, and we'll use the Node one. And in the do-it-yourself bucket, one of the nice things you can use is Nixery: you can build a debugging image on the fly by specifying the packages you want, and we'll see that as well.
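One hedged sketch of going through the API directly — the exact payload shape varies by Kubernetes version, and the pod name and JSON file here are hypothetical:

```shell
# Update the pod's ephemeralcontainers subresource with a container spec
# that sets fields kubectl debug cannot (e.g. securityContext, volumeMounts).
kubectl replace --raw "/api/v1/namespaces/default/pods/myapp/ephemeralcontainers" \
  -f ephemeral-debugger.json
```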
So we'll have three demos: we'll look at how to debug a Node application — kind of an application-level demo — then a system debugging demo with strace, and then a demo that shows how to make the debugging experience look similar to what you're used to. Let's switch. So first, in the demo application, the Makefile has a nice menu of the different options you can use. We'll start by creating the application. It has a deployment with one pod, the demo app image, and a service. So the application is running, and we're using the slim image. First we'll try to debug the Node application — that's all it does, it's a hello-world app, and this is the app itself. It has a couple of endpoints, and we'll try to debug one of them.

So what can we do? I'm going to use a Node KoolKit image to debug my application. Now I'm connected to the debugging sidecar, and I see the application: this is us, this is our filesystem, this is the filesystem of the target application, and this is the app. I want to use a debugger that I have in my KoolKit image, and with Node there's a way to force the application to run in debug mode: you do that by sending it a SIGUSR1 signal. Now, if I switch to the other terminal — okay, I see that the Node runtime switched to debug mode and is listening. Now I'm connected to the target application. I'm going to pause it and set a breakpoint — it's going to take a while, there's a delay — okay, I'm going to set it on line 13, inside the main handler. I'm going to continue, and I'll curl again. Now we hit the breakpoint. So that's an example of how you can use a debug image to do application-level debugging. Now I'm going to get out of that and try another one: I'm going to use strace to do a lower-level debugging session. So now strace is connected to the Node application I have.
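The two techniques just shown can be sketched from inside the debugging sidecar like this (assuming the target Node process is PID 1 and its inspector uses the default port):

```shell
kill -USR1 1                 # ask the Node process to enable its inspector
node inspect 127.0.0.1:9229  # attach the bundled debugger; pause, set breakpoints, continue
strace -f -p 1               # separately: trace the same process's system calls
```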
Now we've got a whole bunch of stuff there, including a response right there — a response written from the application. Then I'll try to do the same thing with a couple of other images. I'll try Nixery next. Here, if you look at this funny-looking image name, you'll see it has a lot of different names in it: those are the names of the packages I installed, for example lsof, netcat, tcpdump, and strace. It takes a little while to pull, because it gets created on the fly. We have the Node application too, and now we get the same result.

The next one is going to show a bit of shell-level trickery to make the developer experience a little more straightforward. I'm going to use a busybox debugging image. Again, we have the same filesystem problem. What you want is to see the filesystem of the target application image, but with the tools from your debugging image. For that, you need to bring those debugging tools into your target namespace — well, figuratively speaking. I'm going to copy and paste, because I'm really bad at typing. First I'm going to link the debugging tools from my debugging container, then add them to the path, and then change the root filesystem. Now if we do an ls, what we see is the filesystem of the target image container, with the debugging tools available. So now I can run any of those tools there, and I can go to our source — it looks the same. And I can do everything I would do as if I were exec-ing into the fat version of the application image, so you get the same kind of experience. Yeah, and these are also available on GitHub, as Kyle mentioned, so you can try them on your own as well. We've got just one minute left. So the key takeaway is that minimal container images are now ready for mainstream use because of ephemeral containers — the tooling now exists.
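The shell trickery described — bringing your tools into the target's filesystem view and then changing root — can be approximated like this from a busybox ephemeral container (a sketch assuming the target app is PID 1 and that busybox is statically linked, as it is in the official image):

```shell
cp /bin/busybox /proc/1/root/tmp/     # drop the static busybox into the target's filesystem
chroot /proc/1/root /tmp/busybox sh   # a shell rooted at the target's filesystem
# Inside, applets run via the copied binary, e.g.:
#   /tmp/busybox netstat -tlnp
#   /tmp/busybox ls /app
```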
So you've seen some cool demos using tools like Nixery, netshoot, and KoolKits that you can use to debug your slim, distroless, or scratch containers as well — so you can actually use them in production now. Ephemeral containers make it possible to debug scratch, slim, and distroless images. Thank you. And this is the demo repository — the KubeDay Japan demo — and you can try ephemeral containers on Civo, create minimal container images using DockerSlim, and try it out. And a shout-out to Akihiro for nerdctl — it's an awesome tool. If you have any queries — we don't have time, but we'll be hanging around here, so feel free to ask any questions as well. Thank you so much for joining.