Welcome, everybody. My name is Alex, and this is Things I Wish I'd Learned Earlier About Containers: lessons learned from a seasoned Linux admin. And by seasoned Linux admin, I mean I realized I've been doing technology in some form or fashion for 24 years, which seems weird to say. This talk came about because containers are one of the things that kind of passed me by, and we'll get into that. But first, I want to say thank you for being here. This is my first in-person talk since 2019; a lot has happened since then, so it's good to be back. There are a lot of conversations going on right now, a lot of presentations going on right now, it's right before lunch, there are a lot of places you could have been. So I just want to say thank you, genuinely, for coming to my talk and finding it interesting enough to spend your time with me today. So why this topic? I like to give some background context whenever I decide to write about something or speak about something, and this one in particular is interesting. In 2020, like many people even to this day, I got laid off. And after 13 years at one job, progressing in one general role, I found myself looking at the job market again. I spent those 13 years in what we'd now call bare metal, but what was then called dedicated: a hosting environment where we ran a lot of multi-tenant, 10-to-30-plus-server environments on average, and some customers had hundreds of servers, but they were all physical servers. This was even before virtualization became a big thing. So I spent a good portion of my time there working on physical servers, training, mentoring, onboarding people in that kind of realm. And I knew that containers, the thing we're talking about now, were around, but they weren't in our realm.
We had a lot of enterprise-level customers who, as some of you might know, are just slow to move. So they weren't going to be adopting things like containers and VMs anytime soon. It's one of those things I didn't get a chance to really dig into. So fast forward to 2020: I get laid off, I'm looking at the job market again, and I had this gut-punch moment where I thought, I missed something. Because all the postings, all the jobs, were containers, Kubernetes, virtualization, all those kinds of things I didn't have a lot of experience with. Now, I consider myself a lifelong learner. I like to learn things. The thing about technology is that there's always something new to learn, always opportunities to learn. And if that feels like too much, if "always something new" feels like a downside, then this may not be the best environment for you, right? So that's primarily where this topic comes from, because I found myself in a space where I had to learn something. I usually learn things when I've got a deadline or a job, in this case a project. So it came time to really bang my head against this and try to learn. So who's this for? I wrote this for myself as I was about two years ago: I needed to learn something, and these are the things I wish someone had pointed out to me. But I also wrote this for the continued converts: the people who are here saying, I know I need to learn this, I haven't really gotten into it yet, but I know I need to, so I want to start learning it. Maybe I can guide you on that path and highlight it a little better. I also wrote this for the container curious: those who, like me through most of that 13-year span, knew it was there, knew it was coming.
But it wasn't my day-to-day work, and it wasn't in the space where I needed to learn it, essentially. I was always curious, though. At the time, around 2012 I think, when Docker was getting more prominent, most of the tutorials were "how to run bash in a container." In my environment that didn't matter much. There was no point to it; I didn't see a use for it. That was kind of my blind spot. And then I wrote this for the container curmudgeons: me, probably 2016 to 2018, 2019, when I didn't want to learn it. I had enough on my plate. I didn't want to. And I say this one as well because during the job search, I interviewed with a company where I'd worked with many of the people beforehand, and they called me up and said, hey, we know you're learning these things, we know you're on this kind of path. We don't want any of that. Our environment is, again, bare metal servers, lots of them; I think they had a hundred-plus servers in the environment they were managing. They said, we know we need to move towards it, but we don't want to, right? We've got what we got, we like what we like, and we don't want to go through the process. So this is for those people as well. And for the container curmudgeons I can highlight here: I hope you get something from this talk, but I'm not trying to convince anybody to go a certain way. It's just, again, lessons learned, things that were in my purview but not right in front of me, that passed me by, so I had to speed-learn, try to gather it up really fast. Also, trying to find three C-words for this was more difficult than I thought. I like the alliteration, so we went with it. Curmudgeon was actually the last word I found. So, what we will not cover. For some reason I decided to pick a very large topic for a very short amount of time, right? That's a great idea.
So I had to start cutting things, and I had to make some assumptions about the audience, who was going to come to this conversation. A couple things we won't cover: the step-one, day-one stuff. I'm not going to stand up here and talk about Docker versus Podman. Those are two very prominent container engines, and we'll talk about what engines are, but we're not going to talk about the differences between those two. There are a lot of articles out there, a lot of literature out there, so we won't be doing that. We won't go through a demo of installing Podman or Docker, either. Again, my assumption is that most of the audience is probably in that container-curious group: you've tried a few demos, tried it out, and it wasn't for you, or you didn't see the use in it. So we're getting past that, to talk about maybe some uses, maybe some next steps. And we won't be going through a bunch of Podman and Docker commands. Now, I wrote that line, and then later thought, well, I've got to do some demos. So we will do some things with Podman, just because that's what I have installed on my system, so it works out. Okay. What we also won't be covering is large container orchestration deployments: Kubernetes, essentially. We're not going that far out. But I mention it because understanding containers, understanding the core parts, what you would want to orchestrate, is the base of it; we're just not going as far as talking about how to do it at larger scale across multiple servers or nodes, things like that. So with that out of the way, what we will cover: we'll go through terminology. I said earlier we're not covering day-one things, but I at least want to set a baseline for the words we're going to use, how we talk about these things.
And maybe there are some terms you've heard but aren't really sure what they mean or how they fit into everything. We'll talk about the kernel features that make containers happen. Jumping back a lot: when containers started to become prominent, when Docker got popular, I guess around 2012, I forget, it was really just a black box. You do this, you run this, and you can run bash in it for some reason. Okay. It didn't make sense, so it didn't really catch my attention. What does catch my attention, having worked with Linux for 20-plus years, is understanding how that actually happens: the features that make it happen, that make the isolation happen. We'll talk about that. That became more interesting. And then, for lack of a better word, I'm calling the last part container concepts. These are the concepts that, embarrassingly, I just didn't catch on to right away, which is why this whole thing started; things that maybe I can highlight to move you along. All right. So what are containers? We'll start with that. "Containers are groups of processes running on a Linux system. They are isolated from each other." That's from the Podman in Action free ebook; you can search for Podman in Action by Daniel Walsh and download the whole thing. So essentially, containers are groups of processes running on a Linux system. And for our purposes here, a Linux system. Yes, we can do containers on other things, but right now we're talking about this. They're isolated from each other. And the big thing that started to steer me towards understanding a little better was that idea of isolation: the idea that these things run separately from each other and don't know about each other, and that it's also a better way to manage resources, right?
So, terminology we're going to go through here. Container image. When we say image, the image is like a template, like a snapshot. If you're familiar with VMs and taking a snapshot of a system, the container image is like a snapshot of the process we want to run, and we can use it as a template to create containers, those isolated processes. Container engine: the prominent, popular ones are Podman and Docker. These are for running containers on single machines, and I highlight these two because they're the two you'd mostly interact with if you're trying to learn or trying to start out. That's in contrast to container orchestration: software for running containers across multiple machines and multiple networks, expanding out and scaling out. I once heard Kubernetes described as "the operating system for a data center," and that clicked for me, because again, it's all about abstraction. If you think of the data center as your hardware, a bunch of CPU, a bunch of RAM, a bunch of storage, a whole large data center of it, something like Kubernetes, orchestration, can control all of that together. And in that abstraction, the container becomes just the process. I understood processes, at least at some level, so that abstraction began to make more sense thinking about it that way. So, why containers? This might be a repeat slide, but: isolated processes allow better resource usage and higher density on a single host. Jumping back a little bit, for the longest time we had these one-box wonders, right?
So the one-box wonder ran your Apache, your MySQL, your PHP, plus probably a mail server and anything else you wanted. And it was probably always running out of RAM; it had a little too little, never enough, right? So this idea of density on a server: can we manage those resources better, in a different way? For me, the concept boiled down to: can I better manage resources through containers? Turns out you can. You can put limits around things, you can make sure things don't run away, and if they do die, well, the idea is that they're ephemeral: you bring them back up. That's where it began to make sense for me. It's resource utilization, making the host more dense, and hopefully not going down all the time, because going down is not good. Uptime is a good thing. So, looking at what makes containers happen: we're going to talk about cgroups a little bit, talk about namespaces, and talk about container images again. The funny thing is that these things aren't new. Cgroups and namespaces aren't new; they've been part of the kernel for a while. Cgroups: I remember I was teaching some RHEL 6 material and cgroups were just coming in, so that's probably 2010, or 2008 before that, somewhere in there. So the technology we use to make containers happen isn't new. Namespaces, I think, were 2012 or before that; I forget. And the images themselves, think of them like tarballs. Coming from an admin who's been doing this for a while: it's a tarball, a compressed image. So, cgroups. Again, defining it: a cgroup is a collection of processes that are bound to a set of limits or parameters defined via the cgroup filesystem. Okay.
So what does that mean in everyday terms? I mentioned earlier that containers are about isolation. Cgroups isolate resources such as CPU, RAM, and networking, as well as isolating processes from other processes, so the other processes don't know about anybody else running. With cgroups, we control resource utilization: we can put limits on things like CPU and RAM, and it isolates them away from the rest of the system. So, to come over here. On any system, and for me this is when things started to make more sense, I could say, oh, it's using cgroups; where are the cgroups on the system? Because with Linux, everything is a file, right? Everything's got to be there. I always say Linux is not magic. If it does something, there's a reason; there's a configuration somewhere that makes it do those things. You might not see that code readily. It's open source, so you have access to it, but you might have to debug things or open up the code. Everything is there for a reason. It might not be doing the things you want it to do, but it's doing what it's told to do. So on this system, to tie it together: cd /sys/fs/cgroup. This is the filesystem that acts as the interface for cgroups, and you can see a lot of what you can control in here. Some of these entries are structural; systemd sets up the hierarchy, so most things get placed into systemd-managed cgroups and the hierarchy begins from there. But when you go into any of these, let's go into memory. Memory is an interesting one, because this here is memory for the system as a whole, and it gives you the knobs you can control and interact with.
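To make that concrete, here is a read-only look at the same interface from a shell. No root is needed for the reads; the writes shown in the comment are the part that would need root, and "mygroup" is a made-up cgroup name for illustration:

```shell
# Which cgroup is this shell in? (on cgroup v2 this is a single "0::/..." line)
cgroup_line=$(cat /proc/self/cgroup)
echo "$cgroup_line"

# The knobs are plain files under the cgroup filesystem
ls /sys/fs/cgroup | head

# Setting a limit is just writing to one of those files (root-only sketch;
# "mygroup" is a hypothetical name):
#   mkdir /sys/fs/cgroup/mygroup
#   echo 512M > /sys/fs/cgroup/mygroup/memory.max
#   echo $$   > /sys/fs/cgroup/mygroup/cgroup.procs
```

Once a PID is written into cgroup.procs, the kernel enforces memory.max on that process and everything it forks; this is exactly what the container engine does for you behind the scenes.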
So you can begin to manipulate these files and set limits on processes. I won't go too far into it, but I wanted to show you where it lives. Again: big subject, 40 minutes. So, let's get back to it. Namespaces. This one's from Wikipedia: namespaces are a feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources while another set of processes sees a different set of resources. Again, what does that mean? We're back to isolation, so that each process thinks it's the only process on the system. Which gets me thinking about The Matrix: how does that process know it's the only process on the system? How does it know there aren't other things over there? Yes, there are ways to break out and wreak havoc on the system, but in general, that process thinks it's the only one. Okay. So, to that point, let's look at the namespaces: lsns. These are the namespaces on the system. Earlier I was playing around and trying out some demos, and I had this nginx container; I'm going to kill that real quick and set things up here for you with Podman. And of course I decided to do a live demo on the fly, because why not? So let me go and start that up. If we look here, we saw earlier that those container namespaces are gone; here are the standard namespaces for the system. Now we put a container on the system. Again, just a simple one, nothing too crazy. Okay, it runs great. But we see now, with lsns, new namespaces tied to that container. As you run more containers, you get more namespaces; again, that's the isolation from everything else.
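This is also visible without lsns, because /proc exposes each process's namespaces as symlinks, which is exactly what lsns reads. A quick sketch:

```shell
# One symlink per namespace type (pid, net, mnt, uts, ipc, user, ...)
ls -l /proc/$$/ns

# The bracketed inode number identifies the namespace; two processes showing
# the same number are in the same namespace
pid_ns=$(readlink /proc/$$/ns/pid)
echo "$pid_ns"    # looks like: pid:[4026531836]
```

Comparing these links between a process on the host and a process inside a container shows different numbers, which is the isolation in action.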
So, container images. I keep coming back to this because it feels like a simple idea, but I think when you're first learning, and when I first started training other people, it really is container versus image. You can't have a container without an image. The image is always the base; it contains all the libraries needed to run the application. That's the standard definition. What that means is that someone took their application and bundled it all together with everything it needs on the system: the libraries, the version of the programming language, PHP or Go or Python, everything together, and that's what runs on the system. Now, here's another jump for me, where things started to make more sense: we have VMs, so why containers? The answer is size, resource utilization. With containers, you're sharing the kernel across all those containers, as opposed to VMs, where you have the entire system, the entire image, each time. So the image contains all the libraries, and where it may begin to make sense is when the application you're running is something like nginx. For me it was nginx and Apache; those two in a container were what made it click. Those images are immutable: we don't change them. Every time we spin up a new container, we take the image and build layers on top of it. I won't go into layers too much, maybe I should have, but you have this base image, and from there you write on top of it, layering whatever changes you need to make. So, let's talk about engines and runtimes. This was another thing: you would hear about Podman, or you would hear about Docker, containerd.
Then you'd hear about runc, Kata Containers, all these terms, a lot of things thrown out there, and wonder what the difference is between all of them. So let's break it down a little. The most popular, most prominent ones are the container engines: the engines you'll hear about most often are Docker and Podman. So what does it mean to be a container engine? Mostly, it's the interface with the end user. We as end users run a command, interfacing with the API for Docker or Podman to run commands. It's what we have the CLI with, and it interfaces with image registries. Another jump, at least for me: I understood packaging. I understood yum and apt. I understood, hey, there's this software, packaged up, and I pull it down to my system. Great. So associate that with an image registry: the images are kind of like packages, precompiled, put-together software. These are the connections that made it stop being a mystery box, more and more covering how it all ties together. And then the engine interfaces with the container runtime. So what is a container runtime? That's runc, crun. They manage the container lifecycle. If we ask, say, Podman to run a container, what the runtime is doing is actually setting up the cgroups, setting up the namespaces, all the things on the Linux system that make it happen. That's the runtime doing that. Okay. Oh, real quick, stepping back to the engine part: I mentioned containerd and didn't say anything more about it.
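One way to see the engine/runtime split on your own machine is to ask Podman which OCI runtime it is driving. This sketch is guarded so it's safe to run where Podman isn't installed; the Go-template path follows the structure of podman info's output:

```shell
if command -v podman >/dev/null 2>&1; then
  # The engine (Podman) delegates the low-level work to an OCI runtime; ask which one
  runtime=$(podman info --format '{{.Host.OCIRuntime.Name}}')
else
  runtime="podman not installed"
fi
echo "runtime: $runtime"    # typically crun or runc
```

Swapping runtimes (say, runc for crun) changes nothing about how you use the engine; the CLI stays the same, which is the whole point of the layering.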
And the reason I didn't is that containerd is more for Kubernetes. As an end user, you don't interface with it as much as you would with Podman or Docker. That's why I mentioned it but didn't come back to it. There are some engines that are meant for larger-scale deployments, larger-scale implementations, things like that, and they're less end-user friendly, if you will. I mention these two because they're the two that are most end-user friendly. Okay. So, to recap: the runtime sets up the cgroups and namespaces, wires up the network and storage, things like that, running and managing the container. Okay. So, container volumes. This is where we get to the concepts that, again, embarrassingly, I just didn't think about or put together until later on. Most of the tutorials were "run bash in this container." Not useful to me. Or it was "run Apache," and then nothing after that: Apache, port 80, the default page, again. So these are the things that started to build until I began to get it. So, lessons learned: persistent data and ephemeral processes. Remember the analogy of cattle versus pets? It was probably around when Docker was coming out. The idea was that the servers you don't want to reboot or rebuild, you treat as pets, and containers are cattle: one dies, you bring another one up. It's a terrible analogy. It was a really, really bad analogy, and I don't know why it just didn't say ephemeral processes. Maybe ephemeral was too big a word. But the idea is that the processes don't stay around all the time. Translating that to processes on any system: no process is meant to live forever, right?
If it fails, you should be able to restart it. Simple as that. So: persistent data, ephemeral processes. There was this whole idea that containers were ephemeral, that they could go away. So how would you run a database? That makes no sense. Where does a website's data go? I always had questions like that I couldn't answer. It's all about persistent storage on these container engines, and the thing is, most of the time the engine handles a lot of the storage side of it. There are two ways to do this: bind mounts and volumes. Bind mounts first. A bind mount allows one part of the filesystem to be mounted in another place in the filesystem. Now, this I understood, from working with systems, working with chroot jails, things like that. A chroot jail is essentially trapping a user, say an SFTP or FTP user, into one directory, which is all they see; it looks like root to them. Similar idea. So, a bind mount: for example, I would mount --bind /var/log/httpd onto /home/admin/logs. I make this point because I had a customer one time who said, I need this one person to see the Apache logs, but nothing else. That's it, just the logs. And this is what we came up with at the time. The thing about bind mounts is that it's just a mapping from one directory to another: if you delete files in one directory, you delete them in the other. It's the same directory under two paths. So I had set up this whole system where they could have all the logs they needed, and I figured I'd be clean about it, clean things up and let them set up what they needed. So in /home/admin/logs, of course, I do an rm -rf, thinking I'm in /home/admin, forgetting that I did a bind mount, and deleted everything. Right.
I got on the phone with the backups team so fast, because, oh, that's what that means. So, let's see here: bind mounts and containers. It's a way to provide persistent storage to a container, decoupling the storage from the container itself. Again, the container dies; you still have the storage, the data it generated or the data you want to provide to it. Some examples of where you would use a bind mount: providing files to a web server. The classic example that started to build my understanding: I have an Apache server, and I bind mount a directory of content onto Apache's document root. Okay, now I see where that comes from. From there you see, oh, if I have one, I can use it multiple times: mount the same path into multiple web servers, as long as I'm not writing to it, read-only if you will. You can also use it to share and update config files. Say you're a developer and you need to test new settings for your application, for your code: having a bind mount lets you modify files on your host system and have them present in your container right away, sharing files across. Okay. So, container volumes. Bind mounts are easy to use, but both Docker and Podman say, hey, they're not the best choice; they want you to use their built-in volume system. Both Podman and Docker have that concept of volumes, and a lot of engines do as well. You can use them amongst containers, mount them into different containers: a container dies, you bring the volume up on a different one. And they're more manageable within the engine itself, the environment itself. So why volumes over bind mounts for container storage?
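Putting the two side by side: the host-level bind mount from the story, and an engine-managed named volume. The mount line is only a sketch (it needs root, and the paths are the talk's example); the volume part is guarded so it only runs where Podman is present, and the volume name is a throwaway:

```shell
# Host-level bind mount (root-only sketch). Both paths point at the SAME
# directory afterwards, which is why rm -rf through either path is fatal:
#   mount --bind /var/log/httpd /home/admin/logs

# Engine-managed named volume: the engine owns the storage location
vol_name="demo_vol_$$"
if command -v podman >/dev/null 2>&1; then
  podman volume create "$vol_name" >/dev/null
  vol_path=$(podman volume inspect "$vol_name" --format '{{.Mountpoint}}')
  podman volume rm "$vol_name" >/dev/null
else
  vol_path="(no podman here; the volume would live under the engine's storage root)"
fi
echo "$vol_path"
```

The Mountpoint it prints is under the engine's own storage tree, not somewhere you picked, which is exactly the decoupling from the host the engines are recommending.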
With a bind mount, the host filesystem is mounted into the container, which introduces a lot of security issues, a lot of problems. If you happen to run that container as root with a host path mounted in, anybody in that container is also root on those files. Container volumes let you manage it a different way: the storage is separated from the host system, giving you persistent storage that isn't associated with the host system directly. Container communication. So, again, the next concept for me: if these things are isolated from each other, if they don't know about each other, how do they work together? And this is the part where we talk about microservices: you have a bunch of containers, they're all isolated, they don't know about each other, so how do they communicate? That's the next big step. There are a couple of ways it can happen. First, app-or-host to container. When I say app or host, that might be your application, say a custom app running on the host machine, talking to something inside a container, or vice versa. The other is container to container. So again, I mentioned earlier that I wasn't going to do a whole bunch of Podman and Docker commands, but this one is just to show you where things are. So, app or host to the container: the easy way is that a lot of containers have exposed ports. A classic example would be Apache exposing port 80 from the container out. This happens through a mapping. Another step that really jumped out at me was: how do I begin to look at these images, this storage, all of it? So, real quick: podman image inspect, and then the image we have.
Showing this gives you an idea of what's in there, but what we're looking for in this case, when we talk about exposed ports, how we talk to this container from our system, is under ExposedPorts: 80/tcp. This tells me the image is exposing port 80, so I can map something from my host system, from my main system, to this container on port 80. And what that looks like is: I have this container running using that image (again, the image is immutable; it's always the same base image), the command is running, we'll talk about that a little bit, and then the mapping. This tells me that on my host machine, port 42289 is mapped to port 80 on that web server. Now, if I had planned more demos, which I didn't, we could do something like curl against that port, and it works, right? That's talking to the container, to the process running in the container. Are we on time? Okay. Is it 40 or 45? Okay, what's next, lunch? We've got lunch. All right. So we'll work through this and then we'll do questions and make sure we're on time. All right: container to container. Say you want one container to talk to another container, and we're staying within that Podman and Docker realm, because there are other ways to do it with pods and things like that; those are bigger concepts. The answer is networking. The engines implement a bridge interface. If you look at my machine here, right there: there is an interface that all of the containers get attached to. We can add networks, remove networks, begin to manage that. So again, using the inspect command, I'm going to show you what it looks like on Podman. You can create your own networks, these virtual, software-defined networks.
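The mapping string itself is easy to pick apart. This little sketch parses a published-port entry of the shape that podman port and docker ps print; the sample string stands in for live output:

```shell
# Sample mapping as printed by the engine: hostaddr:hostport->containerport/proto
mapping="0.0.0.0:42289->80/tcp"

host_part=${mapping%%->*}          # 0.0.0.0:42289
host_port=${host_part##*:}         # 42289
container_port=${mapping##*->}     # 80/tcp

echo "host port $host_port -> container port $container_port"

# With the container actually running, traffic to the host port reaches the
# container's port 80:
#   curl http://localhost:42289/
```

The host port is usually assigned dynamically unless you pin it yourself with something like -p 8080:80, which is why it comes out as an odd number like 42289.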
Having created this network on the system, it tells me the subnet, the gateway, and things like that. I have this running for Ghost, the blogging software, so I created this ghost network. And if I look at the containers I have running, I think this one has it. Yeah. Looking at that, under networks: we see that we add containers to a certain network and they can all talk together, so they can reference each other through IP addresses, one machine to the other. Where that took me on my learning journey: okay, this makes sense, I can see it. I have a container running a web service, I have another container with another web service. It makes sense to me as processes. And then I thought, okay, well, I want to load balance between them. My early-on brain said, I'll just run it. So let me bring this command up, because I don't like having commands on the screen without explaining them. Let's do that real quick; this is the most we're going to do with commands, just to give an idea of how it all comes together. So: podman run -d, for detach, so I don't have to see it; it goes into the background. Again, a lot of this started to gel for me once things all ran in the background. --rm just means that after this thing dies, it gets removed; it's not going to hang around. Then we give it a name, ghost app one. Then the network: we're adding it to the ghost network, where I can add other containers and tie it all together. I'm giving it an IP address, because I can assign IP addresses out, so I know what address this one has. And then -v, for volume: I'm doing a bind mount of my ghost content directory on the host into Ghost's content directory in the container.
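Reconstructed from the demo, here is the whole command with each flag annotated. The names (ghost_app_1, ghost-network, the IP, and the content path) are the talk's examples or stand-ins, and the command is printed rather than executed so nothing gets pulled:

```shell
# -d         detach: run in the background
# --rm       remove the container once it exits (ephemeral)
# --name     a handle to refer to the container by
# --network  user-defined network other containers can join
# --ip       static address on that network (assumed value)
# -v         bind mount: host path before the colon, container path after
run_cmd='podman run -d --rm \
  --name ghost_app_1 \
  --network ghost-network \
  --ip 10.89.0.10 \
  -v /home/me/ghost_content:/var/lib/ghost/content \
  ghost'
echo "$run_cmd"
```

Run a second copy with a different name and IP on the same network and the two can reach each other directly, which is the setup that leads straight into wanting a load balancer in front.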
So that first part, before the colon, is a path on my system; after that is a path in the container. And you would know this through either documentation or through knowing the app intimately. So if this were an Apache web server, I'd know that /var/www/html would be where I put content, traditionally or usually. And then naming the container ghost.

So I'm thinking: okay, I have one container, I have two containers, I need nginx in front of that to load balance them, right? So then I started, like, okay, this makes sense. But then I hit, well, it was kind of an epiphany moment, where I was like, I've got these things, but it should be easier than this. And I was like, oh, I get it now. That's why we have orchestration. Because I was trying to wire all of this up by hand. But in the end, one thing I do want to bring up as well is orchestration here, but not really, right? This is just what I showed you earlier. One thing that I'm playing with more now, my next step of digging in and learning more: Podman works really well with systemd, in that it can bring up containers as services, the same way as other services. So on reboot it comes up; I don't have to worry about starting up a daemon or having to hop on there and start it manually. And Podman generates the systemd unit files for you, which I find quite nice. So I don't have too much to say about this other than that it's out there, and that it's where I'm looking next, my next step in learning. So on my own systems, right, not so much with the day-to-day job, but on my own systems, testing, being able to have this come up as I build servers and take them down, things like that.

So, all right, 45, look at that. All right, review and wrap-up. So we covered a lot of things here, right?
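The systemd integration mentioned above can be sketched like this, assuming a container named ghost-app-1 as in the earlier example (newer Podman versions steer you toward Quadlet files instead, but `podman generate systemd` is the classic route):

```shell
# Generate a systemd unit file for an existing container; --new makes the
# unit create a fresh container on each start instead of reusing one
podman generate systemd --new --files --name ghost-app-1

# Install it as a user service and enable it to start on boot
mkdir -p ~/.config/systemd/user
mv container-ghost-app-1.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-ghost-app-1.service

# Allow user services to run without an active login session
loginctl enable-linger "$USER"
```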
So cgroups, namespaces, images: these are the things on the underlying system that make containers work, for the most part. Again, none of it's new, which made me feel better about it. I feel like I understand these things because they're on the system; it wasn't this black box anymore. Container engines and runtimes: we talked about the difference between engines and runtimes, how they interact. Popular engines would be Podman and Docker; runtimes, runc and Kata Containers. I didn't mention too much about Kata Containers; it's something I'm still kind of exploring myself, but it's a different runtime for containers and, if I remember correctly, it gives you even stronger, more hardware-level isolation as well. I could be wrong on that; if I'm wrong, someone correct me. And container volumes and networking, right? Again, these are the concepts that helped me jump to the next step, to get past just running those first day-one tutorials: volumes and networking, and then orchestration, how to start moving toward multiple containers on the same network that can talk to each other, and really digging in to see how that builds up to microservices as well.

Okay, so closing Q&A. That's my cat; I have three, but this is the best one, so don't tell the other ones. And so, Q&A. Also whiskey recommendations: I love whiskey, so if you want to talk about whiskey, you can do that too. That's all I got, thank you all very much.

First of all, thank you, Alex. I don't know if anyone has questions; I've got the mic back here, so just raise your hand.

Do any of these tools apply better to embedded systems, smaller versus larger servers?

Yeah, I love the question. So the question was, do any of these tools apply better to embedded systems, right? So of what we talked about, one of the major differences between Docker and Podman is that Podman is daemonless, right? So it's less overhead to run. So on embedded systems, and on edge systems as well, where there are fewer resources, it seems to run a little better.
If I was looking at embedded systems, I might also look at LXC or LXD; LXC has even less overhead as well, while LXD is a fair bit bigger. But for embedded systems, for edge systems, that's what I'd look for, because again, you want to go with less resources, less overhead in those environments. So between the two, Docker and Podman, I would look at Podman, just because it's daemonless.

Any more questions?

So a lot of the examples here are predicated on the assumption that what you're trying to run in these containers are system services: web servers, things like that. In my environment, the main use case we've found for containers is actually end-user applications. We've got some scientists that need to run an experiment or something, and they're not running a service; they just want to go somewhere, run their thing, use it, and then they're done. Some of these tools seem like they're not well designed for that type of use case. Do you have suggestions, or what's a better way of addressing those types of needs?

Yeah. So, running kind of one-time processes, right. So what comes to mind, because of what I deal with more day-to-day, is serverless, right? It's like Kubernetes; there's a whole ecosystem around it. And I think those are more apt to that kind of multi-user pattern: running containers that just do what they do and come back, right? You don't have to worry about them. The thing with serverless is that when there's nothing to run, it doesn't run anything. There are no resources used; when it's ready, there's a small spin-up time, but it's meant for ephemeral things: run a process, get the data, and when the process is done, it dies off. So it's not meant for things that are continually running, like services.
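On a single host, that ephemeral "run a process, get the data, die off" pattern maps to a one-shot container. A minimal sketch, with an illustrative image and command:

```shell
# Run a one-off job: the container starts, runs one command, prints its
# result, and --rm removes it as soon as the process exits, so nothing
# keeps running or consuming resources afterwards
podman run --rm docker.io/library/python:3.12 \
  python -c "print(sum(range(1000)))"
```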
So I would look at that, and in that case, I mean, I would look at cloud providers that offer it, right? Like AWS Lambda, something like that, to make it easier to launch those things. But for Podman or Docker, I kind of agree with you: they're meant for a single node, a single host, right? One machine. And I'm assuming you want to run it against a larger fleet, right? Multiple machines with resources all over, or maybe it's just one machine, yeah. Here or there? Yeah. I don't know, that's a good question. If you come up afterwards with more information, I'll look into it. Any more questions? All right.

When you say that containers use less resources, why is that the case? Because it seems like there are process memory spaces and things that the system will often allocate far more for, and then it'll overcommit and give that extra space to other processes. It seems like a normal Linux system is pretty good at keeping those resources low. How does a container make it less resource-intensive?

Yeah. So, a couple of things. I guess it's a comparison, right? So, less resource-intensive than, say, a VM; we'll start with that. With a VM, you have a lot more: you have a whole kernel, you have all the other files that you don't need for that one process, right? So there's a larger image. So there's that. And even with VMs, you can put parameters around it, right, so that hopefully you manage it properly and don't overcommit. But we all know overcommit is going to happen, right? Containers don't have the extra kernel, the extra space, so it's a smaller image, and we use less that way. When it comes to CPU, you can go down to, I forget the exact measurement, right?
You can allocate down to fractions of a CPU, I believe thousandths of a CPU, or you can dedicate a whole CPU to it, like a CPU set pinned to it, right? So it's separated away from the rest of the system, and that process runs in its own kind of area. At some point it becomes what I would call a developer's answer to a server problem, right? Running out of resources. We had this time where we saw PHP INI files allowing gigabytes of memory for a PHP process, right? I called that a developer's answer to a server problem: why don't you look at the code, right? Look at it, optimize it that way. But to get back to your question, the resource comparison is against running a VM. That's what I meant by that.

Okay. Yeah. Any more questions? I went off on a tangent there, sorry about that. All right. Thank you, Alex. Cool. Thank you. Bam!