Welcome to The Homelab Show, episode 79: virtualization versus containers. How you doing, Jay? I'm doing well. How are you? Good, good. This is a topic that seems to come up a lot, and it's not like you have to go all one way or the other; most people end up with a hybrid approach anyway, with some workloads containerized and some virtualized. We're going to talk about some of the pros and cons of this, and ultimately it's not like one of these solutions will fit all of your needs no matter what, so you'll probably still end up with a mixed environment. But nonetheless, we'll talk through the pros and cons, some of our experiences, and some of my confusion with them. Don't get me wrong, I understand it from a very high level, but there are still some functional things that I get confused by, which I think gives containerization certain challenges. But hey, we're going to talk through all those fun things here. Before we jump into the topic today, we're going to thank a sponsor of the show, and that is Linode. If you're looking for a place to host many of the projects that we talk about on this channel and this podcast, Linode's a great place to do that, especially because there are a lot of things you may not want to run in your lab; you may want them public facing, or want some type of layer between you and your internal servers. Linode's a great way to do that: let it be on their public IPs and not your public IP. We have an offer code down below in the description for getting signed up with them. If you want to sign up, use offer code "the homelab show." Thanks again to Linode for sponsoring the show. All right, where do we start? I think we can start by just giving a quick rundown of some of the basics.
Not spending too much time on this, because I feel like the majority of our audience will know these things already; we're just trying to catch the few people out there that might be newcomers to homelab. And if you're new, hello and welcome. I'm going to go put back the picture of the wallet, because that's going to be a problem. I'm not going to talk about the buying-things problem, because that's a whole other story, but we're here to talk about containers versus virtual machines. So yeah, let's just talk about the basics: virtual machines and containers at a very high level. What the heck are they? We'll start there and work our way down the list of topics. So, virtual machines were something that was just so magical earlier in my career. They're so magic. They are, but it's less so now, because I think we take them for granted. When I started, if you wanted to spin up a new dedicated server, you would contact your server provider, order a server, rack it, and that server would then do that thing. And that server did only that thing. Well, hopefully; some people would have more than one thing going on, but that's another story. Later on, virtual machines came around and we could have more than one server sharing the hardware, basically running on a hypervisor that allows us to have individual servers that think they're real servers on real hardware. It's just an abstraction all the same, but you have a boot process and all the things you would normally have in a real physical piece of equipment. On some of these, you can even access the BIOS right in the virtual machine and change things around. So that's more or less a software abstraction of a server: physical hardware, but turned virtual, hence "virtual machine."
And then when it gets to containers, the abstraction is much smaller, because at that point it's almost like Linux as a runtime, which is one way I could describe it, although I want to be careful with that because it's not completely true. But if you think of a container as a blob of filesystem that's sharing the kernel with the host operating system, you wouldn't be very far off, because that's effectively what it is. We'll get into more detail about that shortly, but it's much smaller, and you can run many more containers on the same piece of hardware than you could run virtual machines on that same hardware. So it's a lot lighter weight. It's a lot lighter weight. And there are ways of sharing memory between virtual machines. I mean, one way that I like to describe the difference is that if you give a virtual machine four gigs of RAM, then it owns those four gigs of RAM. That's not completely true now, because there's memory ballooning and some more advanced things where memory can be shared between virtual machines, but just to keep it very simple, let's pretend that's the case. You give a certain amount of RAM to your virtual machine, whereas with a container, it's more of a memory limit than it is dedicating memory to that container. You're basically saying: hey, container, you can't use more than, I don't know, 512 megabytes of memory. The memory footprint is smaller too, so you can get away with that. Nowadays, the smallest VPS instance on pretty much any cloud provider I'm aware of starts at about one gig of RAM, because the operating system is going to need some of that. But containers share the kernel, so they don't have that same need. So at this point, most people might think: OK, containers are lighter weight, they need less RAM and all that, so why would I use anything else? Why would I want to waste hardware, waste RAM?
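To make that VM-versus-container memory distinction concrete, here's a rough sketch of what setting a limit looks like. The container name and the 512-megabyte figure are just made-up examples, not anything from the show:

```shell
# A container doesn't reserve RAM up front; you just cap what it may use.
# Docker example: limit the container to 512 MiB of memory.
docker run -d --name web --memory=512m nginx

# LXC/LXD equivalent: set a memory limit on an existing container.
lxc config set web limits.memory 512MiB
```

Contrast that with a VM, where the hypervisor carves out the full allocation up front (ballooning aside) whether the guest actually uses it or not.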
That mindset is kind of true, because you don't want to waste resources. But the next point I want to get to, which is another foundational point, is more of an opinion on my end, based on experience. It's more of a pet peeve, actually: for a lot of people out there, it's almost like once they learn how to use a hammer, everything's a nail. Let's containerize all the things, to the point where they force things that are not meant to run in containers, or don't run in containers well, to run there anyway. And they spend many more hours troubleshooting things just because the software wasn't meant to work that way, just to ensure that everything is in one tool, which I feel is the worst thing anybody can do, because we have different tools and different technologies that might fit one application better than another. I've seen applications that just don't run in containers, and usually it comes down to something the developer did wrong. I mean, they shouldn't be using hard-coded paths to everything; everything should be more dynamic, and they don't always do a good job with that. So you could argue that if an application doesn't work well in a container, it could be the developer's fault. But either way, that's why we have all these tools. We have virtual machines, we have containers. You can even run containers in a virtual machine. So you don't really have to separate them completely. My main opinion is: use whatever works best for whatever you're trying to achieve. However, when it comes to the smaller footprint of containers, sometimes you don't have a choice. Let's just say you have, I don't know, I'm just going to make something up, a four-gigabyte Intel NUC. That's what you have. That's what you have access to. As much as we would love to buy new toys, we don't always have disposable income for that.
If you have something like that, you could probably run a virtual machine on it, maybe two if you really stretch it, depending on the size of the virtual machines. But when it comes to containers, well, you could have a lot more running on a lot less RAM. So in that case, you should probably just try to use containers whenever possible. Your hardware just isn't going to stretch as far as it might on a dedicated server with more memory; not that the NUC isn't a dedicated server, but I mean an actual server chassis. So sometimes the hardware that you have available might just make the choice for you, in which case, well, that's all there is to it, right? We could end the episode right here for those individuals, but we're going to keep going for everyone else that doesn't have that limitation. Right. Now, to kind of summarize things: with any virtualization, the hypervisor emulates an entire computer, for simplification. But with containers, and part of the reason Jay said the application has to have some compatibility and be designed for it, is because whatever you're using for your containers, and we're talking more broadly, not about specifics like Docker, we'll even go into the FreeBSD world, the containers share the kernel. This has some advantages. Those advantages are, of course, being lightweight, because we're not trying to rebuild and emulate entire hardware via an abstraction through a hypervisor. The downside is that if the tools or services you want to run are not compatible with that particular kernel, they're not going to work well in there. Now, that doesn't mean there aren't workarounds and ways to get dependencies set up. This is one of the challenges with TrueNAS Core when people ask about the jail system in it, which is iocage, a form of containerization in BSD.
It becomes a challenge if something was more natively designed to work in Linux. But if you grab the dependencies and put them in there, you could probably get it glued together and working. And as software progresses, more and more things are offering a containerized install as part of their install process. And anything you can run on bare metal is pretty much guaranteed to work in a virtualized environment, for services at least, not necessarily things that need to interact with hardware. Then one more step further is containerization. It's pretty well documented, Docker being one of the more popular systems out there, so you can run any of those services provided they have a Docker image. And of course, Docker Hub makes those images available, which we will address today, because it's an interesting aspect: Docker is not just containerization, it's also combined with Docker Hub and being able to quickly pull images instead of building your own. So they both have their merits and they both have their use cases. Absolutely, and that's exactly it. You hit the nail right on the head, because everything has its place. When you bring your car to a shop, the mechanic's not going to use the same wrench on every single part of your car and try to force that one tool to work everywhere. They're going to use whatever tool is the best fit for whatever they're replacing or working on. And when it comes to IT people, it should be the same. You could evaluate software in one; maybe it works great there, maybe it doesn't. You could try it in a container and see if it works better there. Ultimately, just test it, right? That way you know. Now, there's one situation that is more enterprise. I'm not going to spend too much time on this because it's just an aside anyway.
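As a quick sketch of that "pull instead of build" workflow being described, grabbing and running a published image from Docker Hub looks roughly like this; the nginx image and the port numbers are just illustrative choices, not anything specific from the episode:

```shell
# Fetch a prebuilt image from Docker Hub rather than building your own
docker pull nginx:stable

# Confirm it landed locally
docker image ls nginx

# Run it, publishing container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx:stable
```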
But sometimes the application developer doesn't support containers. Now, granted, I didn't say the application can't work in a container. I have seen applications that work plenty fine in a container, perfectly even. But with some developers there's a stigma in some places, because they're just not keeping up with the times, where if they find out you're running their software in a container, they're going to say: yeah, we're not going to support you. That's not supported. You need to install it on an actual operating system, not a container. Call us back once you've done that; until then, we're not going to go any further. And I've seen that happen. Now, I'm not saying that I necessarily agree with that mindset, but then again, it's up to the vendor what they support. That's going to be more for the enterprise side of things. I mean, how many of us have support contracts for the stuff we run in the home lab? I'm going to guess not very many. I know some of you do, but I just don't think that's as common. Yeah, I wanted to make you aware of that just in case you're taking this to work with you. You learn it at home, like a lot of us do, then you take it to work. And if you get a container working in your home lab and get it working at work, well, great. But if someone on your team calls for support, yeah, that's going to be a little awkward. So just keep in mind that can happen. The next thing is talking a little bit about some of the individual technologies. A lot of these we've covered multiple times on the YouTube channels and in this podcast; you can listen to other episodes to get more of a background. When it comes to virtualization software, the popular one is obviously VirtualBox if you're running it on your laptop or desktop. There's also a way to run VirtualBox as a headless server as well, if you're familiar with that. I did videos on that years ago.
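For anyone curious what headless VirtualBox actually looks like, a minimal sketch with VBoxManage goes something like this; the VM name, memory size, and the bridged NIC interface are assumptions for illustration:

```shell
# Create and register a VM entirely from the command line
VBoxManage createvm --name "testvm" --ostype Ubuntu_64 --register

# Give it some RAM and bridge its NIC to the host's eth0
VBoxManage modifyvm "testvm" --memory 2048 --nic1 bridged --bridgeadapter1 eth0

# Boot it with no GUI attached -- the headless server use case
VBoxManage startvm "testvm" --type headless
```

You'd still attach a virtual disk and install media before that last step; this is just the shape of the workflow.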
I haven't played with it in a while, but there's a tool, it might still be around, called phpVirtualBox that puts a web interface on top of headless VirtualBox. Yeah, I can't remember the name of the one that I've tried out, but they exist. So ultimately, VirtualBox is more for your laptop or desktop, to evaluate software and test things out. Developers use it like crazy. But you could also use it as a headless server; there's nothing wrong with that. Then there's XCP-ng, which you cover a lot on your channel and we've talked about here, and Proxmox, which is what I use. As another aside: I like them both. A lot of people are team Proxmox or team XCP-ng, but I like them both, honestly. I mean, I wouldn't be very put off if I was using XCP-ng instead of what I have, which is Proxmox. However, I think the ability to spin up containers in Proxmox by default is just the tiebreaker for me. But they're both great pieces of software. No, that's a great feature that they have on there. There's no good equivalent in XCP-ng; you can find some old history, but it's just never been well developed, and container management has been kind of left stagnant in XCP-ng. People ask me about it from time to time, as they'll find some old documentation about Xen and managing containers, but it's old documentation, and no one, that I'm aware of at least, is actively developing that anymore. So if you want that mixed environment, it's LXC containers in Proxmox, is that correct? Yep. And it's weird, it's pronounced "lex-ee," but I might say L-X-C as well out of habit, because before I heard it spoken, I always said it that way. Yeah, but that is correct. So if you want that mixed use case, Proxmox is definitely the solid choice for that. Yep. Let's talk about some of the container technologies at a high level. Now, there was a comment in the chat room.
I think it might have already scrolled by, but somebody mentioned snap packages, and I just want to touch on that real quick, because there's a concept of universal apps in Linux. They are not containers, and I'm not saying anyone in the chat room felt they were; someone just said "snap package" and it reminded me. Some people might think they're containers. Now, they're container-like in the sense that they have all the paths and all the dependencies built in. They're essentially a way to install applications on Linux workstations and servers where you install one thing that has all the dependencies built in. It's more for helping with application management than for the things that Docker or other container solutions would help you solve. But I just wanted to throw that out there: they're not the same. There's overlap, yes, but they're not quite the same. Now, moving on from there, let's talk about the actual container technologies. Docker is going to be the most common. It's kind of like, at least here in the United States, if we're not feeling well, we say "I'm going to go grab a Kleenex," but it's not necessarily Kleenex brand. Kleenex is the brand, it's a tissue, but we call it the same thing. Similarly with containers, Docker has almost become the generic term, and sometimes it's used incorrectly. I think I even use it incorrectly as well, where someone says "I'm just going to run a Docker container" and then spins up an LXC container, because they think it's one and the same, or they don't necessarily think that, but it's just how the verbiage works. But Docker itself uses containerd as the runtime. The runtime is what makes the container happen, what runs it, effectively. Docker is a way to manage containers. Now, they used to be one and the same; there was a split. That's a long story.
But nowadays, the recommended approach if you want to run Docker containers, and there I go using that term, is that you're going to install something like containerd, a container runtime, to run that container. So I just wanted to get that out of the way. Now, Docker containers are absolutely the most popular, but like I mentioned a few minutes ago, they're not the first. I know LXC containers were earlier. "Linux containers" is what LXC stands for, if anyone was curious. So LXC containers were there first; Docker just had much bigger marketing behind it. It's like you put a term on something that already existed for a long time, and all of a sudden it becomes super popular. And what a container is, is basically an abstraction of an application from the rest of the operating system. And the term is right there, I'm trying to make it come out: cgroups, there it is. Using cgroups, it helps isolate the process from the rest of the system. That's one of the great things about containers: they're isolated from the rest of the system. Virtual machines are as well. And as long as no one finds a CVE that allows an escape of the sandbox, the application is completely contained. So we touched on Docker containers, and you also mentioned Docker Hub and all the things wrapped around it, which is pretty cool, because if you want to run Docker containers, you'll have easy access to just about any container you could think of. You want to run a container based on Ubuntu? Go ahead. nginx? Fine, no problem. On Docker Hub you'll find no shortage of containers that you can run. But how does that differ from LXC? When we get to LXC, the concept of containers seems a lot different, because with Docker there are layers, right? So let's just say you download an Ubuntu container, and then you install nginx. When you install nginx, that becomes another layer.
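If you want to see that cgroup isolation for yourself, one rough way to poke at it is below. This assumes a cgroup v2 host where Docker uses the systemd cgroup driver; the exact path under /sys/fs/cgroup varies by distro and configuration, so treat it as a sketch:

```shell
# Start a container with a hard memory cap
docker run -d --name capped --memory=256m alpine sleep 300

# The limit shows up in the kernel's cgroup hierarchy for that container
CID=$(docker inspect -f '{{.Id}}' capped)
cat "/sys/fs/cgroup/system.slice/docker-${CID}.scope/memory.max"
```

The number printed is the byte limit the kernel enforces; the container's process is just a regular Linux process living inside that cgroup.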
And then if you do apt dist-upgrade and pull in all the updates, that's another layer. Every change you make is like a stacking layer. LXC containers are not like that; they're more like virtual machines. And that's one of the things I like about them, because the networking is easier to figure out on LXC containers. You can actually set one up to get an IP address from your DHCP server. It can present a MAC address to your pfSense, OPNsense, or whatever it is on the other end that provides IP addresses for your network; it can grab one, and you can communicate with it over the network just like you can anything else. I'm not saying you can't do that with Docker, but it's more or less built in with LXC. With Docker, you need to put a load balancer in front of it if you're not accessing it on localhost, for example. I would say LXC containers are more of a virtual machine approach to containers; you get a lot of the strengths of having a virtual machine. You can log in to an LXC container. If you access it, you get a login prompt, you log in, and then you're using it just like you would a dedicated Linux instance. So I really liked that a lot about it, because I could just expose it right to the firewall, it gets an IP address, and there you go. I can send traffic to it, no problem. Port forwarding, no problem. With Docker, again, you can do those things, but you're going to have a little more work to set that up, and there are some turnkey solutions that'll help you with that that I won't get into. Now, another technology that I have not used but have been wanting to try out, so I just felt like I needed to mention it: it hit my radar a couple of years ago and I still haven't had a chance to look into it yet. It's called Kata Containers. That's what the technology is called.
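That VM-like LXC workflow can be sketched roughly as follows; the bridge name br0 is an assumption about your host's networking, and the image alias follows the standard Ubuntu image server:

```shell
# Launch an LXC container that behaves much like a small VM
lxc launch ubuntu:22.04 web

# Attach it to an existing host bridge so it presents its own MAC
# and your firewall's DHCP server hands it a real LAN address
lxc config device add web eth0 nic nictype=bridged parent=br0 name=eth0

# See the address it picked up
lxc list web

# Get a shell, just like logging in to a dedicated box
lxc exec web -- bash
```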
And my understanding, again, I've not used it, but just from the marketing speak alone, what I gleaned is that it's a container technology that can utilize the virtual machine extensions of the host hardware. That's a big difference between containers and virtual machines in general, because virtual machines are going to need that virtualization support in the CPU in order to work. Containers aren't going to require that, but Kata Containers can hook into it, so you can get some additional acceleration capabilities there that you might not be able to get otherwise. Maybe that makes the difference for you. Maybe you have an application that you want to run in a container, you've tried it, it doesn't work very well, but then you find out maybe it just needs a little more horsepower or something. And of course, I'm making up a use case, because again, I haven't used it, but I just want to make sure people are aware it exists, and if anyone has a chance to try it out, I'd like to hear what they have to say about it. Now, one of the big disadvantages, and I've seen someone comment on this, and this is where I have struggled sometimes when setting things up in containers: the networking is a lot different. It's not impossible to solve, and I've obviously seen and know large companies doing very large-scale, wonderful, amazingly stable deployments with these and have it all sorted out. But coming from a holistic system that's either virtualized or just a simple install on hardware, I know where my networking is. I understand how whichever hypervisor I'm using handles networking in a very clear way, and because I'm just emulating a machine, I can treat it accordingly. Once you start tying all the networking together with the images and the different container tools, that can be a little more confusing, and that can be compounded by the different deployment methods.
And I have actually directly commented on this with the TrueNAS SCALE deployments, where even if you check the box that would make sense for host networking, I believe it's called macvlan networking in the Docker system they're using, I think I used the right term there, but essentially I want to give that image its own network space and its own IP address. That doesn't always work consistently, and there can be some challenges in getting it set up. And if you're going further upstream and using some type of external firewall such as pfSense to manage that, it can be a little more confusing when you want to manage things or do policy routing based on IP. So there are extra complexities. Now, these are not unsolvable or intractable problems, or reasons not to do it. It's a different learning curve if you're coming from the old-school world that Tom has been in for a long time, before all this technology existed. Right, right. And some of these containers will solve a really awesome use case, one that's not unsolvable by other means but is just such a great fit. One really good example of this, and this is going to impact a lot of you listening: we buy off-lease server hardware quite commonly, and sometimes you'll have an iDRAC card that requires Java. So that should mean you should probably install Java right in your browser, right? No, don't ever do that. Don't ever put Java in your browser; bad idea. So there's a container that exists, I forget the name of it, that basically contains Java and everything that's needed for you to utilize your iDRAC card from before iDRAC switched to HTML5, because later versions, as we've discussed before, don't have a Java requirement.
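For the "give the container its own IP" situation being described, Docker's macvlan driver is one way to sketch it. The subnet, gateway, parent NIC, and address below are all assumptions about your network, not values from the show:

```shell
# Create a macvlan network bound to the host's physical NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan

# Attach a container with its own LAN-visible address that an
# upstream firewall like pfSense can see and apply policy to
docker run -d --network lan --ip 192.168.1.50 --name web nginx
```

One known quirk worth flagging: by default, the host itself can't talk directly to its own macvlan containers, which is part of why this kind of setup can feel inconsistent in practice.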
But if your server has an older iDRAC card that cannot be upgraded, then you can use this container to access the iDRAC card without installing Java on the host machine and adopting all the security vulnerabilities, because the slogan of Java is "write once, exploit everywhere," right? I think it was something like that. But that's a great use case for a container, and it's great that someone made it available for people in a similar situation. They saw an issue, they solved the problem, they made it available for other people, and that's a good use case for it. So I just wanted to mention that exists. Now, to your point about host networking, you're gonna really love LXC containers, man. Let me tell you, because it's absolutely the virtualization approach to that. Now, in the example of the iDRAC container running on your local computer, at that point the networking doesn't matter, or I mean it does, but it doesn't, because it's running on localhost. Some people will install containers on their laptop or desktop just to run an app on localhost, and even though it's solving a different problem than snap packages, some people will actually use it as a universal app. Nothing wrong with that; if that works for you, that's awesome. But when you have a dedicated server running your containers, that's when the host networking comes into play and you have to figure that out, which could be a load balancer in front of it. Maybe you'll use something like Traefik in front of it. There are different solutions for this, built-in solutions as well, but I feel like there's a reason there are all these third-party networking solutions for Docker containers: they just do it better, they just do. And one concept we're not going to get into today is container orchestration, which also solves the networking side of things.
That's when you have something managing your containers for you, and the obvious example there is Kubernetes. That's the most common one anyone will run, and it'll help you orchestrate your containers: basically make sure a container is running, or that a certain number of containers are available out of a certain maximum if you want scaling, things like that. There are other resources, and that's a whole other episode. So yeah, go ahead, Tom. I was going to say there's also Portainer; we'll throw that out there. We're aware it exists, and other people have done videos on it. Our friend Christian Lempa from The Digital Life has done some videos on it, and I believe Techno Tim has a few videos on it as well. It's a great tool. I've actually been playing around with it, and I like it. It makes your Docker system a little more manageable through a nice web UI. Yep. So now, with all that high-level explanation out of the way, let's just double down on the topic: to container or not to container, that is the question, right? And I've already touched on some of these. If the application doesn't run in a container, and sometimes the only way you can know that is by just giving it a try, then run it in a VM. You shouldn't force something to run where it doesn't run well by nature of how it's designed. I mean, I'm not saying you can't get an application not meant to run in a container to run in a container, because if I tell anyone in the audience you can't do something, they'll do it and they'll let us know. We're well aware of that. But the question at that point is not "can you run it in a container?" It's more or less "should you?" Because let's just say you get it working; later on down the line there's an update to the application, and now whatever workaround you're using doesn't work, and you have to find another one.
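As a sketch of standing up the Portainer that was just mentioned, the commonly documented approach looks roughly like this; the port and volume name follow Portainer's own docs, but treat it as a sketch rather than a definitive install guide:

```shell
# Persistent volume for Portainer's own data
docker volume create portainer_data

# Run the community edition, handing it the Docker socket so it can
# manage the host's containers through its web UI
docker run -d -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
```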
There could be some weird issues with the application that might not be otherwise explainable. And that's where we get to the whole point that you shouldn't force something to run where it doesn't run. I would hope that every application could run anywhere and be more dynamic, but the world we live in is not that world. So that's the first consideration. Actually, the second consideration; the first is your hardware. If you don't have very much in the way of RAM, you don't have a choice anyway. Anything I'm about to tell you just goes out the window; it doesn't matter anymore. So, getting the hardware side out of the way, let's just assume you have an application that runs great in a container and runs great in a virtual machine. In that case, just throw it in a container, because you're going to have a smaller footprint; it just makes more sense. Unless you have so much disposable RAM that you just don't care, which is probably not most of you, I would use a container as the default if there's no other reason it should be in one or the other. So that's one consideration. Another one is web apps. Let's just say you want to, I don't know, show an Apache web page. There are very few circumstances I can think of where that would not work in a container. That's a great use for a container: a website, a blog, or even just a static HTML page. Containerize it all day long, because that's just going to be the best fit for it. If anyone knows of a use case where that doesn't work, let me know. But if you're just running nginx or Apache and trying to serve something, well, it's probably a container at that point. I think that makes the most sense. The other side of this is that you might have an application that runs only in Windows. It happens, right? So often we'll have a Windows VM and we'll run that app in there. Well, you're not gonna virtualize Windows.
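That static-site-in-a-container case can be sketched in a single command; the ~/site path is an assumption standing in for wherever your HTML lives:

```shell
# Serve a static HTML directory read-only from an nginx container
docker run -d --name blog -p 8080:80 \
  -v ~/site:/usr/share/nginx/html:ro nginx

# Then hit the host on port 8080 to see it
curl http://localhost:8080/
```

The read-only mount is a nice touch for this use case: the container can serve the files but can't modify them.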
Can you find a way to do it? Containerize Windows, I meant to say. Can you containerize Windows? No. Is someone in our audience gonna find a way? Probably. So again, I have to be careful how I talk, because somebody somewhere knows how to do it, but I'm speaking in a general sense. If you have an operating system and you want the full operating system experience, maybe you wanna run macOS in a virtual machine, I've seen people do it, or Windows obviously, that's gonna be a virtual machine, because when you run containers, it implies Linux at that point. Like I said earlier, it's almost like Linux as a runtime. Again, that's not completely correct, it's a gross oversimplification, but that's just the way it is. If you have an application that runs in Linux, then it's the most probable thing to get working in a container. But if you want the full Windows experience, it's gonna be Windows in a VM. If the application requires something at the OS level to be emulated, and I've seen this happen, then that application must run in a VM; otherwise it's just gonna throw errors all day long if you try to throw it in a container. The downside is sometimes it's just trial and error; you might not be aware that an application will work in a container until you try it. That's part of the fun of homelab. It's also part of the frustration as well. I would love to give you a formula for this in particular, but sometimes it takes just giving it a try and seeing how well it works, or how well it doesn't, and sometimes that will determine your next step. Absolutely. And someone pointed out, and this is another good point: if you need to manage kernel-level things, it's not an impossible task, but it can be more problematic.
If you need kernel space for nftables, iptables, kernel modules, et cetera, there's probably a way to containerize it, but you're now getting things even more convoluted, and maybe a VM is better if an application has to run that way. There's no firewall that runs in containers that I know of, and that doesn't mean someone won't invent it in the future, or that someone already has. That would be like, you know, can we just build our own firewall running in a container to help manipulate all the other stuff? I mean, in some ways Portainer, because it has access to managing things, might use something like that, because you're using it to control the networking of the other images you have. But yeah, think about that too, because you want clear privilege separation for security reasons, so you have to really make sure you understand what privileges might cross over to the other containers, and where there may be a risk inside of there. And it was early in my homelab experience where I ran no virtual machines at all, zero, because the hardware that I had was four gigs of RAM, and I just didn't want to try to force virtual machines to run in that. I could have. I had somewhere between four and six different things I needed to run. So it was LXC containers, that's what I used. And it's funny, because I remember it was a Black Friday deal, which I normally don't go for, but Dell was selling some servers, brand new servers, never before used, and they were some ridiculous price, I think less than $200 brand new or something like that. I'm like, I'll buy one. And they came with four gigs of RAM, and I figured I'd just upgrade the RAM, and found out later the RAM was extremely expensive for that server. So I just stuck with the four gigs for a while, until I had a dedicated server of my own to build what's now my Proxmox installation.
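On the privilege-separation point, here's a rough sketch of what that boundary looks like in practice with Docker. The flags are real Docker options, but the image and commands are illustrative, and running them requires a Docker daemon:

```shell
# Default containers run unprivileged -- good for isolation:
docker run --rm alpine ip link   # can only look at its own network namespace

# Grant only the specific kernel capability a workload needs,
# rather than reaching for full privileges:
docker run --rm --cap-add NET_ADMIN alpine ip link set lo mtu 1400

# The blunt instrument, which erodes the isolation discussed above:
#   docker run --rm --privileged ...
# And loading kernel modules touches the shared host kernel -- that's the
# kind of workload that usually belongs in a VM instead.
```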
So everything had to be a container, because running a virtual machine was just going to be a problem. And I didn't run, and this is kind of the point I was trying to make, I didn't have any piece of software orchestrating these or anything. It was simply Ubuntu Server with LXC installed, well, actually LXD with Ubuntu Server, and I'll talk about that in a minute. Effectively, LXC containers just run by the command line. I didn't have any web interface or anything, that's just how I did it. If I wanted to start or stop a container, I'd just SSH in and start or stop the container. I didn't really feel I needed a web interface. I still don't, but it worked for me. And then later on I had some things I wanted to run in virtual machines, so, you know, I went that direction. Now, I did mention LXD, so I feel like I need to talk a little bit about what that is. LXD is a layer on top of LXC that is primarily developed by Canonical, aka Ubuntu. LXD can be run elsewhere as well; just because it's developed at Ubuntu doesn't mean it's only available there. But it gives you some additional functionality on top of LXC, like central storage. It can hook into ZFS, and I'm pretty sure Btrfs, if I remember correctly. So it's almost like LXC plus plus. It even uses the lxc command set to administer it, even though it's LXD, which is why it confuses people. But I actually liked that a lot. And that's what I was using on that server, LXD, because I actually wanted some of those features it provides, which I thought add a lot of value to LXC. LXC containers being almost the container equivalent of virtual machines, plus the LXD features, almost makes it seem like a complete virtual machine orchestration solution, but it's containers, which is really cool, and one of the reasons why I like it so much. So that's what I was running at that time. And that's LXD, yet another technology.
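For anyone curious what that command-line-only LXD workflow looks like, here's a rough sketch. It assumes LXD is already installed, and the container name is made up:

```shell
# Initialize LXD non-interactively (a ZFS storage pool can also be
# configured here, which is one of the features mentioned above):
lxd init --auto

# Launch a system container -- a full Ubuntu userspace, no web UI needed:
lxc launch ubuntu:22.04 web01

# Manage it entirely over SSH and the command line, as described:
lxc exec web01 -- apt update
lxc snapshot web01 before-changes
lxc stop web01
lxc list
```

Note that even though this is LXD, the client command is `lxc`, which is exactly the naming confusion called out above.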
It's like an alphabet soup of various technologies that people can choose from when it comes to these types of things. Absolutely. Ah, there's so much to talk about, you know, because we defined the things a little bit here, and yeah, it's more high level, and we can always go deeper into anything that we talked about in a future episode. Anything that we mentioned is fair game here. But at the end of the day, what I really wanna get people away from, and someone in the chat room said it better than me: some people try to pound a square peg into a circle-shaped hole, just because they're trying to force it to fit where it doesn't. As long as we don't use that mindset. And I've seen this mindset with the cloud. People will say, companies especially, everything needs to be in the cloud. I read that in a white paper, and the white paper never lies. So since the white paper says it needs to be in the cloud, we need to move every server into the cloud. Next thing you know, they move their NAS and everything into the cloud, and then they have a $20,000 a month storage bill. Congratulations, that probably wasn't a good idea. But wherever a technology fits, that's where we should use it. And that was some high-level information about where some of these things fit. Yeah, and I think of what Olivier Lambert said, and if you're not familiar with that name, I've tweeted out many things that he's said because he's the head of the Vates team that develops XCP-ng: he says that's why containers should be seen only as a way to package, and never to isolate, tools. And that's in reference to some of the comments people have made about security and things like that around them. Yes, it's a great way, and containerization has really helped push forward a lot of large-scale web application deployments. A lot of companies have to build things on the resources they have.
And if they were to run a virtual machine for everything, that would be kind of heavy, if you will. But by having a virtual machine that you're running, and hey, you can run that on XCP-ng, and then inside each of those virtual machines have a whole list of applications that build a series of essentially, you know, microservices that are all running and contained within there, you can switch them out at a deployment and control level. It's just a way of thinking about how all these tools stack together. At the end of the day, there may be a virtual machine running all of these, and then your containers live inside that virtual machine, where you have a whole ton of little applications running in these containers to make the most efficient use of it. And then of course they can coordinate with other nodes that are maybe geographically separate, and you can scale this out to a giant enterprise deployment. So quite a bit there. You're on mute, Jay. Ah, that mute button. So okay, I think we should probably bring up something that I almost forgot to bring up. We promised we would talk about this, so we've got to talk about Docker Hub, because that is a big deal. And I know this is probably going to be the most controversial side of it, but we're coming with experience, and we pay attention to the industry, and anyone that says, well, there's no problem because I've never had a problem... right, nobody has a problem until they do. So that comment never really helps, because we need to report on the fact that something can happen, and that is supply chain attacks. Now, before we get to that, about Docker Hub, I do wanna say it is amazing, though, because there are countless, and I forget how many, container images available there that you could use to create your containers with.
And the idea is, if you wanted to run an Nginx container, why install Nginx manually when there's an Nginx container you could pull down? It's already set up, and that could be a layer in your chain. That's valid, that makes sense. But supply chain attacks are very common, they happen, we see them in open source. There are reports of poisoned Docker images on Docker Hub, and I'm not saying this to throw Docker Hub under the bus, or to tell everyone not to use it, but just to keep your eyes on this, because if you didn't build the container yourself manually, then you don't know what's in it. And my opinion is, whenever possible, you should always build your own images. Never use Docker Hub or anywhere else, just use your own. I know that's gonna anger a lot of people, but that is my opinion. That being said, I do understand there is a use case for Docker Hub, and if you have tools and things that scan it, and you're on top of things, there's no reason not to use it. But when you manually build things, you know everything that's there and what version it is. There are no mystery things in there; you've built your own image, and that's always going to be better. But sometimes you don't have time for that. You have two weeks to get something deployed and you have to get it done as fast as possible, your boss said so. It doesn't matter what Tom and Jay say, you have to do what you have to do, and I totally understand that. But these types of things do happen, and we need to be mindful of that, because we shouldn't just trust every container that's in Docker Hub. If you're going to use an image, you should be looking into it and finding out what's in there, or better yet, just build your own. I feel like for homelab we like to build our own stuff anyway, for the most part, so I don't think that's going to anger too many people. But I do understand a lot of people are very vocal. Again, we come at you from industry experience. Well, and I come at it from the supply chain side, thinking a lot about it, and this is
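If you do pull from Docker Hub, a few commands help you vet what you're actually getting. These require a Docker daemon; the image and tag are just examples:

```shell
# Prefer official images, and pin by digest so the tag can't silently
# change underneath you later:
docker pull nginx:1.25
docker inspect --format '{{index .RepoDigests 0}}' nginx:1.25
# Then deploy by digest instead of by tag:
#   docker run -d nginx@sha256:<digest-printed-above>

# See how the image was assembled, layer by layer:
docker history nginx:1.25

# Or skip the mystery entirely and build your own, as suggested above:
#   docker build -t my-nginx .
```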
where things are really ramping up: supply chain attacks. And someone said I was fear mongering when I had posted a Bleeping Computer article, and probably they exaggerated things at some point to get you excited: 1,600 Docker images found with some type of malware, there was some number like that. And if you said, hey, percentage-wise, Tom, that's an extremely small percentage, I would say you're right. But the reality is, it depends on what container it is, and I'll throw UniFi out there. One of the questions I had, because people wanted to know why I wasn't recommending running UniFi in Docker, and I'm like, well, I don't think it's maintained by Ubiquiti. And then I looked and confirmed it was maintained by some individual. Awesome that that individual was maintaining it, but it doesn't matter if there's only a small quantity or percentage of Docker containers where they found some type of malware. It's the same thing that goes for things like Android apps: it's gonna be in the popular ones. That's where they usually target. Can I get it into a popular app that there's also no direct support for? And Ubiquiti is an easy example of this. People really like UniFi, it's a really popular product, and they still don't have, that I'm aware of, an officially supported way to deploy it via Docker. That doesn't mean it can't be done; it's actually an app that works fine in Docker, as people have proven. But if you're not the one building it, and you're just pulling someone else's Docker image, well, how well do you know that someone? Now, if you know that someone, you've reached out to them, and you say, I trust this person, then that's fine, and that's how each one of these should be vetted. I use Docker when I pull from official sources. For example, Bitwarden: that's how they do their application delivery. That's fine, it's from Bitwarden; it's not from someone who decided to start up a hobby project of maintaining it. And
it's fine if someone does, until that person gets an offer they can't refuse, or just doesn't maintain it very well, or someone else picks it up later because they give it away to someone. You always have to think about where those images come from, and that's just good supply chain practice. Now, we already have our own challenges in the supply chain, like what if someone takes over this npm or Python library? Those are challenges that exist anyway; they're only compounded by the Docker problem that I had posted about from Bleeping Computer. You can just Google, you know, compromised Docker images. This is just one of those things that really should be in people's minds: where did I get it? Because so many tutorials, because they wanna do it in the briefest way possible, are like, hey, here's how to get started with insert-name-of-something: docker pull this random person's Docker image and throw it into your stack, all right, now here's how we get the rest of it done. Think about that in a very concise way before you rely on something that's built that way. Another aside too, and you mentioned this, but just to expand on it: sometimes someone gets a better offer and moves on, or unfortunately some people get burned out, or whatever the issue is, and they're not maintaining it anymore. It means that you can have a container that no one got into and poisoned, so to speak, so there's no supply chain attack going on; it could just be something that is sitting there not being updated. And unless you're checking this regularly, you might just bring in a CVE on account of the software in the container not being updated, or something in the container not being updated. But to be fair, that same thing is true for PPAs on Ubuntu, for example, or Debian repositories, or any software repository, regardless of what it is. For example, recently, I think it was yesterday, there's an SSHFS video that hit the channel that I uploaded, and in that
video I'm telling people to be careful using this, because it's not maintained right now. I'm hoping that it becomes maintained, but it lost its maintainer, and that's a very significant piece of technology. So you shouldn't be using SSHFS to transfer anything that's, you know, confidential, because again, it's not being maintained. And I found that out by going to look at the GitHub page, and if I hadn't done that, I would not have known; I might still be recommending it on the YouTube channel. But we have to check these different things. So it's not specific to Docker Hub; all repositories have this issue, but you just have to be mindful of that. If it works and it has no security concerns, great, just make sure it's maintained. Ultimately, just know where you're getting your software from, and don't freely accept any container image that comes your way. Otherwise, you might as well just run a curl piped to sudo bash one-liner from a website without checking the script first, right? Just make sure you're checking things out and be responsible, and I think you'll probably be fine.
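That curl-pipe-to-bash point can be turned into a habit: download, read, verify, then run. The URL below is a placeholder, so this sketch fakes the download with a local file to keep the steps followable end to end:

```shell
# Instead of:  curl -fsSL https://example.com/install.sh | sudo bash
# make the download its own step (placeholder URL):
#   curl -fsSL -o install.sh https://example.com/install.sh
# Stand-in for that download, so the remaining steps actually run:
printf '#!/bin/sh\necho "installing..."\n' > install.sh

cat install.sh        # read what it actually does (use less for long scripts)
sha256sum install.sh  # compare against a published checksum, if one exists
sh install.sh         # run it only once you're satisfied (sudo if it needs root)
```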
Yeah, because it was in the comments here, I'll bring it up, because there are people offering this, and I have a video where I use a GitHub script, because I have talked to the developer of this particular script, that installs Xen Orchestra. But then there are also people who have Docker images of it. I've also built Xen Orchestra using the full detailed instructions from the official people, and that's gonna be your best way, unless you take the time to understand the GitHub script, or take the time to understand the Docker images, because you wanna know what's in there. Ultimately, it's just one of those things you really need to consider and think about. And it's fine for testing. I mean, if you just wanna build something in the lab, hey, cool, use a Docker image. If you're gonna run something in production, really take your time to vet it, and of course, for anything, look for the official way to get the packages. In the case of XCP-ng, there's an official way to get this: the Xen Orchestra appliance, with a full delivery method from the official people. It's just one of those things you really have to think about before you put things into production.
And sometimes these containers, again, there are just so many clever use cases, I just wanna make sure I'm clear on that, so we're not anti-container. But one fun example: I remember a friend of mine, and this was so long ago, I wish I had this Dockerfile still to this day, but he developed the first version of this, and we kind of worked on it together. It was just an internal thing, but he literally built a Docker container image that had Wine, Linux Steam, and Windows Steam running through Wine in the container, and it was like the ultimate Linux gaming container. Because if an app or a game wouldn't work on Linux natively, it would just bring up the Windows Steam through Wine, and most of the time that would work, and native games through Linux were great too. But it was very complicated, because you have to expose the video card and hook into that. It took a while, but it was so cool. Anyway, at the time we had a container image we could just pull down and then start downloading games and play them. So I remember going over to his desk and seeing Skyrim running, back before I knew that was easier to get working on Linux, and I was like, oh my God, how'd you get that going? And there are so many different cool use cases like that. I feel like containers just really created this renaissance of separating applications from the host OS, which is great, but then people take it further and abstract gaming through a container, which is just crazy. But that's just how smart homelab people are. Yeah, fun stuff. Very fun. I think we've covered all the topics here. I'm looking at some of the comments to see if we missed anything... but we did miss something, we missed that email address. Oh, the email address: feedback2022 at the Homelab Show. We created the email address, went to the trouble of setting it up, and then we've been forgetting to mention it. So that's an easier way to get feedback to us.
We're gonna start mentioning it at the beginning of the show next time, so that you won't have to wait till the end to hear it. But yeah, we do wanna hear from you, we wanna make it easy. We like doing the feedback shows, and we like the thoughts and ideas that come from the homelab people here that follow us: different things they wanna cover, different topics they wanna expand on. It's always a lot of fun. We love engaging with the audience, because hey, we're here to preach some education, talk about security and some better ways of doing things, and raise all of us up to be better people and better technicians at all this. I can't really help whether you're a good person or not, but I can at least try to make you a better technician. A better tech person, I guess. A better tech person, that's always our goal. Well, thank you for joining us. This was a lot of fun. Jay's got a brand new Bash series, and I can't help but, you know, plug it, because he... I should have plugged that, why didn't I plug that? Yeah, it's plugged. That's great. I mean, it's free, you just go watch it on his channel, a Bash scripting series. When my employees started watching it, I started watching it too, and I'm like, okay, I need to brush up, because boy, I had forgotten a couple of commands, and it's a good refresher. Of course, hey, it's at no charge to you. That's the Cyber Monday deal, we're giving it away. You don't need to remember an offer code; we don't have a sale. Jay's just giving it away for Cyber Monday, but that all got released. Giving it away, and it's gonna be free forever. It's not like it's up this week and then I'm taking it down. It's just there, and it's gonna stay there. A quick... I mean, there'll be links in the show notes, but a very quick way to get to it: if you type linux.video/bash1, or bash2, bash3, and so on, it takes you right to that episode.
It's super easy: linux.video/bash and then a number, and that'll take you to that episode number. So you can start at bash1, and if you hit bash1, the playlist will keep you going through the rest. That's kind of the system I have set up, so you can get to it pretty easily. It's linux.video, right? Yeah, linux.video/bash1, and so on for the rest. Perfect. All right, it'll be in the show notes. All right. Take care, everyone, thanks.