Welcome to the Home Lab Show, episode 60. We're gonna talk about OpenStack. And Tom's gonna be quiet a lot, because I'm gonna learn about OpenStack along with many of you. I've got Jay with me and Jayden. And Jayden is an OpenStack expert. Because even Jay's like, well, let's bring someone on for this one. We really wanna answer people's questions about it. We really want to dive into it. But I don't know that me and Jay are really completely as qualified as Jayden here. So welcome Jayden. Thank you, I'm happy to be here. Yeah, so how do we get started with OpenStack? I think we're gonna cover probably some of the topics and clear up some confusion and things like that about it, or what it is, what it isn't, and why it's pretty cool. I'm really excited about this one because, I mean, we've been asked about OpenStack a number of times in the chat room. I mean, we may or may not have answered those questions as they came up, but I've seen them and I've seen you guys ask about it. And I've been wanting to do this episode for a long time. So I'm really excited. I feel like I'm specifically like 20% qualified to talk about this. But I'm learning it right now. So I'll be more qualified later. But like Tom said, we have the expert, so. Yep, cool. So just to make sure everybody understands, OpenStack is an open source project that aims to give you the same kind of cloud experience that you can get from VMware or Google or Amazon, but on your own hardware and in your own data center. And it's all open source and has a really nice, vibrant community with people from small teams to big enterprises. It's especially popular among telecommunication providers. I think nine out of the top 10 telecommunication providers in the world use OpenStack to power their networks. So if you're using 5G, there's a really good chance that OpenStack is underneath it. But OpenStack is also very popular among other big companies and small companies.
I know Walmart.com and Walmart internally use OpenStack to power their infrastructure, and Alibaba Cloud in China is a really big OpenStack public cloud. In Europe, they have lots of OpenStack public clouds. If you're a T-Mobile customer, their parent company, Deutsche Telekom, runs a big public cloud using OpenStack. So you're probably. It's in the enterprise market for sure. Yes, yes, it's very popular. But I'm really excited too about OpenStack now because people are trying to bring it down market so that home lab users, small teams, people like that can use OpenStack and get that advantage. I want to say one of the important things is a lot of the cloud companies, they always want you to use their proprietary thing that gives you a lock-in to their cloud. Whatever that is, whatever the offering is from any of these large providers, lift and shift is not a term they like. They're like, how do we stop lift and shift? It's kind of like, I don't know that they actually say that in a board meeting, but something tells me they do. And they're like, how do we stop them? What thing can we give them that'll keep them from leaving? And I think what's really cool about things like OpenStack is it becomes a little bit hardware agnostic, whether you're running it local, as you said, in a home lab or in the enterprise environment on your bare metal or tying it to a series of other servers. It kind of eliminates that lock-in. There's a better opportunity hardware-wise to physically host this and get connectivity, and you can lift and shift it over somewhere else. Absolutely. I think one of the big things I like about OpenStack too is that it gives you a chance to look under the hood and see how things work and get hands-on with the infrastructure in a way you can't on public clouds.
And I mean, even if you look at a lot of the public cloud certifications and education, they're teaching you how to use the cloud in their particular way, their particular fashion, on their particular terms. But with OpenStack, you get a much more well-rounded cloud education, I think. And if you wanna go digging in the internals and see how the networking or the hardware or the storage all works, you can absolutely do that since it's open source and not locked down in any way. I think this is fun too, because this is where the enthusiasts in the audience we have, they don't like the word serverless because we all know it runs somewhere. Serverless is what's sold to people who don't wanna know how the magic happens. And we're the next people, we wanna be there, we wanna see how the magic happens from the bare metal on forward. Yes, absolutely. So just for folks who've been following along, I would say OpenStack is probably like an enterprise or more professional version of, like, Proxmox or that kind of solution. So it's definitely not for the faint of heart. I will say one of the big problem areas with OpenStack is that the deployment journey isn't super great. There are some really great tools that can, like, get you with a toy OpenStack or, like, a test OpenStack, but it's really hard to go from there to production and running it, running workloads, and having it be ready and not gonna be at risk of failing or going down. Yeah, well, we'll hopefully clarify a few things to get it set up, but I know that's one of the things: it's not like a distribution where you just load the distribution, here's your web UI, and you load your VMs, right? That's not what OpenStack does. It's getting better though. Yeah. I mean, I've seen there's multiple ways to set it up, which I'm sure we'll get into. Just as an aside, I've used OpenStack a number of times and I really like it a lot.
At the time I was using it, there were like three, maybe four different ways that you can install it, which was interesting, but it was also like, which way do I go with? And then in the chat room, someone brought up MicroStack. So it seems to me like it may not be like a distribution that's on an ISO you could just load and everything's there, although I wouldn't be surprised if somebody has done that, but there's different distributions, but in a different context, as far as like how it's distributed to the user. How do you feel about that in like HomeLab in general? Like is the footprint too high or is it kind of coming down, or is it more approachable now than it was before? So you can definitely get it on a smaller footprint now than in the past. It does require some careful configuration and consideration. And now if you're just looking to try out OpenStack and play with the OpenStack features and not deploy workloads, you can deploy OpenStack on a single machine using DevStack. That's the official OpenStack, like, testing and evaluation setup. And DevStack will set up all of the core OpenStack services in a single virtual machine. You do need a good amount of CPU and RAM, but I think eight to 16 cores and 32 gigabytes is enough to get you to a point where you could use the OpenStack API and evaluate OpenStack in that way. But that's definitely not for deploying workloads. That's for the people who develop OpenStack to make a code change, run it and make sure it still works and then keep developing OpenStack. So that's the bare introductory like, oh, this is what OpenStack looks like kind of solution. You can get that from the main OpenStack website. MicroStack is Canonical's take on that. So it's a little better than DevStack. You might be able to get a little more done. You might be able to do a little bit more with it than DevStack, but I think it's still newish and kind of in early days.
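As a quick reference point for the DevStack route Jayden describes, the whole thing is driven by a single local.conf file dropped next to DevStack's stack.sh script. A minimal sketch looks something like this; the passwords and host IP are placeholders you'd set for your own throwaway test box:

```ini
# local.conf for a disposable DevStack evaluation machine
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# IP of the machine DevStack is being installed on
HOST_IP=10.0.0.5
```

From there you clone the devstack repository and run ./stack.sh, and it stands up the core services on that one machine. Again, strictly for evaluation, not for workloads.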
I don't think it's quite ready for production and using it to run production workloads. But I understand Canonical is working real hard to get it there and to make it easy to manage OpenStack and deploy OpenStack. So if you've got an Ubuntu system and you like using snaps, definitely try it out using MicroStack. Should be a really easy way to get you set up and get you with OpenStack without having to worry about the configuration, because it's the configuration that really makes OpenStack difficult. Like you said, it's so powerful and it's hardware agnostic. You can run it in all these different ways, but because it's that way, you really have to know what you're doing and how to map the OpenStack configuration for the networking and the storage onto your exact situation. So I think most of it's in the fundamentals of getting the OS set up and ready for the OpenStack to sit on top of, making sure you have like the storage persistence and everything else, more or less, right? That's where a lot of the heavy lifting is when getting it set up. Yes, so I would say really the big pieces are the storage setup and the networking setup, so that all of your OpenStack services can communicate with each other and so you have storage for your virtual machines. OpenStack can store just directly on disk using like LVM volumes. A very popular configuration though is to use Ceph, which is a highly available open source enterprise cloud storage solution, as your OpenStack storage backend. That way, if you have a hardware failure, you don't lose data. Yeah, I think the important goal when you're setting these up, when you build these clusters, is having some level of storage persistence. Ceph distributed storage works well for that. Then the nodes don't matter. Once you get into the larger enterprise, the nodes are just irrelevant.
You have as many as you need to get the workload done, but if one dies, you just pull it out and kick off the install script and load it again and it reattaches to Ceph, because what's really important is your data and whatever you're running within there, so that wherever you spin it up within the cluster, your data persistence is the same and you can pick up where you left off if one of these nodes dies or has to be upgraded or replaced. Yep, absolutely. But I understand as great as Ceph is, this is a whole extra set of complexity and technology that you now have to understand and configure and manage on top of managing OpenStack. Yeah, and then the networking piece, it's just making sure that all the OpenStack nodes can communicate with each other and make sure that they know what's going on. I mean, you can simplify that by not running like a highly available configuration. So that's the experience I'm coming from, is running it like in production, sort of that enterprise kind of workload, and you can do all of that on three physical boxes with careful configuration if you run storage, control plane and compute all on the same box. But I know for like the HomeLab, you can absolutely probably do it on one box, or just have one box for control plane, one box for storage, one box for compute, or simplified in a lot of ways if you're not worried about hardware failure impacting you. Yeah, if you just want to get it up and running, you can run it on an individual server, not for production workloads, but if you absolutely want to learn, it's the same, it's just scaled down. Yes, absolutely. Now, if you want to be more adventurous, if you're wanting to do more than just the basic like, oh, this is an OpenStack trial kind of thing, there are tools to do that. Red Hat makes a, well, they have their own OpenStack platform. The open source version of it is called TripleO, which is OpenStack on OpenStack.
That one is probably more geared towards the enterprise, because the way TripleO works is that you set up an OpenStack cloud that is the undercloud and then you use that to deploy other OpenStack clouds after it. But TripleO requires baseboard management controllers; it uses IPMI, Redfish, those kinds of things. It'll very much streamline the setup of OpenStack for you and make managing clouds very easy, but it does have all of these extra pieces and you have to run an OpenStack cloud first before you can set up OpenStack clouds, which is a little bit of a barrier to entry for TripleO. So I think one of the things that I like about this in terms of HomeLab users is that I feel like it gives people a project to work on and learn from that they wouldn't otherwise have, a project where they have different servers that have to communicate with each other. Because, and I think this might be a confusing thing for some people, when they hear OpenStack, they think of OpenStack as one thing, just one component, one thing to install, kind of like htop, you install htop, which is a bad example, but it's not one package, one service. You have OpenStack consisting of multiple things, some of which you've already talked about. And then for the HomeLab person, if they wanna do it right and split the services off onto different servers, then they start to look at real enterprise issues that they have to try and solve, like getting them to communicate with each other, creating a backend network, like your management network and whatnot, and then a VM network and how they route together. And then it's flexible too, because if everything starts to run slow because you're just rolling out a bunch of things, you could just actually replace the compute server or add another one to scale it out. And I can't think of very many projects that give people that many examples of things that people in enterprise IT do pretty much all the time.
Yes, you are absolutely right about that for OpenStack. It's a modular kind of architecture. So even the core services are seven or eight different projects that each have multiple subservices. The company I work for, we use Kolla Ansible to deploy OpenStack, and that uses Ansible playbooks and some tooling to deploy OpenStack services as Docker containers. And I think in our deployment, that has the core services plus two extra services, there's maybe 70 Docker containers. So 70 different individual service processes that are running that make OpenStack happen. And that's everything from the database that keeps track of OpenStack's internal state, to the RabbitMQ messaging broker that all of the different services use to communicate, to the actual compute hypervisor service that runs your VMs and provisions the VMs and keeps them up and running. So there really is a lot that goes into OpenStack, and your choice of configuration tool influences what your resulting OpenStack cloud will look like. So TripleO on the one side, I think they install OpenStack using RPMs. Kolla Ansible uses Kolla containers, Docker containers from the Kolla project. There's another deployment that uses Ansible, called OpenStack-Ansible, that deploys OpenStack services as LXC containers. And then I think in MicroStack, it does it all in snaps. But I think if you do Canonical OpenStack, it does it as deb packages. So there's really a lot, like you said, that you get into, like an enterprise person would, trying to decide these things and see, is it this way or that way? Or how should I configure this? And now for spelling real quick, it's K-O-L-L-A Ansible for those of you that want to take some notes on the podcast here and listen and want to Google. It almost sounds like you said koala, but I was making sure I got the spelling right. So I was like, I want to make sure I get that right. Yes, K-O-L-L-A.
So I wanted to mention really quick some of the names of the components, because as an aside for everybody that's watching live, if it looks like, well, Jay's just Googling this as he goes, I am actually doing that. So I was Googling this because for whatever reason, I just haven't memorized the names of the different components, but I have it right here. So we have Nova, and these names are really cool. I mean, they're really good at naming things, right? So they have Nova, which handles compute. We have Glance, which gives you access to images, and Swift for object storage. The dashboard, the UI, is called Horizon. So my understanding is that means you could run, you know, the UI on whatever server you want or scale it out as well. There's Keystone for identity. Networking is handled by Neutron. We have Cinder for block storage, and it goes on. There's others here. So those are the names of the different kind of components, and it's very common, at least in my experience, I've seen people that are just testing it out. Like I said, they have a VM and they're running everything on there. I guess my first question before we talk more about these different individual components is, is that why the footprint is so huge? Because I know one mindset could be like, why would I run OpenStack on my server if it's going to run slow on a 32 gig machine? But if you think about it, you're running all these different things on one machine, and that's different than Proxmox, which is all in one. Am I correct on this? Is that why the footprint is so high compared to other things? Yeah, that's probably a fair assessment, that there is so much that OpenStack does and so much that it is doing under the hood. Because don't forget, OpenStack is built to handle thousands of virtual machine instances, tens of thousands of virtual machine instances, or a hundred thousand CPU cores of compute. Like those are the kinds of scales people are using it for.
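Just to keep those service names straight, here's the list Jay read off as a plain Python recap. Nothing here touches a real cloud; it's only the names and roles mentioned above:

```python
# Core OpenStack services named in the episode and what each one handles.
OPENSTACK_SERVICES = {
    "Nova": "compute",
    "Glance": "images",
    "Swift": "object storage",
    "Horizon": "dashboard / web UI",
    "Keystone": "identity",
    "Neutron": "networking",
    "Cinder": "block storage",
}

# Print a small cheat sheet, one service per line.
for name, role in sorted(OPENSTACK_SERVICES.items()):
    print(f"{name:8s} -> {role}")
```

There are more projects beyond these, but these are the ones you'll run into in virtually every deployment.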
And if you're running a hundred thousand CPU cores of compute, it's okay to spend maybe a hundred cores on your OpenStack control plane services and the storage services. So there definitely is a high cost. And this is one of the troubles, or one of the challenges, of running a hyper-converged environment like we do. I mean, honestly, I think at our default configuration, OpenStack and Ceph use between two and four CPU cores and 32 gigabytes of RAM. And that's on a low activity cloud, like a small cloud that maybe could run 20 or 30 VMs at once. So when you're getting into like the big, big, big, big clouds and you're running a thousand VMs or 10,000 or a million VMs, you're gonna see much higher resource usage from those services. Which again, if you're that big, it doesn't matter if you need to spend a hundred gigabytes or a terabyte on that. But for the home lab, not everybody has terabytes of RAM just in their pockets, ready to go and spend on control plane services. But if you do, let us know, because we wanna talk to you about how you came into possession of all that RAM and all those CPUs that some medium businesses may not even have. Right. Yeah. So it seems to me like the experience was, and it still kind of is, that with most things, there's one way to install it. You get frustrated, you just have to get through it. You have to ask questions, do whatever you wanna do or can do to get through it. But then at a certain point, it's like, well, this might be difficult for someone who's starting out. But with OpenStack, if the way you're installing it doesn't work for you and you gave it a fair shot, chances are, and this is true of my experience, you can try a different way to install it and it might even work better, might even be easier and quicker, because of the different ways that you can actually get it installed.
So I think that's kind of like a benefit we don't normally have. Yes, you're absolutely right. Whenever I started working with OpenStack, the team and I tried two or three different ways. I tried TripleO, it was pretty good, but it was too much for what we were trying to do. Some of our people tried deploying it just with the packages. Some people tried deploying it with Kolla Ansible, and that's what suited us best. We really liked Kolla Ansible and that's what we used. Oh, I do remember, speaking of the Kolla Ansible project, the people who are behind that project, they make a really nice all-in-one installer as well. It's called A Universe From Nothing. I've learned so much. I think I have that book on my shelf right now, actually. You can find the project on GitHub; it's done by a company named StackHPC. They do high-performance computing with OpenStack and target like the niche research academic market. But that was actually, now that I think about it, the best installer for OpenStack I've seen, and I did it on a laptop that had like four cores, eight threads, eight gigabytes of RAM, and it was good, it worked. It worked well enough to, like, give me that login screen where I could see, oh, this is OpenStack. And one of the other things too that I like about having so many different deployment options is that you can see how they think OpenStack should be deployed. So you can look at their configuration and you can copy off their notes and you can cheat a little bit with your own deployment and, like, figure out, oh, okay, they did it this way. So let me try that and, oh yeah, it works for me too. Great, that's a huge, huge help compared to just trying to grit your teeth and push through it if you've only got the one option for installation. I think you just described a normal conversation between a couple of homelabbers in the same room. Bouncing off each other, copying each other's notes.
Oh, this works better? Then just keep trying different things. And I think a lot of people's homelabs are a combination of what someone else was doing, what they thought up, and the best things they found from either side. So it actually kind of makes me think that it's something that might even be right at home for these people, for us. What's your preferred base OS when you're doing things like, well, you do this all the time, so what's your preferred base OS, I should say. Okay, so the company I work for is a CentOS shop, has been for decades. So we use CentOS for the underlying operating system to run OpenStack. Now, caveat that I understand it's been a turbulent time for CentOS. Yes. You are probably gonna find better support using Ubuntu. I think even Kolla Ansible and the Kolla project that we use has, I'm not gonna say necessarily limited support for CentOS, but I think they make fewer guarantees than they do for Ubuntu. So I would say overall, if you're not Red Hat, you're probably using Ubuntu to deploy OpenStack. And one other side of that too is that I know Ubuntu historically has gotten newer kernel features in faster than I would say CentOS does, and some of those newer kernel features have had improvements for virtualization, either in performance or just in new features, being able to pass through more. I'm sure AlmaLinux and Rocky Linux will come up more, but I haven't seen them yet. I think folks are still waiting to see how they shape up. One of the things you have to remember too is that most of these OpenStack users are big enterprises and they have really long cycles that they're gonna operate on, like a five year cycle for refreshing their cloud hardware or their core cloud operating system environment.
So you brought up an interesting point about CentOS, and I don't wanna make this episode into an anti-IBM, CentOS, Red Hat episode or anything, but I think everybody who knows, knows what happened. If you don't, I mean, you can Google what happened with CentOS and probably get at least two pages in Google, minimum, of people talking about this. But why I wanted to kind of elaborate more on that is because I think it's really interesting. I didn't really think about this particular aspect, because my impression, and let me know if you disagree, is that people that run OpenStack, just like a lot of other enterprise solutions, they're not really trying to have too much change. If it works, they want something long-term supported that they could keep patching, but they don't have to reinvent the wheel. And you were mentioning your company using CentOS, and has been for decades, but I do kind of feel like this is a side of it I didn't even think about: that the change with CentOS is causing change for people that really don't benefit from it. In OpenStack deployments, you're not trying to kind of play with it too much. If it's working, it's working, right? So that had to have been a pretty big thing in the OpenStack community to have to figure out how to deal with that. Yeah, I would say you've got it right. If you look at the OpenStack releases, actually, the release page, you'll find that there are tons of really old releases that are community supported. So OpenStack only supports three releases. There's the current release and two previous releases. That's what the community officially supports, but there's this long tail of OpenStack users and providers who are using four or five year old versions of OpenStack, but they are back porting the patches, they're bringing the code back, because they can't shut down their cloud to upgrade and do an upgrade to a new release, or they don't have the resources to test that, or it works, so why change it?
Or they don't need those new features. I mean, I even think some of the main platforms, like Red Hat's OpenStack Platform, lag a few major releases behind on their hosted OpenStack platform. And again, it's just like people say, it's just the enterprise. They wanna know exactly how it's gonna work and they wanna have plenty of time to make sure it's right, and they wanna, well, honestly, avoid changing as much as they can to avoid introducing new problems or bugs. So I think that's definitely a dynamic that's going on in the OpenStack community. Yeah, when you're running a thousand nodes at a time, you kinda want those thousand nodes to be predictably the same, and just because a new OS feature came out, it doesn't really lend itself. That's where people sometimes misunderstand. I work with some of these enterprise environments, and they really value support and stability and lack of change, especially at the base level, because it could create, well, an environment that is unpredictable and then hinders their service delivery of whatever it is they actually serve up on OpenStack for their product. So they're like, how would this disrupt business? Do we need the latest version of this software as the base OS? Do these new features drive anything that makes our experience to the customer better or worse? And those decisions, they have cascading effects that create problems at scale. Yes, Tom's got it absolutely right. I mean, if you've got a thousand servers that you have to update, that is a huge amount of work and time that you have to spend. Even if you have it well automated, you've gotta really carefully manage that process.
And I mean, honestly, I wouldn't be surprised if a lot of these enterprises will only consider this kind of move when it's time to do like a refresh, when they've reached the end of life, or if they're gonna open a new region, which also carries the fun cost of having to manage two infrastructure operating systems. And that's been another challenge for us on our team, is that we're like, well, we could switch to Ubuntu, but we still have to maintain the CentOS systems. So now we have two systems we have to keep up to date and running instead of just the one, because it's gonna be a couple of years before we can end of life the CentOS systems all the way. Are you considering Stream at this point? I mean, what are you guys going to do, if it's okay to answer that question, about how to handle the CentOS situation? Did you guys try out CentOS Stream yet to see if that was gonna work, or did you guys decide maybe, because I have, I mean, if you're on CentOS 8, then you're out of support, right? Or am I wrong? Or are you on seven? I think, so, we were unfortunately on CentOS 8. We made that upgrade before everything happened. And that was like a big deal going from seven to eight. People were like, great, we made it to eight. Perfect, we're locked in for the next decade. And then we weren't. I think we still have a little bit of time on CentOS 8. If we don't, it's a problem. Anyway, it's one of those things that we are trying to solve, but we haven't settled on a good solution, or a solution that we like. I know CentOS Stream 8 looks good enough. AlmaLinux is pretty solid looking. And I know, like, as a company, we have some previous relationships with CloudLinux. We use CloudLinux in other places on the InMotion Hosting side, and that's been a really good experience. So I think it's one of those things where nobody's wanted to say yet, all right, we're gonna do this, guys, we're gonna take this pain and we're gonna do it.
But I'm gonna guess when we get to that point, it's gonna be CentOS Stream 8 or AlmaLinux. I need to clarify and correct myself on something. Yes, support ended for CentOS 8 last December, but that doesn't mean you can't get security updates. It's just that you're not gonna get them from CentOS itself, but there are companies out there that you can pay, and they're making patches for CentOS 8. So I don't mean to imply your company is just sitting there as a sitting duck, not patching anything, that's totally not what I mean. But for the average person, right, CentOS 8 is end of life; for an enterprise, it might be a completely different story there. But that is a little turbulent, understandably so, but it's kind of like the reason why I tell other companies, always have a plan B distribution. You don't have to use it, right? But you just have to test your configuration on it. But getting back to OpenStack, there's just so many resources that we could point people to. One of the things I wanted to talk about too was how highly developable it is, development, I can't even talk today, right? It has an API, which all the cool kids and cool services have, but there are also Python libraries you can hook into. You could script it with Python, you could script it with Ansible, you could create an instance via the API and never even touch the UI of the actual OpenStack installation. So I do wanna point out too that it's more than just figuring out how things work together, networking, although that's a huge part of it. You could also practice your Python skills and your DevOps skills against it too, almost to the point where someone might think that it's too heavy to run in HomeLab, but I have a counter argument: not many things give you, in one installation, every single piece that people actually use in the enterprise day-to-day. Yeah, that's really well said, Jay.
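To make that API angle concrete, here's a small sketch of the request body you'd send to Keystone, OpenStack's identity service, to get a token over plain REST. The user, project, password, and endpoint shown are placeholder values, and in real scripts you'd more often let the openstacksdk library or the CLI handle this step for you:

```python
import json

# Keystone v3 password-authentication request body (placeholder credentials).
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",                # placeholder user
                    "domain": {"name": "Default"},
                    "password": "secret",          # placeholder password
                }
            },
        },
        # Scope the token to a project so it can manage that project's resources.
        "scope": {
            "project": {"name": "demo", "domain": {"name": "Default"}}
        },
    }
}

body = json.dumps(auth_request)

# Against a real cloud you would POST this to the identity endpoint,
# e.g. http://controller:5000/v3/auth/tokens, and read the token back
# from the X-Subject-Token response header for use on later API calls.
print(body)
```

Every other service call, booting a Nova instance, creating a Neutron network, works the same way: JSON in, JSON out, token in a header, which is exactly why it scripts so well with Python or Ansible.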
So OpenStack is written in Python, it is entirely done in Python, all of the core source code, so you can download it, you can check it out, you can modify it if you want, if you have something you'd like to do. OpenStack does have first-class support for popular automation tools and software. You can use Terraform to deploy your OpenStack resources, you can use Ansible to deploy your OpenStack resources; the community has providers and adapters for those tools. OpenStack itself has a command line interface, if you'd like to use the command line. At its core though, OpenStack is an API-first kind of platform, so they really heavily emphasize using the REST API to do things, and every single project has a Python library that you could pull into a script or a program to do some kind of automation or do some kind of task. And since, like I said, it's open source, if you've got something that you need to do or that you wanna try out, you can absolutely do it. Like one of the things that I've done with OpenStack is modify Horizon so that it can use OpenStack's native two-factor authentication with time-based one-time passwords. Oh, wow. We had somebody who needed that, so I read the source code, learned how Horizon worked, Horizon's just a Django application, made some modifications and hooked into the OpenStack system using the Python SDKs that OpenStack provides. So yeah, it's a really nice community of code to work with, because it's all pretty systematic, and the API documentation I think is good and well put together. And it's all using the same SDKs; like, the developers themselves use the same kinds of tools and code that they make available for other people to use, which is really nice because you know it's gonna be good quality, they're gonna really be invested in it. That's really cool. So something I think is gonna be a natural question that is probably gonna come up, if it hasn't already and I didn't just miss it, when it comes to getting started.
Now, it's often the case, obviously, that someone in the home lab is gonna have just one server. But considering that there's a lot of off-lease servers showing up on eBay for really affordable prices, if somebody was to get a few of those servers and, I don't know, maybe a 10 gig card or 25 gig cards to link them together, I kind of feel like with a decent budget, not a huge one but a small to medium budget, if you get the right hardware, you could probably have a more comfortable installation on multiple servers rather than running it all on one. So what would you say to someone who wanted to get started with OpenStack? Should they start with an all-in-one test first and then deploy it on multiple machines? Should they just go for multiple machines? What would you suggest for getting started? Like I said, I think the all-in-one on a single machine is great if you wanna just test stuff and not have to really get into the configuration. If you wanna do multiple machines, I think two machines is probably a good minimum for deploying OpenStack in a not highly available configuration. You could use the one machine to run the control services for OpenStack, you know, those eight cores and 32 gigabytes of memory that OpenStack needs just to exist, and then use the other machine for the compute and the storage. Because you don't have to use network storage; you can just have your VMs write directly onto the local disk. You can even use spinning disks if you're really on a budget. Just understand the performance is gonna be maybe not quite as good as SSDs, or especially NVMe if you've got that kind of money. But you should be able to do a minimum deployment. I think even some of the documentation that OpenStack has looks at that kind of deployment. So split it over the two boxes, one with the control plane, one with compute.
And the reason why I would make that split is so that if you provision too much of your compute, you don't ruin OpenStack. Because if everything's running on the same hardware, you have to be real careful with how much RAM you allocate for VMs. It's very easy to allocate all of the RAM on your system, and then now it's unresponsive, you can't delete the VMs because OpenStack's unresponsive, and you just have a bad time because everything's all seized up. Wow, yeah, that doesn't sound like fun, but that does kind of sound like something we run into with one solution or another; we usually lock ourselves out of something or, you know, extend something too far. So in some ways that might be like, oh, that's a challenge, I'm gonna totally tackle that. Not that challenge in particular, don't do that. Well, another thing I'll mention too is, you know, disclaimer, they're my publisher, but I'm not endorsing these books. I haven't even read them, I just wanna let people know that they exist, and I'm sure other publishers have them as well. If you go to the website of Packt Publishing, which is intentionally spelled P-A-C-K-T, last I looked, and it's been a while, they had a ton of OpenStack books there. So if that's a way that you like to learn... They had some books on programming, like Python-related books that are geared towards OpenStack. Probably make sure they're up to date before you buy them, and read the reviews, obviously, but it doesn't appear to me like there's really any shortage of information out there. You just have to find whatever resource works for you, whether it's training videos or reading books or both. Yes, you do have to be just a little careful that the information you're looking at is for the version of OpenStack you're using. One of the downsides that I've come across with the documentation, especially the official documentation, is that if you search, like, how do I do a thing in OpenStack?
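That failure mode, allocating every last byte of host RAM to VMs until the control plane starves, comes down to simple arithmetic. Here's a toy capacity check; the reservation numbers (8 GB for control services, 2 GB for the host OS) are illustrative, and a real Nova scheduler uses configurable reserved-memory and overcommit settings rather than anything this simple.

```python
# Toy illustration of why co-locating VMs with the control plane is risky:
# if VM allocations eat all the host RAM, the OpenStack services seize up.
HOST_RAM_MB = 32 * 1024       # 32 GB all-in-one host
CONTROL_PLANE_MB = 8 * 1024   # rough footprint of the OpenStack services
HOST_OS_MB = 2 * 1024         # headroom for the operating system itself

def can_schedule(running_vms_mb, new_vm_mb):
    """Only allow a new VM if it fits in what's left after reservations."""
    usable = HOST_RAM_MB - CONTROL_PLANE_MB - HOST_OS_MB
    return sum(running_vms_mb) + new_vm_mb <= usable

print(can_schedule([8192, 8192], 4096))  # True: fits with headroom to spare
print(can_schedule([8192, 8192], 8192))  # False: would starve the control plane
```

On a dedicated compute node you can skip the control-plane reservation entirely, which is exactly the point of the two-box split.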
You can easily get three or four different versions of how to do that from a decade's worth of releases. So you just have to be mindful that the information you're looking at is for the release you're targeting. In lots of cases it's not a big deal, usually it's pretty much the same, but you may come across documentation that's like, hey, do it this way, but that's in the new cutting-edge release of OpenStack, or a release that's 10 years old, and so it doesn't apply to your situation. Right, that's true of a lot of things, but I did run into that with OpenStack; I think it's especially true there. There was an instance where I was following a tutorial and it just wouldn't work, and I'm thinking, why doesn't this work? I mean, the person laid out every single step that you have to do, and I'm just copying and pasting commands, I'm not even going my own direction this time, I promise, and it doesn't work. And then I look at the version, like, oh, right, I'm on a different version of OpenStack than that individual was when they wrote it. And then I no longer feel bad about the people that comment on my YouTube videos that my tutorial doesn't work, and then I find out they're using a different version. So it all came back full circle, I guess. But it is true for sure, yeah. You definitely wanna make sure it's a match, and sometimes you might find that you set up OpenStack and then you look at another tutorial and it's for one version behind the one that you successfully installed, and then you have to translate. But as long as you make sure everything is for the version that you're running, that shouldn't be an issue. Yes, absolutely. I've seen a few people discussing this in the chat. So a couple of questions, maybe two-fold here. What is the protocol used for the OpenStack components to talk to each other network-wise between the nodes? Is it just using the VM network that's built on the backend to communicate?
Or does it already have some level of encryption in there? And second, when you build all the VMs, what are the networking protocols supported there in terms of managing IP addresses? I've seen someone ask if it uses BGP. So let's talk a little bit about OpenStack networking. Okay, so with OpenStack networking, you can do just about everything. That's probably one of the most complex or feature-rich pieces. As for the services themselves, by default they don't use any kind of encryption, they just communicate over the network. You can use a single network connection for all of the information, so for the control plane, for the compute services, for the storage services, you can do that, and I think some of the all-in-one or simplified configurations will do that. It's better to separate that over separate networks for security or performance. I know for our configuration, we have two ports that we bond together for the increased bandwidth and redundancy, and then we use VLANs to segregate the Ceph traffic, the control plane traffic, the internal OpenStack service communication, and then the actual networks for the virtual machines. So that's just one example; there's plenty of different ways you can do this. One of the most powerful things about OpenStack is that it has software-defined networking features. So you can set up your OpenStack cloud and give your end users the ability to manage their own networking for their OpenStack resources in a completely seamless, transparent kind of way, where they don't have to touch the networking hardware, which in most cases is trouble, because if somebody can touch the networking hardware, there are all kinds of malicious things they can do with that access. OpenStack lets you give people that same power without having to worry about the security pieces. But like I said, there's lots of different ways you can configure OpenStack.
For networking, you can have OpenStack manage all of the networking and run network nodes. In those kinds of configurations, OpenStack runs OVN or Open vSwitch to provide virtual switching and routing within the OpenStack space. You can also configure OpenStack to offload all of that functionality to the switches and physical networking gear. You have to have support in your networking hardware, like your vendor has to have that support built in; there are specific drivers that some vendors make for OpenStack to use. And what that lets you do is offload, so when your user sets up a software-defined network inside of OpenStack, it maps directly onto the hardware instead of OpenStack running OVN or OVS to make that happen. You can use so many different networking technologies and strategies, though. A lot of this comes down to the VMs and how the VM networking works. Like I said, we use just VLANs because we have some networking gear that can't do VXLAN, so we're kind of limited there. I think you can use GRE tunnels, and I think you can do some of the other fancier networking technologies or vendor-specific technologies. There are really all kinds of different things you can do to run OpenStack and facilitate traffic, and it's all transparent to your user; all your user sees is: make a VM, make a network, attach the VM to the network. That's their experience, and it just works. So basically they can hose themselves, but they can't hose other people. Yes, yes, exactly. That's a good thing, that's a real good thing. If you've worked in IT for any period of time, I don't think anybody could ever stop users from giving themselves a hard time, but if you could stop users from giving other users a hard time, that's always good. Yeah, you can tell Jayden here has worked with development teams where, you know, you need to segment them; they don't intend to harm each other, but someone will over-allocate something.
Ah, something like, some of the worst things imaginable were done with the best of intentions. I'm misquoting it, I'm sure, but something like that. And part of the networking is this way because not every networking use case is the same, and you need to find a technology that works for you. There are also limitations to some of them. Like I know with VLANs, you are limited in the number of VLANs; there's only like 4,096 VLANs or so you can set up in many cases. But that's if we're talking about the enterprise; otherwise, don't worry about it. If you're using VXLAN, it's like millions of private networks that you can set up. And for home lab stuff, this is not a big deal. But if you're at the enterprise and you've got 100,000 VMs with 10,000 networks, suddenly you have to worry about this kind of stuff and make sure that your system can accommodate it. So for sure, for the home lab, I would say you're probably great just running everything over a single network connection and you don't need to worry about it. And if you wanna try to get a taste of that enterprise life and you just hate yourself and your weekends, try to bring in those other technologies and set things up in those more complicated ways. As long as we're not asking people to set up an email server. Yes. You don't need to host mail anymore. But no, I think this goes to, when you're building these structures for the enterprise, it's reminded many people that the things you choose are directly related to the ultimate scalability of the product you plan to build. Yes. Yeah. Like, we never planned on VXLAN, but we plan to have more than... we're gonna give every client their own VLAN, until you realize, well, that means we have a cap on the number of clients we can have. Yes, absolutely. And I can tell you too, if you do need to use 4,096 VLANs, make sure your hardware can support having 4,096 VLANs assigned. Cause we learned that lesson the hard way.
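The numbers behind that ceiling are just bit-field sizes: the 802.1Q VLAN ID is a 12-bit field with two reserved values, while the VXLAN Network Identifier is 24 bits. A quick sanity check:

```python
# The VLAN cap vs. VXLAN's ID space is just bit-field arithmetic.
vlan_id_bits = 12                    # 802.1Q VLAN ID field width
usable_vlans = 2**vlan_id_bits - 2   # IDs 0 and 4095 are reserved

vni_bits = 24                        # VXLAN Network Identifier (RFC 7348)
vxlan_segments = 2**vni_bits

print(usable_vlans)    # 4094
print(vxlan_segments)  # 16777216
```

So "give every client their own VLAN" caps out at roughly four thousand tenants, while VXLAN gives you around sixteen million segment IDs to work with.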
Cause it'll just crash if it doesn't have enough CPU or RAM on the little networking motherboard. Yeah, that's a good point too. Your network hardware, that's probably not something that home lab people ever think about. There is a difference in some of the enterprise gear here. When they say max, everyone just assumes, well, it supports VLANs, so I can use all the VLANs. Not exactly. You can only have so many table entries. That segues into, and I'd love to find a good story about it, but from what I understand, as we've further segmented the ASNs, there have been problems with how big the routing tables have gotten on some of the large-scale enterprise equipment, cause we didn't plan to break the internet up like this. Nobody thinks they need more than 32 kilobytes of storage until they do. Yeah, until they do. Yeah, I mean, it's amazing to me that you could open a word processor, type nothing, not even a single character, save an empty document, and there's probably a good chance it won't fit on a floppy disk, even though you typed nothing. We used to store those documents all the time, and games themselves were at one point smaller than the average word processor document. And yeah, you mentioned storage, just wanted to throw that in there. It's just amazing how far we've come; either we're more powerful or we're wasting more, I'm not sure which, but that's a story for another day, I think. Yeah. So do you have any other thoughts about getting started with OpenStack? Cause I also don't want to scare people off and make anyone think they shouldn't try this because of the higher footprint. I think the higher footprint might be a value add because, you know, if you're trying to get a job in the enterprise, that'll help you practice for that. And I'm sure there are probably certifications out there that people could practice for too. But are there any other thoughts you have on your side, Jayden?
So the main OpenStack website has some really good resources, I think, for certifications, documentation, and so on, and some of the different vendors in the OpenStack space also have pretty good documentation. You can also join the IRC; that's where the OpenStack community lives and works, and you can listen in or talk to people. They're real friendly. I haven't had any bad interactions with OpenStack folks so far. And I mean, I'd say if you just want to set it up, then yeah, absolutely dive into the documentation. If you want to use OpenStack for a workload, if you want to, say, see if this is a good fit for work, try to find an OpenStack hosting company. There are tons out there. They're usually smaller, small or medium-sized hosts, often located in other countries, but they'll have virtual machines that you can get for a couple dollars a month. Many of them will give you access to the APIs, and you can try out and evaluate OpenStack as a user before you get into having to be an architect. Because for me, that's one of the biggest struggles for OpenStack and people trying to adopt OpenStack as a company: it's really hard to set up and configure. So many teams have tried to set it up and configure it, and they'll spend six months and millions of dollars and fail. And that's not a good experience for anybody, and especially for folks who want to use OpenStack. So if you like OpenStack and you want to pitch OpenStack for your job or your team to use, just make sure it's going to work with your workload first before you embark on trying to set it up and build it all yourself. Yeah, when there's a complicated project, it's important to hire the expert, because if you're just randomly throwing it all together, you may not get the best experience; you may get a misrepresentation of the product based on your first time setting it up. So... Yeah, I would also say too, adjust your mindset going in.
I'm sure a lot of people will disagree with me on this, but I'll stand by it: being confused and frustrated is the best state in which to learn, because that's when you're looking for a solution to a problem. Don't let frustration make you stop. I mean, if you are having fun and you see it as putting together a puzzle, a puzzle that's going to take some time to really understand how the pieces fit, I think you'll get a lot out of it, because it's challenges like these that really help you learn. And if you're just starting out in networking or infrastructure, don't be hard on yourself. You just don't have the same experience that somebody might have that's been doing it for decades, and that's okay, because they too were at your level at one point, not knowing what the heck anything is and how it fits together and why the heck won't this work, right? We've all said that, but absolutely, working through those types of things is what helps us learn. So if you approach it as, this is going to be different than anything that I've installed before, but it's going to help me learn, and push through any doubts that you might have, you'll learn a lot more going through it. So just be comfortable, take your time, try not to get frustrated, and focus on the fun aspect of it. And I think go into it with that mindset, knowing that, yes, it's a challenge, but it's not insurmountable. Many other people before you have succeeded here, so all you have to do is just keep at it. If there's still time, I can answer a few more questions that folks have. So one question I commonly get is about OpenStack and Kubernetes: why pick one, or why should I use one over the other? I would say, first of all, they're for two very different kinds of things. OpenStack primarily is for running virtual machines, so full kernel environments, and Kubernetes is geared for containers. So, two different kinds of things.
If you've got a containerized application that's ready to run on Kubernetes, by all means use Kubernetes; that's what it's for. But if you've still got something that's kind of an older-fashioned type of software that isn't well suited to a container or the kind of stateless architecture that Kubernetes encourages, then OpenStack may be a good fit for you. I will say one increasingly popular thing is running Kubernetes on top of OpenStack. The Open Infrastructure Foundation, which is the foundation that runs OpenStack and some other infrastructure projects, has been pushing a stack called LOKI: Linux, OpenStack, Kubernetes Infrastructure. So you run Kubernetes in your OpenStack virtual machines and use Kubernetes to orchestrate your container workloads. This way you get the advantages of both: you can easily scale your Kubernetes clusters up and down, and you can deploy your containers and scale them automatically with Kubernetes. And the Open Infrastructure Foundation, in their own research, found that 70% of their big OpenStack users are running Kubernetes this way on top of their OpenStack clouds. Wow. So yeah, you don't have to pick one or the other; you can have both and get all the advantages and benefits from using both. I know another question I saw is about monitoring for OpenStack. So there are some OpenStack services that you can use to get information about the state of the OpenStack system itself; for your workloads and your virtual machines, you'll just need to use whatever traditional monitoring tool you use. There's not anything special OpenStack is bringing to the table.
There are some vendors who have closed-source monitoring solutions for OpenStack, if that's your thing, if you need that kind of paid-for service for your company or team. But OpenStack itself does have some tools you can use to get in there and see what's going on inside of the messaging queue, or get alerted whenever the state of the cloud is this way or that way. Those are non-core systems that you have to set up and configure yourself, which again gets back to that enterprise use case. I will say one thing I really like about OpenStack is that it is highly configurable, and you can build a cloud that is exactly right for your use case instead of having to pay for and carry 20 or 30 features that you're never gonna use. There's also, I'll just throw it out there because I know we've not covered it, maybe one day on this channel, Prometheus: there's an integration for Prometheus to monitor OpenStack. I figured there was, and with a quick Google search I've seen it's even in the OpenStack docs, how to tie Prometheus to it. So yeah, there's plenty of open source things you can do to monitor it. Yup. Thank you. So those are the questions that I saw come through that I specifically wanted to address. All right, any more that you see, Jay? I think we've answered as many questions as we can in a podcast about this. Yeah, we have. I mean, I think to go the next level from here, we have to show it, right? We have to walk people through it, which, like I said, I'm learning it myself, so I'm sure fairly soon I'll have some OpenStack content on my own channel. But until then, I don't think we could do that in a podcast, because the majority of people are probably just driving or listening while they work and can't see us, so yeah. And I feel a little foreshadowing here, since Jay seemed to be asking a lot of questions.
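As a rough sketch of what that Prometheus hookup can look like: one common approach is a community exporter (often packaged as openstack-exporter) that talks to the OpenStack APIs and exposes metrics for Prometheus to scrape. The hostname and port below are placeholders, not defaults you should rely on; check the docs of whichever exporter you deploy.

```
# prometheus.yml fragment -- illustrative only; target host/port are placeholders.
scrape_configs:
  - job_name: "openstack"
    scrape_interval: 60s        # OpenStack API calls can be slow; scrape gently
    static_configs:
      - targets: ["openstack-exporter.example.com:9180"]
```

From there, the usual Prometheus stack (alerting rules, Grafana dashboards) applies just like it would for any other service you monitor.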
What if someone were to buy a few servers and maybe build something? I think it could be, because Jay could be that someone who might want to build something, maybe for a few OpenStack tutorials. And maybe I have a few R710s, yeah, a few R710s in my closet right now that are begging for a purpose in life, and it might be that purpose for them. And maybe as time goes on, there'll be another one on the way too, because I may have some older servers freed up, since we're building new stuff at my office. And I'll be in touch with you about that. Well, thank you very much, Jayden, for joining us and educating us on OpenStack. This was a lot of fun. I learned a lot about it, and hopefully our audience did as well. And as we said, even Jayden said it's hard, and he's someone who does this for a living. So don't feel bad. It's okay, we're all learning here, right? It's fine, it's a learning opportunity. If you don't get it in the first hour you start playing with it, keep trying, don't give up. It's not you, it's OpenStack. It's a big project. All right, well, thanks. Take care, everyone. As always, head over to the feedback section if you have comments and concerns, because me and Jay love doing the Q&A episodes where we follow up on some detail we overlooked or learn about new projects. And if you've had some experience with OpenStack, leave it in the comments down below. We're always learning from other home lab users' experiences. I know at least a few enterprise people were talking about having their own BGP routes, so I know we have some advanced users in here as well. So we'd love to hear from everyone. Thanks. Thank you. Thank you.