 Good afternoon, good evening, and welcome to this week's Ask an OpenShift Admin office hours. Thank you everybody for joining us today. It's not a beautiful day here, it's a nasty, rainy, cloudy, cold day in Raleigh, but it looks like your weather is doing a lot better there, Johnny. So how are you? I'm great, man. It's a beautiful day. It's cold, it's like Texas cold, so it's almost frozen thunder right now. It's like 48 or 49. But your power seems to be good, right? Yeah, for now, for now. You know, the wind can blow and everything could go south. Probably shouldn't make jokes about it, but once you've lived it, right? It's all fun and games. It was funny, I was watching the intro scroll by, and right as it switched videos, where it goes from the treadmill, Angela, I see you commenting about seeing yourself in the video, right as it was switching over to the intro for this stream, I was taking a drink of coffee and I'm like, man, I really hope I didn't just get these backwards, where now I'm going to be trying not to choke on coffee. I've got to be more aware. Yeah, it's episode 51, you'd think I'd be used to this by now. So hello, everyone. Welcome to the second to last, I think the fancy word for that is penultimate, stream for 2021. This stream today, it's just Johnny and I, we don't have a guest joining us. We do have our ever-present and always amazing producer, Stephanie, behind the scenes. So the person you're seeing from the Red Hat or OpenShift type of account, that's Stephanie. This week we're going to be talking about something that is near and dear to, I think, our hearts and a lot of other people's, which is: how do I create a cluster that is ultimately never going to be used for anything other than, like, testing something, right? 
Validating something, is this going to break or is this going to succeed, right? A non-production cluster, if you will. The way I have described this before is that with OpenShift, we kind of force you to always create a production cluster, or as nearly as possible a production-level cluster. So it can be a little awkward or difficult sometimes to create a cluster that is deliberately not meeting those expectations. We've got a whole bunch of things to go through, both official solutions and some unofficial solutions. I'll caveat this whole episode right now by saying most of what we're going to be talking about is, at best, marginally supported, because again, we're creating non-production clusters. We're doing this with minimal resources, we're doing this with minimal overhead. So just keep that in mind as you look through all of this stuff, and I'll try to remind folks of that as we go through specific things. So Johnny, I know you were going to talk about CodeReady Containers, right? Yeah. Yeah, so CodeReady Containers has been around for a while in different variations. We had Minishift for a long time for OpenShift 3, at least the late versions of OpenShift 3, and then CodeReady Containers with OpenShift 4. And if you were an early adopter of CodeReady Containers, your mileage may have varied as far as the experience was concerned, but I was messing with it yesterday on Linux and Mac, just kind of going around to see it, and it's really streamlined, man. I mean, it's awesome. So I'm anxious to see it, because we were chatting before the stream and you're like, oh man, this is great. 
They've made such huge improvements. And I'm like, well, I used it back in like 4.2 and kind of dismissed it, and haven't used it a lot since then, so I'm interested to see it. But before we get that far, let's talk about some of the top-of-mind topics for the last week or so. You and I brainstormed over the last 24 hours or so and came up with a whole bunch of these. So I want to be cognizant of our time here, but make sure that we hit on all of the things that are important. I'm going to start with announcements: late last week, OpenShift Pipelines, which, remember, is the CI/CD technology built on Tekton, released version 1.6. So let me find the right window here, I'll post this link in here. These are the release notes that were included in the internal email. If anybody can't see that, just let me know. I'm not logged into Jira and it allows me to see it, so hopefully you'll be able to see it as well. There's just a ton of information in there about all of the things that have changed or been included in that release. Pipelines is one of those things that is really, really useful when you use it. I've been trying to experiment with it some and figure out how to do things like automatically rebuild containers off of, you know, a source image or template image updates and stuff like that. So I'm trying to get smarter every once in a while. It'll happen eventually, right? It's just a matter of time. Right, it's persistence. Yeah. Exactly. As my now 14-year-old son likes to say, well, Dad, even a blind squirrel gets a nut every once in a while. That's right. 
Oh, another thing that I'm excited to talk about, because we're working on it for future episodes, either here on Ask an OpenShift Admin or maybe on one of the other streams. We were talking with some of the SRE folks, and the SRE folks apparently have released a whole series of ebooks on DevOps culture and SRE practices and all this other stuff. So I was really happy to learn that. I've only just started to dig into those, but I'm going to post the link here in chat. I would definitely encourage you, if that's something that's interesting to you, to go in and check that out. I feel like in 2021, DevOps the word, or DevOps the phrase, has started to lose a little bit of meaning, maybe. But what I looked through looked pretty good; it looked like they were touching on a number of important points. Yeah, that's awesome. So I like to consider SRE the evolution of DevOps, right? DevOps is getting things out faster, you know, quicker, faster, better, and then SRE is getting it into production and then keeping it alive while it's in production, so you don't have an event like yesterday. It's funny you say that, I was looking to see if there was a root cause analysis for yesterday's AWS incident, and all I found is just a tremendous amount of snark. Yeah. So it's been fun reading all of the tweets and, you know, seeing the comments on The Register article about it and all of that. So yeah, it's definitely important, right? SRE, DR, HA, right? Taking into account, and I'm not going to put on my marketing hat, the whole hybrid cloud, multi-cloud, all that. Yep. It's been interesting. I did send Christian a message speculating: is it DNS or BGP this time? 
So that is awesome. Because yeah, usually if it's not DNS, it's BGP, and if it's not BGP, it's DNS. Ask Facebook about that one. Let's see, just a couple of other things. So Johnny, you mentioned ArgoCon. I'm not familiar with that one. Yeah, so it's an Argo conference hosted by Intuit, and Christian Hernandez from Red Hat is going to be presenting there today. I think it's this afternoon, it starts at, I think, 10 Central, so whatever that is in the rest of the world. So once you drop from here, you can go over; if you register, it's all free, it's all online. It's just a bunch of GitOps practitioners going out and giving seminars on best practices for implementing GitOps in your environments. Yeah, we know that Christian Hernandez guy, heard of him before. Yeah, he doesn't have a stream this week, but it looks like next week is the next GitOps Happy Hour, which I believe will be his last one for the year, as well as our last stream for the year. Speaking of which, next week we will be joined by at least one, if not two, VMware folks to talk about a number of different topics. So please be sure to subscribe on whatever platform you're on so you can get alerts as to when that happens. You can also go to red.ht/livestream, and there'll be the streaming calendar there. There's a button in the calendar, down in the lower right-hand corner, where you can add it to your calendar so you can get all that stuff on there. We try to update that calendar regularly, like if a stream gets canceled or moved or anything like that, so you'll be able to know. Counting down the top 10 episodes of 2021 next week. So it's only like an every-other-week show, there's only like 10 episodes, right? That's all we want. 
No, I've been known to poke fun at Christian about only doing half the work that we do here on the weekly stream. So actually, I've got two more things to talk about here, and then I'll let you talk about the last one. I've gotten a couple of questions recently, and actually I was working with a customer yesterday on a POC and we were digging through how to configure networking. I felt like it was a good opportunity to talk about Andrew's philosophy, Andrew's perspective, on, with 4.9, what is the best way to configure both the primary as well as secondary interfaces. So my rule of thumb has always been: if it's the primary interface, and by primary I mean the interface that has an IP address on the machine network CIDR. In your install-config.yaml there is a stanza, networking.machineNetwork.cidr, and that represents the subnet where every node will have an IP address on that interface. That interface is used for, among other things, the SDN and all of the traffic that crosses the SDN, and all kinds of other stuff like that, right? So that one, in my opinion, is much more difficult to change after the fact. If you're using a static IP, you want to set that through kernel parameters or, you know, the VM options if you're deploying to VMware or something like that. And if you need to change it, the most effective way to do that, even though it's inconvenient, is to reload the node. And I say that because it is the safest way to do it. Sure, you can use something like the tech preview NMState Operator, but when you do that, effectively you're disconnecting from the network while NMState is doing its work, right? It disconnects from the network. It reconfigures things. 
And then, assuming all goes well, it reconnects to the network. And Petter, who is the lead engineer for NMState, he and I have had some conversations, and yes, there is intelligence built in there, right? It does a lot of checking and a lot of validation, and basically will roll back if it detects that something goes wrong. But my opinion is, and has been, that you basically have to measure that risk, because if something does go wrong, if it doesn't come back for whatever reason, you basically have to reload the node anyway. So if you want to be ultra conservative, ultra sure, reload the node. If you want to be slightly less conservative with it, maybe coordinate and drain the node and then do that in-place modification. And if it's a non-production cluster and you don't care what happens to it, then by all means, give it a test and see what happens. Fire away. Yeah. So for secondary-plus interfaces, there are a couple of different things that come to mind. One, there is a KCS article, and I should have dug this up beforehand and I didn't. Do I have, I don't think I have it in. Oh, I do have it in my bookmarks. I tell you, somebody made fun of me for using bookmarks the other day. Like, what is this, 2001? Well, is there something better out there? So there is a KCS, as soon as I find the right window here, that is not the link to the KCS, that's the redirect link. There's a KCS that describes how to configure additional network interfaces using machine config. And in my opinion, it's a little weird the way that it recommends it, which is basically: create a single MachineConfig that goes out to all of the nodes and has configuration for all of the nodes' network interfaces. The rationale being, NetworkManager will only apply the one that matches the interface that it has, right? So there's a UUID or a MAC address, you know, map or identifier inside of there. 
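To make that MAC-matching concrete, the profile such a MachineConfig ships is just a NetworkManager keyfile; a minimal sketch (the interface name, MAC, and addresses here are made-up examples, and in the real MachineConfig the file contents get base64-encoded into an Ignition storage entry):

```ini
# /etc/NetworkManager/system-connections/ens224.nmconnection
# NetworkManager only activates this profile on a node whose NIC
# matches the mac-address below, which is why one MachineConfig can
# safely carry profiles for every node's secondary interface.
[connection]
id=ens224
type=ethernet

[ethernet]
mac-address=52:54:00:AA:BB:01

[ipv4]
method=manual
address1=192.168.50.11/24
```

Nodes whose NICs don't match that MAC simply never bring the profile up, so the same file can land everywhere.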
So it'll say, well, I don't have this MAC address, none of my NICs have it, I don't need to apply this config, and so on and so forth. So that's one way to do it that is completely within bounds, you know, it's documented, it's supported today. It just feels weird to me. Another option, if you're comfortable using a tech preview feature, is the NMState Operator. And to be clear, NMState is only tech preview when not used with OpenShift Virtualization. So if you're deploying bare metal and you're using OpenShift Virtualization, it'll install NMState and you can use it fully supported there. Can we ask a question about privacy and encryption on OpenShift? Yeah, by all means. I should have started with, and thank you for reminding me, the premise for our show here. The Ask an OpenShift Admin office hours stream is one of the office hours series of streams here on Red Hat Live Streaming. What that means is we are here for office hours, right? If you've ever had a manager or a professor or anybody like that who had office hours, it's a time for you to come and ask us anything and everything that's on your mind. Of course, if it starts getting way off in the weeds, we might not be able to help, but that's okay, because we can always reach back into product management and engineering, right? Find all those folks and get those questions answered for you. So you're welcome to ask us those questions here on the stream. We'll do our best to answer them to the best of our knowledge, and if worst comes to worst and we don't know, Andrew is not afraid to say, I don't know, I'll admit that, and we'll find the answer for you. So yes, please, by all means, ask away. Our hope nine, I think, has been down this rabbit hole, discussing why OpenShift uses DHCP. Yeah, so that's a good point: DHCP is required specifically and always with IPI configuration. 
And that's simply because it doesn't know about the outside environment, if you will. Basically, the cloud provider is talking to the underlying infrastructure provider, so vSphere or OpenStack or AWS or whatever it happens to be, and it just says, hey, provision me a virtual machine that has these characteristics. We don't do any internal management of IP ranges to be able to do things like, hey, these 30 IPs I can allocate, nor does it do any external DNS management. So there is no ability to reach out and talk to, maybe you're using PowerDNS or Active Directory or whatever it happens to be. There isn't anything inside of OpenShift, in the cloud provider, that can reach out and say, hey, I assigned this IP to this node, create this DNS record. So we rely on DHCP to do that, right? We pass it information like the node name, but it uses DHCP and DHCP's dynamic DNS updates to accomplish that type of thing. With UPI, user-provisioned infrastructure, DHCP is optional, so you can use static IPs through almost all of those. And Our hope nine, please don't hesitate to ask further questions if you'd like. JP Day needs some help with the Open Data Hub operator installation: got it installed, but not JupyterHub notebooks, which is what my guys need, any articles or ebooks? I will have to research that and find out. I know the right person and we can certainly get you pointed in the right direction there. So feel free to send me an email, andrew.sullivan at redhat.com, and we'll follow up with the right team. Tiger here talking about fried turkey. So yeah, Tiger sent me a Slack asking where a good place was to get it, and my response was, I don't know, I've never had it anywhere except at home. I think here in the South there are some places that do make it, let's say Bojangles maybe. There are some places where you can get it, but. 
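In a home lab, that DHCP-plus-dynamic-DNS behavior the IPI installer leans on is often provided by something as small as dnsmasq, which answers DNS for the hostnames nodes send in their DHCP requests. A minimal sketch, with the domain, ranges, and router address as assumptions:

```ini
# /etc/dnsmasq.d/ocp.conf - dnsmasq hands out leases and serves DNS
# records for the hostnames it learns from those leases
domain=ocp.example.com
dhcp-range=10.0.0.20,10.0.0.60,12h
dhcp-option=option:router,10.0.0.1
# expand bare lease hostnames into the cluster domain
expand-hosts
# answer queries for the cluster domain locally, never forward them
local=/ocp.example.com/
```

This is one common lab pattern, not the only way to satisfy the requirement; any DHCP server that registers node names in DNS works.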
Yeah, I've only ever had it at home or at somebody else's house, but I love some fried turkey. So that's not a bad idea. Yeah, I feel obligated to always mention safety if you choose to do it at home. Take that stuff seriously, because you've got a pot of like three gallons of 350-plus-degree oil. So make sure you watch a YouTube video, you know, I'd say look at the dislike count to find the right one, but, you know, cheap shot. So yeah, definitely take the safety aspect seriously if you choose to do that yourself. I feel like I need to mention that because my wife, we've been married for almost 20 years, she still gets a little paranoid every time we do it. Oh yeah, mine too. Mine's out there, like she's got the little checklist. Yep. Do you have your PPE on? Yeah. Let's see, Our hope nine, thank you for helping JP Date, appreciate that. David: backing up and restoring a cluster. We back up the etcd database of a VMware UPI cluster, but what's the mechanism to restore this? So it should be in the documentation, but there's a script that you run inside of there that will basically restore that data. I should mention, Andrew's opinion is that etcd backups are not a full cluster disaster recovery solution. They're really good for scenarios where, like, an etcd node goes down and you need to completely recover it from scratch, and you want to pre-seed all of the data in etcd so it only has a little bit of data to re-sync instead of potentially gigabytes. Or if you have two or even all three of those etcd nodes, control plane nodes, and something happens to them, but you still have all the compute nodes, you can recover the control plane at that point. But yeah, if site A is now completely offline for whatever reason, and we want to move to site B, and site B is different, right? 
And by different, I mean different subnets, different virtual machines running the OpenShift cluster, all that other stuff, just restoring the etcd database there, it might work, but it's going to be messy getting there. And we talked about that with, who did we talk about that with? No, not Adele, Anand, when Anand was on for the etcd episode. So I'll see if I can dig up the episode. I think it was episode 21 that we talked about that. Yeah, so, Tiger off on a tangent about fair food. We missed the North Carolina State Fair this year. My kids were very disappointed, and when we went to the Chinese Lantern Festival earlier this week, they were very happy to see that they had, it wasn't fair food, but it was a similar type of thing. So they did get their fill in, I guess. Just reviewing the chat here real quick. Southern Maryland stuffed ham, now I'm going to have to look into that, thank you, JP Dade, because I was telling my wife that we should do a ham this year. What if I accidentally delete the default and OpenShift namespaces, do you know how to recover them? My first response to that, DMI three, would be: open a support case, because those folks are going to be best prepared for that. Off the top of my head, I don't know. Default shouldn't have anything in it except for a couple of services and some other stuff. It is a special namespace, but I would think that one would be a little easier to recover. The OpenShift namespace, on the other hand, is probably going to have some important stuff, if you will. So I would definitely suggest the first thing would be to open a support case if it's any kind of production cluster. If not, theoretically, if you recreate the namespace, the operators should recover the workload, because I don't think that there is, I guess I can look. Let me share my screen here. Window, I want this one. This is the single-node OpenShift cluster, but for the purposes of this, let's show all. I don't think there are any, yeah. 
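For reference on David's restore question: the flow in the OpenShift documentation uses a pair of scripts that ship on the control plane nodes. Roughly, run over SSH as the core user (the backup path is the one the docs use as an example):

```shell
# On any healthy control plane node: snapshot etcd along with the
# static pod resources needed to bring the cluster back
sudo /usr/local/bin/cluster-backup.sh /home/core/assets/backup

# Disaster recovery: copy that backup directory to one surviving
# control plane node, restore from it, then rejoin the other members
sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup
```

Check the disaster recovery section of the docs for your exact release before relying on this; the restore has several pre- and post-steps (stopping static pods, forcing redeployments) that matter.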
There's not really any workload that runs in the OpenShift namespace. So in theory, it should be able to recover most of those things. You can see all of these are kind of the defaults. A lot of the templates are in that OpenShift namespace, so if you were to do that. Is that what it is? Like, oh, so you get templates? Okay. Oh, I don't have the, I guess while I'm here I can install the console, or the terminal operator. So yeah, that's good to know. I would say, then, that in theory the samples or templates operators should recover that content if they see that it's missing. But I don't know for sure, I've never tested it. Helm deployment versus OpenShift deployment. So to me, that's a matter of preference. An OpenShift deployment, so let's come over here, and please correct me if I'm wrong here, is basically a Kubernetes Deployment object, right? So if I were to come in here, I'm defining the overall metadata associated with each one of these, and then I'm going through and defining the set of containers that are associated with this deployment, and I can do things like scale up and scale down. So I think probably the most analogous thing to a Helm deployment in OpenShift would be an operator-based application deployment. So let's say I was going to go over here and use MySQL, right? I don't know which one of these to pick on, all right? There's a MariaDB too. The ephemeral one. I thought there was some database that is one of ours, whatever. We'll pick on Crunchy, right? So if I come over here and deploy the Crunchy DB operator, I can then go in and, via a CRD request, instantiate a new Postgres database, which is sort of analogous to using a Helm chart to deploy an application. So what's the benefit? What's the difference with each one of those? 
So usually the way I describe this is that operators, a mature operator, I should say, has more complete or more thorough lifecycle management of that application. And I want to be clear here, so let's back up. If we look at this guy, right, you see here this capability level. The Crunchy folks have done an amazing job of making sure that their operator has a lot of functionality inside of it. A Helm chart kind of does the basic install and seamless upgrade type of operation, right? I can go in and do a helm install or a helm upgrade and it'll take care of that. But once we get into things like deep insights, the operator can look at metrics coming out of the deployed applications and then proactively or preemptively take action, you know, maybe scale up or maybe make adjustments or something like that. There is, if I do a quick find, a blog somewhere, I think, that talks about this. And the other cool part about the operator, too, is that reconciliation loop, right? It plays good cop, bad cop with state, you know? Once that loop goes through, it's going to reconcile what's supposed to be there, and it'll put it back if it's not. Yeah, so this blog, which is a little old now, it's about 18 months old, I'll post the link in the chat here. Daniel, who is the product manager for operators, OperatorHub, and Operator Lifecycle Manager, talks a little bit about this. There's also some material that is completely escaping me right now, some material out there that compares them against each other. I thought we had a blog on it, but I guess not. So I'll see if we can find that, and I'll dig up a link and include it when we get a chance. The short version is: Helm, absolutely great, no reason not to use Helm, especially with 4.9, where there is now, I don't need the tour, thank you. There is now in the, yeah. 
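As a concrete contrast between the two models (the chart, repo, and custom resource values below are illustrative, and the PostgresCluster spec is trimmed down, not a complete working spec):

```shell
# Helm model: the client renders and applies the chart; upgrades are
# something you run yourself
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql
helm upgrade my-db bitnami/postgresql

# Operator model: you declare intent in a custom resource, and the
# operator's reconcile loop creates, watches, and repairs the pieces
cat <<'EOF' | oc apply -f -
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 14
  instances:
    - replicas: 1
EOF
```

The practical difference shows up after day one: delete a chart-managed object and it stays gone until the next helm upgrade, while the operator's loop puts its objects back.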
We can add certified Helm charts directly from the interface here, right? So absolutely use Helm charts, there's nothing wrong with using them. Operators, in theory, have more thorough, more robust functionality when it comes to day-two operations with the deployed application. Definitely, that clarifies. And if you're looking to do GitOps-type things too, you can use Helm to deploy operators. All right, so I got completely distracted there and forgot which one of these I was talking about. Oh, network configuration. So yeah, for secondary interfaces you can use machine config. Machine config can get a little clunky if you have a bunch of hosts that need to be configured, so I always recommend folks use the NMState Operator. It is tech preview, so you just have to be willing to accept a little bit of risk if you choose to use that with a production cluster. The last thing I'll talk about is just a quick reminder that it has been, if the internet answered my question correctly, 51 days since the OpenShift 4.9 general availability release. It was mid-October when that released. What that means is that we are kind of entering the window where we should hopefully see 4.8 to 4.9 upgrades become available in the stable channel, hopefully soon. If you remember back when we did this with 4.7 to 4.8, the average is something like 55 days, 55 to 60 days. And remember, just like the stock market, previous performance is not a guarantee of future capabilities. So it may not happen, I'm just saying that we are hopefully close. So if you haven't had a chance to evaluate, to look at, check out, maybe do a test upgrade to 4.9 in the fast channel, now might be a good time to start evaluating that. And then you wanted to talk about Kubernetes 1.23, Johnny. 
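The NMState Operator approach for secondary interfaces is driven by a NodeNetworkConfigurationPolicy; a minimal sketch, with the interface name, node selector, and addressing as assumptions:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ens224-static
spec:
  # apply only to worker nodes; NMState reconciles each matching node
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: ens224
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.50.11
              prefix-length: 24
```

Per the discussion above, NMState validates and rolls back failed applies, but treat changes to anything near the primary interface as carrying reload-the-node risk.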
Yeah, so just putting it out there that Kubernetes 1.23 released yesterday. They're calling it The Next Frontier, and it's got an awesome logo: it's essentially the USS Enterprise. If you're a Star Trek nerd, then you'll definitely know what I'm talking about. It's got an overlay of the Enterprise, and it's the next release, or the next frontier. I'll put the link here in the chat. But yeah, the new version's coming out; we should probably see it in the next release of OpenShift if we stay true to the release cycle. There's a Google podcast episode that came out on it as well, so if you're into podcasts, you can check that out. That's pretty much it, I just wanted to make mention of it. Yeah, I haven't seen that one yet. And just a reminder that Kubernetes has switched to three annual releases, and OpenShift is in the process of doing the same. So just be aware, not that things are slowing down, but there will be more time between Y-stream releases. Yep, no, it's good, it's good. Because the CNCF umbrella is so big and Kubernetes has gotten so big that you need time to burn in. Yeah, yeah. What do they call that, the big CNCF landscape map that has the thousand icons on it? Yeah, it's insane. Okay, so that's all I've got for top-of-mind topics, or all we have, sorry. Please don't hesitate to continue to ask questions in chat. We love getting your questions and helping to answer those. You can always follow up also; if we don't have time to get to a question, or if you think of something after the stream, you can follow up. You can reach me at Andrew.Sullivan at redhat.com or practical Andrew on Twitter. I know Stephanie's been putting our contact information up as we go along. You're more than welcome to reach out at any point in time. So let's talk about non-production clusters. As we said at the start, right? 
A lot of times, especially as administrators, we like to test things, right? We don't want to break production. I saw a pithy saying recently: not everybody has a test environment, but everybody has production, and therefore everybody has a test environment. We don't necessarily want to make that a reality, right? I don't know about you, Johnny, but I remember many late nights and weekends because something went wrong. The one that stands out in my mind the most, the reason why I became a storage administrator, was one of our devs had a Perl script go wrong, and it got into a loop of writing out an error message and promptly wrote out roughly 10 terabytes of text before somebody caught it. And our poor storage admin at the time was just adding disks to the volume, to the NetApp aggregate, to keep it from filling up. That is awesome. That is awesome. Yeah, so if you remember, back in the day we had that VDI and we were sharing apps between the sites, and one of our guys had a rogue rsync script that went out and was just blasting all this content out to all the sites and blew up all the NFS servers. It was awesome. So I think if we were to ask our audience, every one of us has a scenario for something like that where, gee, if only I had tested a little bit more, or even had a test environment at all. And that was kind of the impetus for this. The secondary motivation is quite simply learning. It's sometimes hard to get started, to take just step one down the path of learning OpenShift and ultimately learning Kubernetes. So we wanted to talk about a couple of different ways to get moving with that. Tiger, I see you talking about single-node OpenShift. And yeah, with 4.8, 4.9, now we can do single-node OpenShift. That was something that was asked about for a long time. 
You know, I have always advocated that if you have the resources, which is basically eight CPUs and 32 gigabytes of RAM, the easiest way to get up and running is the Assisted Installer and single-node OpenShift, right? You go to the interface, you click the button that says I want to deploy, you check the box, and we've seen it, we've shown it on the stream before; I'll dig up a link for the blog post. It takes roughly 30 to 45 minutes depending on your hardware, and you can be up and running with a minimal amount of effort. The problem with that is, and Tiger rightfully highlighted this, it takes a fair bit of resources. Eight CPUs and 32 gigabytes of RAM just lying around is not something that everybody has. During the disconnected deep dive, as well as one of the other streams, I think it was the What's New in OpenShift 4.9, I showed how to deploy a single-node OpenShift from the CLI. With that, if you're not using the Assisted Installer, you can bring those resources down, although I would still not recommend less than about six CPUs and 24 gigabytes of RAM. It's still a fair amount of resources, especially for your company-issued laptop, which probably doesn't have that much extra, especially if you're using a browser, and everything in our lives is browser-based. So yeah, Tiger is saying he's got 16 gigabytes, and CRC, you know, it doesn't work yet. It's close, but it doesn't work yet. I thought I saw another one go by here: what's the most affordable OpenShift test machine setup? That's a matter of opinion, and Johnny, I'll say my piece and then I'd be curious what your perspective is as well. If your laptop, desktop, whatever you're using has the resources, that's of course going to be the most affordable, right? You've already got those things. 
Probably the next most affordable, depending on how long you're running it for, is something like a hyperscaler. You can go and get an EC2 instance from AWS for a couple of dollars an hour, if not just a few cents an hour. So if you're keeping it running for a day or two, or even a week, it's really not that bad. The next step up, and this is where a lot of individuals don't want to pay out of pocket for it, would be to use something like Equinix Metal, formerly known as, oh man, how did I just forget what they used to be called? Packet. Packet.net. You know, I think for $2 an hour you can get a machine that will run, not just a single node OpenShift, but a full, you know, virtualized OpenShift, a five, six, seven, eight node cluster. I used to do a lot of that, where I'd keep those clusters, you know, keep that one machine for two to four weeks. And yeah, it would cost a couple hundred dollars, but you know, I had a lot of power, a lot of capability in there without having to have a quote unquote home lab or anything like that. So it kind of varies. You can also, I'll say that you can also sometimes find old hardware on sale. There's a number of resources for finding recycled servers and stuff like that, where you can get, for a couple of hundred dollars, a server to put in your house. The problem is then you've got to have a server in your house, and I don't know about you, Johnny, my wife does not like the sound of servers in the house. Yeah, it's a thing, right? That's like a legitimate issue. Yeah, I have mine even tucked away in a corner upstairs and it's still a conversation starter. Yeah, yep, yeah. Yeah, for a brief period right after I moved from being a customer to being a vendor, I had a storage unit in the house, you know, a storage system. It did not go over well. I will leave it at that. Yeah, I've been like downgraded.
I had like some 2Us, and I went to the Supermicros thinking that, like, oh, they're just a little bit better than a Mac mini. They're so loud. Yeah, little fans mean loud fans. Oh man, they are loud. And for any Red Hatters, there is a thriving home lab user group, for lack of a better term, right? There's several folks inside of Red Hat who have expertise on how to remove those fans and replace them with quieter ones, and all kinds of equipment that I never would have thought possible. But, you know, anyways, I wanted to hear your opinions, Johnny. Yeah. So my first thought as soon as I read that was the machine you already own, right? Like, I mean, out of the box, if you already have a machine, then go ahead and use that. I mean, there's some other ones, like there's Linode that you could use for spinning up some instances, or DigitalOcean and stuff like that, where the droplets and stuff like that are pretty cheap. But really, everything else is what you're saying. I actually wrote down the Equinix one because I hadn't heard of that before, so I'm going to go check that out. But if you're looking for cheap compute out in the cloud that's reliable, then, you know, DigitalOcean's been good. And also, I think it's Linode, I think that's the other one. And yeah, they've both been pretty reliable for me. But I'm a big fan of DigitalOcean. I would say, you know, probably Linbit is another one that comes to mind. You know, if you have the accounts and stuff like that with AWS, Azure, Google, you know, of course you can deploy IPI or something like that. You know, all of those things considered. But if you're trying to use your corporate account, a lot of times they are permissions limited. You know, we talked about this before, in the disconnected deep dive, how things like VPCs are often tightly controlled by corporate security teams. So it can be hard.
So, and using a personal account oftentimes means you need to pay for it. So Cochreker has a C7000. That's a, wow, that's serious. Yeah. And yeah, is that 240 power? And then the Supermicros that I have are like 208 or 110 or something like that. Yeah. I mean, they're normal power. So like, real lightweight. Yeah. I saw somebody down below in the chat, I say down below, I think it's only down below for us in the Restream interface, I think everywhere else it's up above. Yeah. Oh, JP Dave, that was you saying 220. Yeah. That's another thing to take into account, right? Tiger mentioned heat and JP Dave mentioned power. It can be expensive to run those things. You know, in the winter it's nice to have a little bit of extra heat; in the summer you don't want extra heat. No. And regardless, you know, you're basically running a very inefficient heater year round, which is consuming a lot of power. So let's talk about some other options. So let's move a little bit past the infrastructure part. An old laptop, an Intel NUC or whatever the AMD equivalent is, old desktops, right? All those types of things. Most of them are going to have the CPU. You can probably, and I don't want to speak for anyone or anything like that, you can probably relatively easily add RAM to those systems. I would recommend, again, somewhere between 24 and 32 gigabytes as the minimum. Really, the hardest part is almost always storage. If you're deploying a single node OpenShift, or if you're trying to deploy a virtualized instance onto a single node, I wouldn't use anything less than a good quality SSD, and preferably an NVMe device. It'll just make that experience better. Trying to use spinning rust is just not going to end well.
You know, some folks I've talked with have had success with the consumer NAS devices, like QNAP and, what's the other one that I can't think of? Anyways, Synology, thank you. Where they'll have accelerated hard drives. So they'll use, you know, an SSD or an NVMe device to cache or to accelerate the hard drives. That can work, depending on a number of factors. Just be aware of all of those things and keep an eye on, you know, the latency and stuff like that, cause that's ultimately what impacts it. Had to run special power to it. Yeah, I know of at least one person here at Red Hat who has what would basically be considered a data center in their garage. So yeah, I am both envious and not at all envious of those people, right? I like hardware. Johnny, how much time did we spend together in a data center? Yup, I know, I know. I love server equipment, and you know, I used to love to rack and stack, cause I find it very relaxing to go in and organize cables. It doesn't take a lot of effort and you make things look pretty. But I can't imagine having it in my house. So. Yeah. And I like to keep the money that I make, you know. Yeah. Yeah. So one of the things, and I want to be cognizant of time, you know, we don't want to go too much over, even though we can go over today. So one of the things I want to talk about is, I think if you are running a production environment and you want to have something, you know, for kind of a test or a dev environment, be cognizant of what it is you're testing and how closely you need to replicate production. So for example, say I need to test updating from 4.8 to 4.9. Do you need to exactly mimic your production cluster? Maybe, maybe not. You know, maybe it's good enough to have only one or two compute nodes and only the most critical workload, you know, mimicked in there.
Maybe you don't need any workload inside of there at all. So, you know, kind of being cognizant of, do I need to exactly clone production, or can I have just a completely one-off thing? I would say, if you're talking about a full cluster, you know, three control plane nodes, two plus compute nodes, all that other stuff, trying to do or emulate things like updates with single node OpenShift is not ideal, because single node OpenShift does updates differently. So you can't expect it to behave the same. And of course, things like CodeReady Containers aren't going to be the same either. So, Johnny, before I jump too far ahead here, I realize I'm going down our list of topics, or a list of our notes here. I wanna go back and revisit CodeReady Containers. So, CodeReady Containers, from your perspective, what are the use cases? And I say that knowing that it's really intended for developers, you know, that's who we created and released it for, but for administrators like us, what do you see? Yeah, it's really about, you know, getting familiar with the different resources within OpenShift. You have a full, well, not a full API, but you have a lot of the API. So you can still look at cluster operators, you can deploy operators. So getting, you know, the feel of working within an OpenShift cluster. Because, you know, as an administrator, right, a lot of times it's not just deploying the cluster and then saying, okay, you've got it. You always get the question back, like, okay, well, how do I do this? Or, I'm having a problem with this. You know, so it allows you to have an API with, you know, a console, and it looks and feels like OpenShift. It's pretty awesome. And so you can create, you know, persistent volumes, you can run build configs that deploy apps and stuff like that. So I think that's the biggest thing, is that, you know, it helps put you in the driver's seat.
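For reference, that driver's-seat workflow with CodeReady Containers condenses to just a few commands. This is a sketch assuming the crc binary is installed on a virtualization-capable host; the project name and sample app are made-up examples, and the start flags are optional overrides.

```shell
# A typical CRC session, condensed (commands shown as comments since they
# need the crc binary and a virtualization-capable host to actually run):
#
#   crc setup                          # one-time host prep (virt, networking, DNS)
#   crc start --cpus 4 --memory 9216   # create the VM; flags are optional overrides
#   crc console --credentials          # prints the kubeadmin and developer logins
#   eval "$(crc oc-env)"               # put the bundled oc client on PATH
#   oc get co                          # cluster operators, just like any OpenShift
#   oc new-project my-demo
#   oc new-app httpd~https://github.com/sclorg/httpd-ex.git
#
# One nice constant: CRC always serves the console on a fixed local domain.
CONSOLE_URL="https://console-openshift-console.apps-crc.testing"
echo "$CONSOLE_URL"
```

Everything routes through the `*.crc.testing` domain locally, which is part of what `crc setup` configures on the host.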
So when you do get there, you know, it's gonna be the same when you get to production. So I'm curious, because you've piqued my interest when we were talking about it this morning. What does modern CodeReady Containers look like? Okay, hang on, I'll show my screen. All right, so, and then let me make this, I guess, can you see this? You might need to bump it up a couple more. You can actually make your, it looks like you can make your screen wider here. All right, and then- Your prompt cracks me up, by the way. Dude, it's, oh my gosh, oh-my-zsh. So if you're into it, go check out the profiles, it's pretty cool. So if I do like an oc console, or my gosh, come on, crc console --credentials, it'll give me everything that I need to log in. And so, yeah, I'm gonna blow this away after this. So, yeah, don't have to worry about anything, but this is all CRC. So if I do oc get co, for the cluster operators, just like I would in a normal OpenShift cluster, I can see that I've got a pretty good amount of operators. The install itself is pretty easy. You just go and pull down the tarball, unpack it. It's gonna spin up a virtual machine on your node. But then if I want to do oc project, favorite demo. I just posted a link to CodeReady Containers in the chat. And then oc new-app, and then, from a, how do I get something going in OpenShift quickly? Like, how do I prove that OpenShift is cool, or Kubernetes for that matter, right? How do I prove that this is cool? It looks and feels just like it. So the build config just ran, it's built, the build pod's done now. We should start seeing the deploy pods coming up, then the actual pod eventually. Oh, that's probably still running the build. So in the OpenShift 3 days, which I guess are still now, OpenShift 3.11 is still supported, we had Minishift. Yep. So Minishift was, you do a minishift start, or whatever the command was. It's been a minute since I did it.
And it would do, it talks with your local operating system to create a Hyper-V virtual machine, a KVM virtual machine, or on macOS, an xhyve virtual machine, I think. It deploys everything for a single node instance of OpenShift inside of there. CodeReady Containers is effectively the same thing, it's just OpenShift 4 centric. And where with OpenShift 3 it was a full-fledged OpenShift, just like you would find any other OpenShift, with OpenShift 4 and CodeReady Containers it's always been kind of stripped down. There's been less functionality in there, and what's there has really been focused on developer centric things. Yep. And the reason for that is quite simply because, until single node OpenShift was a thing, we didn't, kind of what I was saying at the beginning, we didn't focus too much on non-production OpenShift. We always force you to deploy a fully production-ready, or we do our best to deploy a functional, fully production-ready OpenShift instance. So I know the CodeReady Containers and the single node OpenShift folks have been, and sorry, I got distracted reading the text that you just sent, DMI3, my mouth and brain stopped cooperating. Anyways, I know that they've worked to identify those services, those things that can and do work or aren't needed, and stuff like that, in order to reduce that footprint. So CodeReady Containers, as I said before, I tried it out back in like the 4.2, 4.3 days and dismissed it, because at the time I think it was like four CPUs and 16 or 18 gigabytes of memory, which was kind of too much for what I wanted it to be. I wanted it to be like a new Minishift. Minishift was like two CPUs and four gigs of RAM. So I know that there have been some changes and improvements there, but I think it's still missing some things that we care about as administrators. So Johnny, I don't know, you're on the administrator view now.
So like, under Compute there, is there machine config and stuff like that? Yeah, so there's machine configs, there's machine sets. They're all set to zero. So let me find this machine set. So you see, it's zero, zero. So it creates the machine set. When I was reading up on it, I was thinking it wouldn't have any machine sets or any machines, but it does have the set. You can't scale them up. One thing about this is that there's no supported upgrade path. So if you try to upgrade, it's gonna say it's blocked. And then as far as resources, it's four CPUs and eight gigs of RAM minimum, and a 35 gig disk at a minimum. So it does still take some resources, but it's still significantly less than most other things. And on top of that, it takes like 10 minutes to install. Like, it's super easy. Yeah, so that will probably be the smallest footprint that you can achieve to have a functioning OpenShift. And probably the majority of things that we would want to do work, where you can kind of test things out, especially deploying operators, that's always a big one. You just have to be a little bit careful with which operator and how much resources it's gonna consume. And keep in mind, things like sandboxed containers or OpenShift Virtualization that require hardware virtualization themselves aren't going to work. Also, Johnny, when we were chatting before, you were saying that, because it's deploying a virtual machine, it requires hardware virtualization. Did you get it working with nested virtualization? No, I've been sitting here talking the whole time. So I'm gonna test it. I have a machine that I'm spinning up right now that I just need to go back and install CRC on. And then I can post any things that I find. But I think as long as it's got the flag, it should be good to go, because it's just doing a preflight check.
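That flag check is easy to do yourself before installing anything. Here's a rough sketch of the kind of preflight being described — it only approximates what crc actually checks:

```shell
# Rough approximation of the preflight: do the CPU flags advertise hardware
# virtualization (vmx on Intel, svm on AMD)?
if grep -E -q '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
  VIRT="available"
else
  VIRT="missing"   # BIOS/firmware setting on bare metal, or a VM without nested virt
fi
echo "hardware virtualization: $VIRT"

# On a KVM host, nested virt for guest VMs also needs the module parameter
# enabled (kvm_amd on AMD systems):
#   cat /sys/module/kvm_intel/parameters/nested   # "Y" or "1" when enabled
```

If the flag is missing inside a VM, enabling nested virtualization on the hypervisor and recreating the guest usually exposes it.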
If you don't have it set, or if you're trying to do it in a VM without nested virt, it will very quickly spit out that it won't work. Yeah. Yeah, just be aware of any performance impacts of using nested. It's always a concern. Nested virtualization can have some pretty significant performance impacts. So I think the next least resource intensive option would be a CLI based single node OpenShift install. So if you use assisted installer, assisted installer requires, it won't let you proceed unless you have eight CPUs and 32 gigs of RAM. I am confident, though not having personally tested it, that you could deploy single node OpenShift with four CPUs and 24 gigs of RAM, maybe even 16 gigs of RAM. You wouldn't have much left over for anything inside of there. But yeah, you should be able to deploy and at least have a running cluster. I would say if you wanna start deploying anything inside of there, you know, six CPUs, 24 gigs of RAM, kind of minimum, and up from there. So yeah, oh yeah, misses deploying in VirtualBox with CRC. I don't know why that changed. I'll have to check on that. I know my previous employer, we actually had a security policy that came through that forcefully removed VirtualBox from everything. I have no idea what that was about, but it's been a few years. So single node OpenShift, you know, kind of manually, you can reduce those resources probably close to what CRC is using. But remember, it is going to have more things inside of there. It is going to be a quote unquote real OpenShift cluster that doesn't have the same degree of things removed. I know I keep saying that. I couldn't tell you off the top of my head what CRC doesn't include, but I know that there are things they remove. After that, a full-fledged single node OpenShift, eight plus CPUs, 32 plus gigabytes of RAM. Next option would be a compact cluster. So compact clusters are kind of interesting.
So compact clusters are a three node cluster where the nodes are both control plane and compute nodes at the same time. So a couple of things to take into account here. One, if you want to test things like adding a new node, like you wanna try creating a new machine set, provisioning some nodes, that type of stuff, that won't work with single node OpenShift. I don't know if it works with CRC. I think it does, but I'm not sure. But I'm happy to- What was it? I'm sorry. Adding a node, via like a machine set, or even just manually joining, with CodeReady Containers. Maybe manually. Let's look at the machine set. It didn't look like it would allow it. So maybe manually. So I know it doesn't work with single node OpenShift, but with a compact cluster you can add nodes. So if you wanna test out like a new machine set configuration, hey, can I provision nodes with this set, or what happens if I set this, that type of stuff, I'm pretty sure you would need at least a compact cluster. So compact clusters, I think if you look at the docs, the official recommendation is like six CPUs and, I wanna say, 24 gigabytes of RAM per node. Again, you can reduce that a little bit. The thing that you're encountering in that instance is the control plane processes, the services, along with some of the infrastructure things. So I think Ingress controllers all have requests associated with them. So even though, after it deploys, after everything's running, it's not using a whole lot of CPU. I'm still sharing my browser, if you wanna turn that on, thank you. So if we look over here, you can see it's using one CPU out of the six that have been provisioned. But if I were to look across all of the, oh here, you can see here the dotted line, 2.54 have been requested. So I can't have less than three CPUs in this cluster. Even though it's not using three, it's only using one, because of the requests it has to have at least three CPUs worth of scheduling capacity available.
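For reference, the install-config for a compact cluster is just the standard one with zero workers; as I understand it, when there are no compute replicas the control plane nodes are left schedulable, so they double as workers. A sketch with placeholder values, UPI-style (platform none):

```shell
# Sketch of a compact (three node) cluster install-config. Domain, name,
# and secrets are placeholders.
cat > compact-install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: compact-test
controlPlane:
  name: master
  replicas: 3          # the same three nodes are control plane and compute
compute:
- name: worker
  replicas: 0          # zero workers => control plane stays schedulable
networking:
  networkType: OVNKubernetes
platform:
  none: {}
pullSecret: '<your pull secret>'
sshKey: '<your ssh public key>'
EOF
```

Because the control plane nodes also run workloads, size them for both the control plane requests and whatever you plan to deploy.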
So, and yeah, you can see when I deployed the, that must have been when I deployed, come on, get out of here, helper. The spike there, and the increase, must have been when I deployed the web terminal operator. So- Yeah, so you can only imagine any kind of real workload's gonna make that jump up. Yeah, as you deploy other services, other capabilities in there, as things start requesting resources, even if they're not using them, the scheduler still has to take it into account. So that's where that kind of minimum CPU comes from. And remember, this is single node. So with a compact cluster, effectively each node is going to have a very similar resource request. All right, so that kind of covers, and I'm trying to think while I also dig through our notes document at the same time, I think that covers a lot of the kind of basic things of what are the opportunities, what are the different deployment architectures and configs that you can take advantage of. So what are some of the options for actually doing the deployment? And this is something where there's been a lot of Red Hat folks, though I know there's a lot of other folks too, who have put a lot of effort into doing automation for exactly this type of thing. So the first thing I want to mention is, of course, Hive. So I've still got this shared. So if we go to GitHub, if I could type, OpenShift Hive, oh man, I'm in trouble here. Come on, hands. Hands and brain don't work at the same time. Yeah, so I'll paste that link into there. So if we look at Hive, Hive is effectively cluster-as-a-service deployment. So when we go to this Using Hive page here, effectively what I'm doing is deploying an operator that registers several custom resource definitions. And then using those custom resource definitions, and apologies for all the scrolling, I'm trying to find the right place here. I should have used the table of contents for what it's for.
So we have this ClusterDeployment custom resource definition. So all of the things above this in the documentation were things like setting up, what are my AWS credentials, what cluster version do I want to deploy, what's my pull secret, what's my SSH private key. Sorry, this is the pull secret. This is the install config, the template install config. So that at the end of the day, I now have a service where I go in and I create an instance of this CRD and say, go provision me a cluster. And it does exactly that. It basically spins up a pod, and inside of that pod it runs openshift-install, blah, blah, blah, with whatever parameters you've specified inside of here. So I find that useful. And I've been trying to adopt this inside of my own lab environment for like, hey, I need to spin up a cluster, and just kind of let it do its thing, and when I'm done, we can go forward with it. I think, and I don't work on a team that works that way, but I think that this could be useful for folks who have a team of people, or a group of people, who need to come in and just kind of dynamically request clusters. Like, hey, I need to test this, or I need to validate this type of thing. Have some sort of automation or something create a new ClusterDeployment instance, or a CRD instance, and it'll go out and deploy that cluster for them and return everything that they need. So that's certainly one option. It really works best with IPI deployments, I will say. Using it with UPI gets a little interesting, especially on-prem, because it doesn't have the ability to manage external DNS. So unless you have something, and I think there are DNS providers that have, not necessarily an operator, but some way to be automated from within OpenShift or Kubernetes. Not that you couldn't create one, right? Ansible.
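To make that concrete, the CRD instance looks roughly like this. This is abbreviated from memory of the Using Hive doc, so treat the exact fields as an approximation; the referenced Secrets (cloud credentials, install-config, pull secret) and the ClusterImageSet have to be created first, and all the names here are placeholders.

```shell
# A minimal-ish Hive ClusterDeployment, written out locally. The referenced
# Secrets and ClusterImageSet must already exist in the cluster running Hive.
cat > clusterdeployment.yaml <<'EOF'
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: test-cluster
  namespace: hive
spec:
  baseDomain: example.com
  clusterName: test-cluster
  platform:
    aws:
      region: us-east-1
      credentialsSecretRef:
        name: aws-creds                      # Secret with AWS credentials
  provisioning:
    imageSetRef:
      name: openshift-v4.9.0                 # ClusterImageSet -> release image
    installConfigSecretRef:
      name: test-cluster-install-config      # Secret wrapping install-config.yaml
  pullSecretRef:
    name: pull-secret
EOF
# Apply it, and Hive launches a pod that runs openshift-install for you:
#   oc apply -f clusterdeployment.yaml
```

Deleting the ClusterDeployment is also how you tear the cluster back down, which is what makes it handy for throwaway test clusters.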
Anyways, so usually on-prem, that's the biggest challenge: managing the external resources that you need for dynamically creating clusters like that. But it is an option. And to be clear, Hive is some of the technology that is used by ACM, by Advanced Cluster Management. So even though it's not released as a standalone product within Red Hat, it is something that is used widely, and it is something that is pretty robust and capable in that respect. I gotta reorganize my windows, this is... If only you had bookmarks. And... Well, it's too many windows. Yeah, browsers, everything's a window. So the next thing that I wanted to really highlight is, again, just the massive amount of automation that has been created. So let's look at this one. So this particular GitHub repo, you can see it's Red Hat official. So most of the projects in here, or a lot of the projects in here, come from the Red Hat community of practice. So the CoP is effectively a set of Red Hat experts that kind of focus on different areas. So the team here, Andy Block, who's been on the stream before, is one of the major voices, major faces inside of there, but they've created a bunch of resources. And this UPI automation is one of those. So it's a set of Ansible playbooks that do exactly what it says. And I do wanna highlight, and I think they have it up here. Again, sorry for all of the scrolling. Yeah, home lab and proof of concept scenarios. It's not supported, and it doesn't necessarily deploy a production level cluster. I know here they caveat it by saying that it can, but just to be clear, the automation is not supported. That is something that is important to point out. So if we haven't already, it doesn't look like we have, I'll post that into the chat as well, cause I always encourage folks to look at that one. And then the last one, and when we publish the blog post, I'll link to, I've got like half a dozen of these from all over Red Hat.
So the last one that I'll poke at here comes from Reese. So Reese has been on the stream a couple of times, usually to talk about single node, or not single node, assisted installer. So Reese and some of the folks on his team created this set of automation. So this was actually how I, you know, we were talking about Packet and Equinix Metal earlier, this was actually how I got started with all of that stuff. So Reese's automation here, what it does is it uses KVM on the local host to go in and instantiate a full OpenShift cluster. So five plus nodes, right? Everything hosted internally. So it's one of those like, yeah, I can go and get one of those $2 an hour Equinix Metal servers and execute the script, and about an hour later I've got a full OpenShift cluster running inside of there. And he does a really nice job of showing what everything looks like. And I think that he updated this so that it also will emulate IPMI. So if you wanted to experiment with bare metal IPI, that would be an option utilizing this. And you can certainly, you know, one of the really nice things about working for open source companies, right, everything is documented, literally, in the code that you can see available right here. So if you ever wanted to replicate it, emulate it yourself, and walk through those steps, it's all right here. That is awesome. DMI3, thank you for mentioning the helper node. Christian, every time I mention the helper node, Christian has to cut me a check. So thank you, Christian. So the helper node, and let me go back. So the helper node, which resides now in this Red Hat official GitHub repository, it was originally started by Christian, right?
He was, back in the early 4.0, 4.1 days, you know, he was going through, and I was taking advantage of his hard work, to go through and like, hey, I need to deploy a cluster, but I don't have DNS, I don't have DHCP, I don't have PXE, I don't have all of these things, or I just want to get up and running quickly. So the helper node, and there's two versions. There's version one, which is a set of Ansible scripts that goes through, and on a single virtual machine instance, or physical, I guess, it deploys all of the services that you need. You can see the list of them here. Including things like, it stands up an NFS v4 server and exports out a bunch of directories that you can then use to create PVs inside of your deployed cluster. So helper node v2, which, if we switch over to helper node v2 beta. So this has been an interesting one that I've watched Christian work on, because it's all containerized. So whereas v1 is an Ansible playbook that goes through and does, you know, the equivalent of dnf install and configures everything, v2, just as you would expect, is all containers. So it's literally like podman, you know, instantiate, although it's all been abstracted by the helpernodectl CLI utility. But yeah, I thought that was really cool. So yeah, "what a nerd", that not only describes, oh, I see Kat there, that also describes Christian. That's right. Yeah, so speaking of Kat, Kat's a good friend of mine. I've known her, you know, I knew her when she first came to Red Hat. So for all the GovCloud users out there, there's a project out there called CodeSparta, if you want, codectl.io. It also goes along the lines of unsupported deployment, but it does give you a fully automated way to deploy a UPI OpenShift install in AWS GovCloud.
If you remember back to our disconnected conversation a couple of weeks ago, there's the issue with Route 53 not having a PrivateLink service. And so it does have a registry node that sits out on the public side, but the rest of the cluster itself is private. So it's a cool project, check it out. The other one, I gotta get to the link, I keep forgetting to do this, but it's by the RedHatGov team. And what they've done is they've gone through and they've documented the IPI instructions for like VMware in a disconnected environment. They've done it for Microsoft Azure and Microsoft Azure GovCloud instances, and then as well as AWS GovCloud. So it's a procedural document, but they also have some Terraform that... Is that this one? It is that one. Yeah, yeah, that's a great link. So that's managed by the Red Hat CoP, the community of practice. So Jay Flowers, Alex Flom, the guys, the names are escaping me right now, like Dave Anderson, all those guys, they've done a lot of great work out there on that repo to make sure that the information for disconnected installs in these different environments is out there. And back to the CodeSparta thing, I gotta give a shout out to Kevin O'Donnell, Kat, John Holtz, and Dean Leicester, because they're the masterminds behind all of it. So a lot of hard work, a lot of weekends, I remember. So do take advantage of it. Yeah, it's always impressive to me, the involvement that our field folks have with making all of those things successful, and automating it, and all that other stuff. You know, as a former federal government customer, it was always impressive to me. Yeah, it is. And even as like a super nerd, you know, seeing all the stuff coming out and watching it work, it's really, it's incredible. So Ketchup, I see your question there. I don't know the answer to that. Johnny, I'll give you a second to read it. So I will have to follow up to get an answer to that.
If you would please send me an email, Andrew.Sullivan at redhat.com, and we'll get you connected with the right people and get that answer. Yeah, I don't know. So yeah, thank you, Stephanie, for flashing contact information at the bottom there. So please reach out, send me an email, and we'll make sure to get that answered for you. And we do have, so they just formed a, or they converged all of the security PMs, all of the security folks, into one team now. So it's really convenient to get ahold of them and work with them. All right, so let me review the notes document here. We're running just about 10 minutes over our hour. Johnny, to your point, and I know you haven't said it, but you usually bring it up, we should really investigate extending the length of our stream here. So I've got two other things that I wanted to poke at on this topic. One is, there is some anywhere-from-official-to-pseudo-official automation from our partners as well. So you may have heard me talk about, for example, Nutanix, and Nutanix has some Calm playbooks, are they called? No, they're blueprints. Calm blueprints that go through and deploy an OpenShift cluster using their set of automation. So I believe that they're in the process of productizing these. I can't speak for them, but I know that it is something that they are working on in that respect. Similarly, and oops, I meant to copy that so I can paste it into the chat. So there's that link for the Nutanix Calm blueprints. So similarly, some of the OpenShift VMware folks have, man, I cannot talk and type at the same time today, have showed how to do OpenShift deployments with vRealize Automation. So Dean Lewis is the person who runs this site, vEducate. So Dean will actually be on the stream next week. So he'll be joining us.
So we'll go into some more detail on this then, but I wanted to go ahead and get it out here: hey, if you're inside your work environment, you have capacity, you have the tools, this is an easy way to spin up clusters as well. So lots of options when it comes to that. But yeah, I didn't want to skip highlighting him, or highlighting the hard work that he has done. So the last thing, or the last one and a half things, that I'll talk about is something we try to mention on a regular basis, and that is learn.openshift.com. If you haven't been here in a while, you'll notice that they redesigned the site. So if we go to view all OpenShift learning paths, you can do things like the interactive lessons. And now I've got an OpenShift 4.9 playground. I click start, and after a second or two, it launches. Come on, there's more clicks than there were before. So I'm now at, let's see, oc get node... I've got an OpenShift cluster. That was like 10 seconds, and I didn't log into anything. So it's a great way to just, and many of you have probably heard me say before, in the before times when we'd go to conferences, this interface was what I used to do demos at booths. So it's a great way to just ask, what does this do? What does this mean? I usually use it to look at operators, so oc get packagemanifests. And you see it's using CRC, so that's exactly what it would look like with CRC. So, like, if I don't have a cluster up and running, I can go in and do things like, when we did the disconnected deep dive, this was how I figured out which packages or which operators I wanted to prune. Because, you know, hey, I can just drop in here and it's got the whole list of them right there. So that's a great way to immediately be hands-on at zero cost, although these do cycle after like 30 or 60 minutes, something like that. So they don't last very long.
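For reference, the playground workflow described above boils down to a couple of commands. This is just a sketch of the sort of thing being typed on stream; it requires a live cluster (or the playground terminal itself), and the exact output depends on the catalogs configured there:

```shell
# Confirm the playground cluster is up (the CRC-backed playground
# presents itself as a single node)
oc get nodes

# List every operator package the marketplace knows about -- handy for
# deciding which operators to prune when mirroring for a disconnected install
oc get packagemanifests -n openshift-marketplace
```

The second command is the one mentioned for the disconnected deep dive: it dumps the full list of available operators so you can pick what to keep.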
Another option, and that was learn.openshift.com, is the OpenShift Sandbox. And I don't have a link to that, so we're going to take the risk of doing a random internet search and see what pops up. So the Sandbox is really targeted at developers, but you do have the ability to kind of see and experience some things with regard to, you know, the bigger OpenShift and all of that. These environments, I think, last for 30 days before they reset, but it's one way that you can, again, free of charge... you do have to register, but you can get access really quickly to an OpenShift cluster. The last thing that I'll mention, kind of the one half: for Red Hat employees, we of course have things like RHPDS, the Red Hat Product Demo System, and OpenTLC for both partners and employees. However, you as customers can ask your account team, like, hey, can we do a workshop on X or Y or Z? And they have the ability to utilize resources inside of there. So, you know, hey, we want to learn about, I don't know, OpenShift Virtualization. Can we do a workshop? Can we do, like, two days? And they have the ability to go in and request resources from whatever team is behind that, GPTE, to serve and provide that environment for you to go and learn on. So work with your account team. They're good for more than just swag. Although do make sure you get your allotment of swag, and what is it, every quarter, I think, is the normal? That's right. That's an absolute must. So yeah, Johnny, anything to add? I just wanted to go back to single node OpenShift, it just occurred to me. If you're deploying single node OpenShift, it uses the Local Storage Operator for your storage. So if you're in there and you're like, why isn't my registry coming up, like happened to me...
It's because you actually have to go in and configure your registry like you're deploying to a bare metal node, because that's essentially what you're doing. So you go in and you patch it, and then, using the Local Storage Operator, you create your PVs and stuff like that. Just something to keep in mind when you're doing that. It's just a little gotcha that I ran into. Yeah, and I don't think, so, I'm trying to think of how to phrase this. Officially, single node OpenShift doesn't support cloud provider integration. So if you're deploying, and now moving back over to unsupported territory, if you're deploying to VMware, you can't have the cloud provider there to be able to do things like the VMware CSI driver. That being said, it's dev test. Try it, see if it works. I personally haven't tried it, but now I want to. But there are also a lot of CSI provisioners and stuff like that that work without a cloud provider integration. And this page blinking around is distracting me. I've got to move to a different page. Squirrel. I can't see the stream, but I see it out of the corner of my eye because the window is up on my screen. So yeah, there are a number of CSI provisioners out there. For example, in my lab, I use a TrueNAS virtual machine, and there's a CSI provisioner, democratic-csi, that works great. I've been able to do everything I need to with it. And you can choose to pay for TrueNAS if you want support from them and all that other stuff, but it's a great way to just have a small environment, minimal resource requirements, and get up and running with that. You can, of course, and I always recommend folks stay within the Red Hat portfolio if you want to: Ceph, OpenShift Data Foundation, the new name for OpenShift Container Storage, all that stuff you can take advantage of as well. There are a number of resources I'll have to dig up.
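To make that registry gotcha concrete, here is a sketch of the two pieces involved, the same shape as a bare metal install. The device path, storage class name, and sizes below are placeholders for illustration, not values from the stream; adjust them for your hardware:

```yaml
# Local Storage Operator LocalVolume CR -- carves PVs out of a local disk.
# /dev/sdb and the storage class name are assumptions for this example.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: local-sc
      volumeMode: Filesystem
      fsType: ext4
      devicePaths:
        - /dev/sdb
---
# Patch body for the image registry operator: move it to Managed and point
# it at PVC-backed storage (an empty claim name lets the operator create
# the PVC itself). Applied with something like:
#   oc patch configs.imageregistry.operator.openshift.io cluster \
#     --type merge --patch-file registry-patch.yaml
spec:
  managementState: Managed
  storage:
    pvc:
      claim: ""
```

Once the LocalVolume PV exists and the registry config is patched, the registry pod should come up and bind to the local PV.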
I'll include them in the blog post: about deploying Ceph with kind of minimal resource requirements into an OpenShift cluster. But you're always going to need more than one node to do that, so if you're doing single node, that might not be an option for you. Other than that, there are a bunch of other folks that have storage offerings out there. Well, let's see, I can go to catalog.redhat.com. Nope, catalog.openshift... where's the catalog at? It is catalog.redhat.com. Didn't it work the first time? Whatever. So if you go to the catalog and go to software and containerized products, we have this... well, it used to be, there we go, browse all categories. Storage used to be at the top. But anyway, there are... oh, did we do away with the storage category? Well, that's disappointing. So anyway, there are containerized storage options out there. You can look in the catalog here; remember that the catalog is only going to show certified offerings. So, like, OpenEBS, Portworx, these are just names that I recognize. There are a bunch of them out there, as well as a bunch of other options that are not certified. Johnny, you and I were talking before about NetApp. NetApp is not a certified CSI provider, but they absolutely work; NetApp supports their product on top of OpenShift. So if you have NetApp in your environment, yeah, by all means. And they're just one of many that are out there. So yeah, I think that kind of covered the basics. Oh, maybe the last thing that I wanted to mention, now that it occurs to me, and I think I took a note on this and then removed it, is networking. I know I've talked with a number of people who are like, yeah, in my home lab, I can't deploy OpenShift until I get 10 gigabit networking and stuff like that. You don't need 10 gig. One gig is more than adequate.
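As a rough back-of-the-envelope check on that one-gig claim, assuming the roughly three-gigabyte CoreOS image write mentioned in a moment and ignoring protocol overhead (these figures are illustrative assumptions, not measurements):

```shell
# Time to write a ~3 GB CoreOS image over a 1 GbE link.
image_gb=3      # approximate RHCOS install write, in gigabytes (assumption)
link_gbps=1     # link speed, in gigabits per second
nodes=10

# Convert gigabytes to gigabits (x8), then divide by link rate.
per_node=$(( image_gb * 8 / link_gbps ))
echo "~${per_node}s per node on an uncontended link"

# Nodes installing simultaneously share the same uplink to the storage
# device, so wall-clock time stretches roughly linearly.
echo "~$(( per_node * nodes / 60 )) min total if ${nodes} nodes share one link"
```

About 24 seconds per node in isolation, or a few minutes when ten nodes contend for the same link, which is why the install only stretches out rather than failing.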
The only time that you might see issues is if you're deploying virtual machines to a remote storage device across a one gig link, when you're doing the initial CoreOS install, because it writes out like three gigabytes for the operating system, right? So if you're deploying three, five, ten nodes at the same time, it'll extend that process while it's trying to write out all that data. But in general, one gig is perfectly fine. Yeah, I use one gig at home. Yeah, and it's never been an issue. Like I said, it just extends out that install time for things sometimes. Usually the biggest bottleneck is internet speed. So, just posting the relevant security doc for SSL v2 related to EAP, thank you. I'll open that in another tab here; it's now officially like my 130th tab. So yeah, that's all I got. I don't think we have any other notes. I'm trying to find my doc here. I don't see any other notes that we missed or anything like that. So before we take off, this is a good question, Kamarov. So when you deploy... if you use CRC, you can use your developer subscription, right? So that's just out of the box, or put it on Windows or Mac or whatever, and it's not necessarily a thing. But when you deploy a single node, or you deploy the compact cluster or whatever, there's a 60 day trial license that comes with those that you can use. And then you can use a developer subscription after that. I'm not exactly sure how far the developer sub goes out for some of them, but initially, just getting it kicked off, there's a 60 day trial that you can take advantage of. There is also... the developer entitlements apply for OpenShift as well. Oh, that's awesome. So let's see if I can do this on the fly. So if we look at this, which is, in typical legalese, appendix one to the subscription guide, it describes down here the developer use cases. So I'll post this link into chat if I can.
But if you read through this, and I would encourage folks who are interested to read through it, so, use of subscription services... anyway, it does explain in here that you can use up to 16 OpenShift cores for non-production, single-user purposes. And those are good for as long as you use them for that use case, right? And I believe, I haven't checked in probably six weeks or so, but I believe that you can go into the Red Hat console, so console.redhat.com, and you can also entitle those through there. That way you can see all the things in the Red Hat console. They are self-supported, you know, it's a developer entitlement, but you can use that with OpenShift as well. Yep, and you do, you enable it through the console. It's actually pretty straightforward. Yeah, so I was recently working on an update to the subscription guide, so I had to read through this. Yeah. It's a real snoozer, huh? Yeah, it's great if you're having trouble sleeping. You know, legal. All right, so I don't have anything else. I think we covered all of the stuff that we had. Thank you to Kamarov; that was the one thing that we missed, entitlements and licensing. So: 60 day eval, or up to 16 cores of developer entitlements, are available to you. And all of that is of course free, no charge. With that, thank you so much, everybody, for joining us today. As I mentioned before, next week we will be joined by at least one, if not two, VMware folks, and that will be our last stream for 2021. If you have any ideas, thoughts, or suggestions for stream topics that you would like to see in the new year, please don't hesitate to reach out with those. Anything, any time, that you all are interested in, by all means, we'd love to hear from you. Even if you think up questions after the stream ends, feel free to reach out and contact us.
So that's Andrew.Sullivan at redhat.com, or at Practical Andrew on Twitter. And Johnny is JROC TX01. JROC TX01, I was close. That's on Twitter, and then Johnny at Red Hat, J-O-N-N-Y at redhat.com. So yeah, please don't hesitate to reach out to us at any time. We're happy to help as much as we possibly can with that stuff. And again, if you have any topic requests, ideas, suggestions, et cetera, don't hesitate; some of our best topics have been suggestions from you all, so we very much appreciate those. And with that, have a great rest of your day, rest of your week. Everybody stay safe out there. And Johnny, I will give you the last word. Great show, Andrew. Thank you. Thank you, Stephanie, for everything. You're awesome.