This is so awesome. Good evening, good afternoon, whatever the case might be, welcome to the Level-Up Hour, where we consider all things containers and Kubernetes, and of course that wonderful amalgam of them all, the Red Hat way of doing those things: OpenShift. So like, share, subscribe, announce it to the world so that people know that we're here, and they'll want to know that we're here, because we have a really, really interesting topic today. I am joined by my co-host, Jafar Charibebe. Charibebe... oh, I'm going to get it, I'm going to get it. How are you doing today, Jafar?

I'm doing much better than yesterday, actually.

Well, we are relieved to hear that. We were thinking you might have to be doing this from bed with an IV drip, but you're looking good, sounding good. Today we might also be joined by Brad Weidenbender, who is the Red Hat Advanced Cluster Management product manager. We've had a few technical difficulties this morning, so we'll see if Brad is able to join us. But in the meantime, we're going to be covering something that I think a lot of people will be interested in: the OpenShift single node cluster, sometimes called SNO, although I'm very reluctant to introduce yet another three-letter acronym to our world. So, Jafar, maybe you could tell us a little bit about what single node OpenShift is, and then we'll get to why somebody might care about it.

Yeah, sure. Let me share the screen; we have a single slide that talks about it. If you've been using OpenShift for some time, you know that with OpenShift 4 we had a topology where you needed at least three control plane nodes, which is of course part of the Kubernetes architecture, and at least two worker nodes. So the minimum was a five-node cluster. But we had some customers who wanted a more compact installation of OpenShift, so we worked on what we call the three-node cluster, which came out, I believe, with OpenShift 4.7. Basically the masters and workers were co-located, which allowed you to have a smaller footprint for some types of use cases. But as we started playing into the telco 5G edge area, we also had customers asking for a very minimal footprint: basically, they wanted a full-fledged OpenShift running on one single node. And that's the single node OpenShift, also known as SNO. It's available starting from OpenShift 4.9.

There's been a lot of engineering to overcome some challenges here, because as you might know, with OpenShift 4 we had a bootstrap machine that controlled the flow of installing everything on the other nodes; we were essentially using containers to deploy containers. Doing all of this on the same machine was not easy to handle, but thanks to our wonderful engineers, who were able to work around that, the single node OpenShift not only runs the bootstrap as it was done before, but once everything is loaded, it reboots and becomes a master slash worker node.

So it builds itself, basically.

Exactly, it's a self-installing machine. We'll talk about the installation process a bit later. As for the use cases: first of all, this is for production use. We've had other distributions, like CodeReady Containers, which are aimed at developers rather than production use.
And those were not really a full-fledged OpenShift. This is a full OpenShift running on a single node. What you don't get, of course, is high availability, because everything is running on a single machine. But other than that, all the other features of a traditional OpenShift cluster are available. Even upgrades: you can go ahead and upgrade through the regular channels. That's a great thing, that we're able to support that.

Well, that's really an interesting point: it's going to upgrade in the typical way, and it behaves the same in the management aspects. It's OpenShift, right? So you're not going to have to follow a different process for managing your upgrades or anything like that.

Exactly, yeah.

Okay, we had a couple of questions already. I love this, we already got questions in the chat. So Satchel asked, and I know we're going to get into the particulars of the install, but I figure we might as well answer a couple of these: is this something that can be installed on a VM, or is it only bare metal?

There are two sides to this answer. Can this be installed on a VM? Yes, and that's basically what I'm going to be doing. But for support, this is for bare metal usage only. So as an end user, as a customer, you will be using this on bare metal.

All right, and that kind of goes to some of the use cases that we're going to talk about here. Manicanton had a question as well: isn't single node OpenShift, SNO, very similar to CRC, which runs inside a VM? What's the difference between the two?

Yeah. So first of all, and that's a major one, SNO is for production use; CRC is for developers. You're not meant to run production workloads on CRC. CRC is really a different distribution of, I would say, the OpenShift experience, meant for a developer to run on their laptop. And I mentioned the upgrades, because if you've been using CRC, you know that there's no upgrade; you have to reinstall a new CRC instance to get newer versions.

Well, the assumption is it's a developer piece, and there's no need for it to be continuously available. So the difference is production readiness, really. And again, CRC stands for CodeReady Containers; we have so many acronyms. So let's talk a little more specifically about why this came into being, and I think the thing that makes the most sense for understanding why this is important is edge computing, right? We're not talking about running things in a data center now; we're talking about running something somewhere else, and it's running by itself. And maybe availability is a different question now, because the thing that's available can go away: it might go away forever, or it might go away periodically and it's just not that big a deal. So talk a little bit about how this fits into edge computing, first of all.

So if we speak about things like 5G, telco, edge: basically we're trying to run those virtualized network functions not from a central standpoint, but closer to the end customers. And that's the main use case this is intended for. You're able to deploy a full-fledged OpenShift cluster on a single node. So basically, some customers want to have a portable, I would say, environment.
They want their applications to be available on a minimal footprint. And as I said, they don't want a whole data center just to deploy a small set of applications that may or may not be business critical. These applications may not be working 24 hours a day; maybe they run while you're in a restaurant, or you're in a shop and you have some local applications that...

Yeah, your point of sale.

Exactly, your point of sale during business hours, but not necessarily afterwards. So there are many use cases where this type of deployment is suitable, and when you speak about edge, that's why we made this new topology.

Right. Well, there's one other phrase that has sometimes been used here that I'd like an explanation of, and that is "Frankensteining your own car at minimal cost." What on earth does that mean?

Yeah. So I've been an OpenShift solution architect for many years, about five years, and for our own learning process, and also sometimes for preparing POCs, preparing demos and things like that, we needed OpenShift clusters. Going back to OpenShift 3.x, we had what we called the all-in-one install: a full-fledged OpenShift cluster running on a single machine, and it could be a virtual machine. Since we moved to OpenShift 4, that was no longer available. Now if I wanted an OpenShift cluster running on my laptop, I needed five VMs running, and with the resources involved, if you have 16 gigs of RAM or even 32 gigs of RAM, it's just not going to fit. So when I speak about Frankensteining your own car, it means deploying an SNO to experiment, to try new features, to try upgrades and things like that, at minimal cost. You're not deploying five or six VMs; you just need one machine, and then you can try things out.

Also, when we speak about multi-cluster management, we have solutions like ACM, Red Hat Advanced Cluster Management, that let you manage multiple OpenShift clusters. Say you want to do a POC or prepare a demo where you manage three clusters: normally the minimal footprint for your demo becomes 15 machines. Using SNO, it becomes three machines. So it makes it much easier for us to fool around with these things and learn new things without needing a heavy infrastructure.

So what you're describing sits somewhere in between CodeReady Containers, which is very much a developer thing, something that helps you work on your app and see how it's going to run in a container, not really something you can use as a POC, and the full-blown, highly available OpenShift. Like you said, it's the difference between having five machines, or three, versus just one, and all the resource requirements for having that smaller footprint, right?

Yeah, correct.

So I see we have some interesting questions already; they're firing up. Someone was asking about the minimum resources for this. So yes, of course, when we speak about edge, it can be many things. If you're speaking about IoT, it's a certain class of devices with a very, very minimal footprint.
But here we are speaking about servers that are going to be running business applications. So the minimum footprint for SNO is 32 gigs of RAM and eight cores. That's, I would say, the recommended minimum, and depending on your workloads, you might need more than that. There are also some questions about the installation requirements, like whether we need a DHCP reservation and so on; these are questions we're going to tackle when we talk about the installation. So, Randy, should we go there, or should we wait?

Oh, I think we're ready. Let's leap into the live demo; what could possibly go wrong?

Yeah, exactly. So first, the prereqs are pretty straightforward. As with any OpenShift cluster, you have some DNS requirements: a record for the API, and a wildcard record for the apps on your cluster. In a traditional OpenShift environment these are held by some VIPs, but for SNO you need a DHCP reservation or a static IP for your node, and those DNS records point to that IP. So you need an api.<cluster name>.<base domain> record that points to the SNO's static IP, and a *.apps wildcard record that points to that same address (there's a concrete sketch of these records just below). Those are, I would say, the main requirements.

To make it easier to install single node OpenShift, we came up with what we call the assisted installer, which supports single node OpenShift installation. Let me go ahead and share that with you. Stand back. Okay, can you guys see my screen now?

Indeed, we can.

Okay, perfect. We are now at console.redhat.com, our hybrid console for managing your OpenShift clusters, managing your subscriptions, and so on. As you can see here, it says "create cluster"; it's something we made to make it easier for you to create your OpenShift clusters. Here we enter a cluster name, specify the base domain we're going to use, and indicate that we want to install a single node OpenShift. Then this generates an ISO for us, and that ISO needs to be loaded on the machine that is going to host the SNO. You can boot it via a USB drive or PXE booting; whatever suits you will work. For the sake of the demonstration, I'm going to be using a virtual machine, but keep in mind that this is not the production use case.

So, as you answered earlier, we're going to start with an exception.

Exactly. Just for the sake of time, I had already prepared an ISO, which I have loaded on my virtual environment; this is the one here. As you can see in the UI, it says "waiting for host." What I'm going to do now is create a machine on which I'm going to load the ISO, and we'll see how the installation goes. All right, let's go ahead, create the VM, and call it sno. Here I'm going to choose the local installation and select the ISO that was generated. There's a minimum disk of 120 gigs, and for the memory I'm going to put in 64 gigs of RAM, although again, with 32 gigs you should be just fine. I'll change the CPU count here and put it at the minimum, and now I can start the installation process. As you can see, it boots the CoreOS instance into bootstrap mode.
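As an aside, to make the DNS prerequisites from a moment ago concrete: assuming a cluster named sno, a base domain of example.com, and a reserved node address of 192.168.1.20 (all hypothetical values, not from this demo), the records might look like this in a BIND-style zone file:

```
; Hypothetical records for a single node OpenShift cluster.
; Both names resolve to the one node, which serves the API and all app routes.
api.sno.example.com.     IN A   192.168.1.20
*.apps.sno.example.com.  IN A   192.168.1.20
```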
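And if you'd rather script that VM creation than click through a GUI, a rough libvirt equivalent of what was just done might look like the following. The ISO path is a placeholder, the sizing mirrors the demo, and this is a sketch under those assumptions rather than the documented procedure:

```
# Create the demo VM: a 120 GB disk, 64 GiB of RAM, and 8 vCPUs,
# booting from the discovery ISO generated by the assisted installer.
# (32 GiB of RAM is the stated minimum; the demo just happens to use 64.)
virt-install \
  --name sno \
  --memory 65536 \
  --vcpus 8 \
  --disk size=120 \
  --cdrom /path/to/discovery.iso \
  --os-variant rhel8.4 \
  --network network=default
```

Booting from that ISO puts you in the same place as the GUI demo: the host loads the agent and reports back to the assisted installer UI.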
So basically it's going to load all the services that are used to configure the OpenShift cluster, and at some point, when everything is loaded in the VM, the machine should show up here: it talks back to the UI. This is very easy for you to get your hands on. I did an installation of a single node cluster just this morning; it takes about 45 minutes to an hour to have everything loaded. There are multiple reboots needed because, as we said, since everything is running on the same machine, it changes roles: it starts as a bootstrap node, then it reloads.

So now we see it entered the discovering state, and it discovered the machine that I created. It asks for some NTP sources, so let's give it some servers there.

I love that you just googled for an NTP source.

Yeah. And that's it. If some configurations are missing, the installer is going to complain about it. For instance, when the machine boots, it starts with the "localhost" host name, so we're going to change the host name there and call it sno. Afterwards it gets into a ready-to-install state and can start deploying the components. Okay, let's go ahead, hit Next. Now I need to select the network on which the installation is going to be deployed, and then hit "install cluster." That's it; that's all I need to do. If we come back in 45 minutes, maybe a little more, we're going to have a fully running OpenShift cluster. So it's very easy to configure and install.

Once this is done, you have access to your cluster information. While the installation is running, you have access to the logs, and you can see the state of the cluster. Once everything is finished, you get this nice UI that gives you the kubeconfig, lets you launch the OpenShift console, and tells you what DNS records you need to put in place. From here, we can access this single node OpenShift cluster, and as you will see, it's basically a full-fledged OpenShift cluster, except that if I go to the nodes section, I see that I only have one machine, and it plays the role of the control plane and the worker as well. I have all the traditional features: I can upgrade if I want to move to a different version, the monitoring and all the dashboards are available in there, and so on. It's really a fully working OpenShift environment, except that it's not highly available.

Exactly, it's not highly available. That's it.

Right. Well, we had some more questions here, so we might be able to answer some of them. First of all, in terms of CPU, would it run on an older Xeon, or is eight cores all that's required?

So I don't think it's going to look for a specific set of CPU instructions. So yes, it can run on a Xeon, and actually I'm running it on a Xeon. I don't know what the performance is going to be, but it can run.

And it might be that the application doesn't need much performance anyway. How about the subscription basis: is this basically an OpenShift subscription, or a separate kind of subscription?

Yeah, it's a traditional OpenShift subscription, exactly.
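For anyone who prefers a terminal to the web console, the same checks can be done with the oc CLI once you've downloaded the kubeconfig from the installer UI. The output shown in the comments is illustrative, not captured from this demo:

```
# Point oc at the kubeconfig downloaded from console.redhat.com.
export KUBECONFIG=~/Downloads/kubeconfig

# A single node wearing both hats; expect something roughly like:
#   NAME   STATUS   ROLES           AGE   VERSION
#   sno    Ready    master,worker   45m   v1.22.x
oc get nodes

# Upgrades flow through the regular channels, like any other cluster.
oc adm upgrade
```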
Okay, another question: it seems like a small disk. Is it better if you use something bigger?

So I'm not sure what you mean by a small disk; you mean the 120-gig disk? You can, of course, put more at installation time; it's not a limitation. If you wanted to go with 500 gigs for the disk, go ahead and do that. But keep in mind that you can also add disks afterwards.

You're not necessarily going to be resizing the disk, but...

I don't see why you could not, actually, to be honest. But yeah, you can keep this disk for the OpenShift components, I would say, for the images, for the registry and those things, and if you needed more storage, you could add some disks to that machine, configure them, and maybe even use the Local Storage Operator to access those disks.

Okay. We also had the question: is this only OpenShift, or is it something that's in OKD?

That's a good question, and actually I don't have the answer. I believe it's going to be in OKD also; I haven't tried it, but I think you can do that with OKD.

All right. This is actually really instructive, and it's great to have all these questions; keep them coming if you have them. Was there something else we should talk about? We're coming up on the top of the hour, or running a little short because we had a few technical difficulties, but maybe we should talk a little about the relationship of single node OpenShift to Advanced Cluster Management and some of the things worth mentioning there.

Yeah, sure. As we mentioned, the single node cluster is intended for massive deployments. For example, you might be managing tens or even hundreds of clusters, and you don't want to manage those clusters by hand, connecting to each console, upgrading things, deploying your applications to each of those clusters. That's why we have a very tight integration between ACM and SNO for many use cases. Of course, SNO is a regular OpenShift cluster, so it can be managed with ACM like any other cluster: you can deploy your applications using the GitOps approach, you can deploy your policies, and so on. But what's interesting is that we can also install single node clusters using ACM: the discovery process that you saw is available from ACM, so you can generate those ISO files and have those single node OpenShift clusters deployed from ACM. We wanted to have the product manager here to walk us through some of those features, but unfortunately, as you said, we had some technical difficulties. It's probably something we can try to...

An opportunity for a future show.

Exactly. We can have a more focused episode on ACM and those types of use cases. But basically, ACM can help you manage a fleet of single node clusters: upgrade them, deploy your policies, et cetera.

Yeah. So, BoatingMcSquarePants, and I want to read this question just because I want to say that name, was looking for a little more clarity on the question of DHCP versus static IP. Because I think when we start talking about that edge use case, that can actually be a pretty important consideration, right?

Okay. Yeah. So I believe you can specify a static IP in the boot configuration when booting the CoreOS instance: you enter the advanced kernel parameters, and there you can specify the static IP that you're going to use.
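For reference, RHCOS accepts the standard dracut networking syntax here, so the kernel argument would look something like the line below, where every value is a hypothetical placeholder for your own network:

```
# ip=<node IP>::<gateway>:<netmask>:<hostname>:<interface>:none, plus a DNS server.
ip=192.168.1.20::192.168.1.1:255.255.255.0:sno:ens3:none nameserver=192.168.1.1
```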
If you don't have a DHCP server, you can choose to go with that option.

Okay. Well, I don't think we have any more questions, but I want to go ahead and leave the floor open here. If anybody has some additional things that we haven't asked, or if I've missed one, let's post that. There we go: somebody wants to try it out on AWS. Is there a doc write-up available?

Yeah, sorry, first there was a question about ACM: yes, SNO integration is available in ACM 2.4. Sorry, what was the other question?

Trying it out on AWS.

So, well, basically, yes, there are some blogs available. We can share the links to those; let me see if I can find them.

I imagine a lot of people would actually like to try it out through that mechanism. So, any other questions? Go ahead.

Yeah. So, Randy, I'm going to post the link in the private chat if you can re-share it.

Okay. There we go, coming up. So, okay, there it is.

Yeah, so I think you already shared it. It's not going to be specific to AWS EC2; this is a very, very generic installation process. As long as you're able to load the ISO onto a machine, whether it's a virtual machine or a bare metal host, it's going to work. The only thing to keep in mind is that for production use, it's only supported on bare metal.

Right. I think we already covered the question about the Xeon.

Yep.

And we've covered the question regarding the licensing: this is an OpenShift subscription, and it really just comes down to a choice about how you want to deploy OpenShift. All right, let's see. Oh, you know, actually, I think you just covered the ACM version. Is that correct?

Yeah.

All right, other questions or concerns? This is good; we had a lot of them. Obviously this is a very hot topic. And, you know, apologies to everybody for a bit of a late start today. We had hoped, as we've mentioned, that Brad Weidenbender, who is the product manager for ACM, would join us. But you know what? That's the opportunity for us to bring Brad another day and really dig into ACM, in particular some of the newer features that are in there. We can even revisit single node OpenShift. So again, if you find... well, one last question here, from Makoto: are there any plans to support ARM? I guess so; I mean, that seems like a logical thing to do, given some of the particular use cases, but I think we might have to come back to that question another day.

Yeah.

All right. Well, just another reminder: if you enjoy and find value in the Level-Up Hour...

Yes. Yeah, so just to answer the ARM question: OpenShift on ARM is of course something we are working on, having OpenShift supported on ARM. I don't know exactly where we are with that, but for SNO, I guess, as you said, that would be a logical step forward once we have full support for the ARM architecture.

All right. Well, great session, Jafar; thank you for walking us through this. This is a very compelling topic, obviously. And I think when we consider the impact of edge, how that's really becoming more and more of a priority out there in the world, and we think about the coming tidal wave of 5G and some of the implications there, the implications of single node OpenShift become very clear. So with that, I think we covered a lot of ground.
We'll come back to the subject of ACM another day, but I just want to remind everybody: like, subscribe, and share the Level-Up Hour if you're finding this useful, and join us for our next show. Thank you for joining us today. Thank you, Jafar.

Okay, thank you very much, Randy. And thanks, everyone. Have a great day.