Good morning, good afternoon, good evening. Wherever you're hailing from, welcome to another episode of Ask an OpenShift Admin. I am Chris Short, host with the most here on Red Hat live streaming. I'm joined by the one and only Andrew Sullivan and fellow teammate Mark Schmidt. How are you today? Y'all doing great. I'm impressed with the little mini crisis you just dealt with seamlessly there. You like that? Hit the intro bumper button and then, oh no, coffee, and coffee is now here. Yeah, that was impressive. Very important. Hey, you know, everything's on wheels for a reason, right? The desk, the chair, the whole nine yards. Speaking of which, when are you moving back down to your original office? It's probably October-ish, I'm thinking, right? Like once it gets cold again. Okay. Well, you've got the fire behind you. Exactly. Yeah, the fire has started back up here because it has become September, which is the magical month of hoodies, so the fire is back in place. Well, you know, it's after Labor Day and I'm wearing gray, not white, so I'm good enough. Sure, whatever. I don't know where that rule came from or why. I don't either, actually. I should look that up. Anyways, hello everyone, and welcome to the Ask an OpenShift Admin office hour. This is one of our office hour series of streams here on Red Hat live streaming, which means that, much like if you ever had a teacher or a manager who held office hours, we're here for you. We're here so that you can come in, ask whatever questions you have, and talk about whatever happens to be top of your mind, and we, so Chris and I and today Mark, of course, are here to answer those questions. We also have, with every show, a theme, a topic that we like to talk about.
And today I am very happy to welcome Mark to the Ask an OpenShift Admin hour for the first time to talk about OpenShift at the edge. Mark, if you don't mind introducing yourself. Sure. Mark Schmidt, been at Red Hat for a little over three years now. Prior to joining Red Hat, I spent many, many a year in the telco industry, wireless telco, doing everything from working for a network equipment provider to an actual wireless carrier, and then spent a lot of years in a consulting role doing mostly system integration for the large wireless telcos. And with that came a wave of virtualization within that industry, and that kind of led me down the road to Red Hat. I've been working here with OpenStack and OpenShift as a TMM, like I said, going on three years now, mostly focused now on OpenShift at the edge and the various use cases therein. Yeah, so funny story: Mark started two weeks before me, then I started at the beginning of August, and then Brian Tannous started two weeks after that. So there was this string of the three of us all joining the team. And now Brian has moved on to bigger and better things, namely being a developer evangelist. So, you know, he's in the big show in the sky now. Alright, we won't pick on him; he's not here to defend himself. We won't pick on him too much. But happy to have you, Mark. I know it's fun when we chat about your experiences in the telco industry and with OpenStack. A long time ago I was an OpenStack user; I'm happy to say I'm fully recovered. Christian in chat: yeah, you pay for the entire OpenShift, but you only need the edge, just that little edge piece. Yeah.
This is why, every time we say edge, I want there to be an echo effect, but I didn't have time to figure that out. It's like an AI thing or something. Exactly. So, as is tradition here on the Ask an OpenShift Admin office hour, after we do our introductions and banter, I like to start off with some top-of-mind topics. These are things that have come up in the recent past, things that I feel are important or topical, or things that you all should be aware of as administrators of OpenShift. The first one is one that we've been batting around quite a bit, something we talked about, was it two weeks ago, three weeks ago? We talked about it recently, and that is API deprecation. Yes. We first talked about this briefly when we did the What's New in OpenShift 4.8 stream, because one of the things that I thought was really cool and interesting is that you can see which APIs are being used. So let me share my screen so we can see what this looks like. And there we go, I got the right one on the first try. Hopefully that's big enough for everybody to see; if not, just let me know. With OpenShift 4.8, we have an API specifically that tells us which APIs are being used. So if I do an oc get apirequests... of course not, it's apirequestcounts. I was close. Remember, last time I did this I completely borked it, and it took me a little while to find it. So we have this oc get apirequestcounts, and it returns all of these different APIs that are in use and how often they're in use. If we scroll all the way up to the top here: requests in the current hour, requests in the last 24 hours, all that other stuff. So you can see, for example, this apirequestcounts API is pretty popular for some reason. I have not done it 507 times.
So here, cluster role bindings, of course, you would expect the RBAC stuff to be pretty widely used. The interesting part is that we have, for example, this ingresses.v1beta1.extensions, and you can see this column here, 1.22, represents which release it's being removed in. So this is important for a future release. Yeah, this is important for a number of reasons. One: if you're using one of these APIs, effectively you won't be able to upgrade the cluster. So this is 4.8; when 4.9 comes out, that would block an upgrade because that API is in use. Of course we, Red Hat, are aware of this and working on it; you would expect the ingresses to be one of ours. So we're working on updating all of our operators, all of our stuff, to make sure that before that time comes none of it will be using those removed APIs. Our partners are also doing the same. Yeah, this affects everybody equally. Yes. So just be aware: if you have created your own operators, if you're using any of these APIs in any of your own code, or if you have partners, you may want to reach out and poke your partner and say, hey, are you aware of these things? We need to take this into account. You can ask them if they're ready for Kubernetes 1.22 and see what happens. Yeah, exactly. I don't want to make too big a deal out of this, but it is kind of a big deal because, again, it will prevent the updating of your cluster to the next version. And one of the main reasons you're hearing this from us first, potentially, is because we're going to be one of the first to update to 1.22. So these things need to change before you do that, and that's why I'm beating the drum in the community and I'm going to beat the drum here as well. Deprecations happen in Kubernetes and that's not a bad thing.
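As a quick sketch of what this check can look like in practice: the real check needs a live 4.8+ cluster (oc get apirequestcounts), so the snippet below runs the same filter against a hypothetical snapshot of that output. The API names and request counts here are made up for illustration.

```shell
# Hypothetical snapshot of `oc get apirequestcounts` output; on a real
# cluster you would pipe the live command into awk instead of this file.
cat > /tmp/apirequestcounts.txt <<'EOF'
NAME                                               REMOVEDINRELEASE  REQUESTSINCURRENTHOUR  REQUESTSINLAST24H
apirequestcounts.v1.apiserver.openshift.io                           12                     507
clusterrolebindings.v1.rbac.authorization.k8s.io                     23                     1247
ingresses.v1beta1.extensions                       1.22              3                      41
EOF

# Keep only rows whose second field is a Kubernetes release number,
# i.e. APIs that are still being called but scheduled for removal.
awk 'NR > 1 && $2 ~ /^1\.[0-9]+$/ { print $1, "is removed in", $2 }' /tmp/apirequestcounts.txt
```

The awk filter works because rows without a removal release have nothing in that column, so their second whitespace-separated field is the request count, which never matches the 1.xx pattern.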
It's actually a good thing, because a lot of these APIs are being promoted to stable, as opposed to beta. Agreed; it only took 22 versions. That's it. Reading through the chat here: Christian, you would be biased towards Mark's shirt; that is one of the GitOps shirts. Emanuel, congratulations on passing your exam there. Nice work, that's awesome. The CKA is not trivial, and neither is the OpenShift cert. So definitely congratulations. And I will move on to our next top-of-mind topic, which is a question that comes up frequently enough that we actually published a blog post on it. Nice, always like those. So I'm going to share my screen so we can all look at it together. Let's see, share this guy and paste that in there. A while ago, not quite a year ago, we published this blog post that I think is overlooked fairly often, and it is basically: what are the specific components inside of OpenShift that come from upstream projects? So, for example, we say OpenShift has the monitoring service or the logging service. What does that actually mean? Right: the monitoring service is made up of Prometheus and Grafana, the logging service is made up of Elasticsearch, Fluentd, and Kibana, so on and so forth. So this blog post walks through each of the projects that make up the different components, features, and functions inside of OpenShift. So nothing really surprising or outstanding here.
This is just... a lot of times I have come across account teams asking us things like, hey, my customer's security team needs to know what component is used for this or that, because they want to know what it is and what version it is so they can compare it against any vulnerability or other databases that they have. This is a great starting point for that; I always encourage folks to look at this. One thing to point out, and I'll put on my marketing hat here (I do officially have marketing in my title, technical marketing): what makes OpenShift is that we take all of these components and combine them into a product that is fully tested for integration and all that other stuff. So it's kind of neat to see all of these things come together into one thing. Yeah, when you look at all the projects involved, it's like, oh wow, you picked one thing from every box on the CNCF trail map, and then added more. All right, I'm going to switch back to the terminal for this next one. Another question that I answer fairly regularly is: can I do this? Can I customize this, change this, modify this when I'm doing a cluster deployment? Things like, hey, what's the setting to change the amount of CPU or memory that's dedicated to, say, a control plane node in vSphere IPI? It's pretty easy to figure out. So if we do an openshift-install, and then we do explain... if I can type. installconfig. Typing is optional. Again, I cannot talk and type at the same time; my fingers want to do what my mouth is doing.
So this output is effectively all of the options that are available inside of the install-config.yaml. Again, that's openshift-install explain installconfig. And from here, just like with oc explain or kubectl explain, I can go through and dig into each one of these. So installconfig.compute will tell me all of the options available inside of it. Then compute.platform, and, okay, I'm using the vSphere platform, so .platform.vsphere, and if I want to modify any of these fields, I can do so in there. Remember, compute represents the worker nodes. So if I want to change the default size of the worker nodes provisioned by an IPI install, I would set, say, compute.platform.vsphere.memoryMB, and I can specify, hey, the worker nodes for that initial machine set should have 24 gigabytes of memory, or whatever that happens to be. Or, I want to use a one-terabyte operating system disk instead of a 120-gigabyte disk, so on and so forth. Really helpful for finding all of those things. Likewise, if I do openshift-install explain installconfig.platform and look for bare metal: bare metal IPI has a whole bunch of options available to it, and to me they can be very confusing, and even in the documentation not all of these are thoroughly fleshed out. So it's an easy way for me to look and see, like, do I need to set this provisioning bridge? What is this provisioning DHCP range? Do I need to create that, do I need to provide that? There are a ton of options in here, and you can see the valid values, whether there are specific things that are the only valid fields, et cetera.
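To make that concrete, here's a hedged sketch of the compute stanza in an install-config.yaml for vSphere IPI. The field names come from openshift-install explain, but the values (24 GiB of memory, 120 GiB disk, and so on) are purely illustrative, not recommendations:

```yaml
compute:
- name: worker
  replicas: 3
  platform:
    vsphere:
      cpus: 4
      coresPerSocket: 2
      memoryMB: 24576        # 24 GiB per worker for the initial machine set
      osDisk:
        diskSizeGB: 120      # bump this for a larger OS disk
```

The same pattern applies to the controlPlane stanza if you want to size the masters at install time instead of resizing later.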
So it's a helpful one there if you just happen to be exploring what the options are, stuff like that. I've found the explain command to be extremely helpful in general. If I do oc explain apirequestcounts, it gives me back all of the different fields that come back from that object and an explanation of what each one is. I can see that this is an object, so I can do the same thing with spec. Oops, if I could spell it correctly. numberOfUsersToReport: you can see it's an integer, and stuff like that. And if I do status, which I think is the one we care about: yeah, status explains what each of those columns we were looking at in the previous output represents. These are kind of obvious, but not all of them are all of the time. Also, not every object outputs every field to the command line. So if I do, let's see, oc explain node.status: you can see there's a huge number of things available for a node, and if I just do an oc get node, only a couple of them show up. So this is one of those ways you can see, hey, maybe I want to expose certain fields, so I can do that with -o and provide specific fields, or if I want to do a raw query or output the whole thing, what are all of the fields, what's all the data I'm going to get returned. So again, super helpful to use the explain commands, whether it's oc explain, openshift-install explain, et cetera. Checking chat real quick: will Kubernetes 1.22 be in the new major version of OpenShift? So 1.22 will be in OpenShift 4.9. You can see the window I'm showing here is an OpenShift 4.8.5 cluster, and it's running Kubernetes version 1.21.1. Generally, with each OpenShift update we have a version update to Kubernetes as well. I think the one exception to that was OpenShift 3.8, which was never shipped.
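As an illustration of exposing a field you found via oc explain: the oc commands in the comments need a live cluster, so the runnable part below just pulls the same field out of a hypothetical, heavily trimmed snapshot of oc get node -o json with standard tools.

```shell
# Hypothetical, trimmed output of `oc get node <name> -o json`.
cat > /tmp/node.json <<'EOF'
{"status": {"nodeInfo": {"kubeletVersion": "v1.21.1", "osImage": "Red Hat Enterprise Linux CoreOS 48.84"}}}
EOF

# On a live cluster, a field discovered with `oc explain node.status`
# can be surfaced directly in the table output, for example:
#   oc get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion
# Here we extract the same field from the sample instead:
grep -o '"kubeletVersion": "[^"]*"' /tmp/node.json
```

The custom-columns paths follow the same dotted structure that oc explain walks, which is what makes the two commands such a natural pair.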
I think we skipped two versions of Kubernetes when we went from 3.7 to 3.9, something like that. That makes sense, kind of. So yeah, Chris, I see you responded there in chat: it'll be 4.9. That's why those deprecations are important sooner rather than later, because 4.9 is the next release. Definitely. And, to piggyback on that for a moment, we've talked briefly before about how upstream Kubernetes is moving to three annual releases. OpenShift has not yet done that; it's targeted for next year, when we'll start to adopt that schedule. So we are still on a four-annual-release cycle: approximately every three months, every quarter, there will be an OpenShift release. I say approximately because there are any number of reasons it could get delayed, or anything like that. Yeah, we've had things break at the last second, too. And it happens, right? Developing software, it happens. Speaking of which, I think we're on the cusp of 4.8 upgrades being unblocked. Yes, we should be close, I feel like. So for the next thing, I'm going to switch back to sharing my browser window. Catalan says, I wish there was a k8s bot so that I could ask it to explain what is wrong. That would be nice, right? Like, find me a failure. So I get asked, I would say once a month, about OpenShift and vRealize. vRealize Automation, which is a VMware product, part of their full suite of things in the vRealize realm, is a method of deploying virtual machines into an application. And oftentimes, because there's also a guest agent and such, that also means doing things like installing packages, et cetera. So vRealize is a very powerful suite of tools. At a previous employer, I wrote a plug-in to use vRealize to configure, control, and consume storage devices, stuff like that.
It's super powerful; it uses a JavaScript dialect as its scripting language. So we get asked fairly regularly: hey, is it possible to use vRealize Automation to provision OpenShift? And the answer is yes, but not in any official way from Red Hat, and really not in an official way from VMware either. The only pseudo-official thing we have from VMware is this fling. You can see it's from November of 2019, so we're coming up on two years old here. Remember, flings are effectively a proof-of-concept type of thing, a pre-supported thing, or what do we call them, dev preview? Yeah, so it's, hey, we can publish something (I think vSphere on ARM is a fling right now), get it out there, and see how popular it is, how much people want to use it, whether or not it's sustainable from our models, stuff like that. And I'm speaking from the perspective of VMware here. Anyways, this fling was published a while ago, and I don't remember which version of OpenShift it works with... yeah, OpenShift 3.11, so it's not even OpenShift 4 compatible. However, one of the VMware folks, Dean Lewis, recently published to their blog effectively a way to do this. You can see, just a little over two months ago, he published "Deploying OpenShift IPI using vRA," vRA being vRealize Automation. If you walk through this blog post, it's fairly lengthy; he goes into pretty good depth and explains one method to deploy OpenShift using vRealize Automation. So if you are a joint customer who's using all of that and you're interested in this, this is a great way to check it out. Chris, I trust your copying of links. Yes, the happy pasta is fully cooked. Thank you, sir. So I've just got two more that I'm going to quickly go through before we talk with Mark. The first one gets brought up pretty frequently.
And that is: how can I resize control plane nodes? Resizing any node is interesting with OpenShift 4, and I say that because, in theory, and especially with IPI, we exert some positive control over the configuration of those nodes. For example, if I want machines with four CPUs and eight gigabytes of RAM, I create a machine set and say, hey, deploy me all of these machines. If I need a machine with a different amount of resources, I would create a new machine set and provision machines out of it. Alternatively, I can edit the existing machine set and then basically do a scale up and scale down: scale up to provision new resources, and then scale down so that it removes the old ones. Or you can explicitly specify which ones to remove, that type of stuff; we've covered that before here on the stream as well. If I'm using UPI or a non-integrated platform, the method is slightly different. Specifically, we don't support hot-add. It may technically work, but we don't officially support it. So you would want to cordon the node, do a drain, shut down the node, adjust the resources assigned to it, and then turn it back on and let it come back into the cluster. For worker nodes, that basically works exactly as you'd expect: it falls in line with all of the parameters, we need to do that cordon and drain, and it'll make sure the pod disruption budgets and all that other stuff are adhered to, et cetera. Control plane nodes are a little different, as always. So this KCS, is control plane node resizing a supported day-2 operation, covers it: with UPI and non-integrated installs, you can see here, a rolling update for resizing the control plane nodes is possible. So if you do, say, vSphere UPI: absolutely, shut them down one at a time, resize them, turn them back on, and then return them to service.
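The cordon, drain, resize, uncordon sequence above can be sketched as the following script. Note the oc function at the top is a stub that just echoes each command so the sequence can be traced without a cluster; on a real cluster you would delete the stub and run the same commands for real. The node name is hypothetical.

```shell
oc() { echo "would run: oc $*"; }    # stub: trace instead of executing

NODE=master-0                        # hypothetical node name

oc adm cordon "$NODE"                # mark unschedulable; no new pods land here
oc adm drain "$NODE" --ignore-daemonsets --delete-emptydir-data
# ...power the machine off, adjust CPU/memory at the hypervisor, power on...
oc adm uncordon "$NODE"              # return the node to service
```

Run one node at a time and wait for the node to report Ready again before moving to the next, so the control plane (and etcd quorum) never loses more than one member.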
For other platforms, think AWS, Azure, et cetera, where it's not as simple as right-clicking the virtual machine and saying add CPU, where you have to replace it with a whole different machine size, the suggested method is to remove the node. So you're effectively failing a control plane node, then reprovisioning a new one and having it rejoin the cluster. So if we go to the docs, look at 4.8, and scroll all the way down to backup and restore, we have to go to the not-intimidating-at-all disaster recovery section. And essentially we're going in here and restoring that cluster, right, restoring the etcd instance back to the way it was. Oh, here: replacing an unhealthy etcd member. This would be the process you want to follow. So, you know, it's a little intimidating, and it is some work, but it is possible, and that's literally just the safest way we can do it. Might it work if you were to just go in and change the instance size or something like that? Maybe. But it's one of those things where we want to take, or suggest, the safest method possible so that you don't encounter any issues; especially if it's a production system, that's bad. Question: can you explain what a non-integrated install is? Yeah, so for a long time we used to say a bare metal installation, and I'm saying "we" referring to Red Hat at large; we would say a bare metal installation or sometimes a bare metal UPI installation. What that was referring to is effectively any OpenShift deployment where, in the install config, the platform was set to none. What that means is there's no cloud provider integration: it's not using the vSphere cloud provider, it's not using the Azure cloud provider, whatever that happens to be. I'm going to check and see if I can... let me share this real quick.
And the goal being: with a cloud-provider-integrated install, the cluster has additional awareness of the underlying platform. With a non-integrated, or platform-agnostic, or what we used to call bare metal UPI installation, that infrastructure integration isn't there. So things like machine sets or IPI don't exist; it can't provision new nodes because it doesn't know or understand what the underlying infrastructure is. The infrastructure cluster object: all I did here was oc get infrastructure, and there's one instance, named cluster. Then I did an output to YAML so I can see the full output, and down here in the spec you can see that my platform is Azure. That's to be expected; I deployed this cluster to Azure. If I were to create a... so I think this will work; I'm trying to remember if I have an install config without a platform. So this one, yeah, that worked. Oh, it's a capital letter. So this one has platform none; this would deploy a non-integrated cluster. Whereas normally this OWV cluster that I have here I deploy onto a vSphere platform, and that vSphere platform would then have the provider integration: being able to do things like consume PVCs through the storage provisioner, either the in-tree or the CSI driver, all that type of stuff. This is why Sully needs to learn yq. I have a hard enough time with jq. I didn't even realize there was a yq, but yeah, I'm aware of it. It's like fancy stuff that came after we knew everything, right? jq is great to an extent, don't get me wrong. That also implies that I remember the structure of the file. The error here, Christian, is I didn't have a capital S, so it wouldn't have mattered, because I still forgot that it was a capital S.
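For reference, here's a minimal sketch of the install-config.yaml stanza that produces a non-integrated, platform-agnostic cluster. The domain and cluster name are placeholders, and the secret values are elided; this is not a complete working config:

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: demo-cluster
platform:
  none: {}          # no cloud provider integration: no machine sets, no IPI scaling
pullSecret: '...'   # placeholder
sshKey: '...'       # placeholder
```

After install, the platform shows up in the cluster's infrastructure object, which is what the oc get infrastructure cluster -o yaml walkthrough above was inspecting.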
Anyways, all right, we will stop sharing that. And very quickly, I want to get this done in less than a minute. It was highlighted that if you're using an image content source policy to redirect your images to, for example, an offline mirror... remember, an ICSP, an ImageContentSourcePolicy, basically says: hey, if you're looking for an image here, redirect automatically over to this other place. We use that especially with a disconnected install, where you mirror all of the content to your offline registry instance, and then on the disconnected network, in your install config, you have an image content source policy stanza. One thing to note: if that registry instance requires authentication, you have to provide those credentials in your cluster-level pull secret. So in the pull secret, the one I just quickly scrolled past, it's in your install-config.yaml; that's where those credentials would go. It is also cluster-wide. So pod-level pull secrets, depending on what you're doing, may or may not get applied, so you may need to change that cluster-level pull secret to include or exclude credentials for that mirror registry. Do note, though, and Mark Russell, who has been on the stream here a couple of times, pointed this out: replacing the pull secret is no longer an action that requires a node reboot. It just refreshes CRI-O, or cry-o; I'm going to use both pronunciations because I'm an equal-opportunity offender. You know, it's kube-cuddle, kube-control, kube-C-T-L, kube-y; I can come up with these all day. Anyways, it no longer requires a node reboot as of 4.5 or 4.6, it's been a while. It just reloads CRI-O on the nodes, and then you're good to go with that updated pull secret.
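Here's a sketch of preparing that cluster-level pull secret for an authenticated mirror; the registry host, user, and password are made up. The oc set data command in the comment is the cluster-side step and needs a live cluster, so only the local file preparation actually runs here.

```shell
REGISTRY=mirror.example.com:5000                         # hypothetical mirror registry
AUTH=$(printf 'admin:not-a-real-password' | base64)      # auth field is base64("user:password")

# Local working copy of the pull secret with the mirror's credentials.
# In practice you would merge this entry into your existing pull secret
# rather than replacing it wholesale.
cat > /tmp/pull-secret.json <<EOF
{"auths": {"${REGISTRY}": {"auth": "${AUTH}"}}}
EOF

# With a live cluster, apply it cluster-wide (no node reboot; CRI-O reloads):
#   oc set data secret/pull-secret -n openshift-config \
#     --from-file=.dockerconfigjson=/tmp/pull-secret.json
grep -c "$REGISTRY" /tmp/pull-secret.json                # sanity check: entry present
```

The same file, pasted into the pullSecret field of install-config.yaml, is what lets a disconnected install authenticate to the mirror from the very first pull.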
I see a question here from Hetz; it's for Mark. "Previously you showed how you installed OpenShift 4.7 using libvirt and showed a page you wrote about it, but I didn't find the URL." How you installed OpenShift 4.7 using libvirt? It's a blog, right? Is that ringing a bell, Mark? It is not. So, I know I did an OKD hackfest; Justin Pittman and I did one a while ago where I showed deploying OKD to libvirt, and I put that into a gist. I'll share that here; there, I posted it to the chat. Christian, maybe helper nodes? The answer to everything. It was without the assisted installer stuff, so 4.6, maybe. Installing to libvirt is really no different than installing to any other non-integrated platform. One thing to note, and let me share the browser again so I can show this... and now I need to grab my bookmark from over here. So: you can deploy OpenShift to any platform that RHEL is supported on. What I mean by that is, as we just saw in the documentation, underneath this installing section we explicitly test OpenShift deployed to AWS, Azure, Google, physical servers, IBM Z, Power, all of these things. But if you want to install OpenShift to other places, you can. Anything that is a certified or supported deployment location for RHEL is also a valid deployment location for OpenShift. Two that come to mind immediately: libvirt, of course, if you're deploying as virtual machines straight on RHEL or something like that, and Hyper-V is another one that comes up. So if we go to RHEL supported hypervisors, and if I click on this guy, as soon as the internet brings up the page, we can see there are a number of different things listed here: Red Hat Enterprise Linux 8, so on and so forth, OpenStack Platform all the way back to 10.
You know, I think OpenShift only supports OpenStack 13 and later, maybe 16 and later; Mark, you're the OpenStack expert. So you can deploy OpenShift onto these, and so long as you use that non-integrated, platform-agnostic deployment method, then you are supported within the description of this particular KCS, which basically says: you're fully supported, but if we attribute the error to something with the underlying platform, we may ask you to move to a tested platform so that we can eliminate that as a possibility. So, you want OpenShift on Hyper-V? Absolutely, yes. It is a supported platform for RHEL; you can deploy it inside of there using the non-integrated method. Just be aware of those caveats. And if you haven't posted this, I'll post it in there. Yep, thank you. So, libvirt falls into that category as well. All right, I know that was like six minutes instead of one minute, but that's okay; it was questions, and we love questions. Yes, we do. So again, don't hesitate to ask whatever questions are top of mind for you all. We're talking about edge today, but don't let that limit your imagination and the things you want to ask about. So, Mark, edge, I hear it's a thing these days. It is a thing these days. Yeah. So, I have some slides; let me see if I can't share my screen. I'll stop sharing mine, then. Yeah, sometimes it's just easier to talk with slides. It is. I'm trying to figure out how I can do a drawing, like, live. My daughter, who is an artist, recently got an iPad, so she's been drawing with the Apple Pencil on that, and it has a USB connection. So I'm trying to figure out if there's a way I can plug it in here and do drawings on the fly.
Not that I'm an artist, but I'm pretty good at drawing squares and arrows. Yeah, most diagrams. You can always pull up, what is it, the PowerPoint or Google Slides thingy, and then there's draw.io. Oh, Zoom is giving me difficulties here. Zoom is being difficult. So, another question from Hetz: where to actually report bugs with single node OpenShift, Bugzilla or another place? Yeah, BZ would be perfectly fine; it'll make it to the right team. You can also go to your account team; they can open JIRA tickets. Yeah, it's not letting me allow Zoom; privacy settings. We did test screen sharing before, yeah, it's okay. The link is in the Zoom chat, Chris. Yep, thank you. Oh, I can't find the Zoom chat. Oh my god. I told you, before we started I said this is the Monday-est Wednesday that I've had; it has been a really weird morning for me. All right, I have the slides and I'm going to share them, I promise. Hetz, let me see if we can find the appropriate sub-project for you in Bugzilla. Chris is doing that. Thank you. Oh, of course, it suddenly changed my screen. Okay, here we go; hopefully this is it. Tell me what you see, guys. Hey, here we go. Let me present, and off we go. Okay. So, first we'll talk real high-level business goals and then some of the challenges associated with edge; then we'll talk a bit about OpenShift and some edge computing architectures and what we have on the table today; we'll cover a couple of edge computing use cases; and then there'll be time for question and answer. I also have some technical detail towards the end.
I am in technical marketing, so we'll start with the fluffy stuff first, and then talk about what actually happens when stuff hits the fan from an edge perspective, and how that may impact how you deploy your applications out there.

So, fluffy stuff first: helping customers meet their business goals with edge computing. Any time you're processing data closer to where it's produced, it's going to give customers faster insights so they can take action faster. I just lost the slides. Oh shoot, sorry, I was trying to move something around. That's okay. Anyway, you can roll to the next slide.

Okay. And some of the key challenges for edge computing. Security is always a big concern for customers; whether it's legacy infrastructure or not, any time your attack surface moves out of the data center, it's going to cause concerns. Connectivity is also a concern, and we'll talk a little bit about sporadic or disconnected environments for workloads at the edge. And then the other huge thing you need to be aware of is the scale — the number of actual nodes you're working with when we talk about edge deployments. Data centers are usually in the hundreds of nodes, and edge can go up to tens of thousands of nodes in terms of real customer workloads. Maintenance and monitoring of those is a key factor to consider when thinking about edge computing and what your architecture is going to look like.

Go ahead. So, edge computing: this is how we view the edge, in a tiered approach, starting with the provider core — the typical data center or regional data center — moving farther out to a provider edge, which has much more of a telco edge look and feel, and then all the way out to the end-user premise edge. And let me caveat.
This is an OpenShift-only presentation looking at the edge, but the farther left you go on this diagram, the more RHEL becomes a real possibility as well. I know those folks have done a ton of work in RHEL 8.3, with more coming in 8.4, so that's another stream that should be considered at the edge.

The OpenShift focus is the edge endpoint, edge gateway, and edge server — the end-user premise edge. The next slide shows what exactly this means from a hardware perspective and a scale perspective. A tier-one telco kind of use case is going to be larger — say 16 cores, 128 gigs of RAM — and you'll have hundreds and hundreds of those. Not really a data center, but not far from it. Then we see tier two, which is more about data aggregation. This gets into declining hardware capacity as the scale goes up. Endpoint telco workloads would be a really good example, except they're going to go with way more cores and memory for that particular workload; but you're talking tens of thousands of nodes near the far-edge deployment.

Okay, next slide. Oh, there's animation — I did not know. Okay, no worries. So yeah, OpenShift gives you that consistent platform to develop once and deploy anywhere, whether that's on a small bare-metal footprint or virtualized infrastructure. It gives you consistent operations at scale as well with ACM, the Advanced Cluster Management product; there's been a ton of work done on that front from an edge perspective, so that's a real key as well. We can go ahead and go to the next slide.

So this is where we're at now from a product perspective at Red Hat. Starting in, I think, 4.5, we offered a three-node cluster.
It's fully HA-capable, and it gives you the ability to schedule workloads on the control plane nodes, essentially — we'll get into a little more of that — and in-place upgrades are available here as well. More recently, in 4.7 I think, we offered remote worker nodes: individual worker nodes displaced from the control plane that are reasonably reliably connected. We'll talk a bit more about what that looks like and what happens when things go wrong between that worker node and the control plane. And available this year is the single node edge server for low-bandwidth and disconnected sites, if you don't need a fully HA OpenShift deployment; in-place upgrades are a work in progress there as well. So that gives you the ability to shrink your footprint even more, down to a single node. Well, it's GA in 4.9. Yeah, I've heard that, but I've also heard maybe it's gotten pushed out as well. It depends; specific releases are hard sometimes. True.

So I want to interrupt you there, Mark, and point something out. A lot of times we talk about edge in a context like — if you think back a couple of slides, the tier three with tens of thousands of nodes. You don't have to have that type of scale, a manufacturing plant with PLCs and so on, to have an edge type of deployment. Retail is another one: think about a store chain or a restaurant chain that has hundreds or thousands of nodes. But you don't have to have that type of deployment. It could be as simple as — I'm going to use the word conglomerate — a conglomerate of lawyers' offices, right?
You know, where you've got a dozen offices and each one has a little infrastructure node out in it, and you can use something like a remote worker node to offer them those capabilities, or a single node OpenShift, depending on what happens to be going on out there — what type of apps they need local access to, that type of stuff.

Remote worker nodes, to me, are some of the most interesting but also some of the most confusing when it comes to that use case. I don't remember if you have a slide on remote worker nodes. Yeah, there's a couple. They're fun to me because, effectively, it's just like any other worker node. There's nothing special about it, nothing unique about it; it just happens to be way out at the other end of a link with probably high latency. Yeah, you just want to take some extra considerations with how the workload is scheduled to those nodes. Exactly — and how you need that node to behave if something does go wrong, within the configuration parameters of the cluster itself. We can talk through a little of that towards the end. But yeah, it's essentially the same.

Khalid asks: can we run OpenShift single node on-premises, not only at the edge? So I will say yes — technically, there's nothing stopping you; you can deploy single node OpenShift wherever you would like. I know of a couple of partners who do this for testing: they integrated it into their CI pipeline so that whenever they do a run, it deploys a single node OpenShift — in one case something like ten single node OpenShifts — and then they run through their suite of tests that way. Yeah, absolutely.
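As an aside on the remote worker scheduling considerations mentioned above: one common approach (not the only one) is to taint remote worker nodes so that only workloads which explicitly tolerate the taint get scheduled there. Below is a simplified, illustrative sketch of the taint/toleration matching idea in plain Python — the taint key shown is hypothetical, not an OpenShift default, and the real Kubernetes scheduler handles more cases than this.

```python
# Simplified sketch of Kubernetes taint/toleration matching, illustrating why
# remote workers are often tainted: only edge-tolerant workloads land on them.
# The taint key "example.com/remote-worker" is a made-up illustration.

def tolerates(taint, tolerations):
    """Return True if any toleration matches the given taint."""
    for t in tolerations:
        # An empty key with operator "Exists" tolerates every taint.
        if t.get("operator") == "Exists" and not t.get("key"):
            return True
        if t.get("key") != taint["key"]:
            continue
        if t.get("operator", "Equal") == "Exists":
            return True
        if t.get("value") == taint.get("value"):
            return True
    return False

def schedulable(node_taints, pod_tolerations):
    """A pod may be scheduled only if it tolerates every NoSchedule taint."""
    return all(
        tolerates(taint, pod_tolerations)
        for taint in node_taints
        if taint.get("effect") == "NoSchedule"
    )

remote_worker_taints = [
    {"key": "example.com/remote-worker", "effect": "NoSchedule"}
]
edge_app = [{"key": "example.com/remote-worker", "operator": "Exists"}]
ordinary_app = []

print(schedulable(remote_worker_taints, edge_app))      # True
print(schedulable(remote_worker_taints, ordinary_app))  # False
```

In practice you would set the taint on the node and the matching toleration in the pod spec; the sketch only shows why the ordinary workload stays on the central cluster while the edge-aware one can land on the remote worker.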
You can — and that's not the only use case. So yeah, absolutely, concur completely.

Can CodeReady Containers — sorry to interrupt you, Mark — can CodeReady Containers be considered single-node clusters for edge cases? I would say no, only because CodeReady is really considered a development platform at this point. Would you guys agree with that? Yeah. It's one of those where technically, yes, you can squint and consider it a deployment, but really, no. For one, there's a complete lack of support there; I don't even know how much community support you would get for something like that. Single node OpenShift is intended to become a fully supported deployment option — today it's in dev preview or tech preview, but it won't be long until it's fully supported — whereas CodeReady Containers will never be supported for a production type of use case. It's always meant to be for developers. That would be one major reason. The other is that, because it's targeted at developers, you have to jump through extra hoops to enable external access, stuff like that. Maybe if you wanted to do purely local processing that had no high availability — and with lots of other caveats — maybe you could justify it, but again, it would be completely unsupported. Christian had a much stronger reaction. Behind the scenes, though, it is technically a single node OpenShift that's running inside of there.
Yep. Next slide, please. Yeah, so this is just walking through the different options to deploy your three-node cluster: IPI, UPI, and most recently the assisted installer. Basically, the only thing you're changing from a regular OpenShift install is setting your worker replicas to zero in the install-config, and that's basically it. Minimums for each of those nodes would be six vCPUs, 24 gigs of RAM, and 200 gigs of storage. And then that cluster can be expanded at will: you can add storage nodes, you can remove the worker role from the original control plane if necessary, and those added nodes can also be remote workers if need be.

So there's a couple of things I want to talk about on this slide. One, going back to the earlier question about what platform-agnostic means — somebody asked me about that earlier today. With the three-node cluster, even when we announced it back in 4.6 or so, the slide had this big "bare metal only" label; that's referring to the installation method. With the platform-agnostic, platform-equals-none deployment type, it doesn't matter whether it's physical or virtual. So the minimum system resources on the slide here — six vCPUs and 24 gigs of RAM — can be virtual; it just can't have a cloud provider integration.

Somebody else asked me recently: if I'm combining roles, how do we determine the minimum amount of resources? And it's literally: you just add them together. I think ODF — OCS — with 4.8 now supports a compact cluster deployment. Yes, so if you only have three nodes in the cluster, you would want to sum all of those resource requirements together.
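The worker-replicas change described above — the one edit that turns a standard install into a three-node compact cluster — can be sketched as follows. This is purely illustrative: the structure mirrors the `install-config.yaml` fields being discussed, built as a Python dict for demonstration, and the cluster name and base domain are placeholders.

```python
# Illustrative sketch: a minimal compact-cluster install-config, expressed as
# a Python dict mirroring install-config.yaml. "edge-demo"/"example.com" are
# placeholder values.

def compact_install_config(name, base_domain):
    return {
        "apiVersion": "v1",
        "baseDomain": base_domain,
        "metadata": {"name": name},
        "controlPlane": {"name": "master", "replicas": 3},
        # Setting worker replicas to zero is the key change: the three
        # control plane nodes also run workloads.
        "compute": [{"name": "worker", "replicas": 0}],
        # Platform "none" is the platform-agnostic deployment: physical or
        # virtual, but no cloud provider integration.
        "platform": {"none": {}},
    }

cfg = compact_install_config("edge-demo", "example.com")
print(cfg["compute"][0]["replicas"])  # 0
```

Everything else in a real install-config (pull secret, SSH key, networking) stays as it would for a normal install; the sketch just highlights the two fields the discussion centers on.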
So, for a control plane node it's four vCPUs and 16 gigabytes of RAM, and for a worker node it's two vCPUs and eight gigabytes of RAM, which is how we get to six and 24 here. For OCS, I don't remember what the numbers are, but you would basically just add those on top and you reach your minimums. Exactly.

The other thing I wanted to highlight — and you kind of already said this — is that with a three-node cluster you can absolutely add additional worker nodes after the fact, and you can even mark the control plane as non-schedulable after the fact. So that's the "hey, I started out with this small cluster, things got out of control, now I have a 50-node cluster and I don't want the control plane scheduling workloads anymore; I want it focused on doing control plane things" scenario. Very good. Single node OpenShift, on the other hand, you cannot add nodes to. Not yet, no — it's one of the things we're looking at. Two-node HA is another thing that's being considered. But right now, if that node goes down, that node goes down; you replace it and start from ground zero.

Here's an interesting thought for you, Mark. Could I deploy a standard cluster, like a vSphere IPI, and then, day two, mark the control plane as schedulable and scale down the worker nodes? That's a good question. I think that would definitely require a node reboot, but I don't know. Do we test this? Absolutely not. Could that work? I think that possibly could work, yes. So that would be one way to get a compact cluster — even though it's not a compact deployment — with a cloud provider integration, if you wanted to use a storage provisioner or something like that. That's true. Very true.

Okay, next slide: single node. This is available for use — install — only with the assisted installer currently, right?
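The role summation just described can be made concrete with simple arithmetic. A sketch using the per-role minimums quoted above (4 vCPU / 16 GB for a control plane node, 2 vCPU / 8 GB for a worker); an OCS/ODF role would be one more entry added the same way:

```python
# Back-of-the-envelope resource summation for combined roles, as discussed:
# a node serving multiple roles needs the sum of the per-role minimums.

ROLE_MINIMUMS = {
    "control-plane": {"vcpu": 4, "ram_gb": 16},
    "worker": {"vcpu": 2, "ram_gb": 8},
}

def combined_minimum(roles):
    """Sum the minimum vCPU and RAM requirements across the given roles."""
    total = {"vcpu": 0, "ram_gb": 0}
    for role in roles:
        for resource, amount in ROLE_MINIMUMS[role].items():
            total[resource] += amount
    return total

# A compact-cluster node is both control plane and worker:
print(combined_minimum(["control-plane", "worker"]))
# {'vcpu': 6, 'ram_gb': 24}
```

Which matches the six vCPUs and 24 GB quoted for the three-node cluster minimums.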
So you go to cloud.redhat.com and you can try this today via the bare-metal assisted installer. It's dev preview in 4.7, and I think in 4.8 as well; all you need to do is check the box. Current minimum requirements — and this may have changed within the last few weeks — are eight vCPUs and 32 gigs of RAM. The ultimate goal is to get this down to two vCPUs and two gigs of RAM. Man, that's a lofty goal. Yeah. There's a blog post in the works on the core reduction for single node, and man, I don't think I had enough physics in college. It's a lot.

So anyway, right now single node is assisted installer only, and there are your minimums. I've seen it done on pretty small things — Supermicro E800s, I think. And I know the public sector has been playing with single node, and with compact clusters on HPE hardware — that four-node-looking thing that they're adding additional security features to, to bring it up to spec for some government agencies.

Cool. Yep. Next slide. Okay: ACM, Advanced Cluster Management. So it's managing the edge just like the core: multi-cluster lifecycle management, policy-driven governance, risk, and compliance, and advanced application lifecycle management as well. If you roll to the next slide real quick, we get a little more in-depth look at what ACM looks like from an edge perspective, where you can kind of see, at the far edge, what's in
Kubernetes node control versus the cluster management. As well as, if you've got ACM at the core, you can also add ACS there for vulnerability analysis and image assurance; that's going to solve a lot of headaches from a security perspective for those nodes at the edge. So this is just a high-level look at what it looks like to manage edge clusters from a core central data center running ACM.

One other thing in ACM — tech preview in the latest ACM 2.3 — is zero touch provisioning. It's aimed at the regional, distributed, and on-prem deployment perspective. It's an automated path from uninstalled infrastructure to getting applications onto an OpenShift cluster: you start with a site plan, you have your manifests in Git, and then you deploy via ZTP. It integrates the existing technology stack for ACM — Hive and metal³ — plus the work being done on the assisted installer. The goal for zero touch is minimal prerequisites; hopefully we can get it to the point where an untrained technician can scan a barcode to trigger an install. That's the goal. Deployments can be customized as well: connected or disconnected, IPv4 or v6, DHCP or static IP, and it supports both the UPI and IPI deployment frameworks.

Mark, I don't want to cut you off, but you're at time. Yeah, Chris reminded me.
We do have a hard stop today. So, thank you, Mark — really appreciate you coming on today. I will share the blog post with you so we can make sure we capture all of this information when we publish it later this week. Chris, good luck with that screen-share thing there; that was really weird.

To our audience, thank you so much for joining us today. Really great conversation, and we appreciate you and all your questions. Please don't hesitate to reach out at any time: you can contact me at andrew.sullivan at redhat.com, or on Twitter at Practical Andrew — just like you've seen my username here on Twitch and rebroadcast across all the other platforms. Don't hesitate to reach out if you have any questions or if there's anything we didn't get to today. With all of that said, thank you so much. We will see you next week, when we'll have Christian on to talk about bring-your-own-host for Windows containers. Nice. All right, thanks, everybody. See you soon.