Hello, wherever you are in the world. I'm Johnny Rickard, and I'm going to be the host today of the Ask an OpenShift Admin live stream. I'm joined today by Rhys Oxenham, and I'll let him introduce himself in just a second, but I just wanted to say I hope you're having a great Red Hat day. We've got a lot going on in the Red Hat world since last night, so things are a little bit crazy. Just to get it kicked off, if you're not familiar with the Office Hours series, the Ask an OpenShift Admin live stream is part of the Red Hat Office Hours series. Like any other office hours, if you have questions, we're here to answer them, so any outstanding things you may have about OpenShift, or Kubernetes in general, let us know and we'll try to get them knocked out for you.

So real quick, let's go over the top-of-mind topics. I'll save the biggest one for last. Next week, on July 21st at 10 a.m. Eastern, the What's New in OpenShift 4.11 stream is going to cover what's coming in the 4.11 release of OpenShift, so make sure you tune in for that to see what features will be there and get more insight into what we're putting out there for you. The next big thing is that Kyverno, a policy engine for Kubernetes, moved to incubating status under the CNCF umbrella. This is going to be an awesome project. If you're familiar with Gatekeeper, it's very similar, but instead of having to learn a separate policy language, I think with Gatekeeper it's OPA's Rego or something like that, Kyverno uses native Kubernetes-style tooling, so you don't really have to learn anything else except how to write the policies you need to keep your cluster secure. And the last one is the big news that came out last night: Matt Hicks is stepping up as the CEO of Red Hat. There was an announcement made yesterday on our website. I don't know, Stephanie, if you have the link, could you post it in the chat for those who aren't familiar? Paul is moving over to the chairman role and Matt Hicks is going to be the CEO. I think it's going to be pretty awesome. He's part of the OpenShift OG team, so having that experience, and having been a developer for a long time, I think it's going to be good for Red Hat. It'll be interesting to see how things move forward.

All right, so with all of that out of the way, let's talk about edge and data center, and let's introduce our guest, Rhys. So Rhys, can you tell us a little bit about yourself?

Yeah, absolutely. Thank you so much for having me back. My name is Rhys Oxenham. I'm a director in the customer field engagement team. We work very closely with some of Red Hat's largest customers worldwide, trying to make sure that they're successful, but we also try to take a lot of the lessons learned, understanding some of the gaps, the features, the bugs, the issues that customers are running into, and help our product teams make better decisions: how do they prioritize features and how do we build them out? My team works a lot on the core platform side of things, so everything from getting servers up and running: deployment, infrastructure, storage, networking, virtualization. And we primarily do all of those things with OpenShift and Kubernetes.
We've also got a lot of experience around some of the other technologies in the hybrid platforms portfolio. Historically we've done a lot of work with OpenStack, for example. And I'm delighted to be here to talk about, as we've called this, going from the data center right out to the edge, because my team does a lot of work in the edge space as well, understanding how our customers are trying to leverage some of the new edge topologies.

Yeah, awesome. In our pre-show we were talking about this a little bit. There's a lot of tooling coming out that Red Hat is developing and putting out there that's really built around this use case of edge to data center, data center to edge and back. And for a lot of us, me and Andrew specifically... and by the way, Andrew's in training, I did forget to mention that. He's in training, he's out, taking a day off, trying to get his learn on.

Yeah, I'm sorry, Christian, you'll have to make do with me.

Yeah, suck it up, Christian, you'll be okay. But yeah, when we talk about the edge, I think a big question that comes up all the time is: what is edge? So what's your explanation of what edge computing is?

Yeah, sure. When I think of the edge, what we're really talking about is shifting data, applications, and the processing of that data, and the location of those applications, quite close to where the actual subscriber or the user of that data is. For decades, organizations have been deploying infrastructure in centralized data centers. Those data centers have been distributed for resiliency and things like that, but they've still been, quote unquote, centralized somewhere, be that at colos or within the organizations themselves. What we're really talking about is pushing that data much closer to the subscriber where they might want to access it. I've used this example quite a few times: when you look at something like content distribution, rather than relying on a bunch of centralized data centers and paying the cost of the bandwidth and the latency to get that data out to the subscriber, what if you were able to push that content much, much closer and have tens, hundreds, thousands of individual distribution centers much closer to that subscriber? The cost of that data transfer and the latency of accessing that data is much smaller, it makes your network much more efficient, and it makes it much easier to get access to that data. And we also think about some more extreme examples. What if I want to put infrastructure on a cargo ship or a train? Or, when you look at a military example, putting infrastructure on a soldier, as in, they'll have a wearable of some sort. And so we at Red Hat, for our software platforms, really have to think about how we change or adapt our solutions to fit this wide variety of new use cases and designs.

Yeah, I actually love the soldier example, because I've thought about it a level higher. I've worked on a couple of projects where we were putting Kubernetes on airplanes, and we talked about putting it on military vehicles and things like that when I was over in NAPS. But to take it a level further and put it on a soldier, that's pretty interesting to think about. I can't wait to see something like that. That's going to be awesome.

Yeah, absolutely.
And we try to adapt our software, we always have done, to maximize its reach. There's no one-size-fits-all solution for our software. Edge, like many other verticals or markets (telco is the perfect historical example), has pushed us into new areas. We've really had to explore quite extreme requirements and expectations, and edge is, to a certain extent, an extension of the requirements for telco. I can see you smirking.

Yeah, absolutely. No, no, I thought Christian would be being difficult already.

So we've really had to think about some really extreme expectations and requirements, and try to build some of that resiliency and flexibility into our software to accommodate them. Christian jokes, right, but there may well be a situation where there's a server or a piece of hardware running in a remote environment, and our customers expect that it has the resilience required, with as close as possible to zero human intervention; that it has all of the automation it needs to rebuild itself in the event of a failed upgrade; that it has no reliance on core infrastructure for pretty much anything; and that it can be as autonomous and standalone as possible. And so we're having to build software platforms, or modify our existing software platforms, to cope with those environments. You likely saw the announcement we had this week with ABB, talking about some of the work we're doing with them and some of the requirements they have from an industrial perspective. It's projects like that that give birth to projects like MicroShift, which is the absolute smallest possible footprint of an OpenShift or Kubernetes-like implementation that we're working on. So, lots of flexibility.

Yeah, it's awesome. This morning has been back-to-back calls nonstop, but one of the calls earlier was talking about how Red Hat is great at making products, right? We've done a really great job of that. But a lot of the conversation around solutioning, that's where there's probably some room. And I feel like this is where we're starting to fit that mold. We're starting to put solutions out there for people: hey, here's how we've solved this problem, here's how you can go do this. Even with my team, what we're doing with the validated patterns is saying: here's the art of the reality now, not just this whiteboarded art of the possible. We're actually putting foot to the pavement, we're actually getting stuff done, and here's how you can take the same thing and do it. That's pretty good.

Yeah, no, absolutely. Standardization, consistency, but also flexibility is front and center with pretty much everything we do. As I said, we recognize there's no one size fits all, but we also have to try to provide, as you say, solutions to customers who have a wide variety of expectations.

Yeah, awesome. So real quick, before we get too far into it, make sure you like and subscribe on YouTube, guys. If you're on YouTube or on Twitch, make sure you like the channel so we can keep this content coming to you, and try to get Rhys on here more often,
and have him come talk about some more of the awesome stuff he's working on. So Rhys, I know we kind of brought you in for this big umbrella conversation.

Yeah, I didn't quite know what to prepare for, so let's see what sort of questions we get in the chat when we start talking a bit more about some of the technologies we've got in our portfolio.

Yeah, right on. So do you want to go ahead and start kicking off some of your demos, or some of the things you want to talk about?

Yeah, absolutely. Let me have a think about showing MicroShift at a really simplistic level. I've already alluded to MicroShift being this super thin, lightweight implementation of OpenShift. You can think of it as just a single binary with very minimal hardware expectations, targeting multiple architectures, that you can pretty much run on anything; it can be as small as possible. I'm just going to very briefly share my screen, and I hope this works. Will you let me know if it doesn't?

Yep, it does. If you could just make it a little bit bigger.

I'm not sure I can, because I'm in presentation mode.

Okay, well then it should be fine. Are your slides public?

Well, yes and no. This comes from a public presentation that a colleague of mine did at OpenShift Commons last week, so I think we're absolutely fine sharing it. Maybe we can put the link somewhere, or maybe I can just zoom in here and show you. I'm not really sure how this is going to work.

Awesome. Yeah, it's not working at all. That reminds me of the tweet I saw the other day where this guy was like, whoever does your demos in presentation mode or in viewer mode like this, who hurt you? It's funny as hell. I'm sorry.

No worries at all. So when we think about the various edge topologies that we can support with OpenShift, starting right at the bottom, we have supported for a number of releases this concept of a three-node cluster. You can think of this as a converged master and worker configuration: three nodes, so you have resiliency inside the cluster, it can work autonomously at the edge site, and you can converge storage on there as well. For all intents and purposes you can throw this out at your edge site and it'll work and be absolutely fine. This is more of a full implementation of OpenShift; it has everything you could possibly need and want, just in three individual servers. If we take that up again, we have something called single node OpenShift. Again, single node OpenShift has been available for quite some time, and it's that same full-fat OpenShift implementation, just on one machine. The challenge with that, of course, is how do you make it highly available? Well, it's not really designed to be highly available within a single machine. But for situations where you have the smallest physical footprint and you just want a single server at that site, single node OpenShift gives you that full OpenShift experience in one machine. We also have the concept of remote worker nodes. This again is pushing infrastructure to the edge, but still being reliant on a centralized data center.
So you still have that management capability from a centralized data center, but you're pushing the worker nodes directly out to the edge. And then for very small footprints we have a couple of different options: we've got Red Hat Enterprise Linux for Edge and we have something called MicroShift. MicroShift is what I want to briefly demonstrate to you. So I'm just going to move that out of the way and pull this over.

MicroShift, as I mentioned, is simply a container image, or a binary, that has a very small, lightweight version of OpenShift included in it. All I have here is a Fedora machine. You can see this is a Fedora 36 virtual machine, and if I list the containers running on it, you'll see that it is literally just a MicroShift instance. Everything is contained within this container image, and when it starts up, it just runs the MicroShift binary. So if I do a podman logs on this, you literally see all of the output of the Kubernetes implementation inside of MicroShift, right there, just as if it was any other process on the system. That basically means standing up MicroShift is really as simple as pulling the container image and starting it up, and of course we provide documentation on how you can do this.

Now, because this is OpenShift, we have all of the OpenShift command line tooling. So oc get nodes, and we have that single machine. We can get all of the pods on the machine, and you'll see this is a very stripped-down version of OpenShift: there's no web console, there are no operators. It's very simple and lightweight, because it's designed to run in those environments where you want a minimal configuration that requires very little CPU and memory, and just lets you get started really quickly. Now, even though this is a lightweight version of OpenShift, I can still create pods and do deployments. There are also aspects for doing things like virtualization inside these environments. I can deploy MetalLB if I want external access to some of these services or virtual machines; it makes that super easy to do. I've already deployed a couple of applications here, so if I do a get service in my test namespace, you'll see I've got a couple of different services. The one I'm using to expose nginx is of type LoadBalancer, and I've actually exposed it over MetalLB. This is my IP address, and it's just on a live network, so I should be able to hit it, and I can. So that's just going over MetalLB to my MicroShift instance. And again, because it's OpenShift, I can get my routes, and I've created a route there, hello-microshift.ardioxsim.com, and I should also be able to navigate to this, which I can. So there are a few different ways of getting access to these clusters; just because it's MicroShift doesn't mean you're really hindered from a workload or a networking perspective. So I'm going to stop sharing. I want to see if there are any questions on anything that I was doing.

So we do have one question, which was: can I run MicroShift on Windows?

As far as I'm aware, no, you can't. I believe it's shipped as a container image today. You can build the binary separately, I believe, but no, I don't think you could run it on Windows.

Okay, awesome.
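For anyone who wants to recreate roughly what Rhys walked through, here is a minimal sketch of that MicroShift demo flow. It assumes MicroShift is already running as a container named "microshift" on the Fedora host, that MetalLB is installed with an address pool, and that the kubeconfig path and image name are placeholders rather than anything shown on the stream, so check the MicroShift docs for your version.

```bash
# Confirm MicroShift is the only container running, and peek at its logs
sudo podman ps
sudo podman logs --tail 20 microshift   # "microshift" container name is an assumption

# Point oc/kubectl at the kubeconfig MicroShift generates (path varies by version)
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
oc get nodes
oc get pods -A

# Deploy a test app and expose it the two ways shown in the demo
oc create deployment nginx --image=nginx          # placeholder image; any web server works
oc expose deployment nginx --port=80 --type=LoadBalancer       # MetalLB assigns the external IP
oc expose service nginx --hostname=hello-microshift.example.com   # route; external DNS must point here
```

Hitting the LoadBalancer IP or the route hostname from another machine on the network should then return the nginx welcome page, assuming DNS (or mDNS for the default .local names discussed next) resolves to the MicroShift host.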
And then Christian was asking whether there's a fix for the router default so routes aren't .local; he just sent a link saying it looks like that issue is still open.

Yeah, so for exposing routes on the network, we typically use mDNS. Essentially we can broadcast an available route, so you can be on that network and it'll resolve. It's very similar to how, on your phone, you want to listen to Spotify on your Sonos or whatever: you click broadcast, choose a non-Bluetooth network speaker, and it shows up in the list. That's using mDNS. We do a very similar thing for exposing routes inside the network. And by default, Christian, you're absolutely right, it does use a .local domain. But you'll see that when I created my route, I specified a --hostname, so I can override the hostname that the router ingress uses to identify it, and as long as my external DNS knows that has to go to my MicroShift instance, it resolves just like any other application.

All right, awesome. And then Duane was asking... so Duane's doing some training, some self-study, trying to carry his OpenShift learning over to MicroShift. He says he now needs to plan MicroShift for his home lab, assuming Ansible and SDN tasks carry over to MicroShift. The first part of that, Ansible, no. But the SDN: is it using OVN, or is it using the OpenShift SDN?

I believe it's using a simple Flannel-based network. Looking on this machine, I don't think there's OVN; I certainly don't think there's OVN. We really try to compress everything to the smallest possible footprint it can take up, and that's why, of course, it has some limitations. But it's still consistent: from an OpenShift perspective it's very familiar to use, just in a really lightweight package.

Exactly, exactly. That's awesome. And so this has parity with something like K3s, right?

Yeah, absolutely. A container image, very minimal configuration, spins up very quickly, very simply.

Okay, awesome. And then Adele just asked, what's the difference between MicroShift and OpenShift Local? Which, I was corrected last week, OpenShift Local is the new branded name for CRC.

Yeah, so CRC is kind of a portable (portable is probably the best word) way of standing up an OpenShift instance on your local machine. We essentially download a QCOW2 image, turn it on, and do some work to get into that cluster via local networking. But that's a full-blown OpenShift instance; it takes up a fair amount of resources because it's still a full-blown OpenShift instance. MicroShift, we're talking about a few megabytes. It's tiny, and it's designed to be put on something... I don't want to use the term Raspberry Pi, but I'm sure it could probably run there. We're talking about things like NVIDIA Jetsons: really small yet capable devices, about the size of a Raspberry Pi, that will run right at an edge location where you just need a familiar Kubernetes API you can interact with, that has a simple mechanism for updating, and that has the notion of rapid recovery of the platform after failed upgrades and things like that.
So capable, but also limited in its capabilities, because you wouldn't necessarily want to run your development pipeline there. You're not going to need the OperatorHub running there, or for it to do any additional tasks other than literally run the applications it needs to at that site.

Yeah, exactly. And the thing you're saying is it's the familiar API, right? For all your sports ball fans out there: you practice how you play. So when you're at the far edge with this Pi or Jetson or whatever, running something K3s-sized but with MicroShift, that interaction at the far edge is going to be the exact same interaction, minus the operators, that you'd have back in your data center with your full-blown cluster.

Yeah, exactly.

So then another question: I assume Fedora IoT works, or is there a list of hardware, like Raspberry Pi? Is there a hardware compatibility list for MicroShift?

I don't know whether we have anything built, but I've used Fedora IoT, for two reasons. First of all, it's super easy just to set up Fedora IoT. But secondly, it somewhat mimics how we expect customers will deploy MicroShift. Fedora IoT uses rpm-ostree, so it's a largely immutable file system and has built-in recovery. Let's say, for example, we upgrade the base OS: if it detects a failure when the machine comes back up, it'll reboot and revert back to the previous version, whereas a typical operating system wouldn't do that and would expect some kind of user intervention. So it's an operating system like Fedora IoT or RHEL for Edge that we'd expect customers to deploy MicroShift onto. But MicroShift is just a container image, right? So you can run it pretty much anywhere, which gives it great portability, great flexibility, and the hardware requirements are minute.

All right, awesome. And then one question that came in was: how can zero trust practices exist on MicroShift, or in MicroShift use? Maybe Duane can elaborate on what he really means there. So Duane, if you have an example of what you're looking at, let us know. So with MicroShift, like we'd seen before in a previous stream when we had some of the MicroShift team on, you can do some of the management of the cluster through ACM and things like that. So is the goal to manage all of your MicroShift clusters at the edge and beyond through ACM, or is there going to be another tool?

Yeah, so we're looking at a sort of fleet management solution, right? Because if you've got lots of these devices out there, you obviously need to think about patch management: how do I rebuild not just the underlying operating system, but also how do I re-spin the Kubernetes image, essentially the MicroShift image, and how do you do that? So we are absolutely looking at fleet management tooling and how you actually look after that. But a lot of it is... well, ultimately it's still an OpenShift or a Kubernetes implementation, and so when you're looking at distributing your applications, you can still hook into it from an ACM perspective, an open cluster management perspective. So provisioning applications to it is just like provisioning anything to a, quote unquote, normal OpenShift implementation. Again, it's about consistency. It's the same, but just in a very small package.
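The upgrade-and-rollback behavior Rhys described a moment ago for Fedora IoT and RHEL for Edge comes from rpm-ostree plus a boot health-check mechanism (greenboot). Here is a rough sketch of what that flow can look like; the health-check script, its path, and the MicroShift readiness probe are illustrative assumptions rather than anything shown on the stream.

```bash
# rpm-ostree keeps the previous OS deployment around, so a bad upgrade can be reverted
rpm-ostree status            # shows the booted deployment plus the rollback target
sudo rpm-ostree upgrade      # stages the new deployment; it takes effect on reboot
sudo systemctl reboot

# Manual rollback if the new deployment misbehaves
sudo rpm-ostree rollback --reboot

# Automatic rollback: greenboot runs health checks on every boot and, after repeated
# failures, boots back into the previous deployment. Example check (hypothetical):
sudo tee /etc/greenboot/check/required.d/40-microshift-ready.sh <<'EOF'
#!/usr/bin/env bash
# Fail the boot health check if the local Kubernetes API never answers
timeout 300 bash -c 'until curl -ks https://localhost:6443/readyz >/dev/null; do sleep 5; done'
EOF
sudo chmod +x /etc/greenboot/check/required.d/40-microshift-ready.sh
```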
Gotcha, gotcha. So fleet manager: is that the rebranding of something, or is that another project?

That is a good question. I'm not totally sure on all of the details around the fleet management stuff, but it is something that's actively being worked on.

Okay, awesome. Yeah, because when Andrew and I were looking at this, we noticed there are all of these things coming out that seem similar, but when you start digging in, they all have a purpose, right? We talked about the central infrastructure manager, and then we have ACM, and then we have fleet manager, and then... I don't know if I should say this out loud, I feel like I'll probably get in trouble, but there are some other projects coming out that are kind of like single-node Kubernetes, not quite MicroShift, maybe a bit bigger than MicroShift, but not as big as CRC, or, I'm sorry, OpenShift Local. You know what I mean? There are all these things coming out to manage the edge and the far edge and so on. It's an interesting space because it's growing like crazy.

It is, and as you've said a number of times, there's no one size fits all. We can't just say, okay, every customer that wants to do edge has to do single node OpenShift, because single node OpenShift is still a full-fat OpenShift implementation. You boot it up and it still has the same requirements and expectations as any other OpenShift node in any other cluster type, so you still have to have a fairly comprehensive machine at each of the edge sites. Well, what if the edge site is somewhere we want to provision to Jetsons, or, going back to that example, the soldier has to carry this infrastructure? It has to be really, really low power, and have all of these expectations around self-recovery, fault tolerance and things like that that we have to address. And that's where solutions like MicroShift come in.

Yeah, a hundred percent. I think it's awesome, I really do. Like I said, I was on a project where we put Kubernetes on a plane, and I think at that point MicroShift was very early, very early in its public awareness, even within Red Hat, I think. If it had been ready, it would have been the perfect solution, because we had OpenShift back in the data center and we wanted to keep that same workflow and pipeline and tooling all the way out to the plane. It would have been amazing to put it out there. I've been on a couple of projects where we've been this close, where it would have been perfect but the timing was off, you know?

Well, talking about putting things on planes: if you want to go have a look at the Endurance project, there is actually an instance of MicroShift that went out into space.

Oh, I did see that, yes.

Yeah. That's a little bit higher than a typical plane might fly. MicroShift has made it into orbit.

Yeah, it's pretty awesome.

Again, pushing the boundaries of where our software has to be able to operate, I think, is really cool.

Yeah, I totally agree.
So Duane asked if there's a plan for the MicroShift container to be a UBI or certified image. I believe it's already built on UBI, but...

Yeah, I think it is absolutely built on UBI, so it should be absolutely compatible for that.

Do you know, and this may not be public knowledge yet, do you know when the transition from project to product, or whatever that's labeled, is going to be?

No, I don't think we can talk about anything like that.

Okay, cool. All right, awesome. And then the other part is, Christian thinks it's pretty awesome that you're drinking water out of a whiskey glass.

Yeah, everyone asks that. I mean, I think this is somewhat normal. Maybe it's not normal, I don't know, it's a European thing. It seems like a normal glass to me, but everyone seems to call it a whiskey glass.

Well, you're getting a bunch of street cred for drinking water out of a whiskey glass with us. There we go. So we briefly talked about the central infrastructure manager. Did you want to talk about that and show some of it off?

Yeah, absolutely. So we've been working on a number of different things, as you say: single node OpenShift, MicroShift, HyperShift (which is something we can perhaps come on to later), but also looking at next-generation ways of deploying infrastructure on premises. And we touch on the concepts of zero touch provisioning and zero touch provisioning for factory workflows. So I did want to briefly demonstrate some of the ways we can do zero touch provisioning through the central infrastructure manager, which is a component of ACM, or of MCE for a more lightweight option for customers who don't necessarily need all of the additional capabilities of ACM, and its ability to drive these zero touch provisioning type workflows.

So what do we mean by zero touch provisioning? Well, I as an administrator want the ability to define my nodes and, through a few clicks, say: this is what I want my cluster to look like, these are all the credentials you need, now go away and deploy OpenShift. From a true zero touch provisioning perspective we're talking about powering the machine on, attaching virtual media (an ISO via virtual media), and the cluster bootstrapping itself. I don't have to worry about external requirements like DHCP. I don't have to worry about external load balancers or anything. It's a programmatic way of deploying and controlling pretty much the entire end-to-end model.

So, vodka on the rocks.

Indeed. No, not indeed. So let me share my screen again and I'll show you some of the workflow you can do here. Share screen, my entire screen, and I hope this is going to be big enough; I'm not sure I can make it much bigger without distorting the UI. So I just have an ACM, Advanced Cluster Management for Kubernetes, environment here, and we have this concept called infrastructure environments.

Hang on just a second, Rhys. We need to get Stephanie to share the screen for you. I apologize.

You're all good.

Just so everybody knows, Stephanie is kind of straddling two meetings right now, so we've got her super extended. Thanks, Stephanie. All right, are we good to go?

Yep, good to go.

Awesome. Okay.
So I'll define an infrastructure environment, and this roughly allows me to group... I don't want to use the word cluster. It allows me to group a set of hosts together. That grouping could be, well, it's that data center, or it's this particular region, and I can then go ahead and build out clusters from it. So I'll go ahead and say create infrastructure environment. I'm going to give it a name and call it SNO3... sorry, SNO2, because I'm going to deploy a single node OpenShift environment from it. I'm going to say the location here is Cardiff, which is where I am. Network type: I can use DHCP or static, so you can define all of your networking information statically. And I just realized I'm going to have to stop sharing my screen to put my pull secret in here. Otherwise... this is funny, I actually managed to leak my pull secret in another stream, I think the first time I did this, and then I had to go through all the motions of revoking it, because it would essentially allow anyone to deploy a cluster as me. So I'll quickly run through these options, then I'll eventually click next. I put my pull secret in there, which, as I'm sure everyone watching knows, is a way of authenticating myself so I can pull container images and integrate with the Red Hat console on redhat.com, and it attaches everything through the cluster manager and things like that. So I'll just quickly stop sharing, pull this information in, and reshare after I've hit the create button. Bear with me.

Yep. And trust me, we've all done it. I think both Andrew and I have gone through that and have new pull secrets.

Yeah, absolutely, and I really don't want to have to do that again. It wants my SSH key here as well, so I'll pop that in there and I should be good to go.

Yeah. Okay. Quick look at the chat: you've exposed your AWS secrets.

Oh yeah, and those are much smaller strings, right? So you could really quickly grab those and provision something probably before the stream has even finished. So yeah, I'll go ahead and reshare my screen now that I've pulled them in. And Steph, I apologize for that; we get to go again.

So I'm just sending Stephanie a chat real quick. No worries. Stephanie, if you're listening, could you enable the screen share again? I think the cool thing, too, is that looking at it through the ACM menu, everything you're going through looks very similar to the Assisted Installer interface, right? So the whole consistency story you've been talking about, I think it's good.

Absolutely, and that's actually a really good segue. A lot of the experiences and lessons learned that we've got from the Assisted Installer carry over; in the workflow and the wizard, when the screen sharing comes back on, you'll see a lot of familiarity, and you define your cluster in a very similar way to the Assisted Installer. It makes things so much simpler, and our customers really like that approach. So being able to drive it in exactly the same consistent way is absolutely key.

Yeah, awesome. All right. So we might be in a little bit of limbo here waiting on the screen share. No worries.
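Under the hood, that "infrastructure environment" wizard is creating an InfraEnv resource for the central infrastructure management / assisted-service components. As a rough, hedged sketch (the names, namespace, and the commented-out static-networking selector are illustrative placeholders, not what Rhys actually typed), it looks something like this:

```bash
oc apply -f - <<'EOF'
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: sno2
  namespace: sno2
spec:
  pullSecretRef:
    name: pull-secret            # Secret holding the Red Hat pull secret (name is an assumption)
  sshAuthorizedKey: "ssh-ed25519 AAAA... user@example"   # placeholder public key
  # The demo used DHCP; static addressing would instead reference NMStateConfig objects:
  # nmStateConfigLabelSelector:
  #   matchLabels:
  #     infraenv: sno2
EOF
```

Once the InfraEnv is reconciled, it exposes the discovery ISO that the wizard offers for download, which is also what the BMC-driven flow attaches as virtual media in the next step.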
How long does it take generally? Because I know roughly what the Assisted Installer takes, I mean, 15 minutes, maybe 20. Is it about the same?

So there are so many variables, unfortunately. If you're doing real bare metal, some of these machines can take 10 minutes just to get ready, and there are a couple of reboots involved during some of those steps. So via the Assisted Installer or via the zero touch provisioning model, with some bare metal machines it can take an hour. There's also: where are the container images being served from? Is it a local registry or are you pulling from the internet? What sort of bandwidth do you have? Is there a proxy involved? There are quite a lot of variables, so I can't say, okay, it's going to take about 20 minutes. But I would say about an hour might be typical on real metal. I'm going to use a VM in this example. I mean, there are only 20 minutes left of the stream anyway, so we'll probably be able to kick it off, but I don't think we're going to be around to see the cluster actually finish.

But I'm pretty sure: it depends. Yeah. I kind of teed that up a little bit, because that question comes up a lot: is it going to make things faster, what's the timeline? But it really does depend. It's as variable as variable can be.

Exactly. But I think what it likely does really help with is this: historically, we've offered our bare metal IPI workflow, right? Which is, okay, I download the OpenShift installer, I define all of my nodes in my install-config, I run it, and it goes out there and provisions all of my nodes. The way we do it with zero touch provisioning is slightly different in that we don't require a separate bootstrap machine for the initial stage. We actually turn one of the nodes into the temporary bootstrap machine and do what's called a bootstrap-in-place, and that simplifies the deployment somewhat. And the mechanism I'm showing you here is all UI, but of course you can drive it via the API, and you can define pretty much everything I'm doing here in YAML files. And if you want to drive a lot of this through Git, then, I know, Christian, you'll love to hear this, you absolutely can. I'm just showing it from a demo perspective via the UI.

Yeah, 100%. And just so you know, the screen share is back up.

Oh, brilliant. Okay, cool. All right. So I've now created this SNO2, and it's just a single node OpenShift environment. I called it SNO2 purely because I'm going to do a single node OpenShift deployment and we've already got things registered in DNS for it. So it's looking good, but there are no hosts. And again, for those of you familiar with the Assisted Installer UI, this is probably looking very familiar. We can go in here and say, well, I want to add a host. We have a discovery ISO ready for us, so if we wanted to attach that image directly to our machines and boot them up, we could do it very much like the Assisted Installer. However, if I also want to do more of a zero touch model, I can go straight in and say, hey, you've got access to the BMCs, which is things like iLO, iDRAC, XClarity, so go directly to these machines. And in here (let me get rid of that) I've got some of these credentials stored.
So I'm going to put my host name in there, and my management controller address in here; and it certainly doesn't matter if we leak these, these are all VMs. The MAC address that it wants; the username, I think, is redhat, and a super secret password of "password". And is that everything I need? Yeah, I think it is, because we selected DHCP. So I'll go ahead and create that, and it's going through a number of different stages. It's going to start doing what we call an inspection, so it's going to go and inspect this machine, and just like the Assisted Installer would do, it's going to fill out some of this information. It says bare metal host; it's not really, but it's going to go away and fill out all of this: information about memory, disk layout, networking configuration, and basically get it ready to work. Now, we use Redfish and virtual media, so we've got a fairly standardized way of deploying to various different bare metal platforms; pretty much anything that supports Redfish, we can do deployments with. So this makes it really straightforward: we can power these machines on and attach the ISO directly. You'll see that I didn't have to go into the console of this machine to attach the ISO; it did it, and it did it very quickly. Again, this is a virtual machine, not a real bare metal machine; that's why it was so quick. But we're able to really go ahead and use this.

So we've now created this infrastructure environment and we've got this node registered. What I can then do is go in and create a cluster, and we use this on-premises option, which is where the CIM comes in. We'll go and select this, we can go in here and do SNO2, cluster set (we can choose one if we want to), base domain. My cluster name plus my base domain have to match what's registered in DNS. I'm going to say I want to install single node OpenShift, because I've only got one node, and of course it gives us the typical caveat: it's not going to be HA, and we can't currently add additional nodes. This is a slightly older environment, so it doesn't know about OpenShift 4.10, but that shouldn't matter for the purposes of our demo. It's got my pull secret down there; hopefully it'll let me move past that. Yeah, okay, good to go. And this is reviewing it. Now we get to choose our hosts: one host matching, which I suspect is the node we just did. There's my VM, okay? That's the one we just made. So it's going to go ahead and... so here it's finally filled out some more information about our virtual machine: KVM, 30 gigs of memory, more information about the available disks (we've got a number of disks here because perhaps we want to do some OpenShift Data Foundation storage), and it's got one NIC that's on our corporate network. I select a subnet; I can use advanced networking if I want to, and go in here and say I want to override the default CIDRs. It's going to want my SSH key, which I can...

That's pretty awesome. I think we've talked about the infrastructure manager before, how you just pull these machines in there and they're just ready and waiting. I think that is so awesome.

Yeah, it is. And it's very familiar, again, to those of you that have used the Assisted Installer. Driving this is super simple.
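That "add a host via its BMC" step can also be expressed declaratively. A hedged sketch of the kind of BareMetalHost and credentials Secret it boils down to is below; the Redfish address, MAC, namespace, and the InfraEnv label are all placeholders, and the exact labels CIM expects may differ between releases.

```bash
oc apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: sno2-node1-bmc-secret
  namespace: sno2
type: Opaque
stringData:
  username: redhat           # the demo's throwaway credentials
  password: password
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: sno2-node1
  namespace: sno2
  labels:
    infraenvs.agent-install.openshift.io: sno2   # ties the host to the InfraEnv (label name is an assumption)
spec:
  online: true
  automatedCleaningMode: disabled
  bootMACAddress: "52:54:00:aa:bb:cc"            # placeholder MAC
  bmc:
    address: redfish-virtualmedia://192.168.1.50:8000/redfish/v1/Systems/1   # placeholder BMC endpoint
    credentialsName: sno2-node1-bmc-secret
    disableCertificateVerification: true
EOF
```

With something like that in place, the controllers power the machine on, mount the discovery ISO over virtual media, and the inspected host shows up as an available agent in the infrastructure environment, just as it did in the UI.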
And as you say, you've got this pool of hardware. The administrator can go ahead and define all of the hardware that's within a given infrastructure environment, and again, that infrastructure environment could be a data center, it could be a region; it's down to organizations to define. But then I, as maybe an infrastructure... sorry, a cluster administrator, need a new cluster, and it has to be on bare metal for lots of different reasons. I have these various pools available to me. I selected single node, it looked for an available host, and it used the one we just created. But you could be pulling from a pool of tens or hundreds of available machines. So this is simply going to go ahead and provision, and for those of you familiar with the Assisted Installer, you'll see it's already shown me the cluster API address and console URL, and shortly we'll have a kubeconfig and the kubeadmin password available to us. And remember, this is being driven through ACM, so when this comes up, it's just another cluster that's available through ACM: all of your typical policies can be applied, and you can manage it just like any other cluster.

That is... yeah. Christian made a comment in the chat that it's really cool. Man, I agree completely. This is really awesome. And that's something you clicked through a little bit ago: we're going to bring some of the Ansible team and ACM team on to talk about integrating with AAP and Ansible and things like that, so a lot of your post-config you can do through that.

Oh yeah. Yeah.

This is going to be amazing. Yeah, sorry, I can't wait to see this in a full-on...

Yeah, I probably clicked through that bit a little quickly in the interest of time, but yes, absolutely: post-deployment hooks into AAP, you can absolutely do those things. One of the things we've really worked on from an ACM perspective is making sure that it's tightly integrated and that you can just go ahead and call those things.

Yeah. Man, I cannot wait to see that, like a full, complete, in-action workflow. I think that's going to be amazing. And then Christian asked: is this managed by Ansible on the back end, or what's doing all the back-end management?

So we're not using Ansible to drive this deployment. What we essentially do here is we have an agent, essentially the assisted installer agent, that gets injected into an ISO that we attach to these machines. The agent reports back into the ACM instance we have here and receives its instructions, and so we're able to send down, okay, this is the configuration for the machine, and it's able to report back all of the various events and what's going on. You can see here it's downloading the container image, and this will continue to increase as it writes the CoreOS disk image to the hard drive of the machine and continues to report in updates. So this is reusing a lot of the work we did through the Assisted Installer and simply exposing it via ACM.

So what's the maximum, I don't know if you know this or not, but what's the maximum number of nodes that you can have in a pool?

I don't know that off the top of my head, I'm afraid.

Okay, no worries. Just this whole capability here is awesome, right?
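For reference, the cluster half of what the wizard just did is also driven by a pair of declarative objects on the hub: a ClusterDeployment paired with an AgentClusterInstall. The sketch below is hedged; the names, image set, agent selector, and network ranges are illustrative placeholders rather than what was entered on the stream, and field details can vary by version.

```bash
oc apply -f - <<'EOF'
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: sno2
  namespace: sno2
spec:
  clusterName: sno2
  baseDomain: example.com            # cluster name + base domain must match DNS, as in the demo
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          infraenv: sno2             # illustrative way of picking agents from the pool
  clusterInstallRef:
    group: extensions.hive.openshift.io
    version: v1beta1
    kind: AgentClusterInstall
    name: sno2
  pullSecretRef:
    name: pull-secret
---
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: sno2
  namespace: sno2
spec:
  clusterDeploymentRef:
    name: sno2
  imageSetRef:
    name: img4.10.20-x86-64          # a ClusterImageSet naming the release image (placeholder)
  networking:
    clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
    serviceNetwork:
      - 172.30.0.0/16
  provisionRequirements:
    controlPlaneAgents: 1            # single node OpenShift: one control-plane agent, no workers
  sshPublicKey: "ssh-ed25519 AAAA... user@example"
EOF
```

Once enough agents are approved and bound, the install proceeds exactly as shown in the UI, and the resulting kubeconfig and kubeadmin password end up in Secrets on the hub for that cluster.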
Going back to your slides earlier about the remote workers and things like that: I was on a project where it was essentially clusters as a service, where the customer was saying, hey, I've got these developers that are going to come on, they're going to be at their own site, they're going to have their own account within AWS or Azure or GCP or whatever, and they're going to want to spin up their cluster, which could be one or any number of nodes. Man, it's so awesome. It's bittersweet, right, because it was just a timing thing. But right now it's like, man, this is the answer.

Yeah, absolutely. And you can see here, again for those of you that know the Assisted Installer, it chooses one of the nodes to be the bootstrap machine, which is really cool because you don't need a separate one. You can start off with: I literally have one machine, and I need you to provision OpenShift on it without any other machines being involved. And that's where we're going with a true zero touch provisioning model: I want to simply create an ISO, attach that ISO to a machine, and have it bootstrap everything in place, doing everything without reliance on an external machine. So this is the first incarnation of that. We choose one node to be the bootstrap machine; it gets the cluster up and running, reboots, and becomes a fully fledged cluster node. When you have three, a non-single-node OpenShift implementation, one of the machines becomes the bootstrap, the other two machines stand up a temporary two-node cluster, and once that cluster has been established, the third node reboots, pivots, and joins the cluster. So again, you don't need that fourth bootstrap node like you do with an IPI install today, or even a UPI install today.

Man, this is so awesome.

Yeah, and this is coming along pretty quickly, actually. I still don't think it's going to finish in the next five minutes, but...

Yeah, no worries. I was just reading the comments. Pete Lauterback made a comment: though the maximum nodes is interesting, the proper workflow is to bring up a basic cluster, then scale the cluster by adding the nodes you need; deploying a massive cluster is a K8s anti-pattern. Yeah, I totally understand that, Pete. It's just, if you've been on any customer call, the one question is always, well, how many nodes can I have in a pool at one time? So, yeah, I get it. No, this is exciting, you know.

Yeah. Wow. Yeah, it is really cool. And if you allow me to just carry on ranting away...

Yeah, go for it, please.

Another thing we've been really working on is zero touch provisioning for factory workflows. The idea behind this is, well, we have customers (and it can be for lots of different use cases, but the edge is one of the big ones) where you might want to provision your OpenShift infrastructure at the factory and then ship it to an edge location, or to your end customer, and have it embedded inside their environment. Well, traditionally with OpenShift, and I shouldn't really say OpenShift, it's more a Kubernetes thing, when you want to change IP addresses, cluster names, any names that it knows about, it's just not something that's simply feasible to do.
And so zero touch provisioning for factory workflows is an implementation that basically allows you to provision clusters at the factory, get them ready to go with a kind of base configuration from a networking perspective, ship them out to an edge location, add additional doorways into the cluster, and have it become part of that edge location's networking construct. So what I think would be a great follow-up topic is to talk about some of the work we're doing there and how organizations can deploy some of those clusters in a very similar way to what I showed you here, at the factory, then ship them out and do some slight reconfiguration when they get set up at the edge location, very minimally, so it's almost as if it was installed there. So yeah, I think that's an interesting thing we should do.

Yeah, I definitely agree, that sounds awesome, because we see that a lot too. I don't want to just single NAPS out, but we've had projects where a customer will be deploying high, right? Or they'll build low, deploy high. And it's the same for their clusters as well: they'd want to build on the low side while it's connected, because it's much easier, then essentially unplug, move over to their air-gapped network, plug in, and be good to go. So this fits right into that.

Exactly right. Exactly right.

That's awesome.

Yeah, so this is "pending user action", which is actually really cool. What has happened here is the process is expecting the host to boot from disk, but it booted the installation image. Now, this is a boot-order problem with this virtual machine; in reality this should never happen with a real bare metal host. When this virtual machine was defined, we must have had the CD-ROM as the highest boot option rather than the disk. So I'm not going to go fix this now, but essentially all I need to do is go in there, modify this virtual machine, swap it to boot from the hard drive, and things would continue. But it's making it clear that, look, you've got your boot order wrong here and it's booted off the ISO again.

Yeah, it's saying this is a you problem, not a me problem.

Exactly right.

Right on, man. So we're kind of coming up on the top of the hour here. Is there more you want to talk about? I mean, this is super interesting, man, I could sit here and pick your brain all day. What you showed today is some of the stuff we've been wishing for for a long time.

Yeah, absolutely. I know we're right up against the time, it just clicked over five o'clock here. So I think the only other thing I'd want to talk about, and get people really interested in, is HyperShift. So HyperShift, how do I define it really quickly? You can essentially think about it as divorcing the control plane from the data plane. Historically, when you think about an OpenShift cluster (and I think this is really important for those looking at it from a multi-tenancy perspective, with lots of different clusters), when you deploy a cluster, you're deploying three master nodes and however many worker nodes you want inside the environment, and typically all of those require individual nodes, right?
You have your three master nodes and X number of worker nodes, but they all take up individual machines, or virtual machines, or instances on a public cloud. Now, from a bare metal perspective, just to simplify things, sometimes customers have said to us, well, I'm kind of wasting an entire physical machine per master, and you can understand where they're coming from. HyperShift allows organizations to deploy what we call hosted control planes. Hosted control planes meaning: rather than deploying all of the control plane infrastructure as individual machines, I deploy it as pods, as workloads, on an existing cluster. So every time you want an additional cluster, you simply spin up a new control plane as workloads, as pods, and you deploy worker pools, and that's where you would typically require a full machine, or a full virtual machine, or a full public cloud instance. It has a number of benefits: being able to spin up new worker pools really, really quickly, not necessarily having to waste infrastructure (and I don't think it's always a waste, but for some customers it certainly is), and it also gives you the ability to guarantee that multi-tenancy, providing additional clusters for your consumers very, very quickly. So, I know I'm over time.

You're good, you're good.

Yeah, I think it would be great to have a follow-up on MicroShift, and I can see that Pete Lauterback is here. We've also been doing some work with HyperShift and OpenShift Virtualization. This is really cool: let's say you have a bare metal cluster and you want to deploy lots of clusters on top of it, and you want to do that because a developer needs an OpenShift cluster. You don't necessarily want to just create them a project; you want them to have their whole own cluster. Well, with HyperShift we have the ability to integrate with KubeVirt. So you can deploy OpenShift clusters on OpenShift Virtualization, where you're just spinning up virtual machines for your data plane, and your control plane is just pods on the underlying bare metal infrastructure. So you can happily give cluster admin on those clusters to those developers, knowing they've got their own cluster and can do whatever they want with it, and you don't have to worry about creating whole new infrastructure nodes that you then have to integrate and maintain. There's a lot of power in doing that, and again, I think it's a great topic for another day.
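To make the HyperShift-on-KubeVirt idea concrete, here is a rough sketch of what that flow can look like with the hypershift CLI's KubeVirt provider. Treat it as illustrative: the cluster name, sizing, and flags are assumptions from memory and may differ between releases, so check the HyperShift documentation for the exact syntax.

```bash
# Install the HyperShift operator on the management (bare metal) cluster
hypershift install

# Create a hosted cluster whose workers are KubeVirt VMs on that same cluster
hypershift create cluster kubevirt \
  --name dev-team-1 \
  --node-pool-replicas 2 \
  --pull-secret ~/pull-secret.json \
  --cores 4 \
  --memory 8Gi

# The hosted control plane runs as pods on the management cluster (default "clusters" namespace);
# workers scale by scaling the NodePool rather than by adding control plane machines
oc get hostedclusters -n clusters
oc scale nodepool/dev-team-1 -n clusters --replicas=3

# Hand the dev team a kubeconfig for their own cluster
hypershift create kubeconfig --name dev-team-1 > dev-team-1.kubeconfig
```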
Yeah, I love it. And Rhys, you're more than welcome to come back. This has been outstanding; just the amount of knowledge you dropped on us today has been excellent.

Awesome, well, I appreciate that, and the pleasure is all mine. I'd be more than happy to come back.

Yeah, awesome. And do you have anything, Christian? Oh, I'm sorry, go ahead.

I was just going to say, I hope I've been an adequate stand-in for you, Christian. I hope you haven't been left disappointed.

Well, hopefully I don't get fired after this since Andrew didn't make it, you know: "oh God, please don't ever let this guy do it by himself again." But yeah, for anybody that's listening, we do have KCP on the agenda, and we do have HyperShift on the agenda, so we're going to try to get somebody to come out here and talk about that stuff, because we know it's interesting and we want to nerd out with them too and figure out what we can do with it, like that workflow you just talked about with HyperShift and OpenShift Virtualization, which sounds pretty amazing.

Yeah, really cool.

So yeah, I know it's late for you, so I'll give you a chance if you want any last words or anything like that, or if you're ready to go.

I don't think I have any last words. I'm really excited by some of the direction we're going in here. I've worked for a long time on provisioning and virtualization and things like that, so we're really investing in some of these areas, and I'm excited to see where it goes. I'd love to come back and continue to share where we're going and do some more demos of things that people are really interested in.

Yeah, nice. Thank you, thank you so much. I'm definitely interested; I love where we're going, this direction is amazing. Like I said earlier, it's some of the stuff we've been talking about for a long time, and to see it finally in action is amazing. And somebody made a comment earlier about Ansible not being part of the OpenShift Plus subscription. That is correct: it's really just ACM, ACS, Quay, ODF, and OpenShift; Ansible Automation Platform is not part of that. There's a subscription guide that I'll paste a link to. One last note: we are standing up our GitHub repo with a blog and everything, so that'll give you, the viewers, a chance to give us ideas on what you'd like to see on the show, and to have folks like Rhys come out and talk about those topics. So Rhys, dude, seriously, thank you so much, man. This was awesome. Andrew, I can't wait for you to get back; this was nerve-wracking. I hope your training is worth it, because you sold me out and made me do this by myself. But I think this has been great, and we'll call it a day.

Awesome. Thanks so much. Cheers, folks.

Yeah, see ya.