Let's get into the big topic of open source, something that we actually care a lot about. This is so awesome. We are an open culture that is actually already set up. What is that process that a developer, or let's say the Kubernetes ecosystem, really brings? Hello and welcome to this week's Ask an OpenShift Admin office hours live stream. I am the host, Andrew Sullivan, technical marketing manager. I guess now I'm a manager manager for the stream. And I am joined as always by my co-host, Mr. Jonny Rickard. How are you, sir? I am good, man. How are you? You know, I can't complain. I was just giggling a little bit because, watching that intro movie, the little thing about the live streaming, there's one of the little clips of me, and I always think they must have made me look thinner or something. Either that or, you know, the COVID-25 has hit me harder than I expected. It's the COVID-15 itself. Yeah. But yeah, anyways, you know, it's mid-July. It's hot kind of everywhere right now, at least in the Northern Hemisphere. Our two guests today are both over in Europe; we'll introduce them in just a moment. But yeah, it's hot. You're in Texas and I think it's always hot there. Yeah. Mostly always. Yeah. And even when it's cold, it's still just, yeah. It's Texas. Anyways. Exactly. Yeah. So, you know, summer doldrums. I was gone last week. Congratulations on the show. I didn't watch the whole thing, but what I saw was really, really good. Yeah, Rhys is a great guest, man. He came on and, you know, just kind of took control and let it happen. He's got so much good information, so it was easy: you just ask a question, and he's got so much that he could just answer a bunch of things. So it was great to have him on. Yeah. Yeah, you and I talked beforehand, of course, and I told you, Rhys is easy.
It's, you know, I can say the same thing about our two guests today. Right. Y'all are easy. We don't have to extract information from you; you're happy to volunteer it. But yeah, summer doldrums. Last week I was in training, Crucial Conversations, which is always fun to me to go to those things because it's manager training, right? I'm a new manager and all that. So it's interesting to go because it's reassuring, right? Because it gives you this framework to kind of repeatably do things that have always come intuitively, as opposed to happening on them by chance. Or, you know, at least for me, it's like, I feel like I should have known that about my employees, or how to work with my employees, or something like that. Instead it's been nothing new, nothing earth-shattering. Just, you know, here's how to do that better, or here's how to do that repeatably. But yeah, I guess I should take a moment to mention that we'll be off air next week, which is partially my fault and partially the world's fault. We'll be having our off-site, our first face-to-face off-site. Everybody from the whole extended team is coming together here in Raleigh. So I will be unavailable for our topic next week, which is a really important one and a really fun one, so we wanted to make sure that we had the whole team available here. And that's talking about hosted control planes, or HyperShift. And we'll be joined by Adel. So Adel is the one who has been on the stream before. Yeah, he came on for the nested containers. Yep. Yep. Or, no, it wasn't nested containers. Sandboxed containers. Sandboxed. There we go. Thank you, Jesus. Yeah, it was going to happen, 76 episodes in, Jonny. It's okay if we forget one or two. Oh man. All right. Well, hello to our audience. Thank you everybody for joining us today. We really appreciate it.
You know, taking time out of your day. So this is one of the office hours series of live streams here on Red Hat live streaming, which means that we are here for you, whatever questions are on your mind, whatever things you might have that you want to ask us. That's our job here. So regardless of the topic, feel free to ask those questions in chat. We'll do our best to answer them. If we can't answer them here on the stream, then we'll follow up, you know, maybe in the next stream or via blog post or social media, whatever that happens to be; we're happy to get those answered for you. If you're not watching us live, we know that there's a bunch of folks who watch the recorded streams on YouTube and all that other stuff. Feel free to reach out to us. You can reach me via email, andrew.sullivan@redhat.com. You can reach us on social media: practicalAndrew, all one word, across all the various things, whether it's Twitter or Reddit, everywhere that there is social media that's known to me. And then, of course, the same for Jonny as well. J-O-N-N-Y, if I don't mumble my words here, at redhat.com, and JROCTX1 on all social media. We also love to hear from, and I regularly see, some folks here in chat on the stream from the community as well, so the Kubernetes Slack. On kubernetes.slack.com there is an OpenShift channel that has, you know, a couple thousand folks who are all like-minded and care about what's going on in OpenShift and in OKD. So both of us participate there, along with a number of others. Hey, Andrew, your audio is kind of going in and out. It sounds like you're... Let me bring the microphone closer. Yeah, it goes in and out. It sounds like you're in a tunnel a little bit. Okay. That's much better now. I blame macOS. There we go. Me too. Yeah. Yeah, it's perfectly fine now. Yeah.
It's never my fault. No. You know, just like my 14-year-old. It's never his fault. Nothing is ever his fault. It is always someone else's fault. Yeah. And for Christian, where everything else was fine, it's actually just Christian's settings, so he just needs to mess with his stuff. It's probably that, you know, I saw his tweet the other day where he's using an M1 Mac to remotely connect into Linux machines. So he's turned a fancy, you know, laptop into what is basically a dumb terminal, right? It's got like a portable VT100 in his lap. That's all you got to do. Yeah. Perfect hardware for it. So, you know, don't put that evil on me. COVID's fault, knock on wood. I'm not going to actually knock here because it'll reverberate through the microphone and everything. Actually, my family has almost entirely avoided COVID. Only one of my children got it. She self-quarantined and the rest of us have managed so far. So let's hope that it stays that way. Yeah, that's awesome. Yeah. Let's see. A question from Jason: can you change cpu.cfs_period_us on OpenShift? If so, where's that setting managed? I think you probably can, using something like a performance profile. Let me share my screen here. Share screen. Window. Oops. Come on. Quit jumping around on me. So let's share this guy. I want to go to the 4.10 docs here and I want to go to the performance... Really? No. Oh, because I misspelled it. Performance. If we go to this performance profile, which is part of... where is it? Down here? Anyway, we can use these performance profiles to set low-level things like that. So I think you probably technically can, but I would suggest checking with your account team, checking with support, to make sure that it's something that is supported, if you're using OpenShift proper and not OKD. Right.
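As a sketch of what that kind of low-level tuning object can look like (this is an illustration, not a support statement; the field names below are taken from the OpenShift KubeletConfig documentation, so verify them against your version and check supportability with your account team as noted above), a KubeletConfig targeted at a machine config pool can turn CFS quota enforcement off entirely:

```yaml
# Hypothetical example: disable CPU CFS quota enforcement for nodes in
# the "worker" machine config pool. Changing cpu.cfs_period_us itself
# corresponds to the kubelet's cpuCfsQuotaPeriod setting, which is
# feature-gated upstream -- confirm supportability before using either.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: disable-cpu-cfs-quota
spec:
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker
  kubeletConfig:
    cpuCfsQuota: false
```

Applying it with `oc apply -f` has the Machine Config Operator roll the change out across the pool, one node at a time.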
Because, you know, CoreOS at the end of the day, or Red Hat Enterprise Linux CoreOS, is called RHEL CoreOS for a reason: it's RHEL. It just happens to be using rpm-ostree. So you almost certainly can change that; the question is whether or not it's something that is supported with OpenShift proper. And I think it's a performance profile. The Performance Addon Operator. Yeah, that's what I was thinking. The two are related, but I was thinking of the Performance Addon Operator, which allows you to go in and make super low-level changes to all those types of things. So, let's see. I do want to get to our top of mind topics and all of that other stuff. So, Chandu, we'll address your question here real quick. I think, did I miss one, or do I see you answering it, Jonny? Yeah. Oh, why do operators fail in an IPI process of installing an OpenShift cluster on vSphere? So there could be any number of reasons for that. I'll say that the most common one is when you see the authentication and the ingress operators fail to deploy; that's because no worker nodes are able to deploy. So I'm going to call out your question specifically, Chandu, because one of my top of mind topics for today is vSphere IPI and kind of why it does some of the things that it does. So the machine API, which really runs on the control plane nodes, needs to be able to talk to vCenter. So if you have a separate network, firewalls, or anything like that, just make sure that those nodes can communicate with vCenter, because with IPI it's the machine API on the control plane that will talk to vCenter to provision those worker nodes, not openshift-install. openshift-install does communicate with vCenter to create the VMs for bootstrap and for the control plane. It also needs to communicate with ESXi hosts, because it will upload the template VM, the OVA, directly to an ESXi host.
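For context on where those vCenter coordinates come from, here is a trimmed install-config.yaml platform section for vSphere IPI (field names as in the 4.x-era installer docs; treat this as a sketch, with placeholder values, and check the documentation for your exact version):

```yaml
# Partial install-config.yaml: the vSphere platform block that the
# installer and, later, the machine API use to reach vCenter. The
# control plane nodes must be able to reach this vCenter endpoint
# (typically TCP 443) for worker provisioning to succeed.
platform:
  vsphere:
    vCenter: vcenter.example.com       # placeholder hostname
    username: administrator@vsphere.local
    password: <redacted>
    datacenter: dc1
    defaultDatastore: datastore1
    cluster: cluster1
    apiVIP: 192.0.2.10                 # example addresses
    ingressVIP: 192.0.2.11
```

If the authentication and ingress operators are the ones failing, checking reachability from a control plane node to that vCenter address is usually a good first diagnostic step.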
But after deployment, I believe you can re-close that firewall port if you opened it up. But yeah, please keep the questions coming in. We'll talk top of mind topics while those are coming in, and we'll address any questions that come up as we can. Let's see. So we talked about no stream next week. So tomorrow at 10 a.m. Eastern is the What's New in OpenShift 4.11 product management live stream. Jonny and I will be there, and I think Michael Foster will be there, along with a number of other folks, helping to answer any questions that you all happen to have; anything that we can't answer, we take and send over to the product management team so that they can answer them. So if you're interested in what's new in 4.11, which should also indicate to you that the 4.11 release is coming up in the not-too-distant future, be sure to tune in. Jonny and I will also be spending a number of upcoming episodes, we haven't fully planned it out yet, talking about some new features in 4.11. We've got, I don't know, we're up to like four or five different things that we want to talk about now, so we'll figure out whether we need dedicated episodes for some things, or whether we can combine two, three, four of them together, and figure out how that looks. But yeah, look forward to our What's New in 4.11 for Administrators live stream, just like we follow up all of the what's new streams with that type of information. Let's see. And thanks to our hope nine for posting the link for tomorrow. Yeah, it'll be good. Yeah. I saw you also posted on Reddit and all that other stuff, and I think I've seen you on Twitter as well. So yeah, thank you for your help with all the social media. That's important. And I'll admit that I am...
Jonny was laughing at me earlier because we both were sending each other messages at like 10:20 this morning, right? Just a few minutes before the stream. Like, hey, did you do social media? No. You want me to? I'll do the others. So yeah, we do appreciate the help, and that's not to minimize the work that, you know, Stephanie has done. She works very hard with our corporate social media teams as well as with the BU social media teams. And Colleen, who runs the OpenShift Twitter account and LinkedIn and all the other stuff, she does a great job too. But yeah, it's just funny to me how Jonny and I are always the laggards, and you would think that, you know, we're the hosts, we should be on top of that, right? Yeah. It's that celebrity mindset, you know, it's starting to wear on us. Sure. That's what we'll call it. And so Stephanie's, like, kicking us in the shins right now, Andrew. So we knew that. Make sure you all like and subscribe on YouTube and Twitch so that you get the updates, and, you know, just make sure that you're doing that so you know what's coming out and you get the content that we're doing. Yeah. Yeah. Especially for the upcoming streams. We've been trying to be better about updating the streaming calendar. So if you go to red.ht/stream, that'll take you to a landing page that has a Google calendar that you can subscribe to, and we try to update all of the episodes with what the topic is. You know, it's not always that we have a lot of advance planning, we'll say; sometimes we don't figure out the topic until the day before. But when we do know it ahead of time, we try and do our best to update those. All right. Let's see, other top of mind topics. I think I had one more and you had a couple, Jonny. So I wanted to note, and I actually saw this over on Reddit...
So Gateway API has been officially released as beta in upstream Kubernetes. I don't know, we normally don't talk about upstream things. You know, it will be a while before any of it is relevant to OpenShift and stuff like that, because once something hits beta is usually when we begin evaluating how to productize it, how to incorporate it into OpenShift and all that other stuff. But this one, I feel, is going to be an important one, and it's a pretty significant change in how ingress and a number of other things are done. So I would definitely encourage anyone who is interested, who cares about ingress and all that with their cluster, to start paying attention to what's going on with Gateway API. So, let's see, where do we find Twitch here... I will post the link to the Kubernetes SIG for Gateway API into the Twitch chat, so that way it gets rebroadcast across all the others. And I will also post a link to the blog post over on kubernetes.io that talks about it. So, you know, kind of nothing for right now; more of a hey, heads up, this is going to be something that's big and important in the future, and I think that we should start paying attention to it. And when the time is appropriate, of course, Jonny and I will dig into it and dive deep. Let's see, the last few things here are, I think, from you, Jonny. Yeah, just updates. I was looking at the blog website this morning and I noticed that there was a blog for what's new in Red Hat ACS. So for all of you security-minded folks, it's a pretty good blog, just kind of going over some of the new features that are coming out. And a shameless plug for an episode that we have coming up in the first or second week of August, where we're going to have somebody from the ACS team come out and demo ACS and talk about the capabilities that are coming with that product.
So just keep that in mind. If you're not sure what ACS is, it's Advanced Cluster Security; it does container scanning and a lot more than that. I'm really doing a terrible job of, you know, selling this, but it's a very comprehensive product that comes with Red Hat as part of the Platform Plus subscription. So just keep an eye out for that stream that's coming up, and check out this blog so you can see what's going on with it. And then the other one, and I'm sorry, I'm trying to talk and do things at the same time, which we know is a struggle: there's another blog on OpenID Connect. So if you watched a couple of weeks ago, when we did OpenID Connect in OpenShift using Keycloak as well as GitHub, there's a couple of blogs out there showing more detail on that, and we'll have our own blog on what we did coming up as well. So those were the two things. And the one light thing that I just thought of, as I was looking at Christian's response earlier, is that his book is out now. So if you're interested in learning about GitOps, I think it's called Getting GitOps, it's available now for download at developers.redhat.com. So go out and check that out. And congratulations, Christian. That's a pretty awesome achievement. Yeah. I think I've said before, having also co-authored a book, that it's quite the undertaking. And while, looking back with rose-colored glasses and all of those other things, it's been slightly romanticized now after six, seven years, I also remember the massive amount of work that it was. So congrats, Christian. All right. So I realized that I kind of jumped ahead here, right?
Normally, Jonny and I banter a little bit, then we introduce the topic and we introduce our guests, and then we do the top of mind topics. So my apologies to our guests today; I alluded to y'all earlier. I'm very happy to be joined by two of the folks from my extended team, Henri and Diego. Henri, I'll ask you to introduce yourself first. Thanks, Andrew. So I'm Henri Genmonton. I'm a field product manager. We are indeed from the same extended team. I take care of all the edge and hardware enablement topics. And Henri and I have been working together now for, like, a year or so, and I usually rely on Henri for all the super hard, Ansible-type things. He created a whole set of scripts to deploy an OpenShift all-in-one cluster: take a single physical node and deploy, or I say a whole bunch, either one single node OpenShift or a full OpenShift cluster, and automate the deployment of ODF and all this other stuff. So, Henri, I would encourage you not to understate the amount of effort that you put into those things. It's super helpful. Well, to be fair, I come from consulting, so I do like playing with labs and that kind of stuff. So, yeah, that's kind of my good stuff. Well, thank you for your help there. We are also joined by Diego. And Henri, you're in France, and Diego, I believe you're in Spain. So my apologies for asking you earlier; you know, it was kind of noisy and we said, oh, do you mind turning off the fan, and then, oh, but it's so hot there. I feel so guilty asking you to do that. Yeah, yeah, no worries. You know, it's difficult to work here with the windows closed, but you know, it's something that you need to do.
So, yeah, thanks for having us here today, and letting us just come here and talk about MicroShift. So, I'm Diego Alvarez. I'm from Spain, as you said. I'm part of the customer and field engagement team as a technical product manager, focused on edge technologies. Yeah, I don't know what else to say. Well, I joined Red Hat six months ago, so I think I'm still brand new here in the company. But yeah, I studied telecommunications engineering here in Spain too. Previously I've been working on the cybersecurity side, also with identity and access management. Yeah, that's pretty much all. Yeah, well, welcome to the stream. We're happy to have you here, and six months in, so it's still a bit of a fire hose, I would imagine. Yeah, yes, I'm trying to, you know, learn some new things. There are a lot of different technologies and products, so you know, we are getting into it. Yeah, good luck. I think it took me the better part of a year before I really started feeling like I understood what was going on, right? Oh, yeah. Yeah. Well, so as we kind of mentioned before, and I think as both of you have said, today's topic is around MicroShift. And, you know, we've talked about MicroShift before. We had a couple of the engineering folks on, and I think Rhys talked about it last week briefly, as kind of how it fits into the overall strategy. And when the opportunity came to have you all on to showcase this, Jonny and I both, I think, almost literally jumped up in excitement, because it's really one of those "the proof is in the pudding" type of things. I hope that turn of phrase translates well. So it's a proof of concept, right? It's a visible way that we can see exactly what we mean when we talk about MicroShift and how it fits in and all that other stuff.
So I don't want to spoil anything here, but I know that we're going to be looking at a machine learning, I say machine learning, I don't know if it's actually machine learning or AI, but an application that's running on an edge device using MicroShift. So I'll kind of hand it over to you all from there to tell us about it. Okay. So, yeah, actually we have prepared some slides. So if you want, I can share my screen quickly. Let me know if you can see it. Yep, looks good. Good to go. Okay. Perfect. So, well, this set of slides was presented by Henri a couple of weeks ago, I think, at OpenShift Commons in London, but we can use it to illustrate what we want to say. So, you know, the main part here will be the demonstration, but we wanted to give you some background about MicroShift and why it's important in our architecture. So, yeah, I don't want to be here just talking, so if you want to add something, or if there are some comments or whatever, just raise your hand and we will answer them. Okay. So, basically, I want to start talking about edge computing with OpenShift. Well, nothing new here. Basically, OpenShift is about delivering applications anywhere, and when we talk about anywhere, we are also talking about the edge. So, it was built on top of three main pillars. One of them is consistency for IT or developer teams: you know, they have their existing architecture and they want to scale it, and that's quite easy with OpenShift. Another one is flexibility. When we are talking about edge, sometimes it's difficult to give a specific definition for it; edge could be different from one customer to another, from one person to another. So we needed something that gives our users the flexibility to choose what works for them. And, as you can see, the third one is choice. Here we want to give the opportunity to choose, from Red Hat and our partner ecosystem, which product or technology works better for deploying applications.
So, yeah, nothing new here. I don't know if you have any questions. I like that you pointed out there that edge can be many different things. I've worked for organizations before where there was kind of a central location that was the core, right, the data center, and then everything else they considered the edge, even though in some cases those edge sites were as large or larger, from a compute and resource perspective, than that core data center. And then, of course, it went all the way out to, like, in-vehicle things and stuff like that, you know, that were truly remote, disconnected edge type of things. So edge doesn't have to be ultra-small, disconnected type of scenarios. It can be a whole lot of different things. Yeah, exactly. It could be wherever you want to compute some data that you don't want to send to your central core cluster. So, yeah, everything that is, you know, processing data in the field could be edge devices. Yeah. So, well, here, I saw the last episode, and I think it was talking about the OpenShift architecture for the edge, so we can skip this part really quickly. We have three different options, depending on the use case. One of them is single node edge servers, which works with really low bandwidth and works pretty well with disconnected sites. Here what we are doing is just deploying a single node on the far edge with both control and worker capabilities at the same time. Another option is a remote worker node. It works in environments that are very space-constrained. Here we are deploying a worker node, but it's managed from the regional data center. And the last one is the three-node cluster, which gives us the most similar experience to what we could have with OpenShift on a traditional architecture, but here we have three nodes, all with control and worker capabilities.
Yeah, that gives you high availability while keeping in mind that we need to reduce the footprint, because we are talking about edge devices, which typically are low on resources. So, I want to pause there, if you'll go back for just a second. And I know this is one of the core topics or core principles from last week, which is kind of all of these edge architectures, all of these edge things. And I think what's missing here is the very top and the very bottom. The very top being MicroShift, and I think it's missing because, well, it's not a Red Hat product. It's still an open source project, a community project; it's just one that is really interesting and that you're seeing a lot of activity from us around. And then at the very bottom, there's the five-plus node clusters that exist inside of there. So, Selen had a question a little bit earlier that, because remote worker nodes is on the screen, I'll take the opportunity to talk about. They were asking about best practices or other documentation around remote worker nodes. I don't know that there has been anything published on the blog. Jonny, I don't know if you know of anything off the top of your head; I'm asking back channel too. So, yeah. Generally speaking, those practices or anything like that are going to be dependent on your application. So, for example, the question asks for details on how to keep the VIP traffic running correctly in case the remote site loses network connectivity. That's going to depend on a number of things. One being what's hosting each one of those. Is it a hyperscaler? Does that hyperscaler have something like a global load balancer that can simply redirect to one of the other, quote-unquote, remote sites? Are those ingress controllers hosted in the same location as the control plane, which means that, you know, there can be other, bigger problems happening there at the same time?
It could be solved other ways. For example, having sharded, multiple ingresses, with different ones hosted at each one of those remote sites. So there's a number of different ways you can address that. I wouldn't say any one of them is better or worse than the others; it just depends. And we'll see if we can dig up some information on that. So, sorry to interrupt you, Diego. Yeah, no worries. So, well, here we have on the right side what you were talking about, the typical core data center. Here we are trying to map all these architectures onto the different edge tiers that we have. We're starting on the right side with the core data center, with five-plus OpenShift nodes, down to the edge server. Basically, in this range you can use different architectures for OpenShift; even on the edge server we could have maybe a single node OpenShift or a remote worker node. Here is what happens when we have smaller devices. We have another two edge tiers: the edge gateway and the edge endpoint. At the edge gateway we could have a remote worker node, but at the edge endpoint we have really little or tiny devices, so we have to provide a solution for that. Here we have some examples of those really small devices: we have i.MX, we have Jetson, and on the right side you have an Intel NUC. So, what solution do we provide for all of them? We are providing RHEL for Edge. RHEL for Edge is the operating system for the enterprise edge. Among some other benefits, I highlighted four of them. The first one is quick image generation, where basically you are using image builder tools to build your own operating system, built on purpose for the architectural challenge that you are facing on the edge. Another one is, of course, management at the edge in a secure way: we have zero-touch provisioning, and we have also FIDO, which I'm going to talk about later. And at the bottom you can see efficient over-the-air updates and intelligent rollbacks.
This is an advantage that comes from using rpm-ostree: it's way easier to perform an upgrade, or a rollback if something is going wrong. Can you talk about any of the differences between RHEL for Edge and RHEL CoreOS? Well, I don't know exactly; maybe I cannot give you an answer for that. I don't know if Henri has some background. I can talk at a high level about it, but it would be interesting to know some more. Jonny, maybe you and I can follow up on that. So, one, I think the important thing to note is: RHEL for Edge is RHEL. It is managed and deployed using rpm-ostree, which sounds and looks and tastes a lot like how CoreOS is managed. I think the biggest difference is that with CoreOS, everything is a container. Basically, the only thing managed as part of that base operating system through rpm-ostree is just enough to get kubelet and CRI-O, and containerd or CRI-O, everybody always corrects me on CRI-O, I don't know why. Basically only enough to get containers running, and then literally everything is a container inside of that operating system. With RHEL for Edge, you can certainly do that through something like Podman, creating systemd units to start containers and stuff like that, but it is not intended to be done like that. It is more like a traditional RHEL operating system: you install packages, you install them into the rpm-ostree layer, and then you can deploy it out using whatever tools are available to you. I will also add, and I think we will probably talk about this more in a moment, that the management principles are different. You use the fleet management tools, you use Ansible Automation Platform, you use all of those types of things to manage it in a very close-to-traditional-RHEL type of method, whereas RHEL CoreOS is only available through OpenShift, and it is intended to be managed through OpenShift using the Machine Config Operator and all of the other mechanisms inside of there.
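To make that "containers via Podman plus systemd units" pattern concrete, here is a minimal sketch (the unit contents, image, and names are illustrative assumptions; on a real RHEL for Edge host, `podman generate systemd --new` can produce an equivalent file for an existing container):

```shell
# Hand-written systemd unit that runs a container with Podman -- the
# "traditional RHEL" management style contrasted with RHEL CoreOS above.
# The image and names are placeholders for illustration.
cat > container-web.service <<'EOF'
[Unit]
Description=Example web container managed by systemd
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman run --rm --replace --name web -p 8080:80 registry.access.redhat.com/ubi9/httpd-24
ExecStop=/usr/bin/podman stop -t 10 web

[Install]
WantedBy=multi-user.target
EOF

# On the edge device you would then install and enable it:
#   cp container-web.service /etc/systemd/system/
#   systemctl daemon-reload && systemctl enable --now container-web.service
grep -c '^Exec' container-web.service
```

Because the container lifecycle lives in a systemd unit baked into (or layered onto) the rpm-ostree image, the same fleet tooling that manages the OS also manages the workload.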
Thanks for the information; I didn't know exactly the difference, so yeah, that was useful. I mentioned rpm-ostree. rpm-ostree inside RHEL for Edge is not something new; we implemented it a long time ago, even before CoreOS. I think it was with the Atomic Host operating system. Basically, here we have an operating system based on immutable binaries and libraries. It is also based on transactional updates from A to B, which means that if you want to perform an upgrade, you make it in one single step, all together, so it makes everything easier. Also, we don't have an in-between state during the update, which means that you can, for example, perform an upgrade in the background and then apply it just by rebooting nodes. Another good thing is that migrations from RHEL 8 to 9 are now supported, which enables seamless major release upgrades and rollbacks. And the last thing that I want to show you is FIDO Device Onboarding, also known as FDO. We were talking about it when I said that we can manage everything at the edge. This solves the problem of late binding of the device to your edge platform. Basically, you have zero-touch provisioning: you provision devices in your factory, pre-configure everything, then ship them to the edge, plug them into your management platform, and it works. This was implemented in RHEL 8.6, and it's also available, of course, in RHEL 9. And here I wanted to just summarize what I've been talking about. Of course, this slide is quite similar to the one that I showed at the beginning, but if you look at the top, at the small footprint edge operating system, that's RHEL for Edge. It works for memory-constrained servers and just needs one core and two gigs. If we compare it to the smallest one that we had before, the remote worker node with two cores and eight gigs, yeah, the requirement for that is quite low. But maybe you're asking, where is MicroShift here? You asked that before, and that's why we are here: because maybe we could
have a remote worker node on the edge, but we don't have the connectivity to the data center. Another option could be using single node OpenShift, but our devices are not enough, talking about hardware resources. So that's when we wanted to take OpenShift further into the edge with MicroShift. So yeah, I don't know if you have any questions on the chat. Maybe... English asked about a two-node cluster, which we answered in chat. The short version is, on the roadmap there is going to be the opportunity to take a single node OpenShift, which is a one-node control plane and worker, and then add additional worker nodes, but it would still be a single control plane node with additional worker nodes. There is no intention, as far as I'm aware, to have a cluster that has two control plane nodes, because etcd just wouldn't function, right, it can't establish quorum. Okay, so Henri, do you want to continue with talking about MicroShift? Sure, so basically this is where Red Hat Emerging Technologies comes into play. Red Hat Emerging Technologies is part of our Office of the CTO. They're working on technologies that are still taking shape in the enterprise and research communities. You can find all their projects on next.redhat.com. All the work they do is completely upstream and community based; they often work on existing community projects, and they also often create their own, which are made available on their public GitHub. Not everything they do is followed by products, there's actually quite a lot of things they've been working on that never happened as products. They also have multiple areas of focus, as you can see, so there's cloud, storage, edge, security, data and AI, even quantum computing, all kinds of stuff. Next slide please. All I'm hearing is OpenShift on quantum computers? What? That should come pretty soon. I thought I remembered somebody talking about IBM having an API where you could reach out and make requests, schedule things on quantum computers, and somebody was like, is there an operator for that?
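The quorum point is worth making concrete: etcd needs a strict majority of members to keep serving, so a two-member control plane tolerates exactly as many failures as a one-member one, namely zero. A quick illustration of the arithmetic:

```python
# etcd quorum arithmetic: a cluster of n members needs a strict
# majority (n // 2 + 1) to serve, so it tolerates n - quorum failures.

def quorum(n: int) -> int:
    return n // 2 + 1

def failure_tolerance(n: int) -> int:
    return n - quorum(n)

for n in (1, 2, 3, 5):
    print(f"{n} members: quorum {quorum(n)}, tolerates {failure_tolerance(n)} failure(s)")
# 1 members: quorum 1, tolerates 0 failure(s)
# 2 members: quorum 2, tolerates 0 failure(s)  <- no better than one node
# 3 members: quorum 2, tolerates 1 failure(s)
# 5 members: quorum 3, tolerates 2 failure(s)
```

This is why control planes come in odd sizes: going from one member to two adds a failure point without adding any fault tolerance.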
No, hopefully we'll get there at some point. So what exactly is MicroShift? MicroShift is an experimental flavor of OpenShift that's optimized for the device edge. It's basically OpenShift, not just Podman, running on RHEL for Edge. It provides you with standardization for your software life cycle and consistency for your DevOps across all your footprints, and it can of course be managed by an orchestrator such as Open Cluster Management or Red Hat Advanced Cluster Management. It's also designed to leverage the edge-optimized Linux OS and all the advantages that Diego just talked about. It is meant to use FIDO Device Onboarding to be configured and set up with zero-touch provisioning, and to use the rpm-ostree transactional updates model. It is distributed as a single binary that can be deployed either as a package or as a container on your existing container-running devices, so it allows you to make a fully customizable OS image already embedding your MicroShift. It really aims for a very low resource footprint: basically two cores and two gigs of RAM as a minimum for your devices. It currently supports both AMD64 and ARM64, but it can easily be extended to RISC-V or other kinds of devices. It's very small in terms of payload, so you don't have much to download or update over the wire, and it has very low storage requirements; right now I believe it's in the four gigs space. So a couple of things here. Is the best way to describe MicroShift something like kind, Kubernetes in Docker, where it provides the Kubernetes API as a container image that is then run inside of Docker? So it looks like, sorry for banging the microphone, it would be the same or similar to deploying the kubelet and all those components directly on the operating system, but instead they're in containers, or is this different than that? Well, it kind of provides the option to run like kind, because you can have it as a container that you then run with privileges, and then you can manage it with CRI-O,
etc. We also have the option to have it as a binary, so then you don't have to have a privilege-enabled container running there. And I also, I always like to point out that MicroShift isn't OpenShift. It is, but it isn't, right? The reason why we don't call it MicroOpenShift or anything like that is because it doesn't have the full set of things that make OpenShift OpenShift, so for example there's no OLM. That's where I was going, actually, with my next slide: what actually differs between OpenShift and MicroShift? So OpenShift is really meant to run on cloud providers, bare metal, VMs, you name it, but it really needs to have some control over both the operating system, which is RHCOS, and the underlying hardware. This is basically provided by cluster operators, which we decided to strip out of MicroShift, because we don't need to manage the underlying systems. First, we are using small devices out at the edge, we don't have any VMs, we don't have stuff to manage there per se, so there's no point in having these cluster operators that take a lot of memory, etc., when you don't have anything to give them to manage. Also, obviously, we don't see MicroShift as something that you would go and install on AWS, Azure, etc. It's really meant for small edge devices. Also, those edge devices usually need very specific driver sets, which makes it pretty difficult to have them integrated with RHCOS, because you would then need to modify too much of the underlying operating system for it to really stay manageable by OpenShift, so again, no point in having RHCOS managed by cluster operators. MicroShift really is just the minimal set of Kubernetes components that you need to have a Kubernetes cluster, basically a one-node cluster. And we've got a couple questions. Go ahead, Johnny. I was just going to say the same thing. So Jason from FullBore asks, are MicroShift nodes under the control of a normal OpenShift
cluster managed by ACM? So they would in this case be separate clusters, because if you have several nodes then you have several clusters, and you would of course be able to manage workloads with ACM, but you don't necessarily have to have ACM to manage that cluster, right? Like, MicroShift could be completely independent on its own, right? Yeah, it can be completely independent and managed like any Kubernetes cluster, basically. And Christian says that he wants a multi-cluster MicroShift. Well, that might happen someday, but not right now, Christian. I know you've already gone through the pain of deploying OLM on MicroShift, so I'm expecting to see a pull request for multi-cluster from you anytime. Indeed. And Duane also asks about any compliance needs for MicroShift. I don't get the question, I think. And Duane, please feel free to correct or clarify, but I think you're asking whether or not there are specific compliance requirements or compliance needs associated with MicroShift, or maybe even compliance profiles. My interpretation is that the way it's done today goes back to the OpenShift 3 model, where you manage the operating system independently of the Kubernetes on top of it. So you would apply hardening rules and all those things for whatever your compliance standards are to RHEL for Edge, and then if you need to further harden the Kubernetes, MicroShift, on top of that, you would take actions at that level. But I don't think, you clarified there, profiles, yes. So I don't think we've published anything, it's not a product yet, there's nothing like the Compliance Operator or stuff like that that's available for it. Yeah. Does MicroShift need two cores for the far edge, with worker plus control plane, a reduced number of required cores? I thought we had a minimal requirement for running MicroShift, and that embeds, of course, the worker and control plane, which are running on the same node. So yeah, Mark, thank you for the help. Mark says RHEL compliance would follow MicroShift; that was kind of my thought as
well, as that would be done at the RHEL level, or RHEL for Edge. And to be clear, you don't have to use RHEL for Edge with MicroShift, you can use regular RHEL. RHEL not for edge, RHEL for data centers? No, let's not go back there. Actually, right now, for a community project, you can pretty much use any flavor of Linux. I myself, during the demo that follows, am using the NVIDIA Linux distribution that comes with the Jetson, because that's the only way to get 3D acceleration right now. Nice. I wish I had a couple of Jetsons in my home lab myself, but I'll stop interrupting you. Anybody, please keep asking questions, we're happy to answer those. So what MicroShift also does is enable flexible usage models, where you can either have your OS image, which would be embedding MicroShift, and your operating system connected to the network, so you can then talk with device management software and have application management software as well that would point it to the Kubernetes applications to run. Or you can have a model that's fully disconnected, where you have your USB key embedding your operating system, MicroShift, and all the Kubernetes applications that need to run, and then you can send updates to the factory in the form of USB keys that would be plugged in to perform the update of the operating system and potentially the applications. So this MicroShift project, actually our friends at IBM liked it so much that they decided to send it into space. I believe it has been presented to you guys many times already, but yeah, this is the Endurance project. You're welcome to go and register; I don't know if they plan to open it to others than educational purposes at some point, but you never know. And now it's demo time, so let me share my screen. So this is my little Jetson running MicroShift. Right now I only have the core MicroShift running here. Can everyone read correctly? Yeah, that looks good, thank you. So this is just plain MicroShift. I'm going to start some OpenCV workload
there, so computer vision. I'm using small cameras, very tiny ones, ESP32 type. So first I am going to start my Wi-Fi server, because I need to connect those cams to my cluster. Let's also have a look at the camera connecting, just to make sure something works before we start the test. What kind of Jetson are you using, a Nano or one of the more higher-end ones? Well, I got the higher-end AGX, but it would work on a Nano as well, same thing for the Nano and NX. This is actually all available on the Emerging Tech GitHub, you can find the whole demo, which is the AI4H MicroShift demo; it's all there for you to try if you want, and there are even instructions to make it work on a Raspberry Pi, actually, if you don't have a Jetson yet. So now we have our camera connected. Let's apply the computer vision server, because we want to do something with the pictures. Oops, typing too fast. So here we should see the camera registering. Oh no, I need to expose it first. For those that can't see behind our beautiful faces, he's just running oc expose on the service to make sure that the route is populated. So now we create the route, and now our camera should actually register. So now we can see the camera is being registered, and we can grab a browser window; we should now be getting our camera feed. And as you can see, it can now recognize me. It could actually, well, I don't see your faces anymore, how can I recognize you? Oh, Stephanie, can you turn back on our images on the side? Because I said on social media that Johnny and I provided our images, so let's see if we break the model. Please don't break it, please don't break it. It's a bit small. Oh, here it is. So, Oobe, your question there: would you mind issuing an oc get co? So that actually won't return anything, because there are no cluster operators. Hey, it recognizes me! So there is no Operator Lifecycle Manager, there is no CVO, or Cluster Version Operator, deployed with MicroShift, so all of those operators that show up with oc get co are the
ones that are managed by the Cluster Version Operator, and since there's no CVO in MicroShift, those don't show up. I don't think the resource type even exists. That's it, the resource type doesn't even exist. Yeah. Let's see, Shrikant asks, if there are any upgrades to the binary, do we need to send the whole binary over the wire to the edge devices? Maybe. So I think in this instance the application is deployed as a pod, I don't want to speak for you, Henri or Diego, so in that instance, when you go to update the application, you would be pulling down a new container image and whatever is inside of it. So in my case I'm using the binary version, so basically the binary itself, as you can see, is only 144 megabytes; that's what you would need to update. It will of course download, well, actually no, it won't download anything, this binary can work offline, so it's not so big. So yes, maybe, but I believe there should be a way to actually update the images of all the pods individually; I don't know that it currently exists, though. I think we're going to see some more about how to manage RHEL for Edge devices, MicroShift, and the applications and services that are running in there coming out of various teams inside of Red Hat as time goes on, especially as we get more comfortable, more familiar with how to do this for very remote, very low bandwidth types of scenarios. To your point, 144 megabytes isn't a lot unless you're on something like a satellite link and you're getting dial-up speeds, because that's reality in some places. I'm thinking of cruise ships or other things out at sea that have very little bandwidth, and you still want to be able to do those updates. And that's also where the possibility to have it fully disconnected and managed by an rpm-ostree image that we can have on a USB key or SD card or whatever makes it convenient for those kinds of use cases. I mean, if you can send someone a small SD card, or send it by post or whatever, then you can sort those issues as well, but yeah, there will need to be a means
to get those 100 megs. Yeah, the Ansible Automation team had a really cool demo that I saw recently where that's kind of what they're doing. They're using OpenShift in a full deployment to do app dev and sort of outer-loop activities, where they generate a container image with the application, then they take that container image and stick it on a thumb drive, and then they take it out to one of those locations and use Ansible Automation Platform to go through and push configuration updates, platform updates, and application updates out to all of those nodes, and the tech doesn't have to really do anything. They plug it in, they start the interface for Ansible Automation Platform, they click the task, and they get a coffee for 20 minutes, come back, pull the thumb drive, and move on to the next one. Yeah, and that's exactly what we're going to do further. Alright, so this has been fascinating to me. It's always interesting to me to see these different kinds of use cases, and I know, on the marketing side of things, because for a long time I have technically had marketing in my title, so I try and stay aware of what's going on, we've got a lot of industrial edge and similar use cases that we've talked about. And Johnny, there's even, isn't there a validated pattern that uses something like this to do image recognition for different things? Yeah, there is. We've done one for medical diagnosis, which is doing object detection for pneumonia, and we've done industrial edge, which is like sensor monitoring and stuff like that, so yeah, there's a bunch of capabilities, and that's kind of what you're talking about coming on and doing this. This is awesome. Yeah, and I'm thinking of all the ways that we're effectively modernizing applications, not just deployments, but development in many cases, and that's really cool to me. So this has been really enlightening to me. For our audience, it is a little bit after the top of the hour. I want to be
respectful of Diego and Henri, who are in Europe, and it is both the end of their day, somewhere between them and dinner, or maybe a pre-dinner beverage, and it's hot and I don't want you all to be cooped up longer than necessary. More like getting some fresh air, having a lemonade or something like that. I won't make any, you know, us Americans love to be stereotypical about Europeans and ice, like Americans love ice in our drinks and with Europeans it's not very common, but I won't be stereotypical. If I could, I would just like to ask one question about the application, if you could give like a 30-second, hey, this is what it is. We talked about it a little bit yesterday, and you were saying it's based off of computer vision, right? Can you give the elevator pitch of what the app is on the back end? I mean, obviously it's doing image recognition through the camera and stuff like that, but how's the app put together, how's the modeling working, how are you doing all the training? And that's why I'm trying to have you do it quickly. Well, let me share my screen again. It is, I mean, it is really using some samples of computer vision in Python. It is computer vision, sorry. So most of the logic, actually all of the logic, is just there. We have a first step where we basically feed it with the pictures to teach it to recognize different people, and then we have just a little bit of code. First we resize the picture, because even with GPU acceleration, processing full-size pictures is a bit too much for this kind of smaller device, and we also wanted it to work potentially with a Raspberry Pi, without acceleration. So here we have the cnn option, which basically tells it to use the hardware-accelerated version of the face location from OpenCV, and we're just basically using basic OpenCV methods to try and recognize that there is a face first, then match it against the faces we know, and then we add the small name box. The other components of that are basically the Wi-Fi access point
that will provide, first and foremost, the AP configuration, basically, and that will push some information via mDNS to the camera to give it an endpoint where to register. And there's the third component, which is some code running on the camera itself, because that's an ESP32. I can't show you the device itself from this setup, but we can look at the code. So this is responsible for basically telling the camera that it needs to connect to the MicroShift cam server, using the information that it will be getting from the Wi-Fi AP server, and that's pretty much it. The reason why I wanted you to walk through it is because I think it's important for people to understand: what you've done is one model that's doing image recognition on faces in pictures, right? But then, if you think about it, you've got a camera hooked up, so you could have a camera hooked up outside of your door looking for wildlife, or certain things, so there are all these things that you can do that don't necessarily have to be facial recognition; they can be object detection or something else. And then you can stream that in, put in your model, train your model, and then use a tool like MicroShift at the far edge to execute that model, and then essentially report back and then retrain. So there's this whole workflow between OpenShift at the data center and MicroShift at the edge, just like this nice continuous loop of the model being deployed, then the model being trained, and then ultimately fed back out to the edge. So I think it's really awesome what you guys are doing with this, it's insane. Yeah, that is indeed quite a good example of using the camera, which is basically an IoT device that you can program as you will, to just send pictures. You could also do some transformations directly on the camera if you wanted to, like turning the image around left or right or upside down, if your camera is positioned in a nonstandard way. That's basically a proper edge scenario, where you have these edge
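The matching step Henri describes, comparing a detected face against the faces we know, boils down to a nearest-neighbor lookup on face encodings under a distance threshold. A stdlib-only sketch of that idea; the encodings here are made-up toy vectors, not real output from the computer vision library (which would typically give something like a 128-dimensional vector per face):

```python
import math

# Toy stand-ins for face encodings learned in the "feed it with
# pictures" step; real ones come from the vision library.
known_faces = {
    "henri": (0.1, 0.9, 0.3),
    "diego": (0.8, 0.2, 0.5),
}

def distance(a, b):
    """Euclidean distance between two encodings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(encoding, threshold=0.6):
    """Return the closest known name, or None if nothing is close enough."""
    name, best = min(known_faces.items(), key=lambda kv: distance(encoding, kv[1]))
    return name if distance(encoding, best) <= threshold else None

print(recognize((0.12, 0.88, 0.31)))  # close to henri's encoding -> henri
print(recognize((9.0, 9.0, 9.0)))     # far from everyone -> None
```

The threshold is what lets the demo say "unknown face" instead of forcing every detection onto the nearest known person; in a wildlife or object-detection variant, only the model and labels change, the deployment loop stays the same.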
devices providing connectivity to your IoT device, getting data from it, processing the data locally, and then you can do whatever you want with the data. This is exciting, because now we're opening the can. We've had these ideas for a long time, like, man, it would be cool if I could do this, and now we're getting that capability and we're able to put it in people's hands, and I think that this is just going to be awesome. I think the doors that are opening with MicroShift, and I'm really high on MicroShift, by the way, I'm super excited about this, but yeah, the doors that are opening with this are going to be awesome, and the tech that's going to come out is just going to be incredible to see. Duane asks whether it can be done on a Raspberry Pi, and yeah, MicroShift will run on a Raspberry Pi. I don't know about the machine learning part, because there's no acceleration like with the GPU on the Jetson. So this demo does work on a Raspberry Pi, it's a bit slow, I mean you wouldn't get as many frames per second as I just got with the Jetson using the GPU, but it does work, and it does work pretty well actually. Yeah, I'm curious, or, well, you can do MicroShift on Raspberry Pi and stuff like that; usually when folks ask about that, at least on our stream, it's because, well, I want to learn OpenShift, I want to be an OpenShift administrator, and the MicroShift admin experience is, I'll just say, nothing like OpenShift proper, just a fair warning there. That's not to say it's not worth learning; like RHEL for Edge management, getting familiar with Ansible Automation Platform, MicroShift, and ACM, and how all of that works together, there's tons of stuff to learn there, it's just, it's not OpenShift OpenShift. In all fairness, it is still Kubernetes, so there's still some stuff to be learned this way. Yeah, that's absolutely true, fundamentals, right? If you're trying to get your baseline, like your deployments and all that stuff, then yeah, definitely, I get exactly where you're
coming from. If you're trying to learn how to deploy operators and stuff like that, yeah. So, just a few minutes left, and again I want to be respectful of Diego and Henri's time, so any questions that you all have out in the audience, please feel free to send those in and we'll do our best to address them. If anything comes up afterwards, if we missed your question, if we didn't answer your question well enough, or, again, if you're not watching us live and something comes to mind, don't hesitate to reach out. If you're on YouTube or one of those, you can always leave us a comment; I have a process that runs a couple of times a day that tells me when new comments appear, so don't hesitate to leave comments there and we'll do our best to answer those. Or you can always connect with us via email, andrew.sullivan at redhat.com or johnny.jonny at redhat.com, and of course social media, at practical Andrew on Twitter, on Reddit, on all those, and at jrock tx1 for Johnny. And last but not least, the Slack team. I've been trying to get better about encouraging folks to go over to the Kubernetes Slack and look for the OpenShift users channel over there, because that community really is great, and it's not just Johnny and I, there's a ton of folks over there who are willing to help, not just with upstream OKD and Kubernetes, but also with OpenShift as a whole. So don't be afraid to take advantage of all of those tools. With that being said, if you haven't, just a gentle reminder, please subscribe to the channel; it's a great way to stay in touch and be aware of what we're going to be talking about and when, as well as some other things that are happening across live streaming. So GitOps Guide to the Galaxy and The Level Up Hour, and, well, the what's new and what's next, right, the what's happening and the roadmap presentations for OpenShift, are all broadcast through there. And I don't see any other chat coming in, so yeah, that's all I got, Johnny. Or, excuse me, Diego. I'm saying Diego and looking at
Johnny, so that's why my brain didn't agree. So, Diego, Henri, thank you so much for joining us today, we really appreciate it. This has been just absolutely fascinating. Like Johnny said, I'm a fan of MicroShift and all of the possibilities that are going on there, and I'm constantly learning about it as well, so this has been really helpful for me, to really see an implementation, to see what it looks like, and to see how we can take advantage of all those things. So thank you again for joining us, we really do appreciate it. Thanks a lot for having us. Thank you. As always, thank you to Stephanie in the background for all of your help in keeping us organized, keeping us on task and on target, and Johnny, thank you for joining me every week and for covering for me when I'm gone. Yeah, man. So with that, I'll leave you with last words, Johnny. Yeah, just to echo what you said about Stephanie: Stephanie, you make us awesome. But I also want to say thanks to Reese too, for actually positioning Diego and Henri for us, because Reese kind of approached us and was like, hey, you guys have got to listen to this. I'm so glad that he did, because this was great. I am super excited with you, Andrew. I think that once MicroShift gets out there and people start using it, the doors are just going to open up for the tech that comes out of this. So thanks again, guys, for coming on, this was really awesome, and I'm going to try and get my Nano out of the box and install MicroShift on it. Alright, well, we will see you all in two weeks, but don't forget that tomorrow is the What's New in OpenShift 4.11 live stream at 10am Eastern.