Let's get into the big topic of open source, something that we actually care so much about. This is so awesome. We are an open culture; that means so much. What is that process that a developer or, let's say... How's the Kubernetes ecosystem really going down? Welcome to this week's Ask an OpenShift Admin office hours live stream. I am, as always, your host, Andrew Sullivan. So as the ticker down at the bottom states, you can reach me on social media at Practical Andrew on Twitter, or you can always reach me via email at andrew.sullivan at redhat.com. And as always, I am joined by my... I'm going to use the word effervescent today, Johnny, because you are very effervescent this morning. My effervescent co-host, Johnny Ricard. So good morning, chief. Good morning to you. How are you? You know, I can't complain. It's another beautiful Red Hat day. Indeed, indeed. It's never a dull moment, and today's topic certainly reflects that. I think it's something that when the blog post and the other stuff was first released, was first talked about, there was a tremendous amount of excitement around MicroShift, and the possibilities of having an OpenShift or an OpenShift-like environment in very, very small footprints. So we jumped at the opportunity to have our two guests join us today, Miguel and Ricardo, and we're going to spend some time talking about that today. So I'll hand it off. I'll go to Miguel first, if you don't mind introducing yourself. Hey, hello. Thank you very much for having us and spreading the word about MicroShift. So I'm Miguel. I've been working at Red Hat for eight years, and this year I joined the Office of the CTO, on the Edge team for emerging technologies. And yeah, they were already working on MicroShift, and I joined them. I have a little bit of a background in networking and also small embedded systems from my past before Red Hat. So everything was aligning.
I tend to think that it was lurking in Red Hat, waiting for the moment for the small systems to... well, now those small systems are much more capable than they were. Turns out that edge is something that's kind of a big deal, right? So I think we're going to see more and more stuff from Red Hat as a whole, and especially OpenShift, in that respect. And then Ricardo. Hello, everyone. Yeah, first of all, thanks for having us. I joined Red Hat six years ago. I come from the telco world; I'm a telecommunications engineer. And about two years ago I joined the emerging technologies group, in the edge computing team. And yeah, about a year ago, something like that, we started to think about how we could fit a small Kubernetes and OpenShift into these really small devices. And we... yeah, MicroShift was born. So very glad to be here. Yeah, and it's... I think this is the first time that we've had somebody from the Office of the CTO on. So welcome to both of you. Johnny, we're almost famous now, right? We're getting there, man. We're slowly climbing the ladder horizontally, you know? You know, YouTube famous, right? YouTube famous. That's right. We have tens of followers. Tens of followers. My wife, on the other hand, who is... I think I've said before, she's an exercise instructor. Everybody around town knows who she is. I can't go to the grocery store with her. I can't go to a restaurant without people stopping to say hi. We just have the internet. We have all of you lovely people. That doesn't happen to me, so... All right, so as always here in the... And Khalid, thank you for joining us so late in the day. I think both of our guests today, you're in Europe, right? I think you're both in Spain, maybe? Yeah, in Madrid. Yeah, so thank you both for joining us later in the day for you as well. If you ever come by, let us know. We will take you out to eat some tapas and good wine. So you say that. Don't threaten me with a good time. Actually, the next KubeCon is in Valencia.
It's like three hours from here. We will be there. But if you have the opportunity, we can meet up there. I tried. My request was not granted, so I will be missing Valencia, or I will be participating virtually, I should say. But I've been to Barcelona many times, and Barcelona is lovely and wonderful, and I would love to see more of the country, but not this year anyways. So, yeah, again, thank you very much to both of you for joining late in your day, and to all of our audience. And so something that's been kind of bubbling in the back of Johnny's and my minds is, you know, would it be worthwhile for us to maybe vary times a little bit every once in a while? You know, maybe once a quarter have something that is closer to midday for EMEA, so, you know, early morning for me and earlier morning for Johnny, or maybe even APAC hours, right? So something that's more friendly to the Australia, New Zealand, Japan, China, India time zones. So if anybody has any thoughts or any input on that, please don't hesitate to reach out, especially if you're watching this recorded, you know, because it's not your working hours right now when we're doing this live. Don't hesitate to reach out. Again, you know, social media at Practical Andrew, or email andrew.sullivan at redhat.com, because we'd like to know, right? We want to be accommodating. We want to include everybody as much as we can. And if that means, you know, varying our time zone a little bit, I'm okay with that. So we do have what I'm going to call a recurring segment. I keep saying that as though it's a new thing, even though I've been doing this for 62 episodes now, which we call the top of mind topics. So top of mind topics are things that Johnny and I have found over the last week or so, you know, since the last stream, that we think are going to be important, or things that we want to share with you, our audience. So let's get started with those.
And I actually have the number one top of mind topic that I'm sure everybody on the stream is dying to know about. And it's the picture over your left shoulder. Oh yeah, I replaced it. I've been trying to have a more regular schedule of changing out my artwork. So yesterday was the first day of spring here in the US. Well, I guess spring is the same everywhere, right? So I decided to change out my artwork. And then I was, I won't say I was up late. I was up late for me, like 10:30, you know, looking at other artwork that I can use in the future. So I like that one a lot. That's a favorite so far. Yeah, the business T. rex, right? So it's very spring-like. So, first top of mind topic for today. Johnny and I got pulled into a conversation with a customer recently. And it was a "big" conversation, right? I'm using big in air quotes, but it was a big conversation around autoscaling. And, you know, we talked about things like the horizontal and vertical pod autoscalers. We talked about machine and cluster autoscalers. And we happened to have Gaurav, the product manager for scaling in OpenShift, join us. And I learned something new. So we've been asked about, a lot of people have been asking about, being able to do cluster scaling based off of custom metrics, or pod scaling rather. And I remember hearing about this like six versions ago, and it kind of popped up and then nothing ever came of it. It turns out that there's a whole project for this: KEDA, K-E-D-A. So let me share my screen here real quick. Nope, not add source. Share screen. So the KEDA project, Kubernetes Event-Driven Autoscaling, is exactly what the name implies, right? And you can see down here under scalers, we can set a whole bunch of different sources for metrics to do things like determine when I want to scale up or scale down. So if we come down here, there's like Prometheus, right?
I can point it at one or more Prometheus metrics and say things like, you know, hey, when my web server shows more than X number of connections per second or responses per second or something like that, scale up. So it's an upstream project now, but I did check the what's new, what's next slides, and it is on the roadmap. So I think it'll be a tech preview feature in one of the next releases. I don't think it's in 4.10. But that reminds me, we do have a what's next, that's a roadmap presentation, coming up in early February sometime. I should have checked on this beforehand, but it just now occurred to me. Early April, yes. I don't know where February came from. We're well past February. So I don't know what that was all about. Anyway, so we do have a what's next, a roadmap presentation, coming up. So keep an eye on the stream calendar. If you are not subscribed, if you don't have the alerts for the stream coming up, I would also encourage you to go to red.ht slash live streaming, and that will have a link to our calendar. So in the calendar there you can, and I can bring that up actually, red.ht slash live stream. It takes us to this lovely page. And if we scroll down here, we have this episode calendar. And I've been trying to do a better job. You can see here, we've got today's topic in here for episode 62. But if we scroll down through here, you can see all of the other topics that are here. And if you click this little plus down in the corner, you can add that to your calendar, so that way you can see whenever we have streams and stuff like that. And we do try and remove these. Like if we cancel a stream or anything like that, we'll remove them from that calendar. All right. So KEDA, interesting. Great news. I'm looking forward to seeing more about this. I think that I'm going to put in a request.
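To make the Prometheus-based scaling Andrew describes concrete, here is a rough sketch of a KEDA ScaledObject. The app name, Prometheus address, query, and threshold are all hypothetical placeholders, so check the KEDA documentation for the version that eventually ships as tech preview.

```shell
# Write a hypothetical KEDA ScaledObject manifest: scale the "frontend"
# Deployment up when it serves more than ~100 requests per second.
cat > keda-scaledobject.yaml <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    name: frontend          # Deployment to scale (hypothetical)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.example.svc:9090  # assumed in-cluster Prometheus
      threshold: "100"
      query: sum(rate(http_requests_total{app="frontend"}[2m]))
EOF

# Show the scaler type we configured; apply with: oc apply -f keda-scaledobject.yaml
grep "type: prometheus" keda-scaledobject.yaml
```

The same pattern works for any of the other scalers on the KEDA site; only the `triggers` entry changes.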
Johnny and I will put in our request with Gaurav to see if he can come on to the stream and talk about KEDA and the possibilities that are there, once we do hit tech preview status with that. So, moving on. One of the other things I wanted to talk about, and this came up recently, it actually came up this morning when we were chatting with some other folks internally, is the question: what's the difference between a stretch cluster and a cluster that has remote worker nodes? And the reason why I think that came up is because, you know, we have some, I'm not going to say recommendations for or against, but there are some caveats when you're doing a cluster that is stretched. So what does stretched actually mean? And the consensus was more or less that it's a stretched cluster when, and I'm looking for the link for this KCS at the same time. So it's a stretched cluster when we are distributing the control plane nodes across multiple locations. And of course I'm not logged in. So I just posted the link to this KCS. This KCS has some guidance, some things to consider, some things to be aware of when you are deploying a cluster, specifically the control plane nodes, across multiple sites. Whereas remote worker nodes are where your control plane is in one place, but your worker nodes can be, you know, on the other end of a high latency, low bandwidth link. I think we say 200 milliseconds and 10 megabits, something like that, is the minimum requirement there. So the reason why I bring this up is because stretch clusters, while they are supported, can be complex, and they don't often meet the needs that you're looking for, which is, most of the time, high availability. People have, you know, two or three data centers: I want to spread out my cluster across them, so that way if one data center fails, the cluster, the OpenShift applications, keep running.
But, you know, in my opinion, there's so much that can go wrong there that you often end up with lower total availability of the cluster and the applications. Because, you know, the lawnmower guy ran over the wrong sprinkler head, and it caused water to flood into the fiber box that's running underground, which caused a short, which caused the campus link to go down, and now the two data centers are isolated, right? So it's one of those, there's a lot of potential for failure. So use it with caution, I think, is where I'll end that. Johnny, any thoughts? Yeah, for sure. You know, it's a conversation that comes up a lot with our customers, about disparate environments where they want to put these things everywhere. And yeah, I totally agree with you. I don't know if the juice is worth the squeeze. Yeah. And again, anybody who has any thoughts or comments, questions, feel free to post those into chat. So I've got two other things here, two hopefully quick things. So one, I happened to stumble across a set of OpenShift projects. You can see this comes from Red Hat systems engineering. So this is stuff that was created by Red Hat folks, but it is not officially supported by Red Hat. You can see here, this is an unofficial tool, right? So it is not a supported thing, but it is created by Red Hat folks. And the goal here is it's a set of things to kind of check and validate whether or not the cluster that you've deployed is working the way it should, out of the box type of thing. So if you peruse this repo, you can see here there's a whole bunch of checks down here for things like, you know, the amount of entropy for doing encryption, and, you know, is there OVN pod port thrashing, are there PVCs that aren't bound, blah, blah, blah, right? There's lots of stuff inside of here.
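As an aside, one of the checks mentioned, available entropy, is easy to spot-check by hand on any Linux node. The 200-bit threshold below is an illustrative number for this sketch, not necessarily the one the repo's check actually uses.

```shell
#!/bin/sh
# Read the kernel's estimate of available entropy, in bits.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "available entropy: ${entropy} bits"

# Hypothetical threshold for illustration; crypto-heavy workloads
# (TLS handshakes, encryption at rest) can stall if entropy runs low.
if [ "${entropy}" -lt 200 ]; then
  echo "WARN: low entropy, consider an entropy source such as rng-tools"
else
  echo "entropy looks OK"
fi
```

On recent kernels the estimate tends to sit at a fixed healthy value, so this matters most on older kernels and small headless boxes.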
I know we've had a lot of folks ask about, you know, is there something I can use to check the health or the status of my OpenShift cluster, aside from the console, the overview, right? With green light, red light type stuff. So this is one way that you can potentially do that. And I want to make sure that we share this link. And where is my, there we go, Twitch window. My bad, Andrew, sorry. No, no, you're good. I was just going to give Matthias some props for his acronym there, the STONITH acronym. I'm not familiar with that one. Shoot the other node in the head. It's a fencing acronym. Yeah. Go blast it. All right. Last one for me, very quickly. So OpenShift 4.10, the bits shipped, I don't know, two weeks ago, something like that. But marketing being marketing, they didn't release all of the press releases, blog posts, all that other stuff until yesterday. So even though you've had access to the bits for at least a week, if you were to look at cloud.redhat.com today, you'll see a number of blog posts that talk about all the stuff that's in 4.10. So I'll link this blog post, also included in our blog post, the one that we follow up with here. Where's the right window? Links to the other episodes, the other streams that we've done on OpenShift 4.10. And so that's the last top of mind topic I had. Johnny, you had one or two, I think. Yeah, I have two quick ones. The first one is just kind of a continuation from our conversation last week about service mesh. A good friend of mine, Kat, and her cohort, Kat Cosgrove from Pulumi, did a live stream on IaC for Kong using Pulumi. Pulumi is a lot like, I don't want to say it's Terraform, but it's an IaC tool like Terraform, but it's more from a programmer's perspective. So you can use different languages like, you know, Go, Python, whatever, and you can completely configure a deployment. So I'm posting the link here, if you don't mind sharing that link.
Yeah, if you guys just go check it out. You know, Pulumi seems like a pretty cool tool. Both the Kats had a good time doing it. It was a pretty good presentation. So go check it out if you have some time. And then the last one is Ansible Automates, which is tomorrow. It's a free event that Red Hat puts on. The link should be popping up pretty soon here. So if you get some time, go check out what's going on with Ansible. And yeah, hopefully you'll learn something. That's it. Don't mind me, I'm just trying not to sneeze into my microphone over here. Bless you. Yeah. What's the thing, normally you're supposed to look at the sun or something and it makes you sneeze. Anyways. Or you try to call it up before it happens, or head it off. Yeah. So, and I'm just realizing that it's kind of dark in this room too. It's cloudy outside today. Storms later today, although the weather that was coming through Texas, what, yesterday or the day before, is coming through North Carolina now. Yeah, we got lucky, it went right over us, man. It hit just north of us really hard. So. All right. Well, so today's topic. As we started the stream with today, our topic is MicroShift, and Stephen Reeves, your comment there about multi-node and some other things. So I wanted to kick this off by handing over to Ricardo and Miguel to talk about, you know, one, what is MicroShift? What makes MicroShift MicroShift and not OpenShift? And, you know, kind of go from there and talk about use cases, you know, where we think or where we expect people will be using it. And very much to your point there, Stephen, yeah, I'm interested in using it in my home lab too. So let's see how appropriate that is. Okay, I can start. We've prepared a couple of slides. I think I will share them and go through them very quickly. And it's better to have interactions and questions, a more informal kind of session. Let me. Yeah.
The audience is not shy, I can see that. So anybody who has any questions about today's topic, or about anything OpenShift related, please feel free to put those into the chat. We'll address those and get your questions answered. Let me know if you see my screen. Yeah, looks good. Okay. So I have a couple of tabs here. Let's go through the presentation, you know, just very quickly. Okay. So what's MicroShift, right? There you go. So, you know, edge computing has been a topic for some years now, right? And inside Red Hat we've built OpenShift, which is an awesome product. It's capable of doing a lot of stuff. It's amazing, you know, how many features it has and so on. But OpenShift has been designed from the beginning for the data center, right? For creating an on-prem cloud, or deployed remotely, but basically it provides cloud native capabilities. Then, for edge computing, Red Hat has launched RHEL for Edge. RHEL for Edge is basically our RHEL version with an rpm-ostree, let's say, structure. And the goal is to deploy containerized applications using Podman. So the thing is, what if, in these edge computing use cases, in these edge computing scenarios, we want to use Kubernetes applications, the same applications that we have designed or built for OpenShift? And this is where MicroShift started. We created MicroShift to fill that gap in the middle. So then, what is MicroShift? It's a small form factor OpenShift, optimized for field-deployed devices. We will explain later what we mean by field-deployed devices, what these devices are. And what we wanted is to provide a minimal OpenShift experience. So the goal, basically, is to be able to run something like OpenShift on very small devices, where we can deploy applications. Sorry, is there a question? Yeah.
So there are actually a couple of questions that, when you feel like you're at a good pausing point, we can ask. Yeah, it can be now. Okay. So I wanted to make a couple of comments, and then there are a couple of questions. So one, RHEL for Edge is really interesting to me. I've done a little bit of investigating and playing with it, and it kind of reminds me of what we used to call Atomic, right? It's rpm-ostree based, it's really optimized for containers. And I saw a really interesting demo recently of using OpenShift Pipelines to build an application and do things like, you know, use ACS to do container image scanning and all of that other stuff, and then push that to a registry, and then use Ansible Automation Platform to push that out to RHEL for Edge nodes, and have it run via Podman out at those nodes. So I thought that was really interesting, and very much to your point here: it's RHEL, so you're managing it in the RHEL way, as opposed to OpenShift or Kubernetes, where you're managing it in the Kubernetes way, which is, I think, the primary difference between RHEL for Edge and MicroShift. Right. So, questions. Sean asks, what do you mean when you say edge computing? I think we use that term pretty broadly, so maybe we can clarify or be a little more precise. Yeah, it's really difficult to be more precise, because edge computing is, you know, a very loaded term, and it can mean a different thing for, you know, a different person. So, yeah... maybe I can take this one. I was trying to write something today to explain what edge means to people who are not familiar with it. And please, Ricardo, extend or correct, because as you said, it can mean different things to different people. But it's an architecture of how you design and deploy your applications.
It means that you are going to put the computing power, and sometimes storage, as close to the source of that data, or the consumer of the data, as you can, maybe on the same network or in the same area. This is useful for many things, like, for example, you know, vehicles, farming, entertainment, cameras for surveillance. Ricardo and I did a small demo with MicroShift; maybe we can share the link later. Anything where you can deploy applications to be very close to the consumer or the user is the edge, in the end. Yeah, and I think it's hard to narrow down because the use cases are almost unlimited, right? Because, I mean, you can consider, you know, a retail scenario, right? Where, you know, I'll pick on any big box retail store, you know, Walmart, Target, etc. If they have IT resources in the stores, you could consider that edge. The same thing with restaurants, right? McDonald's. If they have IT resources in those stores, you could consider that edge. You know, I think one that a lot of people don't think about is things like the electronic billboard signs that we see here in the US on the side of the highway. Those are edge devices, right? So it's really almost unlimited different use cases. Yeah, I mean, even in a store, you could go and deploy your applications on the digital scales that you have, where you're going to put your fruit, or whatever you're buying. I mean, almost anything close to the user. Yeah, something I like to think about is, for the past, I don't know, 15 years, the IT industry has been trying to centralize everything, right? To put everything in data centers and create big, big warehouses with servers, with, you know, thousands of servers and so on. And then we've realized that the more devices are connected to the internet, the more data we produce, and it is very expensive to send all this data over the wire, you know, to the central location.
So we need to start putting computing power next to where the data is created. And that can be an autonomous car, you know, a 5G antenna, a retail store, things like this. So, you know, it's basically trying to reduce the data that you send over the wire, and process that data beforehand. Yeah. Some very cool examples that I've seen lately: recently I went in a taxi, and they had a small device with ads for the people in the taxi. And as you came into the taxi, there was a warning, like for GDPR, explaining that they are collecting data on the ad viewers. Like, there is an AI model on that computer that is going to identify, I mean, your gender, characteristics of you, and which ads you are paying attention to. So that's a very clear example. Seems like advertising drives a lot of innovation in our industry these days. So there's a slew of questions that I'm going to lump into one, and they're all kind of around, you know, is MicroShift a replacement for, or an equivalent to, things like MicroK8s, or... let's see, what's the other one? Help me out here, Johnny. Minikube, and like CRC. Minikube, yeah, CRC, so on and so forth. You know, Frostmageddon here, is it Frostmageddon? And I'm losing track of my chat here. Frostmageddon asks: what's the difference between OKD and MicroShift, right? OKD is the upstream version of OpenShift, so it's a full OpenShift in that respect. So I think we're getting ready to answer those questions in the slides that you had up. Sure. We have designed MicroShift with the goal of fulfilling the edge computing use cases that we have in mind. Minikube, MicroK8s and so on: the first idea there is more like a development environment, or for your home lab, or to get some Kubernetes in a resource constrained environment. MicroShift can do that as well, but our target, our main goal, has always been edge computing. Let me finish the slides very quickly.
MicroShift, as well as OpenShift, can be managed by ACM, you know, Advanced Cluster Management. And the way that we have built MicroShift is basically a single binary that contains all the control plane: the Kubernetes control plane, OpenShift APIs, kubelet, kube-proxy, all the components in a single binary. Then, when you start MicroShift, and we will see it in the demo, there will be some extra services that will be deployed, like the ingress plugin and so on, that make your life easier. But basically it's just a single binary. And we have different deployment models: an RPM package or a container image. And we have an extra one, which is an all-in-one container image that contains CRI-O, the container runtime, so it can also work on macOS, Windows and so on. We've built MicroShift for different architectures, basically x86 and ARM, but also RISC-V and PowerPC and so on. So, field-deployed devices are basically the typical systems-on-chip that are very different from servers, right? These small devices usually have very unstable network connectivity, or they have none. Usually they are headless devices; there is no screen or keyboard to access them. They are, in theory, so cheap that if one of them breaks, we will just replace it rather than trying to fix it. And yeah, everybody knows the Raspberry Pi, right? And now NVIDIA has the Jetson family, which are small platforms that provide some capabilities. They have I2C buses to connect to, for example, servo controllers, SPI, cameras via CSI. There are a bunch of interfaces and devices that can be used from these platforms. I don't know, Miguel is our embedded systems expert, if you want to add something here. I think all this makes a lot of sense. Okay. Well, this is the deployment workflow that we picture to deploy MicroShift, but let's compare it with OpenShift. This is one of the questions I see in the chat. So on the left side, we have OpenShift.
And the way OpenShift has been designed is that it's like a whole stack. So we have RHEL CoreOS as the operating system, and it's a fundamental part of OpenShift in the end. Then we have Kubernetes, plus the OpenShift APIs that we provide. And we have a bunch of cluster operators that will manage the cluster. For example, the cluster version operator will manage the versions of the operating system and the components; the machine API operator can manage the infrastructure itself that is underlying OpenShift. Well, you are OpenShift admins, so you know this much better than me. And then on top, we will deploy Kubernetes applications and OpenShift applications. MicroShift has been designed to be completely decoupled from the operating system, and we want to treat it as just another application, as another RPM package in your operating system, or as another container running on it. We like to use RHEL for Edge as the operating system, but of course it's not mandatory. We can use it anywhere where we have a container runtime and can deploy containerized applications. So a couple of things here. One observation for me is, this kind of goes back to the model that we had with OpenShift 3, right, where the underlying operating system and its management are decoupled from the MicroShift management. So in this model, you don't have the machine config operator, for example, going in and pushing down, you know, RHEL for Edge rpm-ostree updates. Instead, you would manage that operating system the way that you always have. You just have that MicroShift, you know, OpenShift API on top to deploy and manage your applications. That's correct. Okay. So there's a handful of questions here. Alan asks, is there an upstream version of MicroShift, right? Is it still OKD, or is there something different? MicroShift is an open source project. It's not, let's say, a product.
Our goal is to get it productized, but so far it's just a project from the Office of the CTO that we launched. And if you let me go here... just too much information here. But if you look at how we build MicroShift, basically we are pulling the objects from OKD. So this is, as you mentioned, the upstream version of OpenShift. And since MicroShift is an open source and upstream project so far, we've pulled versions from OKD. So every component that is embedded in the MicroShift binary, and all the extra services that are running as pods, scheduled pods, are also container images pulled from the OKD repositories. So the latest version of MicroShift is pointing at OKD 4.8, and we are working on the rebase to 4.10. I mean, a rebase shouldn't take too long to perform, but we had other priorities, and for us it was better just to keep one stable version and try to add more functionality to MicroShift. And Khalid asks whether or not we can deploy operators to MicroShift? MicroShift does not provide cluster operators, but we can deploy operators on top. We've tried, for example, Submariner; we've tried CNV, KubeVirt; yeah, there are plenty of operators that you can deploy. You can even deploy OLM, in case you want to have the catalog and so on, and deploy the application operators, let's say, from OLM. Yeah, and I know Christian Hernandez was, he and I were chatting about that a week or two ago, because he wanted to deploy the GitOps operator and needed OLM before he could do that. Yeah, he's very active in the Slack channel. We have, it's all on the website, but we have a Slack channel; everybody is invited if you have questions and so on. Yeah, I'm not going to give Christian a hard time this week, he's on PTO, so I usually only do it when he's here. And I know he shouldn't be listening this week, because he'd better be out enjoying his PTO. Yeah, he needs it.
Yeah, so I also saw Thomas asked about what the recommended tooling is for managing the underlying operating system and hardware with MicroShift. And Miguel, I think that was something you were going to address. Yeah. So, along with MicroShift, we are working on the whole end-to-end story about how to manage devices on the edge. So, I mean, our goal for MicroShift was edge devices. And edge devices, normally you manufacture them, you configure them, and then you put them in the field, or a variation of this: you put them in the field, and as you put them in the field, they configure themselves the way you wanted them to be. This is one of the reasons why we are building on RHEL for Edge: because it uses OSTree. And with OSTree, it's very easy to get your device connected to an OSTree repository and get atomic updates of your operating system. That operating system could be an image that contains RHEL for Edge, minimal tools, MicroShift. And even if your device is not going to have a very stable connection, or you want to pre-configure the device before you send it into the field, you could even, I mean, we are developing the mechanisms to embed the container images into the OSTree image. So as soon as you boot, you have MicroShift, the container images, your end applications in container images too, and the deployment details for your application. So RHEL for Edge boots, MicroShift boots inside RHEL for Edge, and your applications start inside MicroShift. So we are trying to define the whole end-to-end deployment story around that. But, I mean, there are many pieces. Like, you could manage the applications via something like ACM. We are also looking at scaling that. As you know, ACM was designed to handle lots of clusters, but in the edge use cases we are looking into the thousands, or hundreds of thousands, of devices, and we need to make that scale.
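The kind of image Miguel describes, RHEL for Edge plus minimal tools plus MicroShift in a single OSTree commit, is what ImageBuilder blueprints produce. A hypothetical sketch follows; the package names and compose type are assumptions based on the conversation, so the MicroShift docs are the authority on the exact blueprint.

```shell
# Write a hypothetical ImageBuilder blueprint that bakes MicroShift and
# its container runtime into a RHEL for Edge (rpm-ostree) image.
cat > microshift-blueprint.toml <<'EOF'
name = "microshift-edge"
description = "RHEL for Edge with MicroShift baked in (illustrative)"
version = "0.0.1"

[[packages]]
name = "microshift"
version = "*"

[[packages]]
name = "cri-o"
version = "*"
EOF

# On a host with ImageBuilder installed, the compose would look roughly like:
#   composer-cli blueprints push microshift-blueprint.toml
#   composer-cli compose start-ostree microshift-edge edge-commit
grep 'name = "microshift"' microshift-blueprint.toml
```

The resulting OSTree commit is what devices in the field pull atomic updates from.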
We are mostly aligning around managing everything via OSTree in RHEL for Edge, ACM, and so on. Yeah, and I'm thinking back to the what's new in OpenShift 4.10 presentation, where we announced to the world that we're working with OEMs to pre-install OpenShift onto hardware. So very much like what you have in that upper left-hand block there: put MicroShift on there, ship it out, turn it on, and it can connect up to the world and fall underneath management automatically. Very much that hundreds of thousands of nodes thing. That's right. And with Image Builder you can create your own customized version of RHEL for Edge. You can build RHEL for Edge with, for example, CRI-O in it in the rpm-ostree image. If you want to include MicroShift as well, you would be able to do that. And then once you have your image, you can send it to your manufacturer, it will be imaged onto 2,000 devices and sent to the remote locations, and once you connect them, it will all be automatic. So if you want, we can do the demo very quickly, and I'm sure there will be more questions. There's a ton of questions. Everybody's very curious. Just for everybody, we have a website called microshift.io, and the GitHub repo is in the Red Hat ET organization, Emerging Technologies. And it's a public repo, so you can open issues, maybe not too many pull requests, and have a look at the code. Let me open the terminal. Okay. So I've created a Vagrant Fedora VM. And what I'm going to do... do you see both sides? Yes? Yes. Okay. So I'm going to follow the official documentation to show that it's working. We have, as mentioned, two different deployment methods: RPM-based and container-image-based. Let's do the RPM-based, which is, let's say, the easiest. First of all, we need to install CRI-O. Let's do that. And I have to install firewalld as well, because I don't think it comes by default in Vagrant. So while we're waiting for that, I do have a quick question. Sure.
Is MicroShift something we intend to submit to CNCF, or, since it's a derivative of OpenShift, is it already covered? I don't think we plan to push it to CNCF. As a personal opinion, I don't know if it makes sense to have distributions in CNCF, because the core components are basically the same; it's just a matter of how you package them. For MicroShift, I don't think it's in our plans so far, let's say. Yeah, in the end it's a version of OpenShift, or OKD. Yeah, that was my thought, because OpenShift is already a CNCF-certified Kubernetes. I would think that it inherits that, maybe, but I don't know the processes or the requirements there, so it's an interesting question. So here we are enabling the COPR repo where we store the MicroShift RPMs, installing MicroShift, adding some firewall rules to open some ports, and launching MicroShift, basically. The RPM package deploys a systemd service, and once MicroShift is up and running, you will see the logs in the journal and so on. It's like any other application, right? So Khalid asks if we can add a RHEL for Edge node to an existing OpenShift cluster, and I believe the answer there is no. OpenShift proper only supports full RHEL 8 nodes as of OpenShift 4.10. OpenShift 4.9 will support RHEL 8 or RHEL 7, although RHEL 7 is deprecated, and then OpenShift 4.8 and below only support RHEL 7 compute nodes. So I don't think RHEL for Edge is a supported operating system for OpenShift itself. Sorry, I forgot to start it. I appreciate that you have a mechanical keyboard. I know, I wanted to say something. I was biting my tongue on it though. Thank you, Andrew. No, it's perfect. It's just Cherry Reds, it's not too clicky, but I love the theme. Yeah, Mike Murphy: Pis are hard to come by. Thank you, global chip shortage, pandemic, supply chain issues, all that other nonsense.
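Rewinding to the demo for a moment, the RPM-based install Ricardo just described (enable the COPR repo, install MicroShift, open firewall ports, start the service) can be sketched roughly like this. Repo names, the CRI-O module stream, and the firewall rules follow the microshift.io docs of the time and may differ on your release, so treat this as a sketch rather than a recipe:

```shell
# Container runtime first (stream version is an assumption; check the docs)
sudo dnf module enable -y cri-o
sudo dnf install -y cri-o cri-tools
sudo systemctl enable --now crio

# MicroShift itself, from the Red Hat ET COPR repository
sudo dnf copr enable -y @redhat-et/microshift
sudo dnf install -y microshift

# Open the ports MicroShift needs: trust the pod network, expose the API server
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=6443/tcp --permanent
sudo firewall-cmd --reload

# The RPM ships a systemd unit; logs land in the journal
sudo systemctl enable --now microshift
journalctl -u microshift -f
```

After a couple of minutes the embedded control plane and the scheduled infra pods should all be up, which you can confirm with `oc get pods -A` using the kubeconfig MicroShift writes under `/var/lib/microshift`.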
I've been trying to get some for months for various reasons, and I've just been unable to without paying exorbitant prices that I'm unwilling to pay. We see the systemd service that MicroShift is running. We will just install the oc client and kubectl, and, well, I'm going to show the journal, but it's too crowded, let's say. Is there anything that keeps us from working in a disconnected or air-gapped environment? It should be just downloading the binaries and then pulling the images over, right? Yeah, for disconnected environments you need to put the container images that we use onto the system. It's not in the repository yet — we need to finish a few things because it was a complex thing — but we managed to package the container images into another RPM. So if you want, you can install that RPM, and it's going to configure in CRI-O a local read-only storage for those containers, and then there are no more downloads that need to happen. You can just install MicroShift, install the package with the containers, and then it could run completely offline. Gotcha. Sorry, go ahead. Go ahead, go ahead. Just to mention, when MicroShift starts — the RPM version of MicroShift — it will create a folder in /var/lib/microshift, and there you have all the resources: all the kubeconfig files, certificates, configuration for each of the components, and so on. And the way that the user interacts with MicroShift is basically using the kubeconfig file for the kubeadmin user. So we've copied it to the default location, and here in /var/lib/microshift you can see all these resources, let's say. It takes a little bit to download the images, but I think everything is up and running. You install the RPM, you start the service, and after two minutes these extra services are already up and running. One note about the images that we use, at least for the Intel platform today: those are just the OKD images.
So we don't rebuild them; those images are already tested, and that's an advantage. For ARM platforms — I don't think we have arm64 in OKD yet, for example; I think it's something that is being worked on — we build our own images. And something that I've noticed is that the images we manage to build are much smaller than the ones from OpenShift. So I was thinking about providing some feedback about how to hopefully optimize those images. It would mean removing things that are on the system; I don't know, maybe there are technical reasons for them to be there, but I think we could use that feedback to improve the size of the OpenShift images. So Khalid asks another question: what's the difference between MicroShift and simply using Podman on the host? And I think why that might be slightly confusing is because, if I remember correctly, with Podman you can output what are effectively unit files that look like a Kubernetes pod definition and have it interpret that. I think his question is more about what the difference is between the deployment model that Ricardo was following and the deployment model where you deploy MicroShift with Podman. Yeah, so we have these two different deployment models, and each one has some advantages and disadvantages. For example, we always have the edge computing use case in mind. So in the case of RHEL for Edge, you go, you install Image Builder, and you want to create your own customized version of RHEL for Edge, right? And in the rpm-ostree image you will include CRI-O, for example. And then you have two options: you have the option of including MicroShift as an RPM, or you deploy it with Podman on top of RHEL for Edge. What are the benefits of, for example, running MicroShift as a container? In case we want to upgrade MicroShift without impacting the workloads that are running — our applications — it's just as simple as pulling a new tag, like upgrading a container.
That will not have any impact on the applications that are running on MicroShift, because in the end they are running on top of CRI-O. In case you want to have everything included in the rpm-ostree image, you will use the RPM to have an immutable operating system, and when you upgrade the whole OSTree, you will have to reboot. That will mean some workload downtime, right? So it depends on how you want to deploy and manage your devices. Well, I'm glad that you corrected me on that one, Miguel, because I definitely misunderstood that question, so thank you. There was a question from before — my apologies, I don't remember who asked it — about multi-node cluster support. Right now it's only single node. We have a PR ongoing for multi-node, starting with adding new worker nodes, but it's not finished yet, so I would say it's a work in progress. Yeah, it's functional, but we are talking to other teams and trying to figure out the best way to do it in a way that we don't change how security works in OpenShift, so everything that is already validated for OpenShift will be validated also for MicroShift. That makes sense; you don't want to have to redo work. And there's another question from Mike Murphy, essentially asking: once you have MicroShift running, theoretically you could use an internal registry if you wanted to, right? I'm assuming he's talking about pulling from another OpenShift cluster, but if you wanted to run a registry on MicroShift, it's essentially just running either, you know, the Docker registry and then exposing it that way, or just any other deployment type for a registry, right? Right, right. Any image that you can load into CRI-O, for example, you will be able to use from MicroShift. Gotcha. And then the way that you upload images now, because it doesn't have a registry, right?
Like it would either go directly into the binary, or you would just stand something up on the outside and let it pull into the runtime, right? Yeah, for the components that are embedded into the binary, we basically use the libraries; we don't pull any image from anywhere. From the binary, from a central service manager that we have implemented, you use the different versions of the components — it's basically using Go modules and so on. And for the infra services that you can see here running, we point at certain tags of OKD images, and for the disconnected environment we package those images into an RPM, and that's basically it. Yeah, I shared the link to our image repository. In there we create multi-architecture tags for the images. But if you deploy MicroShift on an Intel platform and you look at the pods, you will notice that we are not using that repository; we are just using the OKD image tags from the OKD repository. Even for the Intel version in this repository, we just clone the image from OKD, but we wanted to make it very clear that we are not rebuilding them; we are using the ones already built and tested. Gotcha. Awesome. Thank you. Let me continue with the demo, because it takes a while to download some of the components. So we have an OpenShift instance running in our lab, and it has ACM, Advanced Cluster Management. What we are going to do is basically go to clusters and import a new cluster, which is MicroShift. We haven't done any work on ACM as such in terms of identifying that this cluster is a MicroShift cluster and so on, but it works anyway. So we go to import cluster — let's say microshift-device, for example — and we generate the command. We copy the command here and we execute it. It will install the CRDs and basically the klusterlet, which is the agent from ACM that will allow the communication and management from ACM.
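What the ACM import wizard hands you is, roughly, a pair of manifests to apply against the MicroShift kubeconfig. A sketch of that flow, with the kubeconfig path from the demo and the standard ACM agent namespace assumed:

```shell
# Run these against the MicroShift instance, not the ACM hub
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig

# The wizard generates both files; apply the klusterlet CRD first
oc apply -f klusterlet-crd.yaml
# Then the import manifest: klusterlet operator plus the bootstrap secret
oc apply -f import.yaml

# Watch the ACM agent come up on the MicroShift side
oc get pods -n open-cluster-management-agent
```

Once the klusterlet reports healthy, the cluster shows up in the ACM console as imported, and add-ons and application subscriptions start flowing to it.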
So awesome. And once it joins ACM, it's just like any other cluster. So any tags that exist to have things deployed to it automatically, all of that other stuff, applies just the same. Of course, there are some constraints, because, for example, since we don't have the cluster version operator, ACM will not be able to upgrade the cluster, and since we don't have the Machine API operator, it will not manage the infrastructure below. But in terms of policies and applications, that all works, or should work. Yeah, something important to keep in mind is that if you are managing applications that run on different architectures, you will need to provide images for those architectures, or multi-architecture images. Mm-hmm. And I would assume the limitations that you were just describing, Ricardo, are going to apply to any non-OpenShift Kubernetes that is managed by ACM as well. Yeah, that's correct. That's correct. So now it will try to install... So we have the klusterlet running; ACM has recognized MicroShift as an imported cluster. Now some add-ons will be deployed. This takes a little bit, but we can continue with creating our own applications, and once the cluster is fully imported, let's say, with all the add-ons and so on, it will apply and install our application. Let me remove some tags that maybe we don't need here — for example, the IAM policy controller. Okay, let's leave it like this. The more add-ons we add, the less room there will be for applications, right? So I don't know if the add-on controller is there yet, but let's go to the Applications tab in ACM. And I'm going to install a very simple application: it's an NGINX with a service. We create the application, the subscription. If I go too fast, let me know, and if you have questions later, I can always address them. Alan just asked a good, slightly tangential question, which is: is there an upstream version of ACM?
Like, is there an OKD equivalent to ACM? The answer to that is yes. I recently learned this myself; it's at open-cluster-management.io. It doesn't have all of the GUI stuff, right? Because that's all the Red Hat stuff that they add on. But all of the functionality, all of the stuff that you do through the Kubernetes API — because remember, everything is just a Kubernetes CRD inside of there — all of that stuff is there. I found that out while working with Jimmy Alvarez recently. So one thing that's interesting to me is you're using oc get pods and other oc commands here. From the host, if you were to do just a podman ps, would you see all of those containers running there? Or is it through CRI-O or something like that, so you would use crictl? Yeah. I didn't install crictl; it's basically running on CRI-O. Let me install cri-tools, is it? So it's very much like OpenShift, right? If you were to oc debug to an OpenShift CoreOS node, and you do a sudo or su over to root, you can use crictl, and just like we see here, see all of the containers that are running inside of there. So that's really cool. Yeah, the container runtime is the same as OpenShift, and we can use the same tools. Which also means that you can do troubleshooting and stuff like that by looking at pod logs or container logs directly through crictl. Right. It kind of came up earlier in the conversation: using the same workflow and same tooling from the IoT device or the edge device back to your cluster. And I think that's probably one of the biggest things that came up when we were talking about this early on with some of our customers — that workflow is going to be very similar using MicroShift as it would be using OpenShift, right? Like the same thing you're talking about: debugging pods or debugging containers using the oc command line and APIs and stuff like that.
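Since the runtime underneath MicroShift is CRI-O, the host-side inspection they describe looks just like it does on an OpenShift node. A minimal sketch, assuming a RHEL or Fedora host where MicroShift is already running:

```shell
# cri-tools ships crictl, the CRI-compatible counterpart to 'podman ps'
sudo dnf install -y cri-tools

sudo crictl ps      # containers CRI-O is currently running
sudo crictl pods    # the pod sandboxes those containers belong to

# Container logs straight from the runtime, no API server required;
# grab a real container ID from the 'crictl ps' output above
sudo crictl logs <container-id>
```

This is handy precisely because it works even when the MicroShift API server itself is the thing that's unhealthy.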
This is super exciting. So what we have done here is basically to create an application. The application is very simple; let me show it here. It's just a deployment — the image is NGINX — and we use a NodePort service to expose the application. Here, ACM, through the wizard or the GUI, creates a subscription and the placement rules. I've clicked on "deploy on all online clusters", so this is the placement rule, and it will take a little bit until MicroShift is detected as up and running and fully imported, and then it will deploy the application. Let's see how it goes. What's the worst that happens? It can't be any worse than Johnny's demos. So is there an equivalent of an admin console or a developer console with MicroShift, or is it all CLI slash ACM? It's all CLI. Yeah, one of our design constraints was basically, you know, not to bloat the binary and not to make it very big, and of course that comes with the cost of not having some functionality. So no console; it's only CLI or ACM. Frostmageddon: "All I see right now is dollar signs and the need to upgrade my switch and obtain a bunch of new Pis for a mega cluster." So that does bring up a couple of interesting questions, and Frostmageddon, I think you asked earlier. One: when multi-node becomes a thing, will we follow the same control plane node count requirements that OpenShift does — so one or three, or can we have five or seven or nine? And then also, are there minimum requirements for MicroShift? Like, can I use a Raspberry Pi with, you know, two gigs of RAM? Do I need to use a Pi 4 with eight gigs of RAM? That type of stuff. In terms of requirements, our goal was to fit it into one CPU and one gig of memory. So, for example, with two gigs of memory and, well, yeah, two CPUs, it would be enough. Everything beyond that is for applications. Yeah. Got it. And for the operating system.
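The demo application Ricardo deployed through ACM boils down to a deployment plus a NodePort service. A minimal sketch of those manifests — the names are illustrative, and the node port matches the 30303 shown when the service is accessed later in the demo:

```yaml
# Hypothetical minimal version of the demo app: NGINX behind a NodePort.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30303   # reachable at http://<host-ip>:30303
```

With a NodePort service, no ingress controller is needed: the application is reachable directly on the device's host IP, which fits the single-device edge model well.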
Yeah, that's refreshingly lightweight, of course, compared to things like CodeReady Containers and Single Node OpenShift. Look, the edge application that we created in ACM is now getting deployed. I forgot to answer the other question about control plane requirements. We want the users to have the same experience as OpenShift, so we will follow the same patterns in terms of only one or three. That's the goal. Yeah, we've talked before here on the stream about how going beyond three control plane nodes is often a counterintuitive anti-pattern, because you add so much complexity that it can make things more fragile. Mike Murphy asks: is there a recommended operating system when running on, for example, a Raspberry Pi? So is a Red Hat or Red Hat-derivative operating system — Fedora, RHEL, RHEL for Edge, et cetera — a requirement, or can you use things like, oh gosh, what is the name of the Raspberry Pi operating system? It's Debian-based, whatever it is. Yeah, yeah. Of course, an RPM won't work on those environments, but you can download the code and compile it, and if CRI-O is installed, it would work. Also, the container image would work if you can deploy it on Docker, for example. On the main website, there is also what we call the application development MicroShift deployment model, which is this all-in-one image. So you can deploy a container on your macOS or Windows machine, and it will have MicroShift plus CRI-O plus all the dependencies. We support only Red Hat-like operating systems, but you can deploy it, I think. Yeah, mostly because sometimes you depend on things which are going to be in the kernel, like specific iptables versions or nftables, and sometimes we found that, yeah, it was unstable if you don't have a little bit of control over that. But, I mean, you can do it; it will work, as Ricardo said. So I think we're probably coming up towards the end of our time.
So I just want to... go ahead. The cherry on top of the pie: the application has been deployed, so let's access the service. It's just an NGINX, so nothing fancy, but the IP of the device — the host IP — is this one, and the port is 30303. Well, so to be clear, I work for you all for the next however long you want to be on the stream, but I want to be respectful of your evenings. I know it's getting kind of late in the day for you, so I don't want to eat into too much of that time. So for our audience, if you have any questions or anything like that, please feel free to submit those. We'll do our best to address all of those and try to keep to a reasonable time. I see we've got a few pending questions as well, and I see that our application deployed successfully, which is amazing. I love it when it works exactly the way that we intend it to and the way that it's supposed to, right? Yeah. It's so funny, right? Because I've been recently having some conversations with folks around, like, you know, should we expect or be surprised when Kubernetes works the way it's supposed to? It's a seven-year-old project — Kubernetes 1.0 was May of 2015, I think — so it's seven-plus years that it's been around, and sometimes it still feels like people are surprised when Kubernetes does exactly what it is supposed to do. Like, no, we shouldn't be surprised; that's what we should expect. So, a couple of questions in here. Let's see. I know we're talking about MicroShift, but can I use Single Node OpenShift for edge? Yes, you absolutely can. The big difference there is that Single Node OpenShift is going to be a quote-unquote full OpenShift, right? It's going to have things like the metrics service and Operator Lifecycle Manager and machine config and all of those other things inside of there.
Whereas MicroShift — and please correct me if I'm wrong — MicroShift is really targeted at providing that OpenShift API for deploying containerized applications to those ultra-low-resource edge devices. It doesn't have to be ultra-low-resource, but that's one of the goals. So it doesn't have a lot of those other things, and it may make sense for a lot of different use cases. Install Podman on your MacBook. They're still working out kinks with Podman on the M1. It works, it works. Yeah, you can do podman machine and then deploy pods. Okay. Yeah, we have an issue with MicroShift, which I am trying to investigate, because of the mounts of the container volumes, but yeah, it should work. All right, I'm going to have to try it again then, especially since now I'm thinking: if MicroShift already natively runs on ARM, can I run MicroShift on my laptop to then be able to deploy things? That can get interesting. Mathias asks: is SELinux a requirement? Not really. I mean, you can disable it and it will work. We wanted to have security in mind when we built MicroShift, so we have all the policies, and there is an RPM package that installs all these policies, but it's not a requirement. Nor is firewalld. Got it. I don't know if anybody else can hear the very loud vacuum truck going by my house. It's that time of year where they go around and vacuum up all the lawn clippings, et cetera. Yeah, so I don't see any... Johnny, keep me honest here, do we have any other unanswered questions? Khalid was saying he would love it if CRC and MicroShift made babies and had, like, a lightweight CRC. That's my wording, not his, but basically he wants to see CRC and MicroShift kind of conjoined, and then have a lightweight CRC where, I'm assuming, you get the GUI and all that stuff that comes with CRC. And then there was one more.
Oh, and this was from Alan, where he was saying he knows it was mentioned earlier that it's not an official product, but what's the roadmap for MicroShift? The roadmap in terms of the community project, let's say. Yeah, or essentially where MicroShift fits in the Red Hat product line eventually — if we can talk about it; if not, then, you know, obviously. Yeah, the goal is to get it productized; we are not there yet. And in terms of roadmap, for example, multi-node is one of the PRs that we have open. We are trying to stabilize it as much as possible, rebase to OKD 4.10, and get better support for different use cases. For example — Miguel, if you can copy the DevConf YouTube video that we have — we did this demo about AI at the edge where we used MicroShift on one of these NVIDIA Jetson boards. There we deployed MicroShift and an application, a face recognition application, on top. And we used really low-end equipment: five-buck ESP cameras and an NVIDIA Jetson board. So these types of use cases will be in the real world, and these kinds of boards will be used in autonomous carts, in drones, in whatever. We are trying to get MicroShift into better shape for those. I think you just inadvertently answered another question that I had skipped over from Steven, which is: can MicroShift access host devices — storage, GPUs, et cetera — for example, on a Jetson? Yes, it can. Regarding the Jetson family, there is what's called the NVIDIA Container Runtime that you hook up to CRI-O, and then you can access the integrated GPUs. And we have also tried, in a discrete GPU environment where you have the cards, installing the GPU Operator, the same as in OpenShift, and the device plugin will take the GPU device and the workload can see it. We have tested that. Yeah, but just keep in mind that there are still no NVIDIA drivers in Fedora, for example.
So in our testing, we were using Linux for Tegra, the distribution that NVIDIA builds with their own kernel and so on. And then we used non-containerized MicroShift on it, because we need to use the NVIDIA Container Runtime. One option could have been using the MicroShift all-in-one, but that means you have the container runtime inside the container, and then you don't use the container runtime on the host, which is going to be the NVIDIA one. So we did it by installing the MicroShift binary, but probably if you install CRI-O on Linux for Tegra and then you install the MicroShift container — not the all-in-one, just the MicroShift container — it should theoretically work. Otherwise, the binary works. And we know that NVIDIA is working to have full support for upstream kernels. So I have one last question for you, Miguel. The flashing lights there to your right — is that your home lab? Do you have a little cluster of MicroShift running there? No, no, no, it's an aquarium. The lab is actually up here, but it's very messy. Maybe I can move the camera around. It's a home lab; it's supposed to be messy. The logic analyzer is gone now, and I have more... Oh, that's great. I love the gopher. That's awesome. Do you want to talk about your smart fishes? Smart fishes. I feel like that's one of those... Gosh, when was it... I think it was DockerCon, they had the thing with Kubernetes objects in Minecraft, where you imported all of the Kubernetes objects as different animals. I was just having a conversation about that with somebody the other day. So I'm thinking smart fishes, with the fish swimming around, and doing some image analysis, some AI/ML there, to have it do certain things.
But anyways, for our audience, if you have any questions, if there's anything we didn't get answered, if there's anything that comes up, if you're not watching this live, please don't hesitate to reach out to us. You can reach me at Practical Andrew on social media — Twitter, Reddit, all of the things, I have basically the same username — and of course email as well, andrew.sullivan@redhat.com. And Johnny is Jonny, J-O-N-N-Y, at redhat.com, and JRockTX1. Did I get it right, Johnny? Yes. It's only taken, like, twenty shows together for me to remember that. Never too late, man. It's never too late. So thank you so much for joining us, Miguel and Ricardo. This has been really phenomenal, and it has really gotten me excited about a lot of things I can do in my home lab. And my wife is going to be upset, because I'm probably going to accelerate my purchasing plans for some Raspberry Pis. So yeah, this has been really great; I've learned a lot today. Yeah, MicroShift would be a great reason for a divorce. I don't want to take it that far. I like where your head's at, though. I like where your head's at. Splitting people up wasn't our main goal. Yeah. Once it gets productized, then I can justify it as a work expense, right? I've got to run it in AWS. We'll just see if I can submit an expense report or something. Right. Thank you for having us. Yeah, thank you. You all are welcome anytime; if there's anything you ever want to talk about on the stream, I know everybody would love to hear from you. So for our audience, thank you for joining us today. We will be back next week. I don't think we know what our topic is yet, Johnny. Stephanie is behind the scenes, you know, shaking her fist at us because we haven't gotten it to her yet. So we'll figure that out. And then April 6, we're going to have Kirsten Newcomer join us to talk about security and OpenShift. So yeah, that's going to be really good.
Yeah, so we'll figure out what we're talking about next week. Keep an eye on the calendar, keep an eye on all the social media, and we'll publicize that. But yeah, thank you, everybody, for joining us. And Johnny, I'll leave you with the last word. This was an awesome, awesome demonstration. I appreciate you guys coming on and doing this. This is exciting for me, and I know it's exciting for our customers out there. So keep up the great work, and we'll see you around. Thank you. Bye-bye. Bye now. Have a good one.