Hello everyone, welcome to DevConf.CZ 2022. I hope you're enjoying it so far. My name is Lenka and I'm the moderator for this session. Next up we have Sally and Pearl with their presentation on managed workloads on the disconnected far edge. There will be time for your questions at the end; you can write them in the Q&A section. So Sally and Pearl, let's get started.

Thank you. I'm Sally, a software engineer at Red Hat, and I've been working on this project, MicroShift, for a few months now. It's really exciting, and that's what we're here to tell you about.

Hi, I'm Pearl. I work with Sally on the same team, and we're very excited to talk about this project. We'll start the agenda by talking a little about edge computing and the challenges that come with this ecosystem. Then we'll introduce the MicroShift project and where it fits in the edge computing ecosystem. After that we'll dive a little deeper into MicroShift: the architecture, what a production deployment looks like, and the different ways you can deploy MicroShift and get your workloads running. And last, we have something interesting for developers, which we call MicroShift AIO; we'll talk about that later.

So first, let's see what edge computing is and where it's used. Edge computing usually means you have very limited storage and very limited network, and you want to bring compute close to where the data lives. It's heavily used in micro data centers, which are small-footprint data centers containing everything you need to run an application, the whole stack; examples would be 5G, IoT, or static content streaming in a content delivery network. Another area is embedded systems: automated vehicles, your smart home devices. What we're going to talk about in this presentation specifically are field devices deployed in far-off locations; think of them as drones or satellites.

Just to give some context on field-deployed devices: when we say field-deployed, we're talking about a plug-and-go provisioning and replacement mechanism. These devices are remotely deployed, most of the time not recoverable, and they must not break easily. They also operate over very expensive and sparse networks, and most of the time they have no physical access security. They're usually single-board computers or systems-on-chip; think of them as a Raspberry Pi. So whenever we talk about field-deployed devices, we're referring to mass deployment and remote operations at uncontrolled locations with highly challenging network connectivity. Think of a camera deployed on a ship, or a drone deployed in the far north or south, exploring glaciers. In contrast, conventional, highly controlled data centers are always available on highly available infrastructure, with very stable power and network conditions.

But the problem is, we still want Kubernetes. So what's needed is a way to manage, update, and transfer data to and from these remote edge machines, while still using those tried-and-true, familiar patterns of cloud-native deployment like Kubernetes and OpenShift. These are low-resource machines.
They don't have enough resources to run even a single-node Kubernetes cluster. They're sometimes disconnected, or their connectivity is undependable, and you still want to run cloud-native workloads. You might need to manage your applications and workloads separately from the underlying operating system, which is the opposite of what OpenShift does. Or you could use an rpm-ostree operating system where you embed the workload as RPMs, but again, resources are limited, and how do you update the workloads cleanly?

And so the problem, again: we're in a time period where everything is being instrumented, further and further from the data center, and we need to harness, process, and analyze the data from these far-off edge devices: drones, smart cars, field devices, oil rigs, sensors. It's a huge opportunity, this instrumented world; we can't even imagine what problems it's going to solve for our society. But first we need to solve the problem of how to get at this data, and how to analyze and process it.

So the solution is that we want the best of both worlds. We want Kubernetes, or a Kubernetes-like orchestration platform, to manage workloads, because as a developer you want to use Kubernetes and cloud-native tools to build your applications for the edge. So we have to bring Kubernetes to edge devices. But most of the time these edge devices are managed by a device management system that manages the OS and the underlying hardware. So we want to integrate both: the application development flexibility of cloud-native applications based on Kubernetes, and the device management functionality that manages and updates the OS.

On the left-hand side of the slide, you have the niche cases where deployments run on devices and are managed just through the OS. On the right-hand side, you have high-availability clusters with high capacity, running Kubernetes workloads; that can be handled by OpenShift. Where MicroShift sits, and where most of the challenges we've been talking about live, is the intersection: you want Kubernetes on field devices, but as of now you have very limited options in the industry.

So let's introduce this open source project we've been working on. It's called MicroShift. It has not been productized so far; it's an exploratory project that we work on in the Red Hat Office of the CTO. The goal is to repackage OpenShift core components, like your router, your DNS, your storage provider, and etcd, into a single tiny binary, which as of today is 160 megabytes without compression. MicroShift is a monolith, so it provides all-or-nothing start/stop behavior, and it works with systemd and Podman, which enables fast restart and stop times. The downtime is very low, as you'll see in our demo.

And when I say monolith, here's what I'm talking about. You have a cluster management tool that you use to manage your Kubernetes applications, which in our case talks to the basic Kubernetes APIs, like etcd and the kube-apiserver, plus some OpenShift components. Then you have the MicroShift container, and the MicroShift binary inside it contains everything: the Kubernetes components as well as some add-on OpenShift components. When you run MicroShift, it just runs on the OS, and your workloads run on the OS itself; they don't run inside the MicroShift container. They're hooked directly into the container runtime, which in this case is CRI-O running on the OS itself.
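To make that concrete, launching the containerized MicroShift with Podman looks roughly like the sketch below. The image name, volume name, and mount paths here are illustrative assumptions, not the project's canonical invocation; check the MicroShift README for the exact command your version needs.

```bash
# Sketch only: image name, volume, and mount paths are assumptions.

# A named Podman volume persists the cluster state across restarts.
sudo podman volume create microshift-data

# MicroShift drives the host's container runtime, so it needs broad host
# access (--privileged, host networking, and the CRI-O socket mounted in).
sudo podman run -d --name microshift \
  --privileged --network host \
  -v microshift-data:/var/lib \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock \
  quay.io/microshift/microshift:latest
```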
And then you can have a fleet manager that manages all your field-deployed devices. So what MicroShift does is simplify changes, including updates and rollbacks, and it obviates, or removes, the cluster operators that orchestrate components. And mind you, it's only the cluster operators: you can still run operators as workloads on your MicroShift cluster, but we have removed the cluster operators and the lower-level operators like machine-config. On the other side there's OpenShift, or Kubernetes, where all of the components and operators are their own stacks, managed by Kubernetes, highly resilient; a problem in one doesn't take down the whole cluster. OpenShift takes this to the extreme, where everything is managed by operators and even the operators are managed by an operator. So in the red, over on the left, you can see OpenShift: it's meant for orchestration at scale. And MicroShift, on the right, is meant to run Kubernetes workloads on extremely limited resources.

Digging down a bit further; next slide, please. You can see the MicroShift container. It includes the MicroShift binary, which embeds the core controller logic: etcd, the kube API server, the OpenShift API server, and the controller manager. It utilizes CRI-O running on the host machine. Also on the host machine, a Podman volume saves the cluster state. And all of this can easily be managed by a systemd service that wraps a podman command with those volume mounts and permissions (there's a minimal sketch of such a unit after the decision tree below). Besides the core controllers, the MicroShift binary also includes manifests and references for the add-on components that the MicroShift architects think most people will need, such as the OpenShift service CA controller that manages TLS certs, the OpenShift router, and a storage provider.

So, next slide: how is this created? Well, MicroShift uses OKD. It uses the OKD release image and the OKD source code. The actual etcd, kubelet, and API server logic is compiled into the MicroShift binary, and the OKD component manifests found in the release image, as well as the digests of the individual add-on images, are also embedded in the MicroShift binary.

So, what we've been trying to say so far is that when you're running workloads on an edge computing platform, you have multiple choices, and the question is which one is right for you. We've designed a simple decision tree to help you decide. If you're deploying to a target, you have two options. Is it servers in a controlled environment? If so, you have RHEL CoreOS and OpenShift; that's the right-hand side of the decision tree, and we're not going to cover it because it's not our agenda. If you're deploying servers in a controlled environment, go with OpenShift, be it single-node or the full OpenShift; that's the best route. But if you have field-deployed devices, your option is RHEL for Edge. Then you look at what kind of workload you're going to run. Does it comprise OCI containers? Does it need pods? Is it using Kubernetes? If that's your answer, you can use RHEL for Edge plus MicroShift. But if you're just running containers and pods, there's another fork in the decision: do you need the ability to cluster, or horizontal scaling? If your workload needs that, then again you can go with MicroShift, because with MicroShift you can scale. And if it's just a static workload, say a single simple process, then you could just use RHEL for Edge with Podman.
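Here is the kind of systemd unit we mean when we say the service wraps a podman command with the volume mounts and permissions. It's a minimal sketch under the same assumptions as the earlier snippet (image name, volume, and socket path are illustrative); the project ships its own unit file, which is the one to actually use.

```bash
# Sketch of a systemd unit wrapping the podman run command; paths and
# image name are illustrative assumptions.
sudo tee /etc/systemd/system/microshift-containerized.service <<'EOF'
[Unit]
Description=MicroShift containerized (sketch)
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
# Create the state volume if it doesn't exist yet ("-" ignores failure).
ExecStartPre=-/usr/bin/podman volume create microshift-data
# The autoupdate label and PODMAN_SYSTEMD_UNIT env are what let
# `podman auto-update` refresh this container later (more on this below).
ExecStart=/usr/bin/podman run --rm --name microshift \
    --label io.containers.autoupdate=registry \
    --env PODMAN_SYSTEMD_UNIT=%n \
    --privileged --network host \
    -v microshift-data:/var/lib \
    -v /var/run/crio/crio.sock:/var/run/crio/crio.sock \
    quay.io/microshift/microshift:latest
ExecStop=/usr/bin/podman stop microshift
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now microshift-containerized.service
```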
And again, on the cloud-native ecosystem: MicroShift can use those add-on tools that you would use with any Kubernetes cluster, so that's also a really important benefit of using MicroShift.

So there are two models for deploying MicroShift. The first is an RPM install, where you install an RPM package and it sets up SELinux, installs your dependencies, and manages everything through the RPM package manager. The second option is Podman with systemd, which is what we're going to cover today. The one caveat with Podman plus systemd is that you need to install CRI-O yourself. That's one piece of configuration you have to do; it was the trade-off for removing the entire set of cluster operators. With MicroShift you do have to do some low-level sysadmin configuration, but it's very minimal, and it was required to keep the binary as small as 160 megabytes.

So the second option, Podman with systemd using the host's CRI-O, is what we're going to cover in a demo. The Podman deployment runs on an immutable OS, which as of now means Fedora IoT or RHEL for Edge; we did the demo on Fedora IoT. It uses Podman to deploy and manage containers via systemd. I've attached a link in the slides to a very good blog by the Podman team on how to start, stop, restart, auto-update, and roll back containerized applications using Podman's systemd features. And it doesn't have to run on an rpm-ostree-based operating system; it can, and most people in production would, but I always run MicroShift on Fedora. Definitely; in a production scenario you would use an IoT device, but those who have worked with IoT know the challenges. I was trying it, and it's very difficult to provision through the community SSH key provider, so you have to create your own ISO image. But that's all background.

So let me play this very short, one-minute video that shows how you install MicroShift using Podman. I copied the service file to the systemd location, and after that it's just one line to enable the service. I made some typos; otherwise it would look too real, right? You know it's not fake, because there are typos. So I started this MicroShift containerized service, and it's running under systemd, as you can see. You can also check it with podman ps. And you can see the node is now up; it became available six seconds ago, and the pods are all running. I'll also stop the service and restart it to show that the downtime is really low. Now you can see I can't reach the cluster; I'll just restart the service, and within a minute it has restarted all my workloads.
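For reference, the demo boils down to a handful of commands along these lines. The unit name matches the sketch above, and this assumes your kubeconfig is already pointed at the cluster; both are illustrative.

```bash
# Enable and start the service, then confirm the container is running.
sudo systemctl enable --now microshift-containerized.service
sudo podman ps

# Watch the node register and the bundled component pods come up.
oc get nodes
oc get pods -A

# Stop and restart to observe the downtime; state lives on the Podman
# volume, so existing workloads resume instead of being recreated.
sudo systemctl stop microshift-containerized.service
sudo systemctl start microshift-containerized.service
oc get pods -A   # pod AGE continues from the original creation time
```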
And as we mentioned, the workload state is saved on a Podman volume, so the state of the workloads is not lost even when the service is stopped; as soon as you restart it, it picks the state back up. You can see the service was started 80 seconds ago, but the pod age says two minutes, which means it picked up the same state. I wanted to show auto-update as well, but nothing would happen, because my images were already updated while recording this presentation. (I also did a system prune beforehand; very bad idea.) Basically, what happens when you do a Podman auto-update dry run is that it compares the digest of the container image tag on your local system against the digest of the same tag in the remote registry. If the digests differ, Podman auto-update updates that image; but right now, with my images already current, it won't show anything. Sorry.

I just want to clarify: there are two options. You can set auto-update to local or to registry, so Podman gives you the choice, and in the unit file we use autoupdate equals registry. If the digest is different in the registry, it will automatically restart onto the new, updated image. But Pearl is using local on her local system here. Yep.
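A quick sketch of what that looks like in practice: `podman auto-update` and its `--dry-run` flag are real Podman features, and the label shown is the one set in the unit sketch earlier.

```bash
# The container must carry the autoupdate label (set in the unit sketch
# above). "registry" compares the local image digest against the remote
# tag; "local" restarts onto a newer locally built image instead:
#   --label io.containers.autoupdate=registry

# Show what would be updated, without changing anything.
sudo podman auto-update --dry-run

# Actually pull newer images and restart the corresponding systemd units.
sudo podman auto-update
```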
So, next slide. Oh, is this me? Yeah. All right. This is a slide I'm glad we have a few minutes left for, because I have a little side campaign with MicroShift: I want to spread the word that it's a very convenient tool for developers. Along the way of developing this edge production use case, a flavor of MicroShift known as MicroShift AIO (all-in-one) came about. It includes everything in the MicroShift container we talked about, but it also containerizes CRI-O, so literally all you need on your host system is the ability to run Podman. When you start the MicroShift AIO image, you have the full MicroShift environment up and running. It's really convenient if you're developing an application and want to quickly spin up an environment to test deploying it on, especially if it's an OpenShift-specific application, because it has the OpenShift API: you can use security context constraints, or routes, or even projects. Another use case that has come up over the past month (because this is a really new project) is using it in CI. If you don't need a full OpenShift or Kubernetes cluster to test your applications, using MicroShift AIO in your CI pipeline can save a ton of resources, and a ton of time, actually.

While I get the screen share set up for the demo, someone has asked: what fleet managers are you using? We don't use any fleet manager with MicroShift. A fleet manager is usually used to manage the devices themselves, and MicroShift would just be running on those devices. So we don't have a recommendation; we let customers decide whatever their choice is, and MicroShift does not depend on the kind of fleet manager you use.

OK, I'm ready; nice, I can play it from right here. Everyone can see it? Cool. This video is linked in our slides, which we'll share. There's a bit at the beginning that gives a step-by-step overview of the podman command we use in the unit file; we just don't have time for that, so I'll show you the footage of running it. Here we start by copying the systemd unit file, and then you just start the service. And you can see it comes up right away: the MicroShift AIO container is running, and there's the Podman volume and the mount on your system. If I exec into that container, you can see (I didn't mention this before) that conveniently, oc and kubectl are there and the kubeconfig is already set. So again, you just launch this image with Podman or systemd and you're up and running. When you exec in, you can see that in under a minute your cluster is up. The embedded controllers like the kubelet and the API server don't show up as separate pods, because they're all included in the MicroShift binary; what you see are the add-on controllers, like the service CA controller, our hostpath provisioner, and the OpenShift DNS.

So basically, if you've worked with kind, this is much faster than kind, and if you've worked with CRC, you'll see how easy it is with MicroShift. It's not a replacement for CodeReady Containers or kind, but it is a very fast way to get started developing your cloud-native and Kubernetes applications. Then you can just push your images, and if you have MicroShift running in production with systemd, whenever you update your MicroShift, your underlying workloads will also be updated.

Another question: can I install operators and custom API servers? Yes, you can run operators as workloads. What we meant is that you cannot have cluster-level operators, but you can always run standard operators and API servers inside MicroShift. Anything that runs on stock Kubernetes, you can run inside MicroShift.

And in this demo I also want to show that you can access the kubeconfig from outside the container. Here I'm on my local host; I copy the kubeconfig out and change the ownership, and now I can just work from my local host. If I have my git repo with my application, I can try it out here; I'm just showing you that it actually works. For the next step, I'm going to stop it and restart it, and you can see that when I restart it, it starts up instantly because the Podman volume is intact, and the test containers I just launched are still there. And yeah, that's it.
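To tie the AIO demo together, an end-to-end run looks something like the sketch below. The image name and the kubeconfig path inside the container are assumptions for illustration; the MicroShift AIO docs give the real ones.

```bash
# Sketch only: image name and in-container kubeconfig path are assumptions.

# Start the all-in-one container (CRI-O is inside it, so Podman is the
# only host requirement).
sudo podman run -d --name microshift-aio --privileged \
  -v microshift-aio-data:/var/lib \
  -p 6443:6443 \
  quay.io/microshift/microshift-aio:latest

# oc, kubectl, and the kubeconfig are preconfigured inside the container.
sudo podman exec microshift-aio oc get pods -A

# Or copy the kubeconfig out and work from the host, as in the demo.
sudo podman cp \
  microshift-aio:/var/lib/microshift/resources/kubeadmin/kubeconfig \
  ./kubeconfig
sudo chown "$USER" ./kubeconfig
oc --kubeconfig ./kubeconfig get nodes
```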
Thank you. Thank you, Sally and Pearl. I'm so sorry to interrupt, but we are running out of time. Of course, everyone is welcome to join our awesome speakers at the WorkAdventure. So thank you, and in a few minutes there will be the next session, about onboarding edge devices.