Okay, thanks for coming to the OpenShift Commons event. My name is Ricardo Noriega. I work in the edge computing team as part of the emerging technologies organization within Red Hat's Office of the CTO.

Yeah, I'm Miguel. I also work in the emerging technologies department, and we're here to present MicroShift, a new and very exciting project that is getting a lot of attention lately. As we say, it is more or less a lightweight implementation of OpenShift for the edge. So we are going to do a brief introduction, a few slides, and then a cool demo. I hope the demo gods, you know, are good to us.

To explain the situation a little bit: for over a decade, the IT industry has been trying to move workloads from legacy appliances to the data center, to centralized locations, to the cloud. However, more and more diverse devices are connected to the internet, and those devices are producing huge amounts of data, so it is getting important to put computing power next to where the data is generated, which is near those edge devices in those remote locations. But the question is not just about putting computing power there; the question is whether I am able to manage the workloads that I run in the cloud and in the data centers the same way at the edge. This is where MicroShift was born.

If you look at Red Hat's portfolio, on one side of the spectrum we have OpenShift, which is Red Hat's Kubernetes distribution on steroids, let's say. It is well designed for data centers and for the cloud, and Red Hat has been making a huge effort to accommodate OpenShift to different topologies and different architectures: we provide remote worker nodes, we provide three-node clusters, and lately we provide single-node OpenShift. And on the other side of the spectrum,
we have RHEL for Edge. RHEL for Edge is a flavor of Red Hat Enterprise Linux that is optimized for edge computing use cases, and when I say optimized it is because we pick certain technologies that are very suitable for these scenarios: rpm-ostree for immutability, automatic upgrades and rollbacks in case something goes wrong, and a secure onboarding process for devices — this onboarding system is very well suited for field-deployed devices, and I will explain later what it is. The recommended way to deploy applications on RHEL for Edge is by using Podman, usually static containerized workloads. So MicroShift comes in to fill the gap in between, right? As I mentioned before, to try to manage our workloads consistently from the cloud to the edge.

What is MicroShift? MicroShift is a small form-factor OpenShift optimized for field-deployed devices. Our team has been really focused on integrating MicroShift into RHEL for Edge. It provides a minimal OpenShift experience, and when I say minimal it is because we provide a lot of the OpenShift APIs — routes, security context constraints, etc. — but we don't picture MicroShift as, for example, a platform to build your own container images at the edge; that doesn't make much sense for edge computing use cases. It is developed for resource-constrained environments where maybe network connectivity is unstable or not present. It can be managed like any other Kubernetes cluster by an orchestrator such as ACM, Advanced Cluster Management. And it is designed as a single binary that contains — I will show you later — most of the Kubernetes components and OpenShift components. It is shipped as an RPM or a container image, and it is compiled for different architectures: of course x86 and Arm, but also PowerPC, RISC-V, and so on.

So what are these field-deployed devices? I have something to show here: this is a Jetson Nano from NVIDIA.
It's really the developer kit, which has some connectivity — and something just fell off, you know; they're very cheap. So the question is: why are they so different from servers? What are the characteristics that make them special?

We are all kind of used to working with servers. Servers are highly standardized and highly scalable: you can, for example, plug in more memory, more storage, accelerator cards, and so on. But these field-deployed devices are usually systems-on-a-board, systems-on-chip, or single-board computers that are really pre-integrated, and it is very difficult to, you know, put more add-ons on top. The scenarios where you deploy these field-deployed devices are usually remote locations with no physical security barriers, with a network that is unstable or not present. There is no out-of-band management system; probably SSH is not even enabled. So the way that we operate these devices compared to servers is very different, and the way that we deploy them is completely the opposite from servers in a centralized location, in a data center.

Very quickly, this is how we picture the deployment workflow of a field-deployed device. I'm a customer and I want to deploy my edge solution, my application. Red Hat has a tool called Image Builder to create customized images of Red Hat Enterprise Linux, and of RHEL for Edge as well. So I create my own image with my dependencies and, for example, MicroShift, right?
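As a rough illustration of that step, an Image Builder blueprint that bakes MicroShift into a RHEL for Edge image might look something like the sketch below. The blueprint name and version are made up for the example, and the exact package name depends on the MicroShift release you target:

```toml
# Hypothetical Image Builder blueprint for a RHEL for Edge image
# that ships with MicroShift preinstalled.
name = "microshift-edge"
description = "RHEL for Edge image with MicroShift preinstalled"
version = "0.0.1"

# Package name is an assumption based on upstream MicroShift packaging;
# check the release you are targeting.
[[packages]]
name = "microshift"
version = "*"

# Enable the MicroShift systemd service so it starts on first boot.
[customizations.services]
enabled = ["microshift"]
```

You would then push the blueprint with `composer-cli blueprints push` and start an edge compose, and the resulting ostree commit becomes the image that goes to the hardware vendor.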
Then I go to my hardware vendor and I say: I want two thousand devices, and you can put this image on those devices. Finally, the manufacturer will put the devices in a box and send them to the remote locations, and once the devices are there, a technician will unbox the device and plug it into the wall — the network cable, the power cable — and the job is done. In theory there will be a device management system somewhere listening for registrations, and RHEL is shipping what we call FDO, FIDO Device Onboard, which is a system that, using keys and ownership vouchers, allows a secure registration of those devices. Finally, once the device has registered to my system, it will, for example, get registered against my ACM (Advanced Cluster Management) instance, and I will have two thousand devices ready to get my applications deployed in a GitOps way. So this is, more or less, the workflow that we picture.

How do we compare OpenShift with MicroShift? You can think about OpenShift as a vertically integrated solution for those people that want Kubernetes in the cloud, in a virtualized environment, or on physical appliances, and don't want to bother about the infrastructure that lies underneath, the operating system, and so on. OpenShift ships cluster operators, which are able to manage the infrastructure, the operating system, the versions of the different components, and so on. MicroShift, however, is designed for the edge, for those customers or users that want to build their own image of the operating system and want to manage those devices the way that they want — it could be with a Red Hat fleet manager or some other orchestrator, right?

Finally, very quickly, because the interesting part is the demo —
it's going to be really cool. This is the architecture: we have a binary that contains all of the Kubernetes control plane and node components, plus OpenShift APIs. We talk to CRI-O via its socket to schedule pods and so on, we keep the state in the file system, and we lifecycle-manage MicroShift with systemd. As you can see, there is an rpm-ostree image: when you build RHEL for Edge, the rpm-ostree will contain the list of RPMs that are part of the system, and that is immutable. If you want to upgrade the system, you will have to build a new image, and the upgrade will be done automatically — but all that software is completely immutable; you can't change it, let's say. And finally, just to show you that MicroShift is about the glue between the components, not about the components themselves: because Red Hat and the OpenShift team have a lot of experience building Kubernetes and its specific components, we pull every bit from OpenShift. That's more or less it. I hand it over to Miguel.

Okay, yeah, I will go with the demo. Just to give you an overview, because you cannot see it from there: we have a small device here running on arm64 — this is actually a Raspberry Pi 4. Okay, let me get this device on the screen... the screen has changed, so hopefully you can see it. Yeah, it's still running. I wanted to show you that MicroShift is running here as a systemd service. It's taking 700 megabytes of RAM, and together with the application that we are running now, which has some AI models, it's using like 1.4 gigabytes.
We recommend two gigabytes at least, so you are able to run the operating system and your application. When MicroShift starts for the first time, it will create a directory with some of the files that it needs: the etcd database, resources, and, among them, the kubeconfig file. Normally you will have a management system on top, so you don't need to connect to the API running on the device; the device will connect to the management system and you manage your workloads there. But, just for the purposes of showing you, we can see the pods running — what happened? — yeah, you can see the pods running. The bottom ones are MicroShift workloads: we have the service CA, the router, CoreDNS — for now we have Flannel, but we are still in the process of deciding what we are going to use there — and also the kubevirt hostpath provisioner, but that is still being decided too.

As you can see, our application is running here in the default namespace. Something that we provide, at least to bootstrap an initial workload, is applying manifests, and as you can see here we have a simple kustomization. So I'm running an access point, a server for some cameras that we have here and are going to connect in a second, then a regular service, and finally a route, which is going to be announced via mDNS for the cameras. So the cameras will connect to the access point running on the MicroShift device, they will connect to the camera server, and then the application there is going to be analyzing the video feed. Oh, where is my — I wanted to show you a little bit of these. Well, we made the firmware of those cameras custom — these are very cheap cameras, ESP32-based microcontroller ones — so they will look for the MicroShift access point, they will try to find the application via mDNS and register on the camera server, and then the camera server connects back and gets the video feed,
and then on the server side we have a small application — very simple, not optimized at all or anything. We are using the face_recognition Python library, which processes the video feeds looking for faces, trying to spot Ricardo and me and put our names on. So let's try to do the demo part, with that running there. Okay, maybe you can grab one of the cameras, and they can grab another one. Yeah, let me — so if I log in here, to the camera server — yeah, this camera is already connected to the server, and the server probably connected back, hopefully, to the camera. And when I look at this one, it is also connected to the access point running on MicroShift, and then to the service, and now we have a video feed here in MicroShift. Yeah, hopefully over the little white — okay. So yeah, the workload in MicroShift is processing the — yeah, I know. Not enough resolution, but if you want to try it, pass it around; hopefully it will pick you up. I don't know how far it will go. Yeah, the resolution is not very good, as you can see, and the lighting and so on — these are very cheap cameras.

Okay, so you have your application here, and the fun of having something like this is that you can manage your workloads as regular Kubernetes workloads. So, you can see, we can edit the camera server and switch to a different version of the image. I think it's — well, I don't remember now. That's it. Yeah, so the new version is running and the previous one has terminated; probably I need to reconnect the video feed. It's coming up — yeah, the cameras also have a timeout: if they see that the server is not streaming video, they will wait for a while and then reconnect to the server again. Yeah, there they are. So our new version has very advanced deepfake technology — you can see the power of AI.
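The demo pieces described above — the camera-server service, the mDNS-announced route, and the image switch — can be sketched as ordinary Kubernetes manifests of the kind MicroShift applies at startup. Everything here is hypothetical (names, port, image reference, and the `.local` hostname are assumptions for illustration, not the actual demo files):

```yaml
# Hypothetical Service in front of the camera server pod.
apiVersion: v1
kind: Service
metadata:
  name: camera-server          # hypothetical name
spec:
  selector:
    app: camera-server
  ports:
    - port: 8080               # assumed registration/streaming port
      targetPort: 8080
---
# Hypothetical Route with a .local hostname, the kind of name
# that can be announced to the cameras via mDNS.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: camera-server
spec:
  host: camera-server.local    # assumption: mDNS-resolvable hostname
  to:
    kind: Service
    name: camera-server
---
# Hypothetical Deployment; editing the image tag (e.g. v1 -> v2)
# is what triggers the rolling update shown in the demo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camera-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: camera-server
  template:
    metadata:
      labels:
        app: camera-server
    spec:
      containers:
        - name: camera-server
          image: quay.io/example/camera-server:v2   # hypothetical image
```

From the command line, the same switch could be done with `kubectl edit deployment camera-server`, changing only the `image:` field.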
Yeah. But with MicroShift and RHEL for Edge, the thing is that you have the power to manage your edge devices. You can even embed your application container images into the image of the device, so it will not need to download the images of your application containers; you just manage the image of your device, you tell the device to update, it does an atomic update, and your application is inside. And you can even have offline devices — even if that is not very "edge", you could even do that. And yeah, that's it. Thank you.

Just a quick question: it's not GA yet, is it? Is this still beta? Where are we at with this, product-wise? Yeah, someone is asking virtually too. Right now it's a community project, and hopefully by the end of the year there will be some limited availability.

Awesome. Everybody, go out and grab your Raspberry Pis — go get edgy. Thanks. We'll do that.