Hi everyone, my name is Joni. I work as a solution architect at Red Hat, and today I thought I'd show how we can run an application across different footprints. The scenario I had in mind for this demo: let's say you're a company, an organization, or an individual, and you have some kind of environment. It might be a data center, or you're running an application in a public cloud context. Then a request comes in to move that application and run it at another location, on another footprint. Maybe it's an edge device at another location, like a train, a bus, a factory, or a remote office: some kind of smaller footprint, if you will, where you don't have a full-fledged data center. For this demo I used a couple of different components. In the center we see OpenShift, or actually CodeReady Containers. CodeReady Containers is an all-in-one installation of OpenShift, and OpenShift is Red Hat's enterprise Kubernetes distribution. We're also making use of Ansible Automation Platform, which we'll use to create the automation necessary to deploy our application out to the edge device. The edge device is a RHEL node, and on that node we're going to make use of a couple of different components: systemd, which manages the startup of the operating system, and Podman, which is the container runtime. I'll also show some aspects of Red Hat Insights. As you can see, I have my environment up and running. To the left you see my web browser running Ansible Automation Platform, and to the right you see I'm logged into CodeReady Containers, our OpenShift environment. I have a demo script that I will run, and I'll walk you through what's going on in the background. As you can see, I'm using my local laptop as the demo environment. CRC there is CodeReady Containers, our OpenShift, and then Ansible is, well, Ansible.
First, we're going to need an edge device, so I will create a virtual machine. I'm just using a script to create a VM called edge1: it's going to have 20 GB of disk, two CPUs, two GB of RAM, and it's going to be based on RHEL 8.2. Just to go through what the script does: first, we're going to create a new project called minecraft, since our application in this demo is going to be Minecraft. I'll also make use of HAProxy to front the Minecraft server, just to show that it can be a more complicated application. Then we're going to allow anonymous access to the registry so that our node can pull the image. This is just to simplify the demo; in a production environment you would use some kind of authentication. Then we're going to create a build. While I go through this, let's kick off the script here. Now we see on the right that the project has actually been created. We can also go in here and see that it's going to create what is known as a build config in a while. First of all, we added the unauthenticated access so that an anonymous user can pull, and now we have created something called a build config. A build config is basically an object that knows how to combine different components into a runnable container image. In this case we're using the Docker strategy: it's going to clone from this GitHub repo and basically run a Docker build against that cloned content. So we are creating two Docker builds in our OpenShift cluster. The first one is haproxy-minecraft, which is basically an HAProxy instance that connects to a backend server, which is Minecraft. These two containers are now being built on the cluster. Then we're going to add a build hook: a post-commit build hook.
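The OpenShift steps the script performs can be sketched roughly like this. This is a sketch, not the script's exact commands: the Git URLs are placeholders, and the exact role binding for anonymous registry access is an assumption.

```shell
# Create the project that will hold the builds (name from the demo).
oc new-project minecraft

# Allow anonymous pulls from the internal registry. Demo only; a production
# setup would use real registry credentials instead.
oc policy add-role-to-group system:image-puller system:unauthenticated -n minecraft

# Create two Docker-strategy builds from Git repos (URLs are placeholders).
oc new-build https://github.com/example/minecraft-server \
  --strategy=docker --name=minecraft-server
oc new-build https://github.com/example/haproxy-minecraft \
  --strategy=docker --name=haproxy-minecraft
```

Each `oc new-build` creates a build config plus an image stream, and the resulting images land in the cluster's internal registry, which is what the edge node later pulls from.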
So what happens is that when we trigger this build, it will call out to the Ansible Tower REST API to launch a job that has been created in the Ansible Tower instance. While those builds are running, I'm actually going to continue the script over here: this is adding the post-commit build hook to these build objects. The next time we create a build, at the end of that build it's going to call the Ansible Tower instance to start that job. Just to show you how that looks, I'm going to log in to my Ansible Tower instance and show you my jobs. I have a template called push-minecraft-haproxy, and this job is what's going to be kicked off when we launch the builds in OpenShift. The builds are now done, so let's check one of them out. We now have a container image that has been built in OpenShift, so we could pull this container image to this environment. But let's kick off this build again: at the end of it, we're going to see the post-commit hook, and it's going to start the Ansible Tower job that deploys the environment. If we go to build configs, we can go to the minecraft-server build config and click Start Build. Now we can also follow along with what's going to happen if we go to the jobs page. Again, on the right side we see the build starting: it's going to clone the GitHub repo down to the build environment and then more or less do a Docker build against that content, following the instructions in the Dockerfile to produce a container image. As the last part of that, before the image is pushed to the registry, it's going to call out to the REST API to start the push-minecraft-haproxy Ansible job. Yeah, we're good to go. While this is running, I will log in to what is known as Red Hat Insights: you go to cloud.redhat.com. Insights is part of the RHEL subscription.
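Wiring up that post-commit hook can be sketched like this. The hostname, job template ID, and credentials below are placeholders, not the demo's real values:

```shell
# Attach a post-commit hook to the build config. It runs after the image is
# built but before it is pushed, and launches an Ansible Tower job template
# via the Tower REST API.
oc set build-hook bc/minecraft-server --post-commit \
  --command -- curl -k -X POST \
  -u admin:REDACTED \
  https://tower.example.com/api/v2/job_templates/7/launch/
```

If the hook command fails, the build itself is marked as failed, so in practice you may want the curl call to tolerate transient Tower outages.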
There we can see systems that have been registered to this environment. Insights represents Red Hat's approach to proactive support. The background story of Insights is that someone looked and saw that, in a large share of the cases when someone calls Red Hat for support, we were already aware of the challenge or problem: some other customer had hit it, we knew about it, and a fix actually existed. So someone came up with the brilliant idea: what if we proactively notify our customers about this potential solution to their challenge? Just as a side note, if you look to the right here, we see the curl command running that is going to trigger the job. If we go over here, we're going to see the job starting, and as you can see, push-minecraft-haproxy is actually starting. It's gathering facts, and we see that it has found our virtual machine, edge1.home.lab. Let's continue the discussion around Insights. If you register your RHEL server to Insights, it will proactively notify you about known problems or issues that we are aware of at Red Hat. So the machine is going to be registered and will at some point show up in your inventory. The basis for this demo was actually a blog post, so I want to highlight that. If you Google for systemd and Podman, one of the first articles is going to be "Running containers with Podman and shareable systemd services". We're deploying to an edge device, which is RHEL, but we still care about things like: what if the VM crashes and restarts? How do we make sure that our container is up and running? If you're aware of how Podman works, you know that it's not running as a daemon; it's daemonless. So how can you make sure the container restarts when the VM comes up? We're making use of systemd to do that.
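A generic systemd unit for a Podman container, in the spirit of that blog post, looks roughly like this. This is a sketch: the image path assumes the default CRC registry route, and the container and unit names are assumptions.

```ini
# /etc/systemd/system/minecraft-server.service
[Unit]
Description=Minecraft server container
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
# Remove any stale container left over from a crash, then run a fresh one.
ExecStartPre=-/usr/bin/podman rm -f minecraft-server
ExecStart=/usr/bin/podman run --rm --name minecraft-server \
    -p 25565:25565 \
    default-route-openshift-image-registry.apps-crc.testing/minecraft/minecraft-server
ExecStop=/usr/bin/podman stop minecraft-server

[Install]
WantedBy=multi-user.target
```

With `WantedBy=multi-user.target` and the unit enabled, systemd restarts the container every time the VM boots, which is exactly the daemonless gap being filled here.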
And that blog post goes through how you can create a generic systemd unit file to deploy a container to a system, and that's what we're making use of. As we can see here, the job is progressing. The first thing happening is a registration: the machine is registered for an entitlement, a subscription, and once that's done, it moves on to Insights. Let's take a quick look at the playbook itself. The playbook is really simple. The deploy-haproxy play registers the machine; I'm not showing the contents of that file because it has passwords in it. The roles section is where it becomes interesting: it's going to include the role, and this is where the magic happens. If you look at tasks/main, this is basically what's going on. First, once the system is registered, it's going to install Podman. Then it's going to do basically a Docker pull, a podman pull, against our OpenShift environment. And if you see here, the apps-crc.testing part actually corresponds to the URL up here in our OpenShift environment. Then it's going to lay down the systemd unit files that make sure our container is up and running on the system. That is basically what goes on; very simple. Now we see that the Insights client is installing, and if we go to the inventory, we should be able to see the system showing up. Now it's progressing to the next step, installing Podman, as I showed in the playbook. If we go back here and refresh the page, we should see our node registered. And there we go: edge1. Some more information on Insights: the first thing we get is overview information, facts about the system, what type of capacity, memory, CPU, and so on.
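The tasks/main file described above could look roughly like this. This is a sketch under assumptions: the role layout, image names, template file names, and the registry route are illustrative, not taken from the demo repo.

```yaml
# roles/minecraft/tasks/main.yml (sketch)
- name: Install Podman
  yum:
    name: podman
    state: present

- name: Pull the images from the OpenShift internal registry
  command: >
    podman pull --tls-verify=false
    default-route-openshift-image-registry.apps-crc.testing/minecraft/{{ item }}
  loop:
    - minecraft-server
    - haproxy-minecraft

- name: Lay down the systemd unit files
  template:
    src: "{{ item }}.service.j2"
    dest: "/etc/systemd/system/{{ item }}.service"
  loop:
    - minecraft-server
    - haproxy-minecraft

- name: Enable and start the containers
  systemd:
    name: "{{ item }}.service"
    enabled: true
    state: started
    daemon_reload: true
  loop:
    - minecraft-server
    - haproxy-minecraft
```

The loop over both image names keeps the role generic: adding a third container to the stack would just mean one more list entry and one more unit template.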
Then there are a couple of different sections that I'll quickly run through. Advisor is what it sounds like: whether we have any advice for this specific system, and there isn't any as of now. The Vulnerability tab shows security issues, vulnerabilities that we think are important for you to be aware of. If we expand one of these, we can read more information about that specific CVE; this particular one is about cryptography in one of the packages installed on the system. Then we have Compliance: if there's a policy created for the system, it shows whether the system is compliant with that policy, and we haven't created one. Patch shows whether there are package updates that might fix known bugs or issues on the system, and we can find out more information about those as well. This represents how we would manage things long term, because this demo shows one machine, but what if we have 50, or 100, or maybe even thousands? Insights allows us to proactively manage all of that. We can also fix these issues: if we'd like to, we could select all of them and remediate with Ansible, and this would create a playbook with the instructions to fix these issues on the system. Also as part of the playbook, we enabled something known as Cockpit, the web console. If we now log on to the edge1 server, we can actually manage the server through a web UI. Just to show you: even if you haven't been working with Linux for a long time, there's a way for you to manage the system through a UI. This is just an example. You can see some performance metrics, how the system is behaving, what the CPU and memory consumption is at the moment. You can access the logs of the system and look at the different log entries.
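Enabling the web console on the RHEL node amounts to something like the following. This is a sketch of the likely steps, not lifted from the playbook, and the hostname is the demo's:

```shell
# Install and enable Cockpit; its socket listens on port 9090 by default.
sudo yum install -y cockpit
sudo systemctl enable --now cockpit.socket

# Then browse to https://edge1.home.lab:9090 and log in with a system account.
```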
So you can do some basic troubleshooting and start understanding what's going on with the system. You can also do some network configuration, say you want to configure the network in a certain way, manage the accounts on the node, and access the terminal of the system. And if we look at our job here, it's actually done: it has now deployed our containers to the local node. Let's see how that looks. If we do a podman ps, which is basically a docker ps, we can see that the containers are up and running. If we want to look at the systemd unit files, we can also find those over here: we check Services, filter for minecraft, and we can see our two systemd unit files deployed to this system. As a last note, let's see if we can launch our Minecraft client and connect to this newly deployed server. Hit Play, and it's going to start the client. My system is running low on resources at the moment, so it's taking a bit longer here. We do a direct connect, enter edge1.home.lab port 80, and hit Join Server. And we are able to access our Minecraft server deployed to our edge device. That is it. Thank you everyone for your time with this demo, and I wish you a good day. Thank you, bye.