Hello everyone. How are you today? Are you feeling bad about missing lunch? Can we hold out for about 45 minutes? I probably won't drag you through a whole hour. This is going to be about 90% demo, but I will have some drawings. How many of you were in my talk yesterday? Okay, cool. This one is different: I'll do just a few slides, then a bunch of demo, and then maybe show you one more slide. Yesterday I went back and forth, so this will be slightly different, but it's still a lot of demo. I assume the audience is quite similar, so let me ask about some technology to see what you know. How many of you know what CodeReady Containers is? A decent amount, but less than half, I would say. How many of you know what OpenShift is? Okay, that's a lot more. How many of you know what Podman is? Okay, that's good. How many of you know what quay.io is? Good, about half, maybe a little more. How many of you know what the Red Hat Container Catalog is? Many, okay. So I will have to describe CodeReady Containers and the Red Hat Container Catalog a little more, but that's okay. This talk will be unlike yesterday's. Yesterday's was strictly about how something works, so it was more like a Discovery Channel documentary: I didn't actually show you how to do anything new at all. At the end of that talk you could do nothing new; you would just continue to run containers like you always had. The only things you could do better were meta operations: you could probably troubleshoot things better and maybe architect things better, but you weren't able to do anything new. This talk is focused more on how to use a bunch of technologies together in a use case, actually in about five major use cases, and on getting your brain around, to use a buzzword, a cloud-native way of thinking, which is slightly different from the way you would
think with traditional infrastructure. My background is a ton of infrastructure. I came from American Greetings. How many of you know what e-cards are? Not that many. How old are you guys? How many people are over 40, raise your hand? You guys don't remember what e-cards are, okay. So long ago, in the early days of the internet, the mid and late 90s, we used to send each other greeting cards online: good tidings, best wishes cards, things for Halloween and the six major holidays like Christmas, New Year's, Mother's Day, and Valentine's Day. Valentine's Day was huge. During Valentine's Day, the site I worked for, American Greetings, was among the top 10 largest sites in the world. So I managed thousands of Linux servers back in the late 90s and early 2000s, when that was still a pretty big number of Linux servers. The funny part is, even today with everything I'm going to show you with containers, the problems are identical. It's a matter of how to share code, configuration, and data among a large group of resources, be they a bunch of servers or virtual machines or containers. Either way, it's about distributing these workloads and doing the same thing. So it's funny: my entire career has basically been the same problem. I've just been working on it with new technologies the whole time, and it's gotten easier. Before I even go deeper, I'll ask you one question: how many of you think Kubernetes is complex? Raise your hand. So this is the funny part.
I think the business problem is complex. I think running large-scale web properties with thousands of services is complex. And I think Kubernetes is actually the easiest way I've ever seen to do that. If I were to show you what we did back in the late 90s, it was way more complex than what we're doing here. We had configuration in a hundred different places, there were all these different files you had to edit to add a service, and even worse, trying to get rid of a service and delete it was much harder. So I'm going to talk through the beginning of containers and the use cases I'm going to walk through, and then I'll start demoing. Five, six years ago, how long was it, 2014? Six years ago now, I started with Docker. I started playing with Docker, and I immediately saw that it did the exact things we had always wanted to do: we wanted to find, run, and build stuff. It's basic collaboration, right? We want to go find something that already exists and kind of does what I need, pull it down, mess with it, run it, get my brain around it, understand it, learn it, and then immediately change it a little bit to add new value. From there we'll get to sharing later. This exact set of use cases is exactly what containers address. If you think about an open source library, this is what you do: you search GitHub, you look for something that kind of does what you need, you pull it down, you run it, usually compile it, and figure out what I need to do. This is what we did back in the late 90s. And then you build something new on top of it, just a little layer of value; that's the thing I'm going to contribute to make it just a little bit more of what I need it to do. This is fundamentally different thinking from 30 or 40 years ago, when you had to write it all from scratch. Open source changed the world, so I will say containers couldn't have happened without open source. But now we want it to be easier,
so back when I first played with Docker, I would go out to Docker Hub and look for an existing container, typically a base image. I would run the base image, make sure it worked, and then add a layer to the container image. And then what do you do? That's useless, right? If I add value but I can't share that value, it's basically useless. These are fundamental use cases of collaboration, with containers or without, with anything you're writing, any kind of code. And wait, I want to back up, because some people get stuck here. I would call these the regular Docker people. They get stuck here: they just want to keep pulling down a container image, running it, building something new, sharing it in a registry server, and then doing that over and over. They don't want to go to the next level because they're scared, because they think Kubernetes is complex. But really, what they don't understand is that the business problem is complex. Running highly available web services at scale is a tough problem. It's always been hard; it's been hard for my entire career. But I would say it's easier now. And so then you have the next problem: how do I integrate this? How do I make all of these services run together? Say I have a simple web app that's just a database and a web server. Fine, that's not too hard.
I can run that on my laptop. But say I have an application with 12 different services. It has a caching layer in front of the database, a caching layer in front of the web server, a layer 7 firewall that prevents certain things going from the web server to the database, and a couple of different web services that all talk to each other and each do one thing. Now we're up to 10 or 12 services, and integrating those 12 services, getting them all to work together, then pulling one out, changing it a little bit, and putting it back in: that's the business problem we're trying to solve. To be honest, 20 years ago that was still the business problem we were solving, but with containers this gets easier. The problem is people get scared at share. They get stuck and want to go back. They want to run a single-node container with Docker or Podman, they just want to keep running that, and they get scared to go to the next level. I'm going to show you why, one, that's not that hard to do. How many of you were in Urvashi and Sally's talk earlier today? They showed something really cool, podman generate kube, which I'm going to show you too, but I'm going to show it in the context of this use case, and how you go from use case to use case. Also, I would say the first three-ish years of containers was everybody figuring out this flow, up to share. Then in the last four-ish years Kubernetes took off, and we started to realize we want to integrate and then deploy with a YAML file that describes the application, all the services, and how they interact. Now again, it might be intimidating to get into that YAML file and start to understand it, but it's a lot easier than what we did 20 years ago. What we did 20 years ago was this: we had configuration in Apache, configuration in the load balancers, configuration in the network with routing protocols.
We had configuration all over the place. We had it in DNS; we had it everywhere. There were probably 10 or 12 different places where I had to go write config files to bring a service up, and worse, when I would bring it down, I had to delete it in 10 different places and then hope I didn't break dependencies somewhere else in the environment. With Kubernetes it's a heck of a lot easier: it's a single command to bring down a service, so the deprecation of a service is so much easier. Okay, so let's walk through what I mean, because that was pretty high level. Today I have it scripted, because I'm not as crazy as I was yesterday. Yesterday I just did it live; I was like, let's just try this. But this one should go fairly smoothly. We're going to start with find. When you want to go find a container image, let's think through it from a security perspective. Historically you would look at Docker Hub. One of the challenges with Docker Hub is that anybody can write a container image there; it's a read-write registry. And of course, as we know, the public internet is always a dangerous place, and we've seen tons of articles about Bitcoin miners being embedded in container images. How many of you have heard about this? It's a very common thing: you download a container image that has a Bitcoin miner embedded in it, you're running your web server, and behind the scenes that container is also running a Bitcoin miner and shipping the results back. It's always about money; just follow the money and you'll find why people do this stuff. But in a nutshell, what I look at is here.
Let's run this. If we do this on a Red Hat system, podman search, you will see something here: registry.access.redhat.com. This is the Red Hat Container Catalog, a controlled environment where Red Hat publishes all of its official container images. I have a lot more confidence in these container images because I know they come from a trusted source and I know how they're built. It's, as we say in English, a walled garden, where I know I'm starting from a good, trusted starting point. I'm finding something that I think I trust, and now if I add value, I have what we call chain of custody: I trust the original party that had custody of these bits, and then I bring them down and now I have custody. But let's say I kind of trust Red Hat, but I'm not a hundred percent sure. I think they build decent stuff, but I'm not a hundred percent sure that I trust them. So let me show you what the Container Catalog is. I guess I lied; some of this is not scripted. Let's say I want to pull down a base image. How many of you know what Red Hat Universal Base Image is? I will explain it. With the release of RHEL 8 at Summit last year, we released something called Red Hat Universal Base Image. It is a container base image that is essentially a RHEL base image; it's the same bits, the same trusted bits with a 10-year life cycle and backports of security fixes and features, but contained in a container image with a different end-user license agreement that allows people to redistribute it freely.
So this was a challenge we had, because under our old business model the only way we could really control the bits, since they're all open source, was to say: hey, we won't support you if you install a bunch of copies of this and don't pay us. It was really a contractual agreement. The problem with that is that in the world of containers, the install method changes. If you think about it, the developer is now installing the software and the operating system bits together in a container image, and then deploying and sharing those all over the place. But we had never given people the right to share those bits everywhere, at least not contractually. So we released Universal Base Image, basically for RHEL 7 and RHEL 8. But let's say I look at Universal Base Image and say: I think I trust this, I think I would use it, but I'm not a hundred percent sure. What we do is try to be very transparent about what is going on. If you look over here, there's something called the health index. Right now, Red Hat Universal Base Image is rated as an A. We have an algorithm by which we score these publicly, and we say, for example, if there's one critical CVE the image drops to a C; the algorithm lets an image drop in grade, and you'll see that happen over time. People don't really think this through: container images age, and they age like cheese, not like wine; they do not get better with age. Container images will always decline in security.
Trust is temporal: I might trust what Red Hat released today, but I don't trust what Red Hat released three years ago, because it's clearly going to have some CVEs in it, security exploits that have since been discovered in those bits. If you look here, the top tag is a UBI 8.1 container image, but as an image gets older, it gets a lower grade. This highlights a problem with container images: you have to constantly pull the latest one if you want the security issues patched. And that means you need to rebuild all of the layers you've built on top, which means I can't just build it once, ship it, and forget about it. I need to be able to constantly reship it very quickly. It's a fundamentally different way of thinking. Containers feel really easy, but they're not completely easy. You'll see here this one's a month old, this one's three months, four months, six, seven, and so on, and they get lower in grade as they get older; the 8.0 one is the one we released at launch. Okay, so I like that Red Hat shows me these things and actually has an algorithm for how it grades them. Not to be too mean, but if you look at other registries out there, there are "official" images on those registries, but there's no definition of what official means. It's just a fancy word to describe a container image. If you go grab the official Apache image, it meets a very different set of criteria than the official MongoDB image.
There's no standard for what official means; some people just put a little seal on it and say this is the official one. So what you have to take away, and start to understand when you're finding a container image, is that trust is really something you construct in your mind. It's a warm and fuzzy feeling; there's no standard for whether an image is secure or not. Dan pointed out, it's like the porridge: is it too hot, is it warm, or is it just right? And the hotter it is, the harder it is to use. Obviously, if you wanted completely secure container images, you would build them yourself from scratch all the time, and then they would be exactly what you want, with exactly the patches you want, but that's inconvenient, so we end up trusting other people. So I say you have to download a trusted thing from a trusted source. In this scenario, Red Hat tries to provide some data to prove that we're actually doing something legitimate with these container images, and to transparently show that we're rebuilding them, so if you pull the latest one, it's going to be something fairly high quality. All right, let's say I've analyzed this enough and I think what I've seen in the Container Catalog is decent. I'm a skeptical, security-minded person, so I never love anything; it's only ever decent to me. We've looked at this Red Hat Universal Base Image and we're happy with it; I think I would download this. Okay, so let's do a podman run; we move on to running it. Let's look at this thing: cat /etc/os-release. Okay, so this is RHEL 8.1; this is the UBI 8.1 image that I showed you up there, the latest. Looks pretty good.
We could do all kinds of things in here, but I'm just going to exit again. There's not much interesting you can do by running a base image, right? There's a saying from ten-ish years ago: nobody runs an operating system just to run an operating system. Now, I'm a geek, so I do do that, but in public, for my company, I never do that. There's no reason to just run an operating system; we have to build something on top of it to make it interesting. So we're moving on: we went from finding to running in a pretty short step, and build is the next step. So now let's build something. This is a very simple Dockerfile I created here that just adds a small set of tools: I add procps because I like to have ps in my container image, and I like to have ip, route, and ping, so I add some tools to build what I would have considered a core build back in the day; I would call this a container core build. If you don't know what a core build is: when corporations installed their operating systems, a lot of them would standardize on which utilities they wanted installed, so that when you logged into a server, you would have the same tools everywhere. It made troubleshooting easier and made installing applications easier. We see the same thing happening with containers, so what I'm showing is a kind of miniature core build. And now let's look at this new image.
You'll see I've created the ubi8-sharing image. You'll notice, though, that I created a tag starting with quay.io. That's just an arbitrary string, a name that I put in here, but it's going to allow me to share the image, which is the next step; the next most interesting thing is sharing. So we're going to push to quay.io. Now, as I start to get into this, let me show you a drawing, because I want you to understand what we're doing. We went out to the Red Hat Container Catalog and verified that this is a decent image that I would approve of using. I consider this registry server a read-only registry server: regular users cannot push container images to the Red Hat Container Catalog; only Red Hat employees and partners that have gone through our certification process can. So that's the starting point. I've pulled the image down with podman, I've built a new layer, and I'm pushing it out to quay.io, which is a read-write registry.
Now we're in a less trusted place, where anybody can push something. You have to be careful what you download from a read-write registry, but as long as it's in a repository that you control, it's fairly safe. Then we're going to get down to CodeReady Containers, which I'll explain in a minute, but I at least want to show where we're at. We've pushed the image out there, but what do we do to run it in production? As I mentioned, there's this fear in most people's stomach about moving from something like podman to something like Kubernetes. Typically people stumble there; I've seen customers get stuck there for two or three years, where they just want to run a single container. And I say: it's pretty easy to run a single container on a single host. But as soon as you add two containers or two hosts, you have to start managing the network, so you have to start managing IP addresses, and you have to start managing storage. I've seen all kinds of very funny things; I've done it myself, I'm guilty of doing all kinds of wacky things to manage storage. I'll create a directory, and within that directory I'll have subdirectories for the database container, web service one, web service two, web service three, and the next thing you know I'm creating a directory structure of chaos to map the storage back to which container connects to what. For anyone familiar with recursion, two containers or two servers is the base case, if you will: you want two containers to make it easy to organize the application, and you want two servers to provide failover.
So it's an HA, a high-availability, thing. Either way, it starts to get hard, and that's when we have to start thinking about running something in production. This is where something like OpenShift or Kubernetes comes in. To demonstrate Kubernetes in this lab environment: everything I've shown you so far with podman is on a Fedora box in a virtual machine running on a RHEL 8 system, and then there's CodeReady Containers. CodeReady Containers is a way to run OpenShift 4 in a single virtual machine, on a laptop for example; in this case I'm running it on a laptop. OpenShift 4 is a very sophisticated piece of technology, even though I don't think people have fully absorbed what that means. It's based on a technology called Operators. The way OpenShift 4 installs itself: there are a few base container images, and in fact even the operating system itself is saved as a container image out on our registry server, on quay.io actually, and gets pulled down and installed on a virtual machine, and then all of the other components of OpenShift get pulled down as container images. And there are these things called Operators, which I consider essentially robot systems administrators. They know how to do the final deployment of all these different pieces of software; they also know how to do backup and recovery, and upgrades and downgrades, of all the different components in OpenShift. So OpenShift, the platform, is essentially a microservices-based application itself, and it is managed by these Operators. This makes it extremely easy to install with a single command. But then all of this automation is running and you're kind of like, I hope this works; I'm relying on Red Hat, in fact on other engineering teams' expertise, to make sure the whole thing works. This is very easy to do with CodeReady Containers. CodeReady Containers adds one more layer of
software: an installer script. The installer will log into libvirt, create a virtual machine on my laptop, bring that virtual machine up, deploy the Red Hat CoreOS image that is the operating system for OpenShift, then deploy all the container images and components, and get a fully working OpenShift environment running in a single VM on my laptop. I do this instead of connecting out to Amazon; you could do the exact same thing with OpenShift and just install it in Amazon or Azure or Google Cloud, but for the point of a demo I wanted it all local, so that we're not going out to the internet. So basically, we've pushed that container image out to quay.io, and now we're going to pull it down and run it in OpenShift. But the problem, as I mentioned earlier, is that people get stuck here. They're worried: now I have to get from podman to Kubernetes. I've been stuck in this find, build, share model, doing it on my laptop with podman or Docker, but moving to Kubernetes is tough. To the rescue comes a command called generate kube; I call it kube generate. podman generate kube is a really magical command that lets us keep that simple model of just running a container, and then run one more command to export YAML that I can then run in OpenShift. For this demo, I'm running a single container, the one I just showed you that I pushed out to quay.io. I'm running it locally in podman on my laptop, or actually in this VM. Here it is running. Now you can see with podman that the container is running; I'm running the latest version, and it's been up one second. Once it's running, I can do podman generate kube. I called the container tron, and the file tron.yaml.
I'm just keeping the names the same so I can track them. And then here I'll generate the Kubernetes YAML, and we'll take a quick look at it. As you can see, I'm lazy, because I've been doing this way too long, like 20 years, and I hate writing giant, long config files like this, but podman just did it for me. I can hack on this and change it a little if I need to, but the beauty is that it has already created the pod definition for me, the container definition, the command, all the things I would have had to write. And now that I have this Kubernetes YAML, I can go out to my OpenShift cluster. You'll see I have my CodeReady Containers OpenShift cluster running, the master is working; again, it's an all-in-one install where the master and the worker are both on the same node. You can see it's Kubernetes 1.14, 21 days old. For the win. And then I can see the projects that are in OpenShift. There are a bunch of projects already configured by default in OpenShift, and think of a project as a mechanism for doing role-based access control: you create a project and then give certain users access to it, and it's a way to make Kubernetes multi-tenant, so that different people who don't trust each other can all use the same system. In this case I'm going to create a new project called tron. You'll see that I've now created it; OpenShift has a really nice command called new-project that makes that easy. So then I get projects.
You'll see my new tron project is here. And then I just submitted this oc create command; it's essentially a kubectl create command, and the -f means file. I pass it that tron.yaml file, and now I'm going to run the exact thing I had running in podman, in Kubernetes, with a single command. We're going to watch this; actually, it already came up, because I had already cached all the images and everything is local, so it ran very quickly. The first time you run it, it takes about 30 seconds, because it has to pull all the container images down into the CodeReady Containers environment and cache them, but this time it ran very quickly. So we'll get out of that and take a quick look. You'll see that in Kubernetes we have a pod running; you'll see it's been assigned to that master node; we pulled down the image, you see, the one I pushed out to quay.io, and now we have this thing running locally. So let's take a look: there it is, a single pod with a single container, running in Kubernetes, with like two commands. I didn't have to write any YAML whatsoever, which makes me very happy; I hate writing it, but I like hacking on it once it exists. Again, it's not that Kubernetes is complex; it's that the business problem is complex. Running a bunch of services is complex; nonetheless, tooling can make it easier. So this is great: I just showed you a very simple use case with a single container, just to get your brain around it. But now, say I want to run multiple containers that talk to each other in a pod. How many of you know what pods are?
Okay, good, a good number of people know. A pod is a concept; there's a definition. There's a technical implementation of pods in Kubernetes, as in there is Go code that defines how a pod works in Kubernetes. In podman we have the same logical definition of a pod but different code, so we have our own implementation of how the pod actually gets fired up, but it's the exact same concept. A pod is essentially a thing that contains one or more containers running on the same node. With podman you don't run multi-node applications, but you can run multiple containers in a pod on a single host. The beauty of this is that you can model an application with different pods in podman and then move them over to Kubernetes. In this scenario, I'm going to show you a quick definition. It's very easy to create a pod in podman, easier, to be honest, than writing the YAML for Kubernetes. So I go and create a pod, and then I create a container running in that pod. You'll see the --pod option: I'm telling it to run this container in this pod, and I give it the name flynn, for those of you who recognize my Tron references. I don't know how many of you like the movie Tron; again, I'm showing my age. Then podman run again, and we're going to add another one; we're going to run bit. Does everybody remember who Bit was? Okay, good. As a seven-year-old I found that pretty amazing, I'm just going to point out. Now, podman actually has a really cool listing feature here: it'll show us the namespaces and the container names.
This is honestly one of my favorite commands, because so many people, as you saw if you were in my talk yesterday, fundamentally don't understand what containers are, but this shows it so clearly. Here's the name of the pod; you'll see the cgroup it's running in; you can see the namespaces in the kernel that have been created. So now we know exactly which namespaces the clone system call used to create this container. And then we can see bit and flynn, the two containers, and then we have this infra container. How many of you know what a pause container is? Some, okay. With this concept of a pod, you have the ability to run multiple containerized processes side by side on the same host and share namespaces, which is what this is showing. I keep adding processes to those namespaces, and now those processes can share memory, can share the network, can share things; they essentially live in the same virtual space in the kernel. But to do that, I have to create a process. Remember, in my last talk I showed you that when you create a container, it's just metadata on disk; there's no process running.
Well, with a pod you need to have some process running so that you get all the namespaces; it essentially holds the namespaces open, and then you can add real processes, the actual workloads you want, to those namespaces. So there's something called a pause container, and this is true in Kubernetes as well as in podman: the concept of a pod requires it. A pause container is nothing more than a very small container image with a single command in it that runs pause, which creates a process that is basically a while-true loop: it stays open and doesn't exit. That process stays alive for as long as the pod is alive, and when you kill the pod, the pause container goes away. It does nothing more than hold the namespaces open. Now we can see the proof that the implementation of a pod is different in podman. You'll see that this doesn't look like Kubernetes YAML; this is the internal data structure representation of a pod in podman, but it's pretty easy to read. You can see the three different containers and their IDs; again, these are just metadata labels that represent them. You can see the cgroups; we're seeing a more robust version of what I showed you with the ps command. And then, something I'll highlight here: you can see the first container, the tron one, that I ran bash in, and you'll notice that flynn and bit are the new ones I just added to the pod. But where's the pause container? There's no pause container there. We have these rules that are kind of arbitrary, but they make things easier: in podman, we don't show you the pause containers.
We just show you the actual workload containers. But again, I think this confuses people, which is why I love that ps command, where I show you all the namespaces and you can actually see the pause container and the two workload containers together. All right, so now I have these two containers running side by side in a pod, and I want to export them. Again, it's a single command; instead of passing it the name of the container, I pass it the name of the pod, and I also added this little magical -s option. I'll run it just to show you what it does: it creates the Kubernetes service for me. Now let's look at this: I've created a much longer Kubernetes YAML file that I really wouldn't want to write from scratch. Again, it's just two containers running side by side, and you see how much Kubernetes YAML that is. But Kubernetes isn't complex; the concept of running a bunch of services together in a powerful way is what's complex, and this is probably the simplest way to represent it all in one file. If you think about what a Kubernetes YAML file is: it has all the definitions for the containers; it has the network information and the ports; it automatically wires the containers together, so I don't have to worry about the IP addresses; it also wires together the storage, automatically pulling in volume mounts and managing all that, so I don't have to worry about that myself; and the environment variables too. But the magical piece is the service. This is what exposes it to the rest of the Kubernetes cluster, right? We create this thing called a service that exposes these two containers to the rest of the cluster, and podman did all of this for me. So now let's get out of here and I'll show you how to run it. Again, I had to create another project; I decided to use the second movie, Tron: Legacy, as the name. And then `get projects` shows you I've created new projects.
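For a sense of scale, the YAML that `podman generate kube -s` emits for a two-container pod looks roughly like this sketch. The pod name, container names, images, commands, and port below are illustrative assumptions, not the exact output from the demo:

```yaml
# Rough sketch of `podman generate kube -s` output for a two-container
# pod -- names, images, and ports are illustrative, not demo output.
apiVersion: v1
kind: Pod
metadata:
  name: tron-legacy
  labels:
    app: tron-legacy
spec:
  containers:
  - name: flynn
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
    ports:
    - containerPort: 8080
  - name: bit
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Service
metadata:
  name: tron-legacy
spec:
  selector:
    app: tron-legacy
  ports:
  - port: 8080
    targetPort: 8080
```

The point is that the Pod definition and the Service that exposes it land in one file, which is what makes it portable to a real cluster.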
So now I've got a new place. I ran the one container in the one project, and I'll run the other one in the other project with a single command, and we'll watch it get created. This is probably a good time to explain a small thing about Kubernetes. Kubernetes works on the concept of a defined state, an actual state, and eventual consistency between the two. If you look at old infrastructure, the way we did it 20 years ago: we did use things like configuration management, and configuration management was an approximation of a defined state. You would run it over and over again (if it was idempotent, hopefully), trying to keep the environment in line. But basic entropy says that if you have a thousand servers, over a month or two months or five months or a year they somehow end up out of sync, whether it's arbitrary physics breaking something or a human being getting in and changing stuff. So that was a slow, lazy version of this. Kubernetes, on the other hand, with a single config file (not 20 different config files) defines the entire state of the application, and it will then go out, right now, and bring the actual state in line with the defined state. When I submit this to the Kubernetes API, it's running, and the defined state and the actual state are in alignment. Because this is not that complex an application and there's only a single node, it's pretty easy to keep in alignment. Once you get out to a Kubernetes cluster with 500 nodes and an application with 12 different services, and you actually have things failing and network hiccups and all of that, this job gets a lot harder. But the beauty here is that I don't have to worry about any of it; I can let the platform worry about it. And if I want to delete this, that's actually the real magic. So now we have our pod running.
You can see both of these containers started: it pulled the images, started flynn, started bit, and the tron-legacy pod is alive, and so on. And here you see the pods: there's only a single pod in this project (the other pod is running in the other project), but this one has two containers. Now let's do a bonus; I've got some time, and I think I can get through a couple of bonuses. I showed you the generate for Kubernetes, but what if you really do just want to run a single container on a single host? I really hate writing systemd files too; I'm not really into writing files at all, I'm pretty lazy. You can actually do the same thing there. The way the podman team is thinking is really elegant: if I can run it with a single command, why should I have to write all these service file definitions? Why should I write all this YAML when I can start with something generated and then hack it into exactly what I want? You'll see here I showed the systemd generation; you can do that with pods or with containers as well, so it's really nice. Now, going back 20 years: how would I have stopped all of these services if I had deployed a service as simple as two processes running in my thousand-node cluster? How would I have deleted it? It was a giant pain. We had what were called runbooks, and in a runbook we would have 30 different steps we had to do. We had to go delete DNS; we had to deprecate the storage, back it up, push it out to a backup place; do all these different things. It was a nightmare, and I almost always broke something when I was deleting a service. Worse, two years later we would see something we'd forgotten, because some edge case caused it not to get deleted, and it would break, and you'd see these legacy pieces: "oh, I thought we deleted that service two years ago." We'd have to go look through our ticket system to figure out what we'd actually done. Now, with Kubernetes, all of that is done for you: it's literally logging what you've created and what you've deleted, and with deployments (I don't want to go super deep into Kubernetes) you actually have trackable history of when you scaled the app up, when you scaled it down, when you created it, when you deleted it. It's beautiful. So we can kill everything in podman; I show these commands because they're very elegant. It's something that always annoyed me in Docker: you had to do a stupid for loop to delete things, whereas podman just has a -a option. We kill the containers, delete them, delete the copy-on-write layer from yesterday's talk, and then kill the pod. Now this is where it gets magical. Again, I could create a very, very complex service out in Kubernetes, but here's what happens: the business person calls me up, "we need to deprecate the service, we're not going to use it anymore." Okay.
No problem: here's a single command to delete the service we're deprecating. Boom, done, end of demo. Again, I think the teardown is almost as magical as the creation, because we created it with a single YAML file that we exported from podman and just ran. We could hack on it, get it the way we want, build the YAML file, export it, and run it in Kubernetes. Now I want to show you one more thing; I have about five more minutes, so I'll do one more demo. This is like what I showed you in the main demo, but I built it the other day. I keep getting this question of how to use Azure Pipelines with UBI, and I think the use case is interesting from a cloud-native perspective. You'll see it looks quite similar to the use case we just did. The previous one I'd consider the find/run/build journey, but this is the cloud-native journey: a lot of times we don't even want to build it locally, we want to build it out in some service. We want Azure Pipelines or Jenkins or CircleCI or something like that to go out and build all this stuff for us over and over again; I don't want to mess with it. But if you look, it's still a similar thing. I want chain of custody, right? I want the chain of custody from the Red Hat Container Catalog into Azure Pipelines, in an account I control; then I want to push it out to quay.io and share it only with the hosts that I want consuming it. So the process is pretty similar. I forgot to show you this earlier: when I push out to quay.io, here are my repositories at quay.io. You'll see some of them are public, and the ones with a lock on them are private (they're only private because I forgot to share them). But again, you can control what you want out there.
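Since hand-writing systemd files came up a moment ago: a unit file in the style of what `podman generate systemd` emits looks roughly like this sketch. The container name "flynn" and the exact flags are illustrative assumptions, not output copied from the demo:

```ini
# Rough sketch of a unit file like `podman generate systemd` produces.
# The container name "flynn" and flags here are illustrative.
[Unit]
Description=Podman container flynn
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start flynn
ExecStop=/usr/bin/podman stop -t 10 flynn
Type=forking

[Install]
WantedBy=multi-user.target
```

The appeal is the same as with generate kube: start from a generated file and hack it, rather than writing the unit from scratch.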
So in this demo I actually created an Azure one; I believe it's called ubi-azure-pipeline-source. You'll see I have two different repositories out there: one I created for the source and one for the build. In a nutshell, what Azure Pipelines allows you to do (I'll go out to Azure Pipelines and show you) is use a single YAML file to cause the builds to kick off. So our build might look something like this, where we have a Dockerfile; I actually show how to build this source. I just installed ssh, which I don't even know if you need to do. And here you'll see an Azure Pipelines file is quite similar to a Kubernetes file, in that it's this YAML thing, but what it defines, what I'm showing you, is: we pull the Red Hat Universal Base Image from the Container Catalog, and then we push the built image to quay.io. The beauty of this is that the service is sitting out there doing its thing, and the only time it changes is when I make a change to the Dockerfile or this file. So essentially it's a GitOps workflow: when I commit a new change to GitHub, it kicks off a build. So let's actually go do that. This is where I get off the script a little bit, but let's add a yum install of ssh; I don't know what that does, it might fail, but whatever. So I've made a change, and you'll see in my pipeline here... oh, it kicked off the job, because Azure Pipelines saw that I changed something. Oh wait, no, it didn't show it yet. It's supposed to... there it is. All right, so it did kick off a job. You'll see now we have a new job running. I made a commit to GitHub, and automatically Azure Pipelines saw that I changed something and is now running this job. It's doing something quite similar to what I showed you with podman, but it's doing it out at Azure Pipelines, and it will walk through and show you what's going on. When it's done, it will push the image out to quay.io and I'll have a new image. So it's actually a very elegant workflow. Here it goes; it's pushing it; boom. It's the same, or it should be pretty similar. Past the finalize job... and boom, succeeded. So this did all that human work, and I'm now out of it. I might have started with podman, and once I get what I want by hacking on it locally, I can ship that build file, the Dockerfile, out to Azure Pipelines or some other service (I just show Azure Pipelines because people kept asking me), and you can apply this methodology to anything. This really helps you stay on the integrate/deploy side; as I showed here, this will help you stay in this territory, where most of this stuff over here is automated. I would consider Azure Pipelines part of this integrate thing, where the integration happens automatically: every time developers change things, builds just kick off, and we only worry about it if it fails. So with that, I'm going to close and say there's some information here, and then some more here where you can read things. I publish prolifically; I write a lot of blogs. There's a blog on how to set up that Azure Pipelines workflow if you want it. I don't think I put it on here yet, because I just published it; actually, I think it publishes Monday.
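The pipeline file walked through above might look roughly like this sketch. The repository name, tag, and the "quay-connection" service connection are illustrative assumptions; a Docker registry service connection to quay.io would need to be configured in the Azure DevOps project:

```yaml
# Rough sketch of an azure-pipelines.yml that rebuilds a UBI-based image
# on every commit and pushes it to quay.io. The repository name and the
# "quay-connection" service connection are illustrative assumptions.
trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- task: Docker@2
  displayName: Build image from the UBI base in the Dockerfile
  inputs:
    command: build
    repository: myorg/ubi-azure-pipeline-build
    dockerfile: Dockerfile
    tags: latest

- task: Docker@2
  displayName: Push the built image to quay.io
  inputs:
    command: push
    containerRegistry: quay-connection
    repository: myorg/ubi-azure-pipeline-build
    tags: latest
```

With a trigger on the branch, any commit to the Dockerfile or to this file itself kicks off the build, which is what makes it a GitOps-style loop.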
I think that Azure Pipelines blog shows how to do this, so it's really fresh material. If you have any questions, I'm happy to answer them; I think I have about five minutes. Does anybody have any questions? Say that one more time? Yeah, we basically just use the Kubernetes API; we target the Kubernetes API and implement it exactly as it is, so that's fine. Let me back up: the question was how compatible podman's generate kube export is with Kubernetes. The short answer is we don't implement all of Kubernetes; we implement a very small subset, like Service and Pod and Container. Just the bare minimum to get you started; anything more advanced than that, you're going to be hacking yourself. But it's at least enough to get started, and we do the same thing the other way: you can pull a very complex Kubernetes YAML file out of Kubernetes and run it. There's a podman play kube command, and you can actually play a Kubernetes file. Again, it will ignore anything advanced; it will just run the pods and containers in the service. We don't even do volumes yet, but we're working on that. It's enough to get you started; it was never meant to be a production thing, more something that lets you move easily between the two and bridge the gap. I also think it targets the docker-compose use case. In my annoyed, old, creaky sysadmin bones, I get annoyed at having two different config files, docker-compose and Kubernetes YAML; they kind of do the same thing, we can't share them, and that's annoying to me. In a perfect world, I would rather everybody run a single-node Kubernetes and just share the Kubernetes YAML files, but not everyone will do that; they get annoyed. At least podman can play them, so one podman user could generate and another could play, and you could share Kubernetes YAML files with each other instead of a docker-compose file. That's kind of the use case. Any other questions? You can do signing with skopeo; you can do simple signing with GPG keys. If you look, skopeo has a subcommand for signing. So you build images with podman, sign them with skopeo, and then push them, and you can move those signatures wherever you want; they're just shared on web servers. It's configurable in podman and CRI-O to go verify those signatures: you point them at a web server, basically, and they pull the GPG keys and verify. And it actually works on pull: it will prevent a pull if the image doesn't verify, if you set that up. What was that? Oh, yeah, I'm sorry: he asked about signing of container images, and I was explaining that you can do it with skopeo. I didn't quite understand the question... what do you mean? Okay, the question is what if you use Helm. Well, I'll say this: there's no plan to be able to run Helm with podman. That feels pretty advanced; I feel like if you're that far down the Helm route, you probably want to run a Kubernetes instance locally. That's my architect brain speaking; I don't have the answer, but that's my recommendation. Yeah... I think the lead developers of podman will kill me if I come back and say I want Helm support. I'm looking at Dan right now.
I see him over there shaking his head. So here's the problem with product management, the job I do: I have unlimited wants and a very, very finite budget. And I'd say again, a developer is not blocked from doing that today: they could run CodeReady Containers on their laptop, run Helm in OpenShift 4, and do all this development. It's going to eat up more resources, but at least you're not blocked; it's just suboptimal compared to doing it with podman, and I totally get that. But I couldn't stack-rank it as one of the highest priorities for podman. Any other questions? Are we done? Okay, we have four minutes. Could somebody repeat the question one more time? ... Oh, crictl versus podman. Okay, so the question is: what's the difference between crictl and podman? Hold on, I have your answer. crictl, podman... watch this, I actually published this. Or actually, I think I published this, or maybe Dan did. No, it's Dan's; sorry, I stole Dan's thunder, but I did do this drawing for him, so I feel happy about that. In a nutshell (I'd encourage you to read this), crictl is a very simple utility. The only things it can really do are look at the images and see what's running; it can do a ps, it can do images, it has a few subcommands, but it doesn't build images. I would consider it a troubleshooting utility for production, versus podman, which is a more developer-focused use case with a very rich set of subcommands to do all kinds of things. crictl is the bare minimum, mostly to troubleshoot, and also to be compatible with and help implement the CRI standard. It's like a human interface into CRI, so that at least you have some utility to check that CRI is actually working, that kind of thing. All right, I think we have two minutes.
I think, right? So, with UBI... well, legally (this is the funny part; sadly, I've become way too accustomed to the lawyers), in a nutshell there's nothing illegal about it, but we would never support it, and running it wouldn't boot: it has no kernel. Let me repeat the question. The question was: can you use UBI, the Red Hat Universal Base Image, if you exported it as an OVA file? An OVA file is a standard for saving virtual machines, for KVM and the like. The short answer is that if you did it, it wouldn't boot, because there's no kernel; we don't release the kernel in the containers. There would be nothing illegal about it, but to be very honest, it would be basically useless because it wouldn't be supportable. In that scenario, I would probably recommend just using CentOS. He asked: if you added the CentOS kernel to UBI so it would boot? The question then becomes why do it, because I'd rather just use all the CentOS bits that were built together and work together, and rely on the CentOS community to support me, because Red Hat would never support this Frankenstein thing. Okay, we are done. Thank you, everyone.