Let's get into the big topic of open source, something we actually live by. This is so awesome: we are an open culture, and it runs through everything we do. So what does that process look like for a developer, and how is the Kubernetes ecosystem really evolving?

Good evening, good afternoon, wherever you may be. You have found yourself once again at the Level Up Hour, where we talk about all things containers, Kubernetes, and OpenShift. So if you are joining us, please do like, subscribe, and share; let everyone know that we are here. We don't have any guests today, but I am joined by my esteemed colleague, Jafar Traibi, and we have a very interesting show for you. It's something that I don't think is an edge use case by any stretch of the imagination, and that's: how do you move your containers from your desktop, where maybe you as a developer have been dutifully working on your latest project, to production Kubernetes? Well, there's functionality and tooling that will actually help with that, and Jafar is going to walk through a bunch of the steps involved, some of the tooling, and some of the things you need to know. We'll even take in a brief demo, and of course your questions in the chat are always welcome. So, Jafar, welcome.

Hi, Randy. Hi, everyone. Thanks for joining us again. So yeah, indeed, today's show is about the developer experience: how you start on your laptop or desktop, and how you can go from there into a full-fledged OpenShift Container Platform. So, maybe if there are any questions about the topic in general; otherwise I'll just go into a brief overview of what the developer experience has been so far.

Well, let's consider this: would you say this is a very typical sort of thing that a developer will find themselves needing to do? Or is it something that depends on how a particular organization handles its dev-to-prod lifecycle?

Yeah, sure. That's a very good starting point. I've been doing OpenShift consulting with customers for about five years, and during those five years I've met probably hundreds of customers who were all using different types of technologies and just getting into the container journey. From an OpenShift perspective, we were all hyped about containers. When I started working with Red Hat, it was just the beginning of OpenShift 3, and we were shifting toward Kubernetes and containers, so for us everything revolved around them. But as I went out to meet our customers and their developers, I was actually surprised to see that those topics were very, very new to them. Even years after we had embraced containers, there were still developers using the traditional way of doing things: kicking off application servers on their laptops, building their code locally, deploying it locally, and so on. So not everyone was embracing containers, even though for us it would have been the instinctive way of doing it. Early on, there were tools like Docker that made containers approachable, and developers first had to learn how to use containers at all.
If you've been using your tools for years, and your local IDE and your shortcuts all work, you really need a very compelling reason to change your workflow, especially as a developer, because you want to focus on your code and not on the noise around making your application work. Right. So I'd say that Docker did a great job in the early days of making things easier. But some things happened along the way: they started to have some difficulties in providing a reliable lifecycle for their container engine, and some alternatives to Docker emerged in the open source community. We can think about CRI-O, which has now become the default runtime for Kubernetes. We can think about Podman, which is a command-line interface to manage your pods and containers in your local environment. And there are things like Buildah, which is an open source engine for building your container images locally. Basically, the principle was to take what Docker had brought initially in a single package and break it down into very focused tools that each did just one thing, but did it nicely. It was basically a separation of concerns.

Well, I'm starting to get shades of the old Unix toolbox mentality: a small tool that does one thing, but does it very well.

Yeah, exactly. And that's basically what we're going to talk about today. We're going to focus on Podman. Podman, for those who don't know it, is a CLI that was created to act as a replacement for your Docker experience when you are using containers locally. And if you've been using Docker before, it's very easy to start using Podman, because they implement essentially the same interface and the same commands, which talk to the container runtime underneath. So what I'd like to show you in this episode is...

Steady on, steady on. Before you start showing us, I'd actually like to understand a little more. Is there now a prevailing best practice, or does it remain the case that there are a lot of practices, and it's in such a steady state of evolution that you can't settle on "here it is, the best practice"?

So, there are of course some best practices that you need to follow when you are using containers in general, but there's no single way of using containers. Think about a very simple example. If you have a multi-tier application and you want to containerize it and run it in what we call pods, you have several options. The first one is to make everything run within the same pod: you have maybe three containers that are all going to run in that one pod. It's probably going to be easier to deploy, but it's not necessarily a good practice if your individual components need to be able to scale up and down independently. Then you have the other option, which is packaging your application into separate containers and deploying them into separate pods. So just for something as simple as "how do I package my application?" or "how do I manage my services?", you already have those types of questions to ask yourself.
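As a rough illustration of how close the two CLIs are, here is a minimal sketch; the application image name is a placeholder, not one from the show:

```bash
# Podman aims to be a drop-in replacement for the Docker CLI:
# most commands keep the same names and flags, so this alias often suffices.
alias docker=podman

# The same muscle memory keeps working:
podman pull registry.access.redhat.com/ubi9/ubi      # docker pull ...
podman build -t my-app:latest .                      # build from a Containerfile/Dockerfile
podman run -d --name web -p 8080:8080 my-app:latest  # hypothetical image
podman ps                                            # list running containers
```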
If you're doing microservices, are you going to package the microservice together with the DB, or are you going to have the DB outside, and maybe the API somewhere else, and so on?

So there's an architectural component that you need to consider up front. And usually it's not going to be the case that this is the first generation ever of the thing you're building; you'll have some prior experience about which particular elements of the application or which microservices are going to need to scale independently, or where there might be performance bottlenecks. But it's important not to just take the fastest, most immediate step. Think it through: how is this actually going to run? Do we need to separate some things out? Where have we historically seen bottlenecks? That's what I'm taking from what you just said. Is that about right?

Yeah, exactly. Let's take another example; it all depends on the use case and the type of application. Say, for instance, that you need to manage state within your application. The traditional web or application servers provided ways to manage state out of the box. But if you want to be able to scale your components individually and still have this notion of shared state between all of those instances, because now you're going to run ten instances of your containers for the same thing, maybe your front end, you don't want the user to switch from one container to another and suddenly lose state. So now you have to start thinking about things like distributed caches or replication when you're moving your applications into containers. So yes, there are several things you need to take into consideration, and that's probably why developers needed some time before really starting to use containers as the native way of doing their local development.

Something else, though that could be a different story: when you are doing, for example, web development or JavaScript, and you want to iterate very quickly in your local IDE, you have things like hot reload, or what we call live reload, which means that as soon as you make changes and save them in your local files, they are immediately reflected in the application running on your laptop. But when you shift to containers, every time you make a change to your source code, if you are following the traditional way of doing things, you need to rebuild the application using something like docker build, podman build, or Buildah. That generates a new container image, and then you have to redeploy a new pod from that new image, which takes time: in addition to the build time required for your application package, you are adding the build time for the image, which can be 30 seconds to maybe a minute, and then redeploying the application, and so on. That's a big change in the developer workflow. One of the things developers need to get acquainted with is how you can still have the runtime running in the container, but mount your local volumes where your code lives, so you keep the sort of live-reload experience you would have with a traditional environment. So, yeah, there are many things to take into consideration.
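A minimal sketch of that bind-mount pattern; the image, port, and entrypoint are illustrative placeholders, not taken from the demo:

```bash
# Run the runtime in a container, but mount the local source tree into it,
# so file changes on the host are visible inside the container immediately.
# :Z relabels the volume for SELinux-enabled hosts (Fedora/RHEL).
podman run -d --name devloop \
  -p 3000:3000 \
  -v "$(pwd)":/app:Z \
  -w /app \
  docker.io/library/node:20 \
  node --watch server.js   # Node 20's built-in watcher restarts the app on save
```

With this setup there is no image rebuild per change: only the code in the mounted volume changes, and the watcher inside the container picks it up.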
And we as Red Hat have actually provided different ways to address that, but maybe that can be the content for another episode.

Always, always. Well, sorry to hold you back, because you were raring to go, but I wanted to get into some of those details, and I think a lot of our audience would find that background useful. So, as you were. I think you were reaching for your keyboard and I was standing back.

Exactly, that's what I was going to do. Do we have any questions in the chat?

Let's take a look. So far we don't have any. So by all means, show us some stuff.

Okay. So what I'm going to show you here is basically how to run a three-tier application: first using Podman on my local environment, and then generating the assets that I need to deploy that application on OpenShift. After that, we're going to talk about some other ways you can achieve the same end goal, not necessarily using those tools, but using OpenShift-native tools instead. The whole idea here is: how do I move from my local experience to the OpenShift environment? Okay, so can you confirm that you see my screen?

Yes.

Okay, perfect. So first thing, let's check that we have nothing running for the moment. We see that there are no pods. What we're going to do is deploy this application, which has three components. The first component is going to be a MySQL database.

Jafar, could you maybe increase the font a bit on that?

Sure.

You know, usually when I can't read the text, I assume it's my old eyes, but Steph has confirmed that it's not just me.

How about now? Is it okay?

A little bigger, a little bigger. Think about your grandfather. There you go, getting there.

Cool. So, as you can see, we are going to run three podman commands. The first one is going to create the demo database based on MySQL, and it's going to use a container image that we have on Quay. The second one is going to run a Python application that injects data points into the database. And the third one is going to be a web front-end application that shows some live graphs. So let's go ahead and run the application.

Okay, so now I have the three pods running locally, and I'm going to check that the application has actually loaded. Let's go ahead and hit that endpoint here. It probably needs some time to start up, so in the meantime let's try to understand what's going on here. We have the three containers. Okay, let's check that a volume has been created; we see there's a volume that was created to store the data for the database. That's a first check. Then, as a developer would, I'm going to list my containers and check the logs for those components. What I want to see here is whether the demo web component has started or not yet. And yeah, it seems like it has already started; I have the application up and running here. So let's go ahead and pretend that we are the developer, working locally and trying to access the logs, et cetera. How do I do that? The goal here is to show you the local experience, and then the experience that you get when you deploy things on OpenShift.

Let me pause you for a second, Jafar, because we did have a question, and that is: do you need to open a firewall rule?

No.
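For reference, a rough reconstruction of that three-tier setup; the image paths and the database password are illustrative placeholders, not the exact ones used on air:

```bash
# 1. MySQL database, with a named volume for its data directory
podman run -d --name demo-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mydb:/var/lib/mysql \
  quay.io/example/demo-db:latest

# 2. Python generator that injects data points into the database
podman run -d --name demo-gen quay.io/example/demo-gen:latest

# 3. Web front end serving the live graphs on localhost
podman run -d --name demo-web -p 8050:8050 quay.io/example/demo-web:latest

podman ps            # confirm all three are up
podman volume ls     # confirm the database volume was created
podman logs demo-web # tail the front end's logs
```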
So for the moment, I don't have any firewalls; everything is running locally. As you can see, I am accessing the web component at localhost, so for the moment everything is on my laptop. When we deploy things to OpenShift, then we will need a way to access the remote container from the outside world, and we will talk about what OpenShift provides for that.

Okay, so let's go ahead and check the logs for our container. I'm going to do podman logs, and what I want here is to check the logs for this container from the demo web pod. Yeah, I can see that everything is working fine. And basically, if you are not familiar with Podman: if you do podman pod ps, it's just going to show you your pod IDs. Whenever you run a pod with Podman, it has two containers, as you can see here. There's what we call the infra container, and there's the container that has your application layer on top of it. If you want to be able to see all your containers, you have to add the extra argument that displays the container IDs, which shows all the containers running within the same pod. And how do I know which container is my application layer? You see here the ID of the infra container, and it's the same one you see here, so you can deduce that the other one is the actual container you want to look at if you want to troubleshoot your application and understand what's happening in there. Okay, so the goal here is to show you the experience when you are using Podman and how to do some of the usual things before going into OpenShift.

Okay, so now that we have that application up and running, let's switch to OpenShift and try to replicate the demo on the OpenShift server. Here on my OpenShift cluster, I'm already logged in, and I can see that I have nothing running yet in this project.

Looks like a whole lot of nothing.

Yes. So if you remember correctly, when we were using Podman locally, we had a volume that was created to store the data for the MySQL database, right? We want to replicate the same thing on OpenShift, and the mechanism we are going to use is to create what we call a persistent volume claim. This is going to trigger the creation of a persistent volume on OpenShift. So this is the end result; you can see it here. Maybe I should... Is it easy to read now? Maybe we can go a little bigger. Okay, how about now?

Not bad.

Okay. So what I have here: you can see there are some comments saying that this Kubernetes resource has been generated by Podman. We're first going to run it on OpenShift, and then I'll show you how to actually generate those assets with Podman. So first we're going to create the PVC, okay? This is going to trigger the creation of a persistent volume claim here. And once I have that... it seems like there's some lag here, but let's wait a bit.

Well, while we wait on that, Timothy in chat was asking: please share how we could deploy the image in dev to maybe a SIT environment.

Yeah, okay; first let's see what has been generated for us. The first thing is the persistent volume claim. Okay, so everything has been created; now it looks fine. I have my persistent volume here, and if I look at the pods, I also already have the pods running in OpenShift, okay? So how did we create those?
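A sketch of the inspection commands being described, followed by the claim creation on the cluster side; names follow the demo:

```bash
# List pods; by default only pod-level information is shown.
podman pod ps

# Add container IDs and names so you can tell the infra container
# apart from the application container inside the same pod.
podman pod ps --ctr-ids --ctr-names

# Troubleshoot the application container, not the infra one.
podman logs demo-web

# On the OpenShift side: creating the claim triggers dynamic provisioning
# of a persistent volume (given a default storage class on the cluster).
oc create -f pvc.yaml
oc get pvc,pv
```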
So basically, if you look at the different components that we have, the first one is the demo database, and Podman has generated a pod resource for us that has all the arguments we need: the container image, the capabilities, the mount point, et cetera. Basically everything I need to deploy the container image. So this is a pod. I have the other component, which is the generator; again, it references the image and says which capabilities it needs. And the third one is the web component. Now, there's something specific to this container image: I picked a more or less random image from the internet that we use as an example with Podman, and this image happens to run as the root user, which is not something you usually want to do on OpenShift. The best practice would be to rebuild the application so that it uses a non-elevated user and runs with a different UID. But we wanted to just go ahead with the image, since we don't have the Dockerfile for it, so I had to add this item here, runAsUser: 0, to allow it to run as root within OpenShift.

Okay, so we have the demo component running. But now let's look at the demo gen container here: it seems like it's not able to start, and neither is demo web. So let's delete them. Okay, so now that the demo database is running... maybe the generator didn't find the demo DB ready earlier. Let's create the generator again; now the generator is working. And let's go back and instantiate the web element. All right.

And Jafar, when you get a moment, we do have a couple of questions in the chat.

Sure.

When you get to a convenient stopping place; I'll let you decide what that is.

Yeah, sure. So now I have all three components running, and the error that you saw is actually an important one; I'll come back to it when we're done with podman generate. So, as you see here, we are using pods, but I don't have any service yet that allows me to access my containers, and, to answer the firewall question we had, we don't have any routes either. So what we are going to do: we have three pods, and I'm going to expose the demo web pod, specifying the port that I want to expose, which is 8050. Now, if we go back to OpenShift, I have a service that allows me to access my container, but I still don't have the route. And OpenShift provides a convenient way to create a route from an existing service, which is oc expose svc (svc for service) demo-web. And now it tells us that it has created a route within OpenShift. Right.

Okay, now we can see everything, and everything looks fine: we have the application running locally and running on OpenShift. So we have used the resources that were generated by Podman, and the way to generate them is very simple. But I think you had a question, Randy, before we get to that.

Yeah, we actually had a couple of questions in chat, and one of them was: please share how we can deploy the image in dev to maybe a SIT environment.

Sorry; Steph has put it on the screen. Ah, as I'm sharing my screen, I can't see it. There you go. So what was the question again? "Please share how we can deploy the image in dev to maybe a SIT environment." All right. Okay. So, yes.
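A sketch of that expose sequence as described; names match the demo, and the YAML fragment shown in the comment is the standard securityContext form for the runAsUser workaround:

```bash
# (In the generated pod YAML, allowing the root-running image meant adding
#    securityContext:
#      runAsUser: 0
#  under the container spec; rebuilding the image for a non-root UID
#  remains the better fix.)

oc expose pod demo-web --port=8050  # create a service for the pod's port
oc expose svc demo-web              # create a route from that service
oc get route demo-web               # show the externally reachable hostname
```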
So in OpenShift, the way we do that is to create what we call a project, or namespace, for each and every environment. Imagine that this is my dev environment; if I wanted to have QA and production environments, I would basically have two additional namespaces, one for QA and one for prod. And I would be able to create what we call pipelines within OpenShift that let me take the image that was built in the dev environment and redeploy it to QA, probably injecting a different configuration in that namespace. That's the principle we follow in OpenShift: you have your dev namespace, your QA namespace, your prod namespace. You build your image once and for all within the OpenShift environment, and then you promote the application from dev to QA, meaning that you don't rebuild it; you just take the image that has already been built and deploy it to the other environment. Of course you can do it manually, but the best practice is to have an automated pipeline that does it for you.

Okay, and then one other question, which I think is a little more fundamental: how did it generate the YAML files?

Yeah, okay, that brings us back to the main topic of today. As we see here, locally we have three pods running, and now we want to generate the YAML assets for them. What we're going to use is podman generate. If we look at it, we have two targets we can use: we can generate systemd units, or we can generate kube assets, and it's the latter we're going to use. Okay, so what are the options? With podman generate kube, you can give it a container ID, a pod ID, a volume name, for instance, or ask it to also generate the service for your asset if you want to expose it. So let's go ahead and generate the kube resources for demo web; we're going to write them to demo-web.yaml, for instance. And if we look at it, we can see it has the timestamp showing that it has just been created, and it's a resource of kind Pod. It references the image that we used to deploy the container, and it also lists the containers and the ports we need to expose. So it's very straightforward, I would say: you just use podman generate kube with the pod name, and then you have your YAML resources. Now we're going to generate the demo gen YAML, and then the DB.

Okay, now let's have a look at the database, because there's something specific here. If you remember, at the beginning we spoke about that MySQL image using a local volume, right? That local volume was named mydb, and in the YAML file that has been generated, it references a mydb persistent volume claim that should be mounted as the volume used by the container image; the mydb volume should be mounted at /var/lib/mysql. But in this definition here, we don't have the YAML resource to create the PVC itself, because if you look at what we have generated so far, we have the front end, we have the generator, and we have the database. So now let's go back and list our volumes: podman volume list, and we see that we have a local volume called mydb. We're going to use the exact same command, podman generate kube, and this time we're going to give it the volume name and write it to pvc.yaml, for instance.
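A sketch of the generate step as described, plus one common way to promote a built image between namespaces; the oc tag line is a general OpenShift pattern under the assumption that dev and qa projects and an image stream exist, not something run in the demo:

```bash
# Generate Kubernetes YAML from the locally running pods and the volume.
podman generate kube demo-web > demo-web.yaml
podman generate kube demo-gen > demo-gen.yaml
podman generate kube demo-db  > demo-db.yaml
podman generate kube mydb     > pvc.yaml    # a volume name yields a PVC manifest

# Apply them to the cluster (PVC first, so the DB pod can bind it).
oc create -f pvc.yaml -f demo-db.yaml -f demo-gen.yaml -f demo-web.yaml

# Build once, promote the same image between environments, for example
# with image stream tags rather than rebuilding:
oc tag dev/demo-web:latest qa/demo-web:latest
```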
Okay, so now if we check the file that has just been generated, its kind is PersistentVolumeClaim, which means that as soon as I create it, it's going to create the volume, because I have a storage class associated with the cluster. And once I have that, I have all the resources I need to deploy my application. So, any questions so far?

No, but we do have a new question in chat, and we'll get to it when you reach a convenient stopping place: is Red Hat recommending Podman over Docker?

I would say yes, definitely. We moved away from Docker as a runtime a long time ago, and the Kubernetes community is also using CRI-O as the default runtime, which is what we implement in OpenShift. So I think that's basically the way things have evolved upstream. I would say that's our recommendation, but it doesn't mean that you have to follow it.

Well, and I think it bears mentioning that the Podman project is Docker-aware, in the sense that a user who is familiar with Docker should be able to transition pretty easily to Podman. That's been a design goal, right?

Right, and there are actually a lot of advantages to using Podman, for example rootless builds. Such concerns were huge if you were using Docker: for instance, if you wanted to build your applications within a pipeline, you needed to expose the Docker socket to be able to do such things. That's basically one of the use cases for which we built Podman and Buildah, those very use-case-focused container tools. So there are a lot of reasons why we recommend them, but it would probably take a whole show to discuss.

Well, I like having more shows in the queue. And a good question: obviously, in the container space we hear a lot about Docker, and a little clarity around why you might choose Podman over Docker is important. I think it's safe to say that Red Hat is pretty squarely in the Podman camp at this point. So, do we have some other things we'd like to show?

Yeah, there are some other things I wanted to show. What we've spoken about so far is how you can take the assets you have deployed with Podman locally and publish them to OpenShift. But if we look at what has been generated, we see that we have bare pods. And remember the error we had, with pods in an error state that were not able to restart? Basically, we don't have what Kubernetes calls deployments, or replica sets, managing the pods so they can be recreated or restarted when something goes wrong. Let's see what happens if I delete one of those pods. Let's go ahead and delete the demo db. Okay, so now the demo gen falls into error. So basically, whenever I delete a pod, it just goes away; it's not re-instantiated, because there's no deployment behind it. If you are only using pod resources to deploy your containers, this is the standard behavior you get. So now let's see how, with OpenShift, without necessarily using Podman... okay, let's see.

Well, actually, Jafar, I think I might have to interrupt you, because as you know, we've been invited to wrap up early this week to clear the airtime for the What's New, What's Next podcast that immediately follows. But we have a question in the chat from Ravi Sharma, who asks: would it be possible to show how to extract the whole dev environment's YAML files?
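To make the pod-versus-deployment difference concrete, a minimal sketch; the Deployment below is illustrative, since the demo itself used bare pods, and the image path is a placeholder:

```bash
# A bare pod is gone for good once deleted: nothing recreates it.
oc delete pod demo-db
oc get pods          # demo-db is simply missing; demo-gen starts erroring

# Wrapped in a Deployment, the same container would come back
# automatically, because a ReplicaSet keeps the desired count at 1.
cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-db
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-db}
  template:
    metadata:
      labels: {app: demo-db}
    spec:
      containers:
      - name: demo-db
        image: quay.io/example/demo-db:latest   # placeholder image path
EOF

oc delete pod -l app=demo-db   # the ReplicaSet immediately spins up a new pod
```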
And that includes the SVCs and routes as well. Clearly, we're not going to be able to do all of that today, but I wanted to bring it to your attention.

We are going to be able to do that. But just before, I'm going to show you something: oc new-app, which is the OpenShift command line's way to generate all the resources that we need, and it's going to use deployments instead of pods. So this will give us the behavior we were looking for. The takeaway here is that if you have the image already stored in a container registry, you can use oc new-app with the --docker-image parameter referencing the container image that you want to deploy, and OpenShift is going to create all the resources you need for it. As for the question that was asked, let's go back to our project. Basically, we can do oc get all -o json, or oc get pods, or oc get pod demo-web -o json. Or -o yaml, sorry, because that's what you wanted. So I have the running assets in the namespace, and I can just do oc get on a resource with -o yaml, and then I have the YAML files that I can store somewhere. That also includes the routes: if I want to export the route, oc get route demo-web -o yaml, and I have it. You also have a shortcut, which is oc export all -o yaml, and you're going to get all the resources concatenated. All right, so that's it. And here we see that the app has been deployed; I just need to specify an environment variable here and it's going to work.

Fantastic. All right. Well, we had been thinking this might actually be a short show, but in fact we're down to the wire here with all this information. I hope everybody found it useful, and thank you for your questions. Again, I would encourage everybody to like, subscribe, and share. I'd also mention that we dropped a link in chat a little earlier about how you can pursue more knowledge and information like this through the Level Up program, which includes some of the great training we have at Red Hat. And seeing as I'm part of the learning services organization and run the certification program: we do have certifications that encompass this body of knowledge as well. So do take a look at that link and see if you can learn a bit more about some of these interesting topics. And thank you, Jafar, for a very interesting discussion and demo. Is there anything else you wanted to leave people with before we get the heck out of here to make room for the next live stream?

Yeah, I think so. Basically, there are different levels of embracing containers. Podman is definitely the first step, on your local environment. OpenShift is another step. And the more advanced step is using fully remote containerized development environments, like the CodeReady Workspaces that we provide with OpenShift, so you don't have to care at all about having things running on your laptop anymore.

Sounds good. All right. Well, thank you again, Jafar, and you will see us again in two weeks. So thank you, everybody, and have a great day.

All right. Thank you very much. Bye bye.
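For reference, the OpenShift-native path and the export commands from that last segment, gathered in one sketch; resource names follow the demo, the image path is a placeholder, and note that oc export was deprecated in later OpenShift releases in favor of oc get -o yaml:

```bash
# OpenShift-native alternative: oc new-app generates Deployments,
# Services, and image streams directly from a registry image.
oc new-app --docker-image=quay.io/example/demo-web:latest  # placeholder image

# Export the environment's YAML, including services and routes:
oc get all -o yaml > environment.yaml
oc get route demo-web -o yaml > route.yaml
oc get pvc -o yaml > pvcs.yaml   # PVCs are not included in "all"

# Older clusters also offered a dedicated export command (since deprecated):
# oc export all -o yaml
```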