Well, hello everybody. Welcome to yet another episode of Kube by Example Insider. I think we are currently on our 21st episode, which is really quite a lot. So we're excited to have you here. I'm Langdon White, your host. I am a professor at Boston University, but I used to be with Red Hat. And now I also teach data science and some level of software engineering, and also participate in Kube by Example. And this show is about trying to interview people who are actually doing the work in various aspects of the Kubernetes world, such that we can actually get a sense of what is coming over the next, say, six months or whatever, that we have a hard time getting out of people like product managers and such. Because sometimes, as they used to say at Red Hat, the pizza that gets delivered is not necessarily the one that was ordered. And joining me today is my co-host, Josh Wood, and our guest, Urvashi. And I will let them introduce themselves. Josh, why don't you introduce yourself? Right on. Hi, everybody. I'm Josh Wood, with no S, as Langdon and almost everybody else forgets, even myself sometimes when I'm writing it. Maybe you should just have more than one of you. Right. That's what happens when you send both Joshes: you get Josh Woods. I'm a developer advocate at Red Hat. I focus a lot on OpenShift and obviously the Kubernetes core, operators, some stuff that surrounds that, and our extension of that Kubernetes core into the OpenShift product. And I am excited to be here with Urvashi because I am undertaking some learning, and then in turn some teaching and content creation, around Podman and Podman Desktop just as we step into this. So this is a useful interview for me, both in the public and the private sense. Hi, everyone. I'm Urvashi Mohnani. I'm a principal software engineer at Red Hat on the Container Tools team. So I focus on all the container tools that we have, which are Podman, Buildah, Skopeo, CRI-O, and now Podman Desktop as well.
That's what I've been doing for basically the last six years now, ever since I joined Red Hat as an intern, and here ever since. I also taught a class at BU last semester, one of the classes that Langdon used to teach, Introduction to Software Engineering. That was a really fun experience as well. And we got the students to use Podman Desktop in that, so we can talk a bit more about that later. Yeah, that's it for me. Did you only teach that class one semester? Yes, one semester. Oh, I thought you did it more than that. Oh, crazy. Urvashi is awesome. Yes, last year. I forgot myself. Yeah, I was going to say you did it at least twice, right? Twice, twice, yes. And then she also helped me run DevConf.US. Yes. I appreciate it. Of course, we're not doing it this year, but. Yeah, that sucks. But hopefully we'll be back next year. Right, right, exactly. So why don't we start really simple: what is Podman? Because I'm sure there are people who don't know. So Podman is an all-around container development tool, right? If you have used Docker before, you can think of Podman as a drop-in replacement for Docker. But the thing with Podman is we add more to it. So with Podman, you can create pods as well. We have features where you can generate Kube YAML files, you can generate systemd unit files, and you can do the reverse as well. So we have created a container development tool where you can do everything: you can build container images, you can run containers, you can run pods, and a lot more than that. I mentioned a few more tools, like Buildah, Skopeo, CRI-O. They are also tools in the container space; they just focus on different aspects and are very specific. So Buildah is solely for building container images. Skopeo is solely for moving your container images around the internet, and adds some security aspects to that. And CRI-O is just for running your containers in a production environment.
So it is singular towards Kubernetes and does only what Kubernetes needs. So for example, with CRI-O, you can't build container images, you can't push images, because those are things you don't need to do in a production environment. So a few years ago, the engineers at Red Hat on the containers team thought about how we can make containers more secure, make each tool specific to one aspect, and then get them to work together. So even though Podman can do everything, it uses the underlying libraries from the other tools. So we're not replicating code unnecessarily; we're just using what we already have, basically. That's something that's actually always been interesting slash confusing to me architecturally. So when I use, we'll stick with the command line tool for just a moment before we talk about the desktop component. When I use Podman to build a container, I'm fundamentally using the Buildah libraries inside of Podman. And then we also have a separate binary, Buildah, that exists outside of that. And it's that same pattern for each of those tools that you mentioned, like Skopeo, and then obviously the runtime itself. Yes, so essentially we have two libraries, for the image part and the storage part. These are shared across all the container tools. And then for the build specifically, yes, when you're calling podman build, it's invoking the Buildah library to do the build for you. So yeah, it is put together that way. Right on. Yeah, and you can think of it like, let's say you have a machine where you just want to do a bunch of builds. You don't really care about some of the things that Podman gives you, like maybe running a container or running a pod. Then why have a bigger binary like Podman when you can just use Buildah itself? And Buildah, for example, gives you more flexibility in how you want to build. You can use bash scripts. You can do more than just having to use a Containerfile or Dockerfile.
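As a rough illustration of that scripting flexibility, a Buildah build can be driven entirely from a shell script instead of a Containerfile. This is a minimal sketch; the base image, package, and image name are just examples:

```shell
# Start a working container from a base image (like FROM in a Containerfile).
ctr=$(buildah from registry.fedoraproject.org/fedora-minimal)

# Run a command inside the working container (like RUN).
buildah run "$ctr" -- microdnf install -y python3

# Set image metadata (like ENV / ENTRYPOINT).
buildah config --env APP_ENV=dev --entrypoint '["python3"]' "$ctr"

# Commit the working container to a new image, then clean up.
buildah commit "$ctr" my-python-base
buildah rm "$ctr"
```

Because each step is an ordinary shell command, you can put loops, conditionals, and variables around them in ways a Containerfile can't express.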
And as I recall, and it's been a while since I used them independently, but as I recall, there are actually more features available in Buildah than there are in Podman for building, even though it's the same library underneath, right? So there are a bunch of different things you can do, kind of from an automation perspective, that are much simpler using Buildah directly. Yes, that is absolutely true. Do we give a shout-out to Buildah, of course, being the way you pronounce "builder" if you're Dan Walsh and grew up in the Boston area? And so that's where Buildah gets its name from. Yeah. But now that Dan sees "Buildah" written out, he pronounces it "builder." So it's interesting. Oh, that's funny. I don't know if I get it, because it ends in a soft A sound. Yeah, so he's like, oh, that's hilarious. That's funny. I think it's funny. I just notice that every time he says that. Yeah, I still miss Ryan. I think Ryan had put up a "Dockah, Dockah, Dockah" website with basically just a clip of Dan Walsh saying "Dockah." It was really amusing. That might actually be a good segue into something I want to make sure we talk about at least a little bit and touch on, to clarify it for the audience. So we've talked about Podman, and then we will talk about Podman Desktop. We've talked about Buildah, Skopeo, and some of the underlying tools that we use to build, distribute, move around, push to registries, and actually run these. What are the things we are running and building in the end? Like, what is the specification, and is it interchangeable with other systems for building these container images? Yes. So all the container tools follow the OCI standards, the OCI spec. That's the Open Container Initiative; that's what OCI stands for.
A few years ago, a bunch of people came together from the community to put together this standardized spec so that in the future, as more container engines were created, you could easily plug and play your container images with them without having to reformat them into another format. So the OCI defines an image spec, a container runtime spec, and a container registry spec. All our container tools follow that. So all the images that you build with Podman or Buildah follow the OCI image spec, and you can use them with any other container engine that follows the OCI runtime spec. So it's not restricted to just those tools; you can move them around, yeah. Cool. So if I build something with Podman, I can run it with the Docker engine, or with maybe some of the cloud providers' internal engines that I don't necessarily know the underlying tech for, if they obey the contract of the OCI standard. Yes, that is correct, yes. Cool. So, well, to be clear, right, at least from my experience, mostly. Not all of the toolchains follow the spec exactly the same way, let's say, to be charitable. I can't speak to who's in the right. All I can say is that you do run into conditions where you try to run a Podman-built container in some other runtime and it won't always work. Oh, okay. It does mostly, but not always. And like I said, I have a theory as to which one is following the spec and which one is not, but I obviously can't prove that. So, yeah. But moving right along. So, Podman Desktop, why don't we talk about what that is a little bit, because I think it's kind of cool. All right. So, Podman Desktop is a graphical user interface that lets you do everything containers from just one application and one view. If you've ever used Docker Desktop before, you can think of it as similar to that. It runs Podman under the hood, but it's open source.
It's not restricted to just using the Podman container engine. You can plug in any other container engines that are compatible with the OCI spec. So you can connect it to the Docker engine, you can connect it to Lima, et cetera, just any others that follow that. So, yeah, that is a high-level overview of what Podman Desktop is. And it's available on Mac, Windows, and Linux machines as well. So, one clarification there: there was an initiative called Podman Machine. And I don't know, did it get subsumed by Podman Desktop? Is it using it underneath? Is it something completely different? I don't remember what happened there, and I was kind of curious. Okay, so it's still around; it didn't get consumed by anything. So containers, essentially, as a concept, are Linux processes, right? You do have Windows containers, but in this context we're talking about Linux containers. So when you are running containers on a Mac or Windows, you need to actually run them in a Linux virtual machine that runs the Podman service, so that the client on the Mac or Windows can connect to it to start up your containers, build your container images, et cetera. And the way that we did this was we created something called Podman Machine. We wanted that when you install Podman Desktop, all of this setup happens under the hood, and the user doesn't need to know about it, doesn't need to really care about it, unless they really want to. It is available for them to go and customize, but it also works right out of the box. So what Podman Machine essentially does is: we have a virtual machine image, based on Fedora CoreOS, that has Podman and all its dependencies installed in it, and it starts a Podman service. Podman Machine goes out to our registry where we have that stored, pulls it down, sets up the VM, starts it up, starts the Podman service in it, and connects the Podman client to that service over SSH.
So when all of this is set up, then when you just do, on your Mac or Windows, podman run alpine, it will pull the alpine image, but it's doing all of this in that Fedora CoreOS virtual machine. And not... No, it's basically giving you kind of the equivalent of the feature set of Podman Desktop, except command line rather than GUI, basically. So, no. The Podman Machine command line is not Podman. Podman Machine is just for setting up the virtual machines and managing them. You can also go and do that yourself: you can do podman machine init, and that will initialize a new virtual machine. You can do start, and that will start it. You can do stop, and that will stop it, but you can also even delete it. You can have more than one Podman machine; it doesn't have to be just one, but you can only have one active connection at a time. But then with Podman Desktop, Podman Machine and the Podman CLI also get installed. So you can go to your terminal and just do podman images as you would on a Linux machine, and that would work as well. Oh, I get you. Podman Machine is setting up the virtual machine, and it just handles all of that. It's the VM management. Podman is the regular container CLI tool, and then Podman Desktop uses Podman Machine to set up your virtual environment, and then uses the Podman RESTful API to connect to the VM so you can do your container commands. So it does use Podman Machine underneath. Yes, it does. I thought you were saying that with Podman Machine you could do something like podman machine container create. No, I didn't say it very well. But yes, I think I understand the point. But at the end of the day, it makes it so that between Podman Desktop and Podman Machine, you're really making containers available on Mac and Windows. And on Linux, especially with Podman Desktop, you're giving a nice user interface to your containers. But at the end of the day, you're trying to unify the experience across the different tools, or across the different platforms.
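The Podman Machine lifecycle just described can be sketched roughly like this on a Mac or Windows terminal; the second machine's name and size are made-up examples:

```shell
podman machine init                 # download the Fedora CoreOS image and create the VM
podman machine start                # boot the VM and start the Podman service in it
podman machine list                 # you can have several machines...
podman machine init --cpus 4 big    # ...e.g. a second one with more CPUs
podman machine stop                 # stop the default machine
podman machine rm big               # delete the extra one

# Once a machine is running, the regular CLI works as it would on Linux;
# the commands actually execute inside the VM over the client connection.
podman run --rm alpine echo hello
```

Only one machine connection is active at a time, so the plain `podman` commands go to whichever machine is currently the default connection.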
Even though, underneath the hood, there's quite a lot of monkey business that has to happen to make the containers work. I'm still curious what's going to happen with Windows containers, but we won't go down that tangent. So, okay. And so, yeah, one of the things that I thought was interesting, and I know I've talked about it on other live stream shows and stuff, is one of the things that I like about Podman, especially compared to Docker, is how it helps you to transition to Kubernetes. Because even at the most basic level in Podman, intuitively enough, you can have pods. And so you can even start to think about your containers in terms of a set of, you know, a pod that is actually containing multiple containers. Can you give a little bit of an example of what a pod is as far as Podman is concerned, why I'd want to use one, and its relationship to Kubernetes? Yeah, so I would say, if you know what a pod is in Kubernetes, that's essentially what a pod is in Podman. It's a group of containers that share the same network namespace, user namespace, et cetera. So you're right, it is there to give users the ability to play around with pods the way you would in a Kubernetes environment. So we like to think of Podman as a development tool for your containers. So as a developer, if you want to get some application tested and running, use Podman and try to emulate what it would be like in a Kubernetes environment. And that's where the pod concept comes in. So we have that support as well. And, you know, once you get used to that, we also have commands like kube generate that can automatically take the pod definition that you have started with Podman and translate that into Kubernetes YAML for you, which you can, for the most part, just pick up and plug into a Kubernetes cluster.
Some tweaks may be needed there for certain use cases, things that are very specific to a multi-node environment versus Podman, which is single node and probably doesn't have support for those things. So yeah, a pod is basically what you would have as a Kubernetes pod; it's not anything majorly different, I would say. And we do try to follow how pods are done in Kubernetes. So we try to follow the same restart policy behavior, the kinds of namespaces that are shared, and everything. So we do follow it as much as possible, yeah. So yeah, I feel like there's a guy who might have filed a couple of bugs about kube generate who might be in this live stream. Yeah, sorry, Josh, go ahead. So in Kubernetes, I think of the pod as the kind of atomic or basic unit of execution and of organizing execution, right? At some level, the scheduler's decisions are enacted on pods rather than on individual containers. And the pod is a group of one or more containers that configurably share resources. Yeah. In Podman, we have the same abstraction. So would it be true if I said that when I work on a single container, or run a single container, with Podman and the underlying tools, that's actually a pod with exactly one container in it, and I'm always managing these executing units in terms of pods? I would say no, because in Podman, a container is that smallest unit that you can run, where Kubernetes doesn't give you that aspect. So in Podman, you don't need to have a pod to run a container. You can run containers without them being tied to a pod. But in Kubernetes, you cannot do that; you always have to have a pod to run one container or more. So the smallest unit is a container in Podman. We just wanted you to have the ability to create pods as well.
That gives you, you know, some more features and options, based on how you want to run your workloads. So you could use a pod for that, but it really is either-or. Like, if you want to run a database container and a web server container and you want to put them in a pod, you have that option with Podman. If you don't want to, you can run two separate containers and get them to talk to each other as well. Right on. And so how does that map to this idea of generating Kubernetes YAML, of building these resources on the Podman side and then getting a boilerplate, or, you know, your basic setup of YAML, to carry over to Kubernetes? Yeah, that's a good question. So if you just have a container in Podman and you want to generate Kube YAML, it will create a Pod kind or a Deployment kind, depending on what you pick. So it will convert your container to a pod definition that Kubernetes can understand, because you can't just have a bare container there. The same way, if you have a pod running in Podman, it will also convert that to a Pod or a Deployment kind, based on what you pick. So we are able to take the configuration of your container and translate that into a kind, a Kubernetes Pod or Deployment kind, that can be used by Kubernetes. Right on. And so, do we see a lot of developer uptake around that feature? As an on-ramp to Kubernetes, is that seeing a lot of usage? Is there a lot of development around that area of Podman? Is that a really popular use case? Yeah, I would say yes, from the community. One of the biggest reasons we added the support was to make the transition from understanding containers, learning containers, to Kubernetes easier. Honestly, Kubernetes is a very complex platform.
It can be very intimidating if you're someone new who doesn't know containers and has to start with that. So that was one of the reasons. The other one was, we wanted a development-use-case-to-production-use-case kind of thing, where you use Podman to develop and test your workloads, and then you can just plug them into Kubernetes to run them in production. So that's what brought on this idea of having support for moving from Podman to Kube, and back as well. It is not perfect right now. We only support some of the basic Kubernetes kinds, but we are actively working on adding support for new kinds. It's based a lot on the requests that we get from our users and community: hey, we would like to have Job support here, and Ingress support, et cetera. So it's driven a lot by them. Also, all the container tools engineers are low-level container engineers, and we don't really use Kubernetes as users. So coming up with some of the complex use cases and being able to take them into consideration, those are some of the edge cases we usually end up missing. But then we have the community, who come up with all those examples and bugs or feature requests. So they also help, by either adding PRs or opening issues. So that helps drive it. And one more thing I'd like to mention is why we chose the Kube YAML format. So Kubernetes is very big in the container orchestration game, right? A lot of users use the Kube YAML format to define their container workloads. Given that containers can run on your local machine, on an edge device, in a cloud, in a production cluster, et cetera, we wanted to try to get the user base to stick to one format, so that when you are transitioning your workloads between these different areas, you don't have to rewrite them in a different format, which could cause issues of maintenance and stability, et cetera.
So this is one of the biggest goals we have here: to have one format that is easily portable to different environments and platforms, and that is easier to maintain and learn. It's just one thing you have to learn and know. For sure. And it's something that comes up when I talk about this stuff a lot: it's really wonderful to have these deployment systems and all this resilience and automation in the deployment systems. But in the end, as a developer, it asks me to deal with a lot of stuff that actually has absolutely nothing to do with the application I'm writing and working with, right? It's like, how do I route traffic to it? How many copies of it should I scale to on the cluster? None of those are really app developer concerns, and you kind of get pulled into them on these platforms. So before I let Langdon take us back in a more general direction and talk about Podman Desktop, I do actually have a really specific question about this that's interesting to me, and I hope to the audience. You mentioned Ingress as one of the things you're trying to generate from a Podman container, and I assume its runtime configuration. So to help me understand that in a concrete way: if I try to generate an Ingress, which, for a quick overview for the audience, is a Kubernetes type or kind that describes routing traffic into a cluster to different configurable endpoints, which might be your application, a load balancer, whatever it is, do I generate that Ingress by looking at a running Podman container and seeing the port mappings that have been configured for that run of that instance of that container? Yeah, I would say that's most likely what would happen. We haven't gotten started on it yet, and we have been looking into how it works in Kubernetes and how we can map that to Podman.
As I said, Kubernetes and Podman do work slightly differently, just because of how they're built: one is configured for multi-node, one is single node, and Kubernetes just has a lot more, you know, that you can do with it, with its controllers, et cetera. We do have support right now for Services, so we can generate a Service. We just don't have the Ingress support yet, and I would say that would just extend what a Service is. Right, so a Service is maybe even a better example, because the part I'm interested in isn't really Ingress in particular. So when you create that Service, what we're looking at when we generate that YAML for a Service is the runtime configuration of the ports that are mapped. Yes, whatever you mapped on the container. Cool, yeah. That helps me understand how we would pick up different bits of the configuration in the container itself, running on the Podman side, to get to Kubernetes resource kinds. So that's enlightening to me. Yeah, so I know we've chatted about this before, but one of the things that I would really like to see is, now that Podman supports Docker Compose kinds of configuration, whatever you want to call it: I really want to be able to take some arbitrary Compose file that I found out on the internet, load that into Podman, and then, from Podman, be able to generate the Kube YAML for what Compose created, right? Basically, one of the ways that I like to use the Kubernetes side of Podman is as a way to do a test setup of what I want the environment to look like, so that I can get the whole thing running the way I think I want it before introducing the extra complexity of Kubernetes, right? So I kind of just set it all up, right? And then I say, okay, now export it. Okay, now let me take that YAML, and then I push that into a Kubernetes cluster somewhere, and then I can go and fix it, right? So give me kind of a starting point.
And so I was just curious, like I said, I know we've discussed it, but I don't remember what exactly happened to it: what is the expected relationship between a Compose file and Podman and being able to generate Kubernetes YAML? So for Compose support, we have a project called Podman Compose that is community owned and maintained, which essentially uses Podman as the backend. So you can do Compose with Podman. Another way is, you have Docker Compose itself, and you can just point it at the Podman socket, and then it should work that way. But then, like... That's not how I've used it, mostly. Yeah. And also, Podman Desktop now has support for Compose as well; it installs the Docker Compose binary and points it at the Podman service, so you can use that too. So, talking of transitioning from Compose to Kube YAML: it is possible with Podman right now. It's a two-step process. You use one of the Compose configurations that is available to run your Compose file with Podman. That will create the containers, pods, resources, et cetera that are defined in that file within Podman itself. And then you can use podman kube generate, pass in the container ID, and that will generate your Kube YAML for you. So you can use two commands to get from Compose to YAML without having to manually translate it. So that is possible. We actually even have, I think, a blog on it that goes over exactly how you can do it. We haven't made just one command that does all of the transitioning under the hood, but it is possible to do it with those two commands. Gotcha. So what do you think, and this is really what the show is largely about, right: what do you think is happening next? What is the plan with Podman, or whatever? Is the objective to get it closer to Kubernetes?
Like, is there going to be more of those features that we're talking about, that you said are not entirely there? Is that the goal? Is it essentially bug fixing? Is it runtime support? Where does it feel like Podman is putting its focus over the next, like, six months, call it? Yeah. So I think there is a lot going on in the container space. Right now, edge and automotive are some of the biggest areas that have picked up in the container space. So a lot of work is going into Podman to have it run on thousands and millions of edge devices: optimizing performance and memory and resources for that. So a lot of that's happening. For the edge use case, we have systemd support. So you can either run systemd inside a Podman container, or use systemd to manage your Podman containers. We have the podman generate systemd command, which takes a container and writes a systemd unit service file for you, so that systemd can start it at boot, or however you want to configure it. But we have expanded that. And this ties back to having Kube YAML as the one way to define your container workloads. One of the Red Hat engineers started a project called Quadlet. Essentially, what Quadlet does is read your Kube YAML file and do a lot of the translation under the hood to create a systemd service file that can run those containers. And Quadlet has been integrated with Podman now. So you have a Kube YAML file; you don't have to translate it into a Podman workload. You just plug in the Kube YAML file, Quadlet will read it and translate it into a systemd service file, and systemd can start those containers and manage them for you. And being able to use systemd to manage your containers gives us the ability to add support for auto-updates and rollbacks.
So this is something that would be valuable for edge devices. You have thousands of edge devices out there, they're running an application, and you need to push an update because of some bug fix, or some other feature, et cetera. So with systemd, it spins up a Podman service that periodically checks the container registry: hey, is there an updated image available? If there is, it will pull it down and then start a new container with that new container image. And then a health check service starts up to check whether everything is successful; if it's not, it will roll back to the previous image. So having systemd work with Podman so well adds that ability, where you can manage auto-update and rollback on those edge devices. So there's more going into that space as well, a lot more of the optimization stuff. And then when it comes to the Kubernetes world, we do not want to compete with Kubernetes, but we want to make the path easier for developers and users to transition from Podman to Kubernetes or any Kubernetes-based cluster: kind, OpenShift, et cetera, right? So that's why we are putting in a lot of effort to make that transition easier and add much more support for it. And that's one of the goals of Podman Desktop as well. So within the UI in Podman Desktop, there are just buttons you can click to generate a Kube YAML file, to play a Kube YAML file, or to deploy a YAML file to a Kubernetes cluster. And Podman Desktop is able to connect to the cluster because you can pass it the path to your kubeconfig file, which holds the authentication for your clusters. So yeah, that's that area of Podman Desktop. And then apart from that, I can't remember everything off the top of my head, but there is a lot on the roadmap. One more thing on the roadmap is multi-arch builds.
So even though we already support multi-arch builds via emulation, we are working on adding support via a build farm. So let's say you have machines of different architectures available to you to use. Podman can already connect to remote machines. So, leveraging that feature we already have: being able to connect to the machines and farm the builds out there, build the container images, pull them back to your local machine, and put together a manifest list with those container images. And that can then be pushed to a registry. So this will just make multi-arch builds faster, because emulation is definitely much slower than building natively on the same architecture. Right, yeah. I will say, as somebody who's leading a lot of student technical projects, right, the M1 is killing me. Like, did we really need to introduce ARM in a very non-uniform way? So now I just have one student on a team for whom everything is ARM, right? And everybody else is x86. It's really, really frustrating. So yeah, I appreciate the multi-arch builds. I hope they get very transparent in that sense. Yes. So, kind of a related question: okay, so what are you most excited about? What is the particular thing that you think is the most interesting thing that's going to be landing in Podman, or whatever, over the next six months? So I'm going to be a bit biased. I'm working on the build farm stuff, on integrating it into Podman. So I'm really excited to see that working. As the engineer who's coding it, it is very complex right now. So just putting all of that together, I think it'll be really fruitful to see the final result and people being able to use it. So Nalin, actually, another engineer on our team, did a POC and put that together. And I'm taking that POC and converting it, integrating it into Podman, so the whole connection, everything, can be seamless. So you just create a farm.
And when you create a farm, you can choose which machines you want in the farm. It doesn't have to be all the machines you have connections to; it can be just a subset. So let's say you just have three machines you want to build on today, and tomorrow you want to build on five other machines, for example. We're doing it that way so users can customize and expand based on whatever their use cases are, basically. I would like to point out that Nalin is also the one responsible for the name of Buildah. I used to sit down the hall from him when I worked at Red Hat. Josh, what would you like to talk about next? So I'd actually like to know a little bit more about that idea of multi-architecture support, because based on some of the things I see going on, that's a really important story. Not so much because of ARM and the M1 Macs, although that's what I run and where I build things now, but because of ARM in the data center in the future. So OCI has some notion of what in old terms would have been a multi-platform or fat binary: a way of shipping multi-architecture images. Is that part of the support, or planned support, in how Podman will handle multi-architecture images? If anything, what I'd really like is just a sketch of the process. If I happen to have three or four Linux aarch64 ARM servers sitting somewhere, I want to build containers for them, and run and execute those containers there. What does that look like in the existing and planned multi-arch system in Podman and Podman Desktop? Yeah, so we do have support right now for multi-arch builds. It uses qemu-user-static to emulate the architecture and build for that. So when we build multi-arch images, we create something called a manifest list.
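The emulated multi-arch workflow being described can be sketched as a few commands (assuming qemu-user-static is installed on the build host; the image name is a placeholder):

```shell
# Build the same Containerfile for two architectures (the non-native
# one runs under QEMU emulation) and collect the results into a
# single manifest list.
podman build --platform linux/amd64,linux/arm64 \
  --manifest quay.io/example/app:latest .

# Inspect the manifest list: one entry per architecture, each
# pointing at a per-arch image digest.
podman manifest inspect quay.io/example/app:latest

# Push the manifest list and all referenced images to the registry.
# A later `podman pull` on any machine selects the matching arch.
podman manifest push --all quay.io/example/app:latest
```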
It's basically a JSON-formatted list that holds the image details and points to the different digests for the different architectures. When it's pushed to a registry, it combines the list with the images under the different tags and digests. So the same image for x86, for ARM, for whatever other architecture you're building for, is pushed there. Then when you do a Podman pull of the image name on a machine, Podman detects what architecture you're on and pulls the image for that architecture from the registry. It is smart like that. So that is essentially what is happening, and it's already happening right now. With the build farm stuff, we are just trying to make multi-architecture builds more performant, because you don't want to emulate if you have machines you can connect to, or just a machine in the cloud, that can do the builds for you. We're trying to leverage that and give users the ability to build for different architectures from just one machine, even when that's not the architecture of your machine, essentially. Right on. So just as a quick aside on that. On the Macs, we have this idea of binary translation, which I like, without going all the way back to Linus Torvalds at Transmeta in 2000. Is there support planned, or investigated or explored in any way, in Podman for using Rosetta to run these multi-arch images? Like, if I want to run an x86 image on my ARM Mac, there are ways I can run those applications, right? This is kind of an in-between point between virtualization and emulation: binary translation is quite a lot faster than the pure emulation implemented in the QEMU libraries. Do we have any support for that, or any investigation of that? Yeah, so I think there is already support for it, if qemu-user-static is installed in the Podman machine, and I think we do that by default, so the translation can happen there.
I'm not exactly sure on the semantics, and I'll stand corrected, but I think there is support for it right now. Very cool. Yeah. So Linus is running Linux and not having this problem, just to be clear. That's a different podcast episode. I'm meeting developers where they are. Nevertheless, that actually might be a good point to transition to a more focused talk around Podman Desktop itself. In our pre-show we were throwing out brainstorming ideas for questions, and we came up with one that I kind of love for its bluntness at getting at something, which is: why do we need Podman Desktop? I think you would be a great person to tell us that. Lots of folks in this environment are probably really used to using CLI tools to build, run, and monitor containers. What does Podman Desktop add to that, that makes it really worth working on, in your case, and using, for me? Yeah. I would say it expands the user base. It makes containers more accessible and, I guess, approachable for people who are not on a Linux system, or who are just not familiar or comfortable with the command line interface. That was one of the biggest reasons for having Podman Desktop: especially to allow people who are on Mac or Windows to start using Podman and containers there. One example I would give is when I co-taught the class at BU, we introduced the concept of containers to students, and the students, they're sophomores, juniors, et cetera, were not that familiar with the command line. We showed them stuff with the command line, and they were very confused, like, what is going on? I think it's just nice to be able to visualize everything and see everything in one location, instead of everything being, I guess, text-based in your CLI. So when we introduced them to Podman Desktop, they understood the concepts much better.
They were able to retain information by actually being able to start the containers and all. They got the concepts, the mechanism, before typing it right in there. Yes. So they're like, oh, I can just click buttons and get everything done, and I can see things happening, right? All of that. So that is one example where I was like, okay, Podman Desktop is important for new users and people who are new to the containers world, and for just making containers accessible to people, as I said before. Having that pretty graphical user interface just attracts more people, I guess. Yeah, the other thing I think we haven't really talked about, and I'm not seeing it when I try to run on Linux, but I know I've seen it before, is that Podman Desktop can actually install a Kubernetes cluster as well, right? It can actually install kind. Yes, it can install kind with just the click of a button. You can install kind and connect to it, and then, you know, use it, deploy to it, and all that jazz. Yeah, I thought I had seen that before; I was just pulling it up on my actual laptop. I'll make a tiny note for the audience before we proceed farther into this discussion: kind is Kubernetes in containers. So if you liked the movie Inception, this is a little bit of the concept here: we're going to run containers on a Kubernetes that is itself a set of containers running on my local container runtime. That's really the kind idea, and what it's about is being able to spin up local Kubernetes instances quite easily and quickly on your local development machine, your laptop, so you can work on it. Because that's not necessarily tip-of-the-tongue familiar, I think that was maybe useful to tell folks who are listening what we're talking about. Before we go on, I'll link in the chat the feature, which is that Podman Desktop will give you that local Kubernetes cluster to point the Kubernetes YAML at that you can generate inside Podman and Podman Desktop.
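Outside of Podman Desktop's one-click button, the same thing can be done from the CLI; kind's Podman support sits behind an experimental provider variable (a sketch, assuming kind, kubectl, and Podman are installed):

```shell
# Tell kind to use Podman instead of Docker as its node provider.
export KIND_EXPERIMENTAL_PROVIDER=podman

# Create a throwaway local cluster and check it responds.
kind create cluster --name scratch
kubectl cluster-info --context kind-scratch

# Broke it while experimenting? Delete it and start fresh.
kind delete cluster --name scratch
```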
And with the latest release, you can also start up a local OpenShift cluster, so MicroShift, and you can also do a developer sandbox thing as well. Sorry, MicroShift? Yeah. Is that a new one? Because we had minikube, we had OpenShift, now there's MicroShift. Yeah, there's MicroShift also. I think MicroShift was an outgrowth of SNO, single node OpenShift, a single node cluster, yeah. Oh, Red Hat finally did a single node cluster just for Kubernetes, all right. I will have to check this out. Here's a question that might be revealing about why that is. Tell me something about the system resources I need to run MicroShift alongside Podman. What are the system requirements for that? That's a good question, and I don't have the answer to that, yeah. Fair enough. So, because I brought it up and we didn't quite get the answer, I do want to make sure the audience gets the purpose just the same. OpenShift is a really large system designed to run in really large clustered environments, with a whole bunch of computers and a big data center and a big AWS bill. There is a desire, much like with Podman Desktop, to be able to run that locally for testing, development, and ease of trying out new ideas. Using Podman Desktop. Yeah. And the resources to get an OpenShift on a local machine can be a sizable commitment, and that's what MicroShift and single node OpenShift were about: winnowing OpenShift down, smaller and smaller, to a footprint that, while still being useful, does not have all the componentry of a big production deployment that may expect to be running on dozens of computers, now crammed onto a single laptop. So that's kind of what those look like. Hey, Langdon, good find. I also found some random mountain biking company called MicroShift. MicroShift makes shifters, right? I was like, what?
Derailleurs in the industry, MicroShift. Yes, yes. Yeah, that was way beyond me. Sorry, just one more thing I wanted to mention: even though with Podman Desktop you can start up these local clusters and all, one more thing is that if you have a kubeconfig file that has auths for various Kubernetes clusters and you point Podman Desktop to that file, there's a dropdown menu that will show you all the different clusters you can connect to. So you can pick and switch between the different clusters, and these can be clusters running in the cloud, somewhere else, et cetera. So you can also deploy your generated YAML files directly to any of those clusters by switching as well. Right, and kind of going back to that earlier question of why we are talking about Podman when the relationship is to Kubernetes: if you want to experiment with Kubernetes, like if you want to go mess around with Tekton and see how it works, and change stuff around or whatever, it's really easy to destroy a Kubernetes cluster. And so it's so much easier, I think, to be able to say, okay, give me a new kind. One of the really nice things about kind is you can actually make multiple Kubernetes clusters within kind, and then you have this Podman GUI that will let you point to each one. Then you go mess one of them up and you're like, oh, I can do better now. All right, now let me try it again on this other one, and then just generate a new one. It really does make experimentation, or exploration of how to do these various bits, much, much simpler. Actually, part of the KBE (Kube by Example) Tekton learning path that I've been working on does it with kind and with Podman Desktop, so that you can say, oh, look, I created a pipeline.
Oh, look, I broke it terribly. Let me just recreate it over here, and it's a lot easier. Yeah, lowering that risk can actually be really powerful in encouraging one to try new things, to figure out how things work and pull them apart without being afraid of breaking them, you know? I mean, that's really the whole principle of containers, boiled down and presented in a different way, right? To lower the risk of each individual deployment. We expressed it a long time ago in the phrase: cattle, not pets. You want your clusters eventually to be cattle and not pets; you don't mind if you destroy one and need to start a new one. That's the model Langdon's talking about there. So just to help me understand, because I haven't done it a lot: if I'm using Podman Desktop, I have this list of Kubernetes clusters that are recognized by my instance of Podman because I have kubeconfigs for them lying around. Specifically, and I know we've talked about this a little bit, so I'm not trying to be super redundant, but specifically, what can I do with and on those Kubernetes clusters? So, back inside of Podman, I've got that list. I've said, I want to work on this cluster right now. What can I then do with that connection to that cluster? Yeah, so right now what you can do is take the YAML that you have generated and deploy it directly to the Kubernetes cluster. You don't have to copy and paste it or open another terminal to connect; you just click a button and it will do that. Also, the UI, the GUI for Podman Desktop, in the pods section, can actually see the pods running in your Kubernetes cluster as well. So it will show you that. But then to do any extra debugging and everything, I think you'll have to use kubectl or something directly to connect. As I said, we don't want to compete with Kubernetes, so we don't really want to replicate a lot of the things that are already there.
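On the CLI side, the generate-and-deploy round trip being described looks roughly like this (a sketch; `mypod` and the kubeconfig path are placeholders):

```shell
# Export a running Podman pod as Kubernetes YAML.
podman kube generate mypod > mypod.yaml

# Deploy that YAML to a real cluster chosen from your kubeconfig...
kubectl --kubeconfig ~/.kube/config apply -f mypod.yaml

# ...or replay the very same YAML locally, back under Podman.
podman kube play mypod.yaml
```

Podman Desktop wraps this same round trip in buttons, which is what makes the "there and back" movement between local containers and a cluster feel seamless.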
We just want you to be able to move things there and back if needed, basically, yeah. Yeah, I mean, that's a really tough line to walk. And it's not just Podman that had that problem, right? Docker had the exact same problem, eventually integrating what used to be just Swarm as Docker Swarm, right? Because that line is really hard. And personally, I think Podman has actually done a really good job of trying to find that balance somewhere in there, with obviously the possible exception of the bugs I filed that you actually responded to. But one thing I was curious about: do any of the Podman engineers use Podman Desktop, or really Podman machine, to work on experimental versions of Podman? Because that might be a way to eat your own dog food, so they get more experience using the tool chain themselves. I was just curious if it's designed to work that way. That's actually where CoreOS originally came from: trying to basically allow people to work on broken versions of GNOME. But that's what I was curious about. Yeah, so the engineers that focus on the Podman machine work, I think they do use it a lot more on a daily basis, to keep testing it and everything. They do have Macs, so that's what they use for, I guess, a lot of development purposes as well. But a lot of the fixes happen in our regular Podman and Linux environment, so that's where that goes. So they have both environments, and I think they do use it on a daily basis, or way more than I use it. We were excited for that. No, I actually meant: would they use Podman machine to experiment with Podman itself? Obviously, the Podman machine folks are hopefully using Podman machine right now. Yeah, I'm not sure how we do that; I'm not exactly sure of that. All right, Josh, what else? Did we have any other questions we wanted to cover? I think that was most of it.
I got through the things I knew I wanted to ask about in advance. So actually, Urvashi, we were talking a little bit before the show started, kind of in our fancy green room guest reception area, where we have hors d'oeuvres and champagne and stuff. No green M&Ms, to be clear. Which is all supposed to be secret, so I shouldn't have just said it on the air. But I believe these are things that are important for the audience to know: I think we're fairly close to a new, maybe RC, release of Podman itself, and some new upcoming releases of Podman Desktop. Do you want to tell folks a little bit about where we are version-wise and what the next version coming out is? Yes, so we are currently working on releasing Podman 4.6. We are actually releasing RC1, I think, either today or by the end of this week. And then we'll have, I think, two more RCs before the final cut. So I think three to four weeks down the lane, 4.6 should be out. And then Podman Desktop: we announced 1.0 at Red Hat Summit, like a month and a half ago, and currently 1.1 is out and available. If you're interested in trying out Podman Desktop, you can either go to podman-desktop.io and download the binaries from there, or, if you're on Mac, brew install it, and if you're on Windows, install it with whatever the package manager there is. So those are available for you to install and get started right out of the box. Right on. That's interesting to me; even just the way you stated those things tells me that Podman Desktop is not tightly coupled in its release schedule with Podman itself. So that gives you some clues to the architecture of how they communicate, but maybe say a little bit more about that independence between the two things. I think we've touched on it in two ways. One, we just learned by implication that they're not tightly coupled in their release schedules.
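For reference, those install options translate to something like the following. The Homebrew formula name is the commonly documented one; the winget package ID is my best guess and worth verifying on podman-desktop.io:

```shell
# macOS, via Homebrew
brew install podman-desktop

# Windows, via winget (package ID assumed; check podman-desktop.io)
winget install -e --id RedHat.Podman-Desktop
```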
Earlier, you mentioned using Podman Desktop to connect to some other underlying tools that might actually replace items in the normal Podman stack. I think specifically you and Langdon were talking about shelling out to Docker Compose via Podman Desktop, using this other utility. What does that tell us about the architecture of how Podman Desktop communicates with these backends? Yeah. So whenever we release a new Podman version, we create a new virtual machine image. We create a new Fedora CoreOS image that has the new Podman version in it, and that is pushed out. Once that becomes available, Podman Desktop is able to use that new version. So I would say it is tied in a way. I think so far, when we have released a new Podman version, a new Podman Desktop has come out around the same timeframe, basically. And then there are some features that Podman Desktop specifically requests from Podman. Once we add those and release them, Podman Desktop is able to pick that up and update their UI to show or expose that, however they like to do that. Podman Desktop has its own set of features they like to release: for example, being able to create a kind cluster, a local MicroShift cluster, et cetera. So they're not, I guess, super tightly coupled, but when a new Podman is available, they do try to pick it up and use it. So far that's what I have seen, I think, yeah. Well, we're almost to the end of the hour, so we would like very much to thank you, Urvashi, for coming by the show. And we continue to hope to see more and more improvements in Podman and Podman Desktop. Like I said, I think they're a ridiculously useful set of tools for doing experimentation, for trying things out before you want to commit to the extra complexity of Kubernetes. And it has a lot of other advantages as well.
It's funny, I've actually run a couple of containers using the systemd tool chain for a whole bunch of years now, and those work pretty well for me. I think real sysadmins, who are not me, would probably find many more use cases than the hack job that is my laptop. So again, thank you so much. We really appreciate your time, and I hope to see you around in the future. Yeah, thank you so much. And I would like to mention one quick thing: Podman, Podman Desktop, and all our container tools are open source. All our projects are under the containers organization on GitHub, and we welcome contributors all the time. So if you're interested in just opening an issue, or fixing that issue that you opened, Langdon, for example, we welcome PRs and issues and all. So please check that out. The team hangs out on IRC, on Libera.Chat, I think that's the one; the channel is Podman. Podman Desktop is also there, and Podman Desktop is on Matrix, Discord, and the Kubernetes Slack. So if you're looking for where the team hangs out, you can find us on these platforms. Excellent. What I meant to ask you about was community gathering places, so I'm glad we got a hint of that in there, since I didn't actually get around to that question. And I dropped a link to it in the chat, and I believe all the community stuff is linked there as well. They're all on the website. Yeah, and we have the podman.io website as well, if you want to drop that in. It's a newly revamped Podman website, so it looks really nice. I will say, Brent Baude's blog posts on this stuff are fire. If you ever have a question about how to get something done in Podman, or in containers in general, below the level of Kubernetes, Brent probably wrote about it somewhere along the way. All right, thanks again, and we'll call that a show. This was great. Thank you so much, Josh and Langdon. Thank you.