At this point I'd say ask away; otherwise I'm just going to start talking. Does anybody have a quick question? Okay, great. So we did have a major release this week: Podman 3.0 came out, and we're really excited about it. It's the first version where we say we support Docker Compose, so you can now use Podman as a socket-activated service for Compose. There are a ton of new features, and I have a talk on Saturday morning at 6:30 my time, which I think is 12:30 your time, on the new features of Podman, so I'll be talking a lot about it then. Valentin, what would you like to talk about?

I think last year we were talking about the Goldilocks and the three bears problem of container security, and we wanted to push forward the idea of using seccomp and distributing security information in images. We didn't get that done this past year. We were very busy getting 3.0 out the door, and that was a lot of work for the entire team, with help from the community. Now I'm personally looking forward to stabilizing the features we delivered. I think where we are now is a pretty cool place to be, because we're truly a drop-in replacement for Docker now that we have API compatibility. Docker Compose support is the huge thing for 3.0, and going forward I'm looking forward to going beyond that, back into a more innovative mode.
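For anyone who wants to try the Compose support being described, here is a minimal sketch of the setup. The unit and socket paths are the Fedora/RHEL defaults, so adjust for your distro; note that Compose currently needs rootful Podman.

```shell
# Start Podman's Docker-compatible, socket-activated API service (rootful).
sudo systemctl enable --now podman.socket

# Point docker-compose (or any other Docker API client) at Podman's socket:
export DOCKER_HOST=unix:///run/podman/podman.sock
docker-compose up -d
```

Any tool that speaks the Docker REST API over a Unix socket should work the same way once `DOCKER_HOST` points at Podman.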
This past year, a lot of work went into fulfilling the promise of compatibility with Docker at the API level; before that we had it at the CLI level. Now I'm really looking forward to pushing things forward. Giuseppe is working on some incredible stuff around seccomp at the moment to make it easier, so there's a lot of cool stuff happening.

Yeah, for those who didn't attend last year: my talk was about container security, and how we've basically arrived at a middle point. If we make security too tight, everything starts breaking and people start turning it off; if we make it too loose, what's the purpose? Docker got to a certain level of security, and I believe Podman has gotten to a higher level, especially with rootless containers, but basically we're all using the same seccomp rules and the same list of default capabilities. The question I was bringing up last year was: how can we get better at this? One of the ideas I had was to start putting more information into the image, so that container engines like Docker and Podman could say, "for this image I can run with this tighter security." Over time we've developed some ideas and some tools, but as Valentin said, we got wrapped up in the day-to-day work of getting Podman 3.0 out; we felt that doing Compose support was more important.
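Today that kind of tightening is something the operator opts into per container. The flags below are real Podman options; the image and command are arbitrary, and the idea under discussion is that the image itself could some day carry hints like these so the engine applies them automatically:

```shell
# Run with everything dropped except the one capability the workload needs,
# no privilege escalation, and a read-only root file system.
podman run --rm \
  --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --read-only \
  registry.fedoraproject.org/fedora:33 echo ok
```

If the image could declare "I only need NET_BIND_SERVICE," engines could default to this instead of the one-size-fits-all capability list.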
So going forward: we did add additional security over the last year. With Podman 3.0 we're actually using fewer capabilities than we used to; we've dropped a couple of capabilities from our default list, and we've done other things in security. Hopefully over the next year we'll increase that even more. There are some really cool features coming in the operating system this year to help out rootless. Rootless containers are what I see as the future of containers, because you instantly have a layer that protects you from root, just by having users launch containers from their home directory. But there are lots of things that don't work quite right in that environment, or don't perform as well, so I'm constantly working with kernel engineers inside and outside of Red Hat to ask: can we do better? Can we run native overlay file systems without root? Can we take advantage of NFS home directories to run containers? What can we enhance? That's what we're going after.

So we have a few questions here. The first one: since you mentioned Compose, where are we now on feature parity between Docker and Podman, for regular developers who are used to working with Docker and want to migrate to Podman?

We would say we're pretty much 100% there now. There are certain features of Docker that we will never have: there's Docker Swarm, and there won't be a Podman Swarm, because we believe Kubernetes is the future. There are also some deprecated features of Docker that we didn't implement, like the --link flag for linking two containers together, but Docker has moved away from supporting that as well.
Beyond that, it's difficult to say we're 100% compatible with Docker for all use cases. What we would say is that for rootful containers we should be 100% compatible. But often people are comparing rootful Docker against rootless Podman, and there are certain things that simply aren't allowed by the operating system in rootless mode, so we have to use compromises: the way we set up networking in rootless mode, or, as I mentioned earlier, fuse-overlayfs, a user-space file system that isn't necessarily going to perform as well as a kernel-based file system. But for the most part, if you open a bug that shows a difference between us and Docker, and we believe the behavior isn't a bug in Docker itself, we will fix it. That's basically how we're operating at this point.

Right. If I read the counter correctly, we're at 718 people at the moment. That's fantastic. You're looking at the wrong counter; we're at 96, and 720 is the full conference. Still fantastic: 100 people coming to this is a lot of people. Yeah, I was surprised.

All right, next question: "Hi container folks, I've got a question. As a personal outside-the-box challenge, I currently try to participate in open source projects without contributing code. Maybe you can discuss what things your community could use two more hands or another brain for that don't require pushing code."

That's a good one. Tom Sweeney is here and could answer this as well, but basically we're always looking for documentation. The funny thing is, when people report issues in any of our upstream projects, I always ask them to contribute, especially if it's bugs.
This might be a US thing, but I call it the Tom Sawyer approach: we're always trying to get other people to paint the fence for us. If you don't know Tom Sawyer, look it up. But also, just opening an issue, telling us what you don't like, or mentioning a feature you'd like to see in the tool, so we can discuss it and look at it. There's lots of documentation, lots of man pages. We also ask people to blog about the way they use certain technologies, or to go to their community meetings and say, "Hey, have you tried Podman instead of Docker?" Anything like that is very helpful. We're a bunch of engineers, and engineers tend not to like writing documentation, so people coming in and helping us document things better is always appreciated.

Also, help test. Especially for 3.0: before we cut the release last week, we were in release-candidate mode for a couple of weeks. That's where we close off the features going in and stabilize; it's closer to a beta than an alpha. During that period we can use all the helping hands we can get for testing, especially on systems we're not using ourselves. Most developers are on Fedora, and some on CentOS, so Ubuntu or Debian is less tested, purely because fewer developers use it. Helping to test release candidates would be hugely appreciated.

I'd add one to Valentin's list: finding new use cases. I'll give an example. When we came up with UBI Micro, we were thinking about how to add packages to UBI Micro when there's no RPM or YUM inside it.
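One technique that came out of that discussion, sketched here under the assumption that buildah and dnf are available on the host (the image name is real; treat the rest as illustrative):

```shell
# ubi-micro has no package manager inside it, so mount its rootfs with
# buildah and install from the host's dnf instead.
ctr=$(buildah from registry.access.redhat.com/ubi8/ubi-micro)
mnt=$(buildah mount "$ctr")

# Install into the mounted rootfs rather than the running host:
dnf install -y --installroot "$mnt" --releasever 8 \
    --setopt=install_weak_deps=false curl
dnf clean all --installroot "$mnt"

buildah umount "$ctr"
buildah commit "$ctr" ubi-micro-with-curl
```

Rootless users would wrap the mount steps in `buildah unshare`; the point is that the package manager never has to exist inside the minimal image.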
So Dan and I started to chat about whether we could do overlay mounts and image mounts; there's all this tricky stuff you could do. That's a place where an end user could easily hunt down a new use case, something we hadn't thought of, and propose those kinds of ideas. One that came up recently: somebody proposed a curl-minimal image, which sparked me to think that if we had a curl image, maybe a curl-minimal image would actually be the best way to do it. Just asking those questions around use cases, and finding ones that are common enough that a bunch of people would use them, is very useful, I think.

So what are people's biggest pain points with containers at this point? What are you seeing as deficiencies, things you would like to see done better? Well, if people have more questions, go for it.

The next one: "The CKI team is using Docker and Moby on Fedora 33 and CentOS as a backend for GitLab Runner. Now, with the kernel moving to GitLab.com, they'd be interested to know whether using Podman as a backend for GitLab Runner, via the API compatibility layer, is within scope and reach."

I think Brent will be happy to talk about that. Yeah, Brent can take that one. Can you guys hear me okay? Can you repeat that one more time? We're having trouble getting logged in on anything but Chrome, so a bunch of us are having to switch over. The question is whether, with the Docker compatibility layer, it's possible to run GitLab Runner backed by Podman. Yes. In fact, we have several people doing that who are pretty handy at it. They're on IRC on Freenode with us, and we can identify them if they so choose. And we've been doing some work.
We've also been helping the GitLab Runner folks proper to get this working on their end, and I believe that as far as runner jobs go, it's working quite well. People seem to get it working, be happy, and wander off, which is kind of what we like. There was also a blog post by one of the gentlemen who got it to work that might be of help, so you could consider taking a peek at that.

Yeah, and like Brent mentioned, but to be a little more explicit: Red Hat is working with GitLab, the company, to officially document and test this on their end as well. So the official product from GitLab will also eventually support Podman; they'll have docs for it on their end, and it'll be tested in their CI/CD, etc. They're still working on that, but it's definitely on the roadmap and we're working on it together, so it's pretty cool. In case you're interested, I just pasted a link to the blog post Brent mentioned. Thanks, Valentin.

Something else we plan on spending serious effort on, and could probably use some help from the community with, is getting better Mac support. With the advent of the remote API, our goal is to give Mac and Windows users the best experience possible. The fundamental thing to realize about containers is that when people say "containers," they really tend to mean Linux containers, and to run a Linux container you need a Linux kernel running. So even people who run things like Docker and Podman on a Mac are actually talking to an instance inside a VM that's running a version of Linux.
What we plan to work on over the next six months is making that first initial experience of running Podman on a Mac as seamless and easy as possible for the user: helping it download a VM and getting the entire system set up for you. Similarly, the community has been working on Podman on top of WSL, or WSL 2 (I never know which one to call it at this point). We've been following along and trying to help as much as possible, but anything we can get the community to work on for other platforms, and to feed back on how best to move forward, would be great.

Dan, if I may, I wanted to follow up on your Compose answer, because I was struggling with the browser at the time: did you mention that Compose is rootful only? One point we should drive home is that the implementation of Compose requires it to be used with rootful Podman. That's somewhat dictated by how docker-compose itself works, and somewhat pushed by the fact that our backend networking for rootless is not the same as rootful. It's an item we're working on, but it's important to know that Compose needs rootful Podman at this point. That's a great point, but hopefully that changes within the next six months. I think that's more than feasible.

Brent and Dan, a popular one in the thread now is Mac support; a lot of people are saying "Mac++," so maybe you could talk about future Mac support. I think Dan hinted at it; I'm going to start working on it in about a week and a half.
Our vision is to provide the user experience Dan has in his mind, which is: you click this and it just works, including making it work natively on the Mac with the VM running in the background, sort of similar to, is it Docker Machine they call it now? Docker Desktop. Yeah. So we're working towards that goal, and that work will start in the next week and a half, in terms of figuring it out, laying it out, and coming up with an agreeable plan for how that might work for the team; then we'll begin implementing it.

The interesting thing there is that we're going to do this as an open source project. We might have a default operating system that we download with the tools we provide, but our goal is also to let people experiment, maybe use other types of virtual machines. We have to figure out how to make that easy, so that we're not hard-coding people into our favorite distribution, but allowing them to customize.

Giuseppe is on the line here, and he tends to be our crazy guy who goes off and really experiments deep in the operating system, so I want him to talk a little bit about some of the advanced features he's been looking at, maybe around pulling.

Yeah, it's a hot topic in container tools: lazy-pulling an image. containerd has this feature already, through CRFS. I was looking at whether we can use something similar, not just for the lazy-pull use case, but also to improve the regular pull operation. Because what happens now is that if you change a single file in a tarball, you end up pulling the full tarball again.
So I was looking for a way to improve this part: basically a way for clients to pull just what's changed in the remote image. If you already have most of the files locally, there's no point in re-pulling all of them; just pull what's changed. This is something I'm working on. The constraint is that it must work with the registries we have now, without changing the format of remote images in incompatible ways. That's the biggest constraint I'm looking at.

Just to dig deeper into that: the way container images work right now, each layer is just a tarball with a whole bunch of content, plus a JSON file that describes it. When you pull down an image, you see these different layers coming down, and each one is an individual tarball. That's just the way it was designed when Docker first did it, back around 2013. So there have been efforts and thoughts about changing the way that works to make it more efficient. As Giuseppe said, there are two things to think about here. First, if I have a big image, say 100 megabytes, and all I'm going to run is /bin/ls, maybe I should just pull down /bin/ls and the libraries it needs, and execute that. That's one way of doing it. The other case: I have an image I've already pulled down, with 100 megabytes of stuff in it, and I change one man page in the image and push a new image up. Because the layer is a tarball, we have to pull down the entire tarball to the host, and it has all the content I already had on the host, plus the small change.
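To make the tarball problem concrete, here is a small self-contained sketch (all file names invented, no registry involved): change one file in a "layer" and the whole tarball must be re-shipped, even though a digest comparison finds exactly one changed file.

```shell
#!/bin/sh
set -eu
work=$(mktemp -d)
mkdir -p "$work/rootfs/usr/share/man" "$work/rootfs/usr/lib"
dd if=/dev/zero of="$work/rootfs/usr/lib/big.so" bs=1024 count=512 2>/dev/null
echo "v1" > "$work/rootfs/usr/share/man/app.1"
tar -C "$work/rootfs" -cf "$work/layer-v1.tar" .
(cd "$work/rootfs" && find . -type f -exec sha256sum {} + | sort) > "$work/v1.sums"

echo "v2" > "$work/rootfs/usr/share/man/app.1"   # change a single man page
tar -C "$work/rootfs" -cf "$work/layer-v2.tar" .
(cd "$work/rootfs" && find . -type f -exec sha256sum {} + | sort) > "$work/v2.sums"

# A classic client re-downloads the full new tarball:
v2_bytes=$(wc -c < "$work/layer-v2.tar")
# A content-aware client would only need the files whose digests changed:
changed=$(comm -13 "$work/v1.sums" "$work/v2.sums" | awk '{print $NF}')
echo "layer-v2.tar is $v2_bytes bytes; changed files: $changed"
rm -rf "$work"
```

The approach being described keeps the registry protocol as-is and lets a smarter client skip the blobs (here, `big.so`) it already has.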
So there have been ideas about using something other than tar, or some other format. The problem is, we have a huge installed base of container registries out there that don't know about any new format, and a huge installed base of clients. If Podman comes out and says, "okay, we're smarter than everybody else, we're going to start pushing images that no one else in the world understands," people are going to say, "no: I do a podman build and it doesn't work with my existing Docker or Kubernetes environment; you're breaking compatibility." So what Giuseppe and the other upstream developers are looking at is: how can we continue to support all the registries that are out there? How can we make minor changes to the format in such a way that existing registries and existing tools can continue to pull it, while tools that understand more about the formatting inside the tarball can take advantage of it? That's the world we have to work in.

Yeah, the way I'm looking at it, there is one small breaking change, because it uses a different compression format, but that's really a minimal change for existing content, and I've even opened pull requests for Moby to support it. Yeah, and if we're going to change a format, we have to get Docker, Moby, containerd, CRI-O, all the stuff that Kubernetes uses, the entire ecosystem of containers, to be able to support it. That's why we have to work upstream for those types of issues.

Another cool feature we've been working on recently is a new mount flag for overlay in the kernel: volatile. It basically allows us to optimize some workloads.
For example, building images. If you're familiar with how RPM and dpkg work: when you install a package, they basically call fsync on each file. That slows down the installation of packages significantly. They're designed to run on the host, not in a container, and in a container things are different: if a build fails in a container, we just throw away the entire container and start again; there's no need to make sure the files have really been written to storage. Yeah, this got into the Linux kernel a release or two ago, and we're starting to implement it in our tools.

Let me dig into that a little deeper and explain why this is important. If you're running an operating system and you're creating content, and the kernel crashes, you basically want your data to have been stored to disk. So what the kernel does regularly is an fsync, which takes the file you thought was completely written to disk (it was actually only written to memory) and makes sure the data the kernel is holding is physically written to disk. That obviously slows down the entire operation: when you're doing a dnf or yum install, and probably apt and everything else, the tools want to make sure the content actually makes it to disk when they're done. If you're inside a container doing all that, that's all well and good, except that when you're building a container image, we don't actually commit the thing you want, the image, until all the files are done, in that last commit stage. So all the syncing going on while you're installing a package is just slowing down the kernel, which is constantly being told, "okay, I've got to write this to disk."
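Here is a self-contained sketch of the overhead being described: writing the same files with and without a per-file sync (using the coreutils `sync FILE` command as a stand-in for the fsync a package manager issues). Inside a throwaway build container, the synced variant buys you nothing.

```shell
#!/bin/sh
set -eu
work=$(mktemp -d)
n=200

t0=$(date +%s%N)
i=0
while [ "$i" -lt "$n" ]; do echo data > "$work/plain.$i"; i=$((i+1)); done
t1=$(date +%s%N)

i=0
while [ "$i" -lt "$n" ]; do
    echo data > "$work/synced.$i"
    sync "$work/synced.$i"          # force the data out, like rpm/dpkg do
    i=$((i+1))
done
t2=$(date +%s%N)

count=$(ls "$work" | wc -l)
echo "without per-file sync: $(( (t1 - t0) / 1000000 )) ms"
echo "with per-file sync:    $(( (t2 - t1) / 1000000 )) ms for $count files"
rm -rf "$work"
```

The volatile mount option applies the same reasoning at the file-system level: the overlay mount is allowed to skip syncing entirely, because the upper layer is disposable.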
If the operating system crashed during a podman build, then when it came back up, that partial install would be totally useless to you. So why are we spending so much time syncing to disk? Another use case where this matters is podman run --rm: I'm running a container, and I want it destroyed when I'm done running it. If the system crashes while that container is running, I didn't care about syncing, because as far as I'm concerned that container is useless to me; it should be removed from the system. The last use case is Kubernetes. Kubernetes runs containers, and any of the containers it creates during its run are destroyed if the machine crashes; when the machine crashes and reboots, you don't care about them. So we're paying a heavy overhead for three or four use cases in the container world where we don't really need it. What Giuseppe has done is go to the overlay people and the kernel people and say: why don't we have a special option that turns off all the syncing, so we can take advantage of the way containers run? It's not in Podman 3.0; it'll show up later this year. The pull request actually just merged into Buildah.

Speaking of Buildah, there's a question from Dimitri that maybe Tom or Dan would like to field: "I'm interested in whether Buildah will be able to build container images in OpenShift with the restricted SCC."

Okay, I should probably take that one. Restricted SCC, as far as OpenShift is concerned, means you have one UID inside the container, and you're running as non-root in that container. The standard OpenShift environment says you have one UID.
So when I do a build, if I'm pulling down an image, it's going to create root-owned files, and it's also going to create files owned by, say, the apache user. When I pull that content down to disk, I have to write out content as the root of my container, and then suddenly there's a file that has to be written out with a different UID. Fundamentally, when I'm doing builds, I need more than one UID inside my container. So currently, when we run builds in OpenShift, if you're using source-to-image or any of those tools, it's not running with the restricted SCC; it's running with a different Kubernetes YAML file that defines how it can run, because it fundamentally needs multiple UIDs. Now, almost everybody who runs Kubernetes with containers that need multiple UIDs runs them as root: they start with UID 0 being UID 0, and so on. That's how most people are doing builds at this point. Secondly, I know there have been some efforts, and Giuseppe can talk about this in a minute, to hack the file system to let you chown to a different UID while faking it out so the file is still stored as the same UID. But even in a Dockerfile you have the ability to say "I want to run this process as a user," so just having a USER line inside a Dockerfile forces us to have multiple users inside a build. Bottom line: in order to get multiple users, we can't use the most locked-down, tightest mode of security in OpenShift. But one of the things that Kubernetes has not taken advantage of up to this point is user namespaces.
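To make the multiple-UID point concrete, here is a tiny Containerfile written to scratch space (the image and package are real; the path is arbitrary). Installing httpd creates files owned by the apache user (UID 48 on RHEL), and USER switches the build to a second UID:

```shell
cat > /tmp/Containerfile.demo <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi
# httpd lays down files owned by the apache user (UID 48 on RHEL), so the
# builder must be able to create files as a second, non-root UID.
RUN dnf -y install httpd
# USER makes later build steps, and the final image, run as that UID.
USER apache
EOF
grep '^USER' /tmp/Containerfile.demo
```

Even this four-line build touches at least two UIDs, which is exactly what a one-UID restricted SCC forbids without user namespaces.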
When you're running rootless Podman and rootless Buildah, you're taking advantage of the user namespace: you're not running any processes as root, but inside the user namespace you're able to do root-like things. What we've gotten into upstream CRI-O at this point is a way to specify an attribute telling CRI-O "I want to run this pod inside a specific user namespace," and CRI-O will go out and allocate the user namespace for you. So what's going to happen in the future with source-to-image builds inside OpenShift is that they'll specify a different user namespace for each build, so all the builds will be isolated from each other based on UIDs, and none of them will actually have real root inside the containers. Right now we're doing that as an extended attribute that you can send to CRI-O, basically as a sort of side channel, because Kubernetes doesn't understand it. We're working with upstream Kubernetes to get this in as a full-fledged Kubernetes pod feature, and what we're hoping is that sometime in the future you'll be able to specify, in the Kubernetes YAML file, that you want this container to have multiple UIDs but run in a different user namespace. That way you'd still have a locked-down container, in the sense that it only has a few UIDs and none of them is root, and still be able to do things like builds. There are lots of customer use cases that come to us needing multiple UIDs inside a container, and the future, to me, is user namespaces for that type of thing. That's the best we can do.

Any other questions? All right, let's talk about other stuff then. I'll bring in Giuseppe again, and we can talk about support for NFS.
Yeah, the support for NFS has the same issue Dan was talking about: the NFS server doesn't know about user namespaces. Whenever a container is running inside a user namespace, even if the unprivileged user has full capabilities inside that namespace, the NFS server doesn't know about it. So if you try to chown a file to a different user that's present in the user namespace, the NFS server refuses the request, because all it sees is a request to change ownership of a file to a user different from the user that made the original request. A local file system allows that, because the local kernel understands the user namespace and can check whether the user has the capabilities to do it. So what we've done is trick the NFS server by storing this information in an extended attribute, so that the actual file ownership is never changed: all the files remain owned by the same unprivileged user that created the user namespace. This is handled by a FUSE file system, the same fuse-overlayfs we're already using for rootless containers. Whenever a container tries to change ownership of a file, fuse-overlayfs stores the information in an extended attribute, so it can keep the original file ownership on the NFS server. And the other way around: whenever a file is opened, fuse-overlayfs reads the real owner, the UID and GID, from the extended attribute. So this will enable running user namespaces with NFS as a backend. If you access these files outside of the fuse-overlayfs mount, you'll see they're all owned by the same user, and the mode is 0755 so everyone can read them. In other words, if you don't go through fuse-overlayfs, you'll see the wrong ownership on the files.
Fundamentally, we have the code to do this at this point, but it required a change to the Linux kernel. Until December, NFS in the kernel did not support standard extended attributes: we could write an extended attribute, but it would not be recorded, because the NFS server didn't understand it. That feature just got into the 5.11 kernel, two months ago. Because of that, we'll have this feature in Fedora and RHEL very quickly, but we have to work with the upstream vendors, the NetApps of the world, to get it out to them so they can support this environment, because you have to deal with the client/server nature of NFS.

On the community side: we've been having a community meeting on the first Tuesday of every month, generally at 11am Eastern time. It's open for topics from the community; mostly they've been from Red Hat, but for the upcoming one we've got a couple of folks who are not Red Hat oriented coming in, and I'm very excited to see that. If you're looking for info on the meetings, it's on podman.io: look for the Community button on the left side, and it's underneath that. We'd love to have you at any time, and it's free of cost. We run it via BlueJeans, similar to this conference, and it's open for questions, generally at the end. It's the Podman community meeting, but we're happy to take any topic or question about Skopeo, Buildah, container images, storage, or anything else inside the containers realm. We'd be happy to have you join us anytime.

All right, Chris has got a question that I think Brent's going to take.
He's asking: do you see Podman as a library or back end, similar to containerd, that other tools can consume to add container functionality — both APIs, the libpod Go library and the REST API with the Docker socket? Basically, he's asking, can we use the Podman APIs today. We generally tell people not to use the code inside of libpod proper directly. We have quite a bit of API calls there, and what we really need is an upper-level series of calls that can be consumed by folks outside this team to make it easier for them, and we don't have that today. That way, when we have this upper-level API and we change the lower-level stuff, consumers don't suffer from signature changes or whatever else. So as far as going directly into libpod, we don't advise that yet, but we do have a work item to create an upper-level library, which I assume Matt Heon will implement when the priorities line up. That said, we do have a good series of Go bindings that consume the REST API. That's what podman remote uses, so they're well tested and well maintained in that regard. I will admit they're a little heavier weight than we would like, and we're going to work to skim off that weight here in Podman 3.x; we should be able to begin to address that issue. And then we are working on a set of Python bindings that also consume the REST API. They are called podman-py, similar to docker-py; both docker-py and podman-py should work against the REST API today. The podman-py bindings are still a work in progress and have not had a version released yet. On the topic of podman-py, we would be happy for contributor help; if folks want to come in and help knock those bindings out, we would be appreciative. Brent, can you elaborate a bit on what you meant with the bindings being heavy?
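For a flavor of what talking to that REST API looks like without any bindings at all, here is a stdlib-only Python sketch that sends a request to a versioned libpod endpoint over the Podman socket. The socket path and API version used below are assumptions for illustration (the rootless socket typically lives under `$XDG_RUNTIME_DIR/podman/podman.sock`); this is a sketch of the wire protocol, not the supported Go or Python bindings.

```python
import http.client
import json
import socket


def libpod_path(endpoint, version="3.0.0"):
    """Build a versioned libpod endpoint path, e.g. /v3.0.0/libpod/images/json."""
    return f"/v{version}/libpod/{endpoint.lstrip('/')}"


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks over a Unix socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")  # host is ignored; we connect to the socket
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


def list_images(socket_path="/run/podman/podman.sock"):
    # Equivalent in spirit to `podman images` via the service (assumed socket path).
    conn = UnixHTTPConnection(socket_path)
    conn.request("GET", libpod_path("images/json"))
    resp = conn.getresponse()
    return json.loads(resp.read())


if __name__ == "__main__":
    try:
        print(list_images())
    except OSError as err:  # no podman service running on this machine
        print("no Podman socket available:", err)
```

The Go bindings and podman-py wrap exactly this kind of request/response handling, which is why they stay compatible with anything the REST API serves.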
I just want to make sure the audience doesn't misinterpret that as the bindings being hard to use, because I think they're pretty simple to use. They're simple to use, and there's no performance issue when I say heavy, or anything like that. But for those that have programmed in Go, you know that there are things called dependencies and imports, and right now our bindings end up importing some fairly heavyweight — large maybe is the right term — dependencies that we really don't need. So we need to work to strip those off, and that will result in less source code being consumed and compiled, and of course then the binary size will go down. Okay, since we haven't got another question, I'm going to throw this one out. Valentin, you've added this feature called short-name expansion, I guess, or something; can you talk a little bit about that? People will be seeing it with 3.0. And maybe somebody on the team can look up the blog post and paste a link to it, if you want to catch up — Brent is doing it. So when you do a podman pull fedora — or let's start differently. When you do a docker pull fedora, "fedora" doesn't point to a registry, right? That's what we call a short name. Docker resolves it in a very specific way, namely it always resolves to docker.io. This is understandable: well, they own Docker Hub, and it makes sense to resolve to their registry. And at the beginning of Docker there was pretty much only Docker Hub, so also for historical reasons it's understandable why Docker resolves to Docker Hub. But today, and for many years actually, there are more public container registries. There's Quay, and pretty much every large software vendor has a registry: Red Hat, Microsoft, Google, Amazon. There are also lots of on-premise registries running at companies.
So Podman and the sibling projects, and actually also the Docker daemon from Project Atomic, implemented a different way, which allows for resolving short image names to more registries than just Docker Hub. There's a config file called registries.conf that all these tools read — /etc/containers/registries.conf is the system-wide one — where you can specify what we call the unqualified-search registries. So for all versions prior to Podman 3.0 and Buildah 1.19, what the tools were doing when you were pulling a short name is they would go through the list of the specified registries. Let's assume the list is the Fedora registry, the Red Hat registry, and then Docker Hub: they would go through the registries in the specified order and try to pull the image, and the first successful pull wins. And well, this worked for a long time, until last year we were notified by the security team at Red Hat that there is a certain risk, namely when people are able to squat on or take over repositories on a registry. So if you want to pull an image, for instance fedora, and you have the list specified in a specific order, an attacker could take ownership of an image on a registry that is mentioned earlier in the list, and then bad things can happen. So somehow we were between a rock and a hard place, because we love this feature, right? We don't want to lock in our users. And I think we're all to blame; humans by nature are lazy. I even keep forgetting the registry of Red Hat, because I think it's access dot registry dot... I don't know, you know, I keep forgetting it, because I usually don't have to worry about it — it's in this registries list, so I can just do a podman pull ubi8 and I'll be happy. But we also had to make it secure in some way. And there are two new things to it, actually.
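To make the setup concrete, a minimal registries.conf might look like the sketch below. The registries listed are just examples, not a recommendation; the order matters, since it is the search order described above.

```toml
# /etc/containers/registries.conf -- illustrative values, adjust to your environment.

# Registries consulted (in order) when an image name has no registry part.
unqualified-search-registries = [
    "registry.fedoraproject.org",
    "registry.access.redhat.com",
    "quay.io",
    "docker.io",
]

# Podman 3.0 behavior for ambiguous short names:
#   "enforcing"  - prompt on an interactive terminal, fail otherwise
#   "permissive" - prompt on a terminal, otherwise fall back to searching in order
#   "disabled"   - old behavior: try each registry in order, first pull wins
short-name-mode = "enforcing"
```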
The first thing that you'll notice when you upgrade to Podman 3.0 — and also Buildah, for what it's worth — is that they will show a prompt when you pull an image by short name. You then have to choose from the options that the unqualified-search registries offer. So in this case you can choose: do I want to pull image foo from the Fedora registry, from Docker Hub, from the Red Hat registry, from the CentOS registry? That's at least in the Fedora universe; in the SUSE/openSUSE universe they will point to their registry, and I think Debian points to their own registry as well. So you see, there is a certain demand to not only use Docker Hub. This is the first thing we did, and the intention behind it is to make explicit what the user wants. We don't want to hide it; we want to make it more obvious to users that they're using a short name, and make them think about it. The most secure way is to always pull an image by digest — a fully qualified image reference plus the digest — because then it really needs to match what you're pulling. So this is the first thing that we implemented. The second thing is we came up with an idea to tackle the second part, the ambiguity of short names, which is introduced by the list of unqualified-search registries. How we did it: well, we're all old Unix folks, so we're pretty much aliasing it. Imagine it like a bash alias, where you have a left-hand side and a right-hand side. If you go on github.com/containers/shortnames, you will find a project. Okay, I have to speak quicker — Jen is reminding us that we have three minutes left. So we now ship a configuration file, which is also in the registries.conf format. I will give a talk about registries.conf and about what I just mentioned, also on Saturday, afternoon my time, which is this morning if you're in the US.
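The bash-alias analogy maps directly onto the file format: a short name on the left, a fully qualified image reference on the right. A small sketch in that style (the entries here are illustrative; the maintained list lives in the github.com/containers/shortnames project):

```toml
# An aliases table in registries.conf format. Short name on the left,
# fully qualified reference on the right -- no searching, no ambiguity.
[aliases]
"fedora" = "registry.fedoraproject.org/fedora"
"debian" = "docker.io/library/debian"
"ubi8"   = "registry.access.redhat.com/ubi8"
```

With an alias in place, `podman pull fedora` resolves to exactly one registry instead of walking the search list.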
And this is now shipped, or about to be shipped, in Fedora and in RHEL, and other distros are contributing to it as well, so this is really a community effort. If you then, for instance, do a podman pull debian, it will reach out directly to the Debian images; if you do a podman pull fedora, it will go directly to the Fedora registry. There's no searching anymore, there's no ambiguity, and we know exactly what we're pulling and from where. Personally, I hate short names, because they cause me a lot of headache: I help maintain the containers/image library, and I guess the mastermind there and I share the headache and the pain a lot, because short names really introduce a lot of ambiguity. But I'm happy about the feedback we've received so far; there's also lots of interest from Microsoft, and that is pretty cool. Yeah, just a couple of other things. One, once you've chosen an image and pulled it down, we will record your selection. So if you say, you know, postgres comes from docker.io and you run it, you won't have to choose again — we basically add an alias for that. The other thing is that companies that want to use Podman can specify their own aliases. For instance, Red Hat, when they ship RHEL, is going to have a list of all the images that Red Hat produces, and they'll have their own aliases file that ships as part of the Red Hat distribution. But if you're a third party — if you're a huge company and you want to have your images and guarantee that they only come from your registry — then you can have your own aliases. So we're running out of time here. I just wanted to say we're also having Container Plumbing Days coming up; that's going to be, I think, March 8th and 9th, in a few weeks, and it's going to be similar to DevConf, except totally focused.
It's going to be a totally free conference, similar to this one, but totally concentrated on the low-level stuff for containers: everything underneath Kubernetes — none of the CNCF stuff or any of that, but all fairly low-level stuff. It's going to be a two-day conference that just talks about plumbing. Giuseppe is hopefully going to give some talks on some of the crazy stuff he's working on, and other people on this team too, but also all the other communities out there; lots of other companies are going to be participating in it. So hope to see you there: containerplumbing.org, I think it is, right? Someone drop a link. And we've decided to sponsor everybody for it, so everybody can go for free.