Hi, good morning — or technically afternoon — everyone. So I'm going to talk about improving container image registry availability with kube-image-keeper, also known as kuik. That's a really long title. By the way, as you have probably noticed on the Sched website, you can access the slides; but just in case you want them, there is a QR code in the corner. The QR code is also going to be visible during the rest of the talk, so if you see interesting links in my slides and want to follow them, or copy-paste commands or whatever, you're absolutely welcome to.

First, before I introduce myself, I have a few questions for you. Who here is using Kubernetes? Okay. I'm sorry. Who here ever had registry issues? Okay, well, I understand why you're here. Who is here mostly because they're interested in Kubebuilder? Almost no one, okay. And just out of curiosity, who here is self-employed or a freelancer, like me, and is also speaking at conferences as a freelancer? Okay. For folks who are freelance and speaking at conferences, it would be interesting to discuss afterwards, because I'm still trying to figure out how to put everything together. And by the way, I would like to give a huge thanks to the Linux Foundation and to Enix, a French company, who both helped me a lot to be here. Back when I was working at Docker, years ago, Docker was basically paying me to go and speak at conferences; now that I'm a freelancer, well, I'm technically paying myself to speak at conferences, but it leaves my right pocket to go into my left pocket, so that doesn't quite work. Anyway.

So today we're going to talk about this thing called kube-image-keeper, or kuik. We tried to pronounce it with a soft u, but for English speakers, that's just weird.
We settled on "quick". So I'm going to explain how it works, why we even wrote it, compare it to other options, and then I'm going to talk a bit about Kubebuilder as well. And there's going to be a ton of demos, so I think one thing I should do is eat a grape. Back when I was working at Docker, there was a Spanish team, with Borja and Fernando, and what they did before live demos is that they would eat a grape. I don't know if it was a Spanish thing or completely unrelated, but I think it can't hurt. So, there — for the demo gods, just in case.

So first: why kuik? Why kube-image-keeper? Most of you raised your hands when I asked who's using Kubernetes, so I'm sure you already know the problem very well, but just in case: Kubernetes clusters run containers — that's their whole point — and containers use container images. Before you can run a container, you need to download the container image from what we call a registry. There are registries like the Docker Hub, there is Quay, you can run your own registry, etc. And a registry is just an HTTP server serving blobs. That's it.

But sometimes registries don't work the way they were meant to work. For instance, back in April, we had a super embarrassing outage — I think it was in Google Cloud europe-west9, which is a fancy way to say Paris. There was some fire and some flooding happening, and long story short, registry.k8s.io wasn't available for hours — I think almost an entire day — and not just for customers of Google Cloud, but for pretty much everyone in Western Europe. So that was kind of a big deal.

Sometimes images can also be deleted from a registry. You might wonder why that would happen — why would someone go and delete an image?
Well, images can be fairly big. It's pretty common to have images of multiple gigabytes, and if you go DevOps and ship five times a day, maybe you're shipping a huge image five times a day, and it just adds up on your registry. Unless you have infinite disk storage, or infinite money to pay for S3 and whatnot, at some point you need to clean up old images; and if you are unlucky, you're going to remove an old image that is still in use somewhere. So that's a thing that can happen as well.

Also, some registries have pull quotas. For instance the Docker Hub — which is probably the most widely used registry, at least for the public base images many of us are using — will let you pull, I think, 100 images every six hours per IP address, more or less. Personally, I almost never run into that, because I'm just a small, humble freelancer hobbyist pulling containers; but some people use containers in bigger ways than I do, and they run into it pretty often.

So if you run into any of these issues once in a while, then maybe kube-image-keeper is going to make your container life a little bit nicer. Also, it's pretty difficult to monitor for these specific conditions. As a kind of old-school sysadmin, my first thought is: well, let's monitor that stuff, so that we know when it's down. But you would need to dynamically add monitoring for every single registry that you use, and check whether the images get deleted, and this quickly gets out of hand. So we're not really doing that.

Also, if like me you are an old-school sysadmin, you might think: well, 25 years ago, when I was installing Debian, I had a Debian mirror. So why can't we have a registry mirror?
We kind of can, but because of early technical decisions in the whole architecture and protocol around registries, it's not easy to mirror registries. We could talk about that, but this is not the place and not the time. I'll talk about alternatives, including mirroring, a little bit later, when I compare kuik to other options. But long story short: we need something a little bit more advanced than that.

To give you the classic example that was almost the motivation for writing kube-image-keeper: imagine you have a nice Kubernetes cluster, maybe 20 nodes, everything is fine. Then you have a peak in traffic, so autoscaling kicks in, does its job, and adds more pods to the cluster; at some point you're out of capacity, so you add more nodes. But when these nodes come up, they are completely empty: they don't have images. So you need to pull these images, and if the images are not available, you are in this very sad scenario where you have a surge in traffic, you have the capacity, you are paying for extra nodes, but the nodes don't do anything, because they don't have access to the images.
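As an aside: since a registry is just an HTTP server, the two failure modes above — a registry being down, and Docker Hub pull quotas — can at least be checked by hand. Here's a rough sketch using the Distribution API's `/v2/` endpoint and Docker Hub's documented rate-limit headers; the registry names are just examples, and the network-dependent calls are left commented out:

```shell
#!/bin/sh
# Build the Distribution API "ping" URL for a registry.
# Per the spec, GET /v2/ answers 200 (or 401 for registries
# that require auth) when the registry is up.
v2_url() {
    echo "https://$1/v2/"
}

# Quick availability probe: any HTTP answer (200/401) means "up";
# a connection error or a 5xx means trouble.
check_registry() {
    code=$(curl -s -o /dev/null -w '%{http_code}' "$(v2_url "$1")")
    echo "$1 -> HTTP $code"
}

# Docker Hub pull quota: fetch an anonymous token, then read the
# ratelimit-* headers on a manifest HEAD request (this procedure
# is documented by Docker; header names may evolve).
check_dockerhub_quota() {
    token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
        | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
    curl -sI -H "Authorization: Bearer $token" \
        "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
        | grep -i '^ratelimit'
}

# check_registry registry.k8s.io
# check_registry quay.io
# check_dockerhub_quota
```

Of course, as the talk points out, doing this for every registry and every image you depend on doesn't scale — which is exactly the motivation for kuik.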
All right, we don't want that to happen. So, first demo. I'm going to reconnect to that machine here, and I'm going to push some demo images — that I'm going to delete later, spoiler alert. My demo images are called burrito, taco, pizza, and sushi, which is fairly risky because we're close to lunchtime, so maybe some folks are going to run away during the talk because they're hungry. And I know what it is to be hungry, because yesterday we took a ten-hour bus ride to get here, and we were not allowed to leave the bus during the stops. So it was pretty intense.

Anyway, I have a brand new Kubernetes cluster here, which I deployed literally this morning, and I'm going to install a few things on it. I have a few commands here that I'm going to copy-paste from a text file. This is the command I used to provision my cluster; it uses some custom scripts, but they are in a public GitHub repo, just in case you're interested. Now here, I'm installing cert-manager, and kuik itself. cert-manager is a dependency for kuik, because we need to generate certificates, and instead of doing that in-house, we are outsourcing the job to cert-manager. And now I'm installing kuik itself — I'm using Helm here. Whoops — and of course I decided to skip the Prometheus installation, so that didn't quite work; let me fix that. If you're not familiar with Helm, you might be wondering what these kinds of commands are, what's going on here. Helm is basically a kind of package manager for Kubernetes: when you see me do a helm install or helm upgrade, that's the equivalent of doing an apt-get install or a yum install of some packages, except on a Kubernetes cluster. And yeah, if I don't press Enter, it will never install. Okay, this is going to take a minute.

The first thing we're going to do after kuik is up and running: I'm going to deploy some sushi and burrito etc. on this cluster, so that we have some workloads running; the goal of the demo later will be to remove the images from the registry and see what works and what doesn't. In the meantime, if we go to the Docker Hub, we can see what I have here. Yep: I have sushi, taco, burrito, pizza. I should see the last push... I thought we would see it. Okay, I don't. Right. Okay, Prometheus is installed, and now kuik itself. There we go.

Just FYI, about the extra flags that you see me give to this installation process: I'm telling kuik not to do its magic in the kube-system and default namespaces, so that we can compare with kuik and without kuik; and the other extra flags that I'm putting here are to enable some Prometheus metrics gathering, which I will probably not have the time to demo, but just in case.

Okay, so now, hopefully, we should have... let's see. I have k9s here, which is a nice way to have another view. Okay, everything is not green but blue, but everything is running, so kuik is up and running. Awesome. So now I can get some workloads on my cluster. I'm going to make that a little bit bigger and copy-paste a couple of commands. I'm just creating four deployments: sushi and pizza in the default namespace, and burrito and taco in the cantina namespace. Within a brief moment, we should see — yep: running, running, running, running. Perfect.

Okay, so right now, how does kuik work? Well, if I take a look at the pods in the default namespace, you can see that I'm using the images jpetazzo/demo-sushi and jpetazzo/demo-pizza. Now, if I go to the cantina namespace, which is "protected" by kuik, so to speak, and I look at these pods, you can see that the image references are now localhost:7439/jpetazzo/demo-burrito. So instead of pulling straight from the registry, we are pulling from localhost:7439. What is on localhost:7439?

Well, this is what we have now on the cluster: you can see we have the kuik proxy, the kuik controller, and the kuik registry. That thing on localhost:7439 is the kuik proxy. It runs as a DaemonSet, which means it runs on every node of the cluster, and as the name implies, it's going to be a proxy to access registries. The image references have been automatically rewritten by a mutating webhook which runs in the kuik controller. Each time we create a pod, that webhook goes "wait a minute, let's rewrite the image reference here", and now the image reference points to the local kuik proxy. When you hit the local kuik proxy, it checks whether we have the image in the kuik registry, which is basically just a cache. If we have the image, awesome, let's serve it; if we don't, we fall back to the upstream registry. And in parallel to that, we have the controller, which gets the images from the upstream registry and puts them in the kuik registry. That's how it works. Normally, when I show that, I get various reactions; some folks have been telling me "well, that seems way too simple", in a way. I'm like: yeah, that's kind of the beauty of it.
It's not too complicated. Yes, we have multiple moving parts, and there is some stuff going on, but it's not rocket science, if I may say.

Now, some downsides — to give you the fine print, the catch, right away. When you pull an image the first time, it's going to be pulled twice: once directly from the registry, and another time because we need to put it in the cache. So it puts a little bit of extra traffic on the registry; but you kind of recover that down the line, because if you scale up, or add more nodes, or whatever, you're then pulling from the local registry instead of going upstream.

Another kind of problem: if you use imagePullPolicy: Always — which means, you know, always pull from the registry — it's not going to work quite the same. Yes, it will always pull the image, but instead of pulling it from the upstream registry, it will pull it from the kuik registry, from the cache. So if you rely on tags like latest or prod together with imagePullPolicy: Always, that might be a small issue — although some folks would argue that's not the best-practices thing to do anyway; you should not rely on imagePullPolicy: Always, because reasons. But still, I want to warn you: if you're using that, it might be an issue.

Now, some details. We have a CRD — a custom resource — to represent the cached images. Each time the webhook detects a new image, it creates an entry here, and that's what the controller uses to feed the cache. What we see here is a reference count: that's how kuik tracks whether images are in use or not. Once an image is no longer in use, it's kept for something like 30 days, and eventually it gets removed — once again, because disk space is not infinite, so eventually we need to remove things.
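To make that concrete, a CachedImage object looks roughly like this. This is a sketch: the group/version and field names are my recollection of the kuik documentation and may differ in your version:

```yaml
# A CachedImage as created by the kuik webhook/controller
# (sketch; exact field names are assumptions, check the kuik docs).
apiVersion: kuik.enix.io/v1alpha1
kind: CachedImage
metadata:
  name: docker.io-jpetazzo-demo-burrito
spec:
  # Upstream reference that the kuik controller pulls into the cache.
  sourceImage: docker.io/jpetazzo/demo-burrito
status:
  # Reference count: how many pods currently use this image.
  # When it drops to zero, the ~30-day retention timer starts.
  usedBy:
    count: 2
```

You can list these with `kubectl get cachedimages`, and the reference count is one of the printed columns.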
All right. So now, on the Docker Hub, I'm going to delete all these images, except sushi, because I love sushi. Delete the images... okay. All I have now is sushi, which is fine with me. And now I'm going to scale things up a little bit. First, I'm going to tell Kubernetes: hey, my sushi, burrito, pizza, etc., they need one gig of RAM to run. I'm doing that because later I'm going to scale up, and I want to trigger a cluster scaling event. All right, now that I've done that, let's have a look: is everything still fine? Yes, everything is running. Perfect. Okay, now let's scale things up a little bit, and that's when the problems should start. Scale everything to two replicas, there we go. Okay.

Now let's look at our pods, and yep — I like how k9s uses colors, so I can directly see that we have an ErrImagePull here. What's going on is that for pizza, I now have a new pizza pod on a node that was not running pizza before, so that node can't pull the pizza image, and I get ErrImagePull. Sushi is fine, because I did not remove sushi from the Docker Hub. And where is the cantina? A little bit above. Oh — in the cantina we have an interesting scenario: we have pending pods. Pending means that the scheduler is still making a decision about where to put these pods, and here, basically, the Kubernetes cluster is full: there is no more capacity for my pods. So that's going to trigger a cluster autoscaling event. If I describe this pod, you can see at the bottom — let me zoom in a little — first a message from the scheduler telling me, hey, you don't have enough memory on the cluster; and just after that, the cluster autoscaler basically telling me: I got you, I'm going to add more nodes to your cluster so that we can run that pod. That typically takes a few minutes.

Oh, that was fast. We can see we already have a third node, which was added like 15 seconds ago. It's not ready yet... but now it is; it just takes a bit of time to run all the DaemonSets and whatnot. And if we take a look — let's go back here — it's okay if you can't read the small print; what's interesting is that we have a few orange lines here and there, so apparently we have a few problems. Looking closer, we can see that this container had ErrImagePull for a brief moment, and now it's running.

Okay, so what just happened here? This is another — I'd say small — limitation of kuik. When the new node comes up, it immediately tries to run my burrito and taco images, and it tries to get them from localhost:7439-something — so, from the kuik proxy. But at that point, the kuik proxy is not up and running yet. So there is maybe a 10- or 20-second window when the node tries to start things and it doesn't work, because the proxy isn't up yet. But after those 10 or 20 seconds, the proxy is up; Kubernetes — or rather the kubelet, or technically, I guess, the container engine — retries the pull; this time it works, and everything comes up. What that means, if we're really picky, if you're trying to optimize container startup time as much as possible: you could say kuik is wasting 10 or 20 seconds here. I don't know if I like that. On the other hand, the pulls happen from a local registry, so you might save a little bit of time thanks to that. It depends, but I guess it's good to be aware of that limitation. All right — so now we've seen kuik's basic functioning in action.
Let's compare it to other options, because of course, before writing kuik, we did our homework — or maybe I should say, we tried to see if we really had to write some code, or if maybe someone else had already done it.

One thing you can do is set up a registry pull-through cache. This is a little bit like a good old-fashioned HTTP proxy. There's a small problem: it only works for the Docker Hub. I'm not going to dive into the technical reasons why, but yeah, it only works with the Docker Hub. It also requires a little bit of tinkering with the container engine configuration: you need to go to your Docker daemon.json — or whatever the equivalent is with containerd or CRI-O, etc. — to say, hey, I want to use this particular registry proxy, or cache, or mirror. Since some folks use images that are not on the Docker Hub, this is not the universal option we were looking for.

The next option is to use a full-featured registry — something like Harbor, for instance — as a proxy cache. You can absolutely do that: you set up some kind of namespace in Harbor and you say, hey, all the stuff in here is actually going to mirror that registry over there; and then you rewrite your image references, just like we did with kuik. You can even set up a webhook that does that automatically for you, just like kuik does. The only downside — well, actually, two downsides. The first one is that you need to set that up for each registry you use: if you have something on the Docker Hub and GHCR and GCR and Quay and your internal registry, etc., each time you'll need to set something up in Harbor. The other thing is that the Harbor setup is a little more involved. It's not super complicated by any means, but to the best of my knowledge, if you want to run Harbor, you're going to need legit TLS certificates, which means some domain name, and probably an ingress controller. It's definitely not a one-liner like the one I ran at the beginning of this talk. And again — Harbor is great, don't get me wrong; it's pretty awesome, and we use it on a bunch of clusters — but if you need all of that just to have a proxy cache for your registry, maybe it's a little bit too much.

Okay, next option: there's a project called kube-fledged, and that one is pretty awesome, I think. The vibe I got from looking at the documentation is that it's used on cruise ships — maybe later even spaceships — or anyway, in air-gapped environments, when you don't have a permanent connection to the Internet. The idea of kube-fledged is that it pre-pulls images onto your nodes. But — there is a but — you have to make a list: you have to say, these are my nodes, these are the images I want to pre-pull, and then it takes care of it. It does it really well; it talks directly to the container engine. But again, if the registry is down, it can't pull images. I'm sure that for the folks who wrote it, it's the perfect solution they needed; for us, it was not quite it.

Another one is the Kubernetes image puller, which is pretty similar to kube-fledged: same thing, you give it a list of images that you want to cache, and it uses a very low-tech solution. It creates a DaemonSet; if you want 20 images in the cache, then that DaemonSet has a pod with 20 containers, each container using one of the images you wanted. I say "pretty low-tech" in the sense that there is no CRD, no operator, no weird thing talking directly to the container engine or whatever. And "low-tech" here is not a criticism — it's more like praise: it's something fairly easy to understand and reason about, so some folks might actually like that. But again, it doesn't have its own registry, so if the registry is down, it doesn't help you. If you know about other projects that fix these registry availability issues, please let me know, because I'd be more than happy to add them here; and who knows, if you have a solution that's way better than kuik, maybe we can just scrap the whole thing and use the other project instead.

So, which of all these options is the best? You might think I'm going to tell you that obviously kuik is the best solution, but not necessarily: it depends on what you're trying to do, on the problem you're trying to solve. Our particular problem is registry availability, and for that, kuik is great; for other problems, maybe you want something else. And by the way, when I say "we" here, I am kind of the mouthpiece, the spokesperson, for the kuik maintainers and the team behind it. They are a service company managing hundreds of Kubernetes clusters, and there's a bit of everything: there is cloud, there is on-premises.
There are many different cloud providers, many different ways to install and manage Kubernetes. The reason they wrote kuik is that it really solved that problem for them.

All right. We also have high availability in kuik, and let's see if I can show that. What I'm going to do first is enable high availability, and there are two steps to that. The first one is to create a secret which will be used by MinIO, and the second is to flip a switch in the kuik installation. I'm going to do that now; it's going to take a little while, so I wanted to start it first, and then explain what's going on. The key thing here is this: minio enabled: true. The kuik registry — the cache where we store the images — is just the normal, very basic Docker registry, which means you can deploy it in many different ways. You could use local storage, in the container itself, which is great because you don't need any configuration; but if you lose that container or that pod, you lose the content of the cache, and you need to repopulate it. Or you can back the registry with something like S3 or any other kind of object store. That's what we're doing here: MinIO is used as an S3-compatible object store, and behind this very simple-looking, innocent little flag, what we're doing is adding a whole dependency on MinIO.

If I go back to my dashboard here — sorry for all the zooming in and out, I hope this is not giving anyone seasickness — you can see that we now have a bunch of MinIO pods. We have a MinIO provisioning pod, which — okay, this one is going to crash a few times, but eventually it's going to work. I'm still going to get a grape to appease the demo gods, just in case. But eventually that should work. And by the way, if you're running on, let's say, AWS, and you want to use S3 as the backend for the registry, you absolutely can; here I just wanted something entirely self-contained. I also wanted to show off a bit: look, you run this couple of commands, and now you have high availability, which I think is pretty neat. But let's give it a minute; we'll have to wait until it's actually up and running before I can demo the high availability part.

So, I'm going to move on to Kubebuilder. Although I don't think I saw any hands raised when I asked who's here because they want to know about Kubebuilder, I'm still going to talk about it — but I'll go straight to the point and not bore you with useless details. Kubebuilder is a framework for writing Kubernetes operators, because writing Kubernetes operators is kind of daunting: there's the whole control loop, and the CRDs, so you need to write a bunch of YAML just for the CRDs. We could roughly compare it to writing your own memory allocator. Some of us might have done that in our CS studies decades ago, or if you're working on a very exotic embedded platform; but nowadays it's really rare to write a memory allocator by hand. Same thing with Kubernetes operators: we're going to use a framework to help us as much as possible. Kubebuilder is not the only option — there are many others; for instance, Kopf is kind of Python-centric, KUDO is YAML-centric, etc.
I'm not going to give you an intro to Kubebuilder, because I'm not the right person for that, it would take more than a few minutes, and there are already lots of amazing resources about it — so I put a few links here. These videos are fairly old, like three years, but they're still relevant; they're what I used back when I wanted to learn Kubebuilder, so I can promise it's the good stuff. Instead, I'm going to give you some pros and cons of Kubebuilder — especially the stuff we didn't know when we got started.

First, there's lots of documentation, and it's relatively easy to get started — emphasis on "relatively", because when we say "oh, it's easy", well: easy for whom? The principal staff senior engineer with 50 years of Kubernetes experience, or the person fresh out of school who's maybe still struggling a little with the shell? If you have basic knowledge of Go and Kubernetes, you'll be able to get started with Kubebuilder; I'd say in a few hours you'll have your first controller up and running. When I teach advanced Kubernetes classes about writing controllers, we use Kubebuilder — and to be clear, I am not a good Go programmer, and I still manage to get it working. There's also the Kubebuilder channel on the Kubernetes Slack, where folks are extremely helpful and nice, and that helps a lot.

Kubebuilder helps a lot — notably to maintain the CRDs, the custom resources. To give you a concrete example — this is for the cached images — the idea is that you write your structure as a Go struct, you put annotations indicating how it translates to JSON (and by extension to YAML), and then you have a bunch of "magic comments" like these, which are detected by Kubebuilder and used to generate the extra information in the CRD YAML. For instance, that's how you determine what gets shown when you do kubectl get cachedimages: in the CRD YAML, there's information about these columns. That helps a lot, because CRD manifests are hundreds, sometimes thousands, of lines of YAML, and I don't think anyone wants to maintain that by hand. I mean, unless that's your thing — then, of course — but most folks don't really enjoy that.

In the meantime, I think everything is green — well, blue — kind of, except that one here: that's the pizza image. So now that we have high availability, I'm going to push my images to the Docker Hub again, because — disclaimer — when we switched to high availability, it wiped out the cache we had, so I need to repopulate it. First I push my images again, and then I'll scale things up a little bit. What's the command I was intending to use? Probably scale... yeah, three replicas. No, actually — all right, okay, I'm going to use a little hack here to force a re-pull of the images: I'm just putting an annotation on these CachedImages, which forces the kuik controller to look at them again and refresh the cache. And if I look at the kuik-system controllers, we can see the logs of the kuik controller, and I see: yes — cached, cached, cached. My images are now, probably, hopefully, in the cache, so I can move on to the next step, which is to delete the images from the registry one more time.

That's going to be essentially the same demo as earlier, but with a twist — and the twist is: well, you know what, this time I'm just going to delete everything, even though I'm sure everybody is getting fairly hungry. Then let's scale things up, to three replicas. Okay. Then we'll wait a little bit until everything comes up, and then I'm just going to delete a node. Actually, maybe I can go ahead and do that right away, and we'll see what happens. Let's pick... not the node where my shell is running, because that would be very unfortunate. Okay, let's do a get pods, like this — with -o wide, maybe — and I should have a command to make this easier. So, my shell is running on node... what is it, 32e6, etc., etc. Now the kuik controller is running on... let's pick that one: 3a6f. Okay, so, 3a6f... all right. And now: delete 3a6f — this one, okay — and we'll see what happens. The only thing I hope is that I did not somehow delete the node through which the SSH connection is going... which might be exactly what I just did. So let me reconnect. It's all right, it's all right — I can reconnect, I just need to find the IP address. Okay, I think I can connect here. Don't panic: keep calm and SSH on. So let's replace that IP address here and reconnect... yes. tmux attach, and we're back in business. Told you — no reason to panic, everything is fine and under control. Except the timing.
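For reference, the "force a re-pull" hack and the node deletion from this demo boil down to a couple of kubectl commands. This is a sketch: the annotation key is a made-up placeholder (per the talk, any change to the CachedImage objects makes the controller look at them again), and the deployment and node names are assumptions from the demo:

```shell
# Nudge the kuik controller into re-examining every CachedImage.
# The annotation key below is illustrative, not an official kuik key;
# the point is simply to modify the objects so they get reconciled.
kubectl annotate cachedimages --all \
    demo.example.com/force-refresh="$(date +%s)" --overwrite

# Watch the controller logs to confirm the images get re-cached.
# The deployment name depends on your Helm release; adjust as needed.
kubectl logs -n kuik-system deploy/kube-image-keeper-controllers -f

# Then simulate a node failure (node name from the demo).
kubectl delete node 3a6f
```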
I think we have two minutes left By the way, if they are folks watching through the live stream Feel free to start sending your questions now because there is like a 30 seconds in the live stream So if if I say do you have questions and you start sending them at that point? I'm going to get that like much later. So Okay, the the cluster is going to do a little no, there we go. Okay, awesome So I want to see all pods, please So we can see there is a bunch of things That are not right I don't know if everyone can read on the screen, but that's okay All that matters is the the colors and everything that is in yellow orange red means bad So we can see that in the default namespace things are not going well Because we excluded that namespace from quicks processing So quick is not doing anything with that namespace now in the Cantina namespace We have taco which is spending here that one we have an image pull back off But there we go now. It's running. This is the little glitch I was telling you earlier like when the note comes up There is a little window in time when quick hasn't come up yet And so things don't quite work But eventually quick starts and then everything is nice and peachy and this one is going to take a few more minutes Because I think the cluster is provisioning a new node at the moment But if we look for instance at the mean IO pods which are in the quick system so You can see the slightly different kind of blue here That's one of the mean IO pods which went down when I took down the node And that one is also going to take a few minutes to come back because I think there is a persistent volume attached to that So it's going to take a minute, but that's fine because the whole point of mean IO is that it's using I think Erase erasure coding error correction. 
I don't know if I put the words in the right order, but basically we have redundancy, and we can lose a couple of MinIO pods without disturbing everything. All right, so that kind of worked. Back to wrap up on Kubebuilder: who here tests their code? Okay, so for the heroes out there, Kubebuilder has some really nice testing facilities. Because you want to test against a Kubernetes cluster, but you don't want to start the whole thing, it has something to start just a basic control plane; then you can run your tests against that, and you can decide exactly which version you want to use. So that's pretty nice. Now I'm going to give you the real, kind of not-so-great thing with Kubebuilder: it's version upgrades. Let's say that a new Kubernetes version comes out and you're like, okay, I want to support that fully. You might have to update a few Go package imports and things like that, and either you do that manually, which is annoying and tedious, or you get Kubebuilder to do it for you; and today, it cannot do that yet. Kubebuilder generates lots of code for you, and you can generate the new version of the code, but then you will have to do the merge with your own code yourself. So currently we have, I don't remember the exact name of the command, but something like "kubebuilder, generate a bunch of stuff"; we don't have "kubebuilder, upgrade the bunch of stuff that you generated earlier", unfortunately. That being said, some quotes from the kuik maintainers who've been working with Kubebuilder: they told me, well, it's not perfect, but it's still a huge help and saved us a lot of time. And I think the key point is, I asked them: if you had had a crystal ball back then when you started, and if you had known all that, would you still have used Kubebuilder? And they were like: yes, absolutely, because even if it's not perfect,
it's still way better than nothing, and they got lots of good things from it. All right, so at this point... yep, now all the MinIO pods have recovered. So that was the HA part of the demo, and it's kind of nice, because we can see that the normal pods went completely belly up, but everything in the cantina namespace is still up, which is pretty amazing, because I think it's lunch time. And that's all I got. Thanks a lot! We're kind of out of time, but I'm still going to try to squeeze in a couple of questions, especially since there is nobody in this room after us. First, I see one question online: how do I describe the retention for the persistent storage? Great question. So the persistent storage here would be the storage used by the kuik registry, and here I just used a big switch, which is "high availability: on", and that enabled a bunch of configuration for me: MinIO, etc. So if we wanted to change the retention policies for the persistent storage, we would have to go in there, and since kuik is using, I think, the Bitnami MinIO Helm chart, you can just pass some Helm parameters to that and adjust it. So if you want to change the number of replicas, the number of MinIO pods, all that kind of stuff, you can do that there; or you could switch to a real S3 bucket with whatever settings you want. And finally, if the question was about how long the images stay there: I don't remember off the top of my head how it's set, but I'm pretty sure it's in the kuik Helm chart as well. And one last thing: we often say, hey, we should thank open source maintainers, so I wanted to thank the two main maintainers of the kuik project, Paul and David, who gave me lots of input when preparing this presentation.
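To make that persistent-storage answer a bit more concrete, here is a sketch of passing settings through to the bundled MinIO sub-chart. The repo URL, chart name, and value keys below are assumptions for illustration only; the authoritative knobs are in the kuik chart's values.yaml and the underlying Bitnami MinIO chart:

```shell
# Override MinIO sub-chart settings via the kuik chart.
# All names, keys, and the repo URL are illustrative placeholders,
# not verified against the actual chart.
helm repo add kuik https://charts.example.com
helm upgrade --install kube-image-keeper kuik/kube-image-keeper \
  --namespace kuik-system \
  --set minio.replicaCount=4 \
  --set minio.persistence.size=50Gi
```

The same mechanism applies if you switch to an external S3 bucket instead: the storage settings then live on the bucket rather than in the chart values.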
I tried to drag one of them here to speak with me, but they were still feeling a little bit too shy for that. I hope we can get them to do it at some point in the future, because I think it's a really cool project. Thanks! And do you have questions? Yes? "For optimizing the image pull, did you consider lazy-pulling techniques?" Did we consider lazy-pulling techniques? I'm not sure exactly what that would mean, like... "There are, in containerd, experimental snapshotters, like stargz, which only pull the bits that you actually use." Okay, I haven't looked into that yet, but I think I see what you mean, and that's pretty exciting. But I think it would complement kuik pretty nicely, because at the end of the day, once you really need to pull a layer, or a bit of an image, you need that registry to be up and running, and so I think kuik might still play a role here. Now, to be honest, I don't know at all how that part works. I see that Phil is here, so maybe I'm going to ask him a few questions about that after. If that means a different OCI format where we don't have layers anymore, then we will need to see how to support that, of course, and I think we would absolutely want to support it. Some of the limitations I mentioned, you know, like "oh, you pull twice", and the imagePullPolicy: Always one, we're trying to see how to address those, and we also have some first contributions from the community as well, so that's definitely something I'm going to keep on top of my head. Yeah, thanks. "Thank you. What happens if the kuik proxy image can't be pulled, and where is it pulled from?" Excellent question. So basically: who pulls the pullers? At this point, I think it's on GHCR, but I'm not 100% sure, so let's have a quick look.
Haha, "a quick look", that was not on purpose. Describe image... image, image, image... my bad, it's on the Docker Hub. And so there is definitely something to do here. I think, if I remember correctly, what we do on production clusters is that we're still using a registry mirror, specifically for stuff on the Docker Hub, so that basically we can still pull the few essential images. You know, like, here we're using Calico, which I think is maybe not on the Docker Hub; but the idea is: make sure that the basic stuff is either on the Docker Hub or somewhere else, then set up one registry mirror for that pool of images, and then we're good for everything else. "Okay, that makes sense. Thank you." And I think I've also seen some discussions around baking these images into the VM image used to spin up the clusters. I have not looked too deep into that, because I don't like to bake custom images, maybe because I still have bad experiences from doing it a decade ago, but that's also an option. "Makes sense, but I mean, it seems very bad if you can't pull that image, right?" Yeah. All right, well, if there are no other questions, thanks again, and now let's all have a taco, a burrito, or whatever is at the cantina. Thank you!
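For reference, the registry-mirror setup described in that last answer can be sketched as a containerd host configuration. This is a sketch under assumptions: the mirror address is a placeholder, and the exact file layout depends on your containerd version and how your distribution configures it:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
# Try the local mirror first; containerd falls back to the upstream
# server if the mirror is unavailable.
# "mirror.internal.example" is a placeholder for your mirror's address.
server = "https://registry-1.docker.io"

[host."https://mirror.internal.example"]
  capabilities = ["pull", "resolve"]
```

With a fallback like this in place, the handful of bootstrap images (the kuik proxy itself, CNI images like Calico, and so on) can still be pulled even when one side is down.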