Hello everyone and welcome to episode 16 of the Level Up Hour. Let's get into the big topic, open source, something that we actually have in front of us. This is so awesome, as the Kubernetes ecosystem has really boomed. So first of all, apologies for the few hiccups we had at the start of the show. Today we have a special guest with us: Daniel Messer, a product manager for our Red Hat registry called Quay. And it's a good thing, because today we are actually going to speak about setting up custom registries with OpenShift, and Daniel will have the pleasure of explaining how we can use Quay as one of these options with the platform. So, yeah, again, thanks for joining us today, and thanks to everyone who's joining the show. Daniel, would you like to say a few words about yourself and what you do before we get started? Sure, thanks, and thanks for having me, by the way. So I'm a product manager at Red Hat. I look after our central multi-cluster registry component called Red Hat Quay, which we have as a strategic element of our portfolio strategy around OpenShift whenever you want to run more than one cluster. We think that's a pretty important topic, and that's actually going to be the model that many people will prefer in the future, now that it is, you know, very straightforward to run and configure OpenShift in a self-managed fashion, with all the operational concepts around it. I also look after the hosted offering of Quay, that is Quay.io, which is also interesting because it's the same code that runs this pretty massive public registry service that you can just sign up for with your credit card. Just to give you a couple of numbers: Quay.io serves over two billion images per month.
And it has petabytes of images under management and governance, which means organizing your storage, your tags, your access policies, scanning the content of these images, even building these images from triggers that are coming from GitLab or GitHub. So yeah, it's a pretty interesting combination, sort of a hybrid between a SaaS offering and an on-prem product. Outside of that, I also do product management for OpenShift components. So you may have seen me speaking about things like the Operator Framework and the Operator Lifecycle Manager; that is also one of my responsibilities. Very nice. Thank you, Daniel. So you are one of the people responsible for making operators easier to consume in general on the platform and to manage their life cycle. So yeah, thanks for that, because it helps a lot with operator adoption in general. All right, so maybe we should get started. Before diving into the specific topics, I wanted to say a few words about OpenShift and the integrated registry. For those who are familiar with OpenShift, we've shipped an integrated registry since the early days, even before the acquisition of Quay, actually. That's one of the benefits of the platform: as the name "platform" implies, it has all the components already set up and ready to use for you. So we do have the integrated registry that runs by default with the platform. It manages the images that get built on the platform, but you can also build images outside and push them to the registry if it's exposed. So it does work pretty well if you are using it in the context of its own cluster. But Daniel, could you please tell us why we should think of using anything other than that? I mean, it does work out of the box, but are there any reasons to look for something else? Yeah, I like that you acknowledge how easy it is to work with just out of the box, right? Because there's really not much configuration or management overhead to it.
The only thing you really need to do at the very beginning, when you set up the cluster, is to essentially tell that integrated registry where your storage is. It can use internal storage of the cluster as well, like persistent volumes, but you will want to prefer object storage like Red Hat OpenShift Data Foundation (ODF) if possible. So yeah, it's pretty convenient because it is sort of working in the background all the time. Whenever you import images into OpenShift, or you use OpenShift's build functionality to create images as part of a container build process or source-to-image, it kind of gets used in the background, right? The integrated registry doesn't really have a UI in that sense. You just use it indirectly via OpenShift features like image streams, build configs and image stream tags. These are abstractions that come with the integrated registry, and they take care of all the stuff that you would normally need to do in order to connect really any Kubernetes cluster to an external registry, or any registry: setting up users, creating credentials, defining what these users can see and can't see, and then distributing those credentials to the cluster. All that happens automatically with the integrated registry. You never have to worry about pull secrets or permissions or repositories and namespaces that need to get created upfront. All this happens in sync when you create projects in OpenShift and image streams in OpenShift and you start to use those, right? The one caveat, though, is that the whole context of this integrated registry is really just that one cluster, right? It lives inside that cluster. You can technically expose it so others can push to it or pull content from it, but it is not meant to be used outside of the cluster in the sense that you have a multi-tenancy scenario, because the tenancy of this integrated registry is really tied to that cluster.
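As a rough sketch of what that storage setup looks like, the integrated registry is driven by the image registry operator's cluster-scoped config; the bucket name and region below are placeholders, and ODF/NooBaa or a PVC would slot in under the same `spec.storage` key:

```yaml
# Point the integrated registry at object storage (S3 shown as an example).
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  storage:
    s3:
      bucket: my-registry-bucket   # placeholder bucket name
      region: us-east-1            # placeholder region
```

Applied with `oc apply -f`, the operator reconciles the registry deployment against this config in the background.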
So in order to make other clusters use that, you would need to be able to do things like permission management, you know, creating RBAC definitions, creating users and user groups, and putting them into a sort of bucket that's called a namespace in the registry, or an organization as we call it. These are things that aren't possible with the internal registry, because it's not supposed to be used outside of the OpenShift APIs. So the moment you want to have something central, you need to do a little bit more setup, right? And we think having a central registry is a common architectural pattern that we'll see way more of in the future. So we're going to share a slide here real quick just to show that. We see Red Hat Quay as such an integrated, external, central registry for OpenShift multi-cluster deployments. And this has various benefits, right? One of the benefits is that you manage all your content, all the definitions of who your users are, which teams they belong to, which access rights they have to the various content in the registry, in a single place. So you can see all the content in a single place as well. You can see what exists, what versions of images are available. And you can also see inside those images with Quay. Quay does not only store and serve your images according to permissions that you set up, it also scans the images and indexes the content inside. So it can recognize RPMs, Debian package files, Alpine package files, but even Python pip packages or, most recently added as tech preview, Java archives. And it will index that information. One reason it does that is in order to tell you when those components have known security vulnerabilities. So Quay also scans your images for CVEs, to give you an analysis of the security posture that you have from the content perspective before anything even hits the cluster, right? So that's a pretty central aspect.
And the moment you have multiple clusters across multiple providers, and maybe even from multiple vendors, you definitely want to have something as central as that. But when you do that, you also kind of put all your eggs in one basket, right? Which is, you know, operationally something you need to plan for and account for. Such a registry cannot be down, not even for five minutes. If the internal registry is down in an OpenShift cluster, well, that will only impact workloads that come from that internal registry; you know, image streams that you've imported or things that you've built inside the cluster. But if you connect an entire OpenShift cluster, or multiple of those, to a central registry, that means the images that the cluster itself uses also come from that registry, right? And you want that, because you want to centrally manage which OpenShift versions are available to install, which OpenShift versions you can update to, which applications and operators are available. So all this needs to have basically the uptime of the Red Hat registry that you would normally use to do these things, right? And Quay's architecture is very much primed for this kind of HA scenario. I said earlier that we run the Quay.io service on the same code base; we really run it in the same architecture as the product in the cluster as well. So you can run Quay in an OpenShift cluster as a workload, managed by an operator, but also on RHEL nodes outside of the cluster. But fundamentally it's the same scaling architecture, the same HA architecture that, you know, can scale from a handful of clusters to hundreds of clusters from a performance perspective, right? And that is something you really need if you are going to have a central registry. So when you have that, you also start to have questions about, well, what happens if, let's say, one of these clusters here is very far away?
You know, it will pull all this precious content over a potentially very small pipe of just a couple of hundred kilobytes per second, right, if it's coming across a larger geographical distance. So that will prolong all your rollouts, and really any event that creates a pull against the registry. So Quay has the ability to actually span geographical deployments as well. It has a geo-replication feature, which is pretty unique in the industry, which makes it appear from the outside as one big federated registry with just a single URL and a single set of credentials, a single entry point to pull content from. But in reality it's actually distributed, potentially across the globe, and it syncs content asynchronously in the background. These kinds of things become extremely important if you want to reap the benefits of a central registry solution. And we think that Quay, with the architecture it ships with, is extremely well built for that, which is the reason why we put it in the OpenShift Platform Plus package. As an OpenShift Platform Plus customer, you get this basically out of the box, and you have a really, really robust registry at hand, which you really need, because the moment your registry is down, you will notice it within your clusters basically within five minutes. There are so many events in the cluster that need to talk to a registry: you can't reconfigure your software, you can't scale out your deployments, you can't change your cluster configuration, you cannot expand your cluster with additional nodes, and nodes cannot even reboot if the registry the cluster is served from is momentarily down. So it's pretty impactful, right? Uptime is really important, and Quay delivers that very confidently across multiple clusters, and that's why we can have such a central approach to all of this. Okay. So yeah, thanks a lot, Daniel.
So just to summarize, one of the key core features I hear is that it allows you to have a central registry that, because it's becoming such a crucial part of your architecture and of your clusters, needs to be HA. And also, if you have clusters spread out around the planet, maybe in different geographies, some in APAC, some in the US, some in Europe, for instance, having all the clusters talk to a registry in a single location doesn't really play well in terms of performance. So what I'm hearing here is that you can have geo-replicated instances that have all the content mirrored across all those local registries, but it's still managed centrally; you don't have to manually do all those synchronizations. Yeah. All right. So yeah, that's a pretty good reason actually to switch to using such a registry. So before we go into a demonstration, actually, we have a question. Someone says: I tried to install the Quay operator on my home lab, and the resource requirements were a bit too high for small nodes; is there a push to minimize Quay for edge usage, for instance? So, yeah, I mean, you can answer, or I can also give an answer, whatever you want. Yeah, it's certainly true that Quay has a little bit of a higher entry bar than the usual registry deployment you may know from, you know, tutorials that you see upstream, where you deploy a very small Docker registry that comes without basically any features other than push and pull. So yeah, it's kind of a double-edged sword. We would definitely want to reduce the initial resource consumption of Quay. Right now, what the operator really aims to do is give you a production-grade Quay deployment that can fulfill this use case I showed before, where it's the central registry, right? So it makes a couple of assumptions for you about that.
For instance, that you want to have two replicas of all scalable components, right, to have rolling restarts during updates and, you know, HA in case a node goes down. There's also the topic of resource reservations. The operator asks the cluster for a certain level of resources that have to be provided by the nodes and have to be allocated to the Quay components, specifically, before it can even start, right? And this is how we make sure that the registry isn't starving for resources when the cluster is under load, because again, with a central registry you kind of have this, you know, domino effect on the rest of your multi-cluster landscape if it has problems or performance issues or even availability issues. So we're trying to fine-tune that to lower these requirements as well. One easy win you can definitely try right now is to disable the horizontal pod autoscaling that the operator does. That would get you down to a single replica of all the components, but there are still the resource reservations that the operator makes. We are looking to lower those as well, in order to have a little bit more coverage in smaller environments. But in general, you know, you will not really get below the requirement of at least two vCPUs and six gigs of memory for Quay. If you enable Clair security image scanning, that will be another two cores and at least four gigs of memory. Quay and Clair, the scanning engine, both need databases. These need to be Postgres databases, and they usually come with at least two to four gigs of memory as well, and at least one vCPU; for the Clair database we want to have at least two vCPUs, because scanning and indexing all this content is also pretty database-intensive, at least initially, when you index a lot of content. So if you disable some of these features, you will save on some resources. And if you just want to use mirroring, then you can certainly disable a couple of things like that.
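As a hedged sketch of that tuning, the Quay operator's `QuayRegistry` custom resource lets you mark individual components unmanaged; something along these lines (component names follow the Quay 3.x operator docs, so verify against your version) gets you down to a single replica without scanning:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  components:
    # Drop the HPA so deployments stay at a single replica.
    - kind: horizontalpodautoscaler
      managed: false
    # Skip Clair if you don't need image scanning; saves roughly 2 cores / 4 GiB.
    - kind: clair
      managed: false
    # Repository mirroring workers, if you don't need mirroring either.
    - kind: mirror
      managed: false
```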
But I would say the operator-based deployment is probably always going to be a little bit more difficult in an edge scenario. For edge deployments we have actually released a packaged version of Quay where just a minimal instance of Quay gets deployed. We call this the mirror registry for OpenShift, and the use case here is to really provide a bootstrap registry for an edge cluster that has no direct connectivity to a central data center, so it relies on this registry to have the OpenShift core images for installation and running the cluster. For that we have the mirror registry, which is really a simplified installer running on a RHEL 8 node, deploying Quay in Podman containers (Podman pods, actually) for you. That has a simple requirement of two vCPUs and six gigs of memory. It is really stripped down; for the same reasons, you don't get any of the additional features like geo-replication or mirroring or scanning. But you don't need those for a registry that's just supposed to hold OpenShift-related product content. You use the oc utility in order to mirror this content in. You don't use Quay mirroring to do that; you have the oc mirroring tooling, which understands what an OpenShift release is and what an operator catalog is. So I would really propose to use that, and then you get away with a smaller resource footprint for an edge registry that only holds OpenShift product content, if that is your use case. Yeah, thanks, Daniel. That's a very extensive answer. I mean, having also gone through that, there are a lot of very on-point topics that you have addressed. I'm not sure if Stefan is very familiar with that, but yeah, basically the HPA, the horizontal pod autoscaler, is what keeps those replica counts up, and you can lower it. But basically, I would say the Quay operator is intended for production use.
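A minimal sketch of that flow, assuming the `mirror-registry` tarball from the OpenShift mirror site; the hostname, paths and release version here are placeholders:

```shell
# Install the stripped-down Quay ("mirror registry for OpenShift") on a RHEL 8 host.
./mirror-registry install \
  --quayHostname registry.edge.example.com \
  --quayRoot /opt/quay

# Then mirror the OpenShift release payload into it with oc, not with Quay mirroring.
oc adm release mirror \
  -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64 \
  --to=registry.edge.example.com/ocp4/openshift4
```

The `oc` tooling knows the structure of a release payload and of operator catalogs, which is why it is the right tool for this job rather than generic repository mirroring.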
So if the end goal is really to try Quay in, I would say, a non-production and very minimal footprint, without using the scanner or such things, there's a section in the documentation where you can deploy Quay for a proof of concept, and you can actually just run Quay in a container, you know, just having Podman run the Quay registry itself, and you can play with it. So this is going to take much fewer resources if you wanted to try that, Stefan. Yeah, there are all kinds of possibilities, but again, the operator has some advanced capabilities, and it's aimed at, I would say, providing a production-grade registry, and that's why there's this notion of resource requests and such things. All right, so thanks, very interesting question, and thanks Daniel for the detailed answer. So, all right, let's maybe go and have a look at OpenShift using Quay as an external registry, and then maybe you can explain what happens in the background. Does that work for you, Daniel? Yeah, absolutely. Let's go ahead. Okay, cool. So let me share my screen here. Okay, so what I have here is an OpenShift cluster that actually has a Quay instance running on it. As you can see here, there's a route that will take us to the Quay that is hosted within this platform. It has been installed with the Quay operator. And as we can see here, we already have some repositories that have been created in the registry, and there are some organizations and users that have already been set up with the platform. So let's quickly have a look at those, and for the moment, I don't see anything that says "level up" organization, for instance. So what I'm going to do, as a developer, is go to the developer perspective and create a new project. Okay, and we're going to call it level up. So for the moment, I don't have anything running on this platform, in this project or namespace.
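Just to sketch what that proof-of-concept deployment looks like (image references and passwords here are placeholders; the POC documentation has the authoritative steps, including generating a config bundle first):

```shell
# Quay needs Postgres and Redis even in POC mode.
podman run -d --name quay-postgres \
  -e POSTGRES_USER=quay -e POSTGRES_PASSWORD=quaypass -e POSTGRES_DB=quay \
  -p 5432:5432 docker.io/library/postgres:13

podman run -d --name quay-redis -p 6379:6379 docker.io/library/redis:6

# Run Quay itself, mounting a config bundle produced with Quay's config tool.
podman run -d --name quay -p 8080:8080 \
  -v "$PWD/config:/conf/stack:Z" \
  quay.io/projectquay/quay:latest
```

Running it this way skips the operator's replica and resource-reservation assumptions entirely, which is exactly why it fits a home lab but not production.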
And what I'm going to do here is just go, as I would with a traditional OpenShift deployment, and choose to deploy a Node.js application. So this Node.js app has a lot of options, and I'm going to choose to use a builder image. What this does is use what we call the source-to-image process, which will turn my Git code into a container image that gets pushed to the default registry that comes with OpenShift, if we are doing things by default. So, yeah, let's just give it a name here: level-up-node, for instance. And again, level-up-node. So it's going to create the deployment, etc. Okay, go ahead and create. And what should happen now is that OpenShift has automatically created a build configuration to clone the code and actually do the build of the container image, building the code and creating an output image that will be pushed to the registry. So if we look at the build that has been kicked off, we can see in the details here that instead of pointing to the default OpenShift registry, which should be something like image-registry.openshift-image-registry.svc with port 5000 for the local registry, it's already pointing to the Quay external registry, or rather the public URL that we see here, meaning that OpenShift has already automatically configured the build process to push the resulting image to the outside. For example, if you see here, this pull spec references the internal registry that we use. So this is going to use a builder image that comes from the platform itself, but the output will be pushed to an outside registry. Now let's go back to the Quay registry, hit refresh here and see if anything has come up. We see here that a new repository has been created for us, called level-up-node. And level-up-node is basically the name of the application that we have just created.
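That redirected output is visible in the build config itself; as a hedged sketch of what the rewritten spec roughly looks like (the hostname, organization prefix and secret name are examples invented for illustration, not taken from the demo):

```yaml
# Excerpt of a BuildConfig after the Quay Bridge Operator intercepts it:
# the output targets the external Quay instead of an internal ImageStreamTag.
spec:
  output:
    to:
      kind: DockerImage
      name: quay.example.com/openshift_levelup/levelup-node:latest  # example pull spec
    pushSecret:
      name: levelup-quay-robot   # example: robot-account credentials synced by the operator
```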
So we can see here that it also created an organization, and it created some users for us to be able to build the images, to deploy them and to pull them from this registry back into OpenShift. So yeah, meanwhile everything gets built and pushed. Okay, it seems everything is complete. Let's have a look at the repository that has been created. We can see that it has pushed a latest-tagged image a minute ago, and it has also run some security scans for it. I mean, that's pretty impressive, because I didn't have to set up anything for this to work, even though this is not using the internal registry. This is using the external Quay that happens to run on the cluster itself, but it could run anywhere. It could be Quay.io, it could be a central Quay elsewhere. Daniel, can you please tell us what's the magic behind this? How does this all happen? Sure. I guess you can already guess that since, you know, as a product manager I look after Quay and also parts of the Operator Framework, it has to do with operators, right? So there's actually an operator installed on the cluster that you were building on, and it's called the Quay Bridge Operator. Let's have a look at it. Yeah, let's take a look. So you have a bunch of operators installed. That's great. Quay Bridge Operator. Okay. That's the one. So this is pretty much the magic, right? Its job is to essentially replicate the same user experience you previously had with the internal registry, which means you can store your images whenever you need to, you can retrieve them whenever you need to, and you never need to worry about getting credentials first, setting them up in your cluster, or pre-creating a landing zone in the registry, so to say. All of it just happens in the background, right? So this operator basically talks to Quay. You configure it with a custom resource that's called QuayIntegration. So if you want to click that, you have one there.
So this is how you configure this operator. What you tell the operator here is basically where the Quay registry that you want to integrate with is, right? What is the URL? What are the credentials? And you also configure your cluster ID here. This will be used as a prefix for anything that is created as part of the automation in that registry. This is pretty important, because if you think about it, the operator is going to use your project names and image stream names to name things in the registry. So if you create a project called test in here, it will create an organization named test, with that prefix, in Quay. You can imagine that with a central registry this would easily lead to collisions, right? Because others have maybe also created a test project in their cluster. So you can give the operator a unique prefix per cluster here to avoid any collisions. And you can totally have dozens of clusters running the bridge operator against just one Quay instance. So the fact that this Quay registry runs on your cluster is totally transparent. It could run anywhere, right? But here you basically tell the operator where to find the registry, how to authenticate to it, and what prefix to use when creating stuff there. And it handles all the namespaces; they are automatically synced. Exactly. I believe there's a change compared to the previous release, where you had to manually mention the whitelisted namespaces. Yeah, that's another configuration option. So by default, this operator will watch for a couple of things to happen in the cluster. For instance, whenever you create a project in OpenShift, it will create an organization. When you created that project called level up earlier, it actually already created the organization. The organization. So yeah, if we look at it, yeah, we see the prefix here. Yeah. And we see the organization in here. Exactly. Right.
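A sketch of such a QuayIntegration resource (the hostname and secret name are placeholders; field names follow the Quay Bridge Operator documentation and may vary slightly between versions):

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
  name: example-quayintegration
spec:
  clusterID: openshift              # unique per-cluster prefix for created organizations
  quayHostname: https://quay.example.com
  credentialsSecret:
    namespace: openshift-operators
    name: quay-integration          # holds the OAuth token used to talk to the Quay API
  insecureRegistry: false
  # allowlistNamespaces: [levelup]  # optionally limit the sync to certain projects
```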
So that will happen from now on every time you create a project in OpenShift. Now, you may not want that, right? Because you may only be interested in that happening for a bunch of developer or production projects. So you can tell the operator that there's an allow list, or a deny list, so it's not so global. But by default, any project you create from now on will also have a counterpart in Quay in the form of an organization. Organizations are like our top-level construct to, you know, bring users and content together. So another question. Is it possible to have multiple Quay integrations running on the same cluster, directing or targeting different registries? For instance, I have 10 namespaces that I want to push to this specific Quay instance, or maybe I want to prefix them with something different. Can we collocate different Quay integrations and point them to different prefixes, or even different Quay registries? Does that make sense? Yeah, that makes sense. To be honest with you, that's an extremely interesting question I haven't gotten so far. Most of our users basically use one central registry. Okay. And have the cluster prefix actually determined by the operator itself, so it would use the OpenShift cluster ID for that. Okay. But I guess in theory, yes, that could work. I'm not sure though, I have never tried it. Okay. And it's not the way we test it. But since the operator really just reacts to these events and sets up watches, I think that could be an option. We might need to try that first. Okay. Yeah. What most users are doing is setting it up against one central registry and then letting the prefix magic do its work. All right. Okay. So, thanks. So basically, this is, I would say, the magic. Can you tell us a bit more about what happens in the background?
So, traditionally, when we are using the default OpenShift registry, it's going to create some things that are specific to OpenShift, like image streams and image stream tags. These are basically pointers to the images. With OpenShift, we don't directly use the Docker pull spec, or the container image pull spec, hardcoded. Instead, we reference what we call image streams, which are dynamic pointers that in turn get resolved to the image pull spec. So what happens when we do that? Because I haven't had to change anything; even when I created the app, we just went through the default OpenShift behavior, yet it ends up pointing to the Quay registry. How does that magic happen? So that's intentional, right? It's supposed to be under the covers like this, so you don't need to change any of the workflows you have. What the operator does is watch the cluster events. I already said it watches for projects being created, in order to create the organizations in Quay, with the prefix, under the same name as the project. What also happens is that whenever you create an image stream, a repository in that organization in Quay will be created. So remember: the project name in OpenShift maps to the organization name, and an image stream in that OpenShift project will create a repository in Quay. So if you go back to Quay, you see that within the level up organization, it has created this repository called level-up-node, right? Yeah, a repository is really a place where you store images, and here you can see all the tags. Maybe there's another one that might be easier to see, because it has multiple image streams. Yeah, that one, for instance. So yeah, I have an organization for a namespace called cloud native apps, where basically there are many repositories, one for each of the components that have been created.
Right, so one image stream equals one repository; in that project there can be an arbitrary number of them. And then, whenever you build an image with a build config that is connected to this image stream, the operator actually intercepts that, changes the output to point to Quay, and pushes the image resulting from the build to Quay instead of the internal registry. So if you were to actually repeat the build on the Node.js app that you had before... we can just do that. It would actually create a new image. I think under Actions... there I've got the build config; let's run that one again. And when that's happening, it's actually going to push another image. Now it's going to push the image with the same tag, it's going to be latest, but I'm going to show you a unique feature of Quay that you can use to actually see that these were previously two images. So let's go back to the repository; you need to use your browser back button, the endpoint is different for repositories. Yeah, so we're going back to level up. Yeah. Okay. And then go to the tags. So we'll wait for the moment. Yeah. So let's watch the log until it finishes. Yeah. Maybe I can also share one thing that also happened, right? In order to talk to a registry, you normally need credentials; things aren't public, especially in the enterprise, right? You need a username and password to put stuff into a registry and also to retrieve it from the registry again. The operator has also automated that. So when it created the organization as a result of you creating a project, it also created what's called a robot token in Quay. A robot token in Quay is really the same as a service account in OpenShift. It's kind of an identity that can be used without sharing the username and password that you use to authenticate against Quay. And you can have different credentials for different purposes. So, you see here, it has now pushed a newer tag.
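If you were wiring this up by hand rather than through the bridge operator, a robot token is consumed like any docker-registry credential; a sketch, where every name is a placeholder:

```shell
# Log in with the robot account (the user is "org+robotname" in Quay).
podman login -u 'openshift_levelup+builder' -p "$ROBOT_TOKEN" quay.example.com

# Or hand the same credentials to OpenShift as a pull secret:
oc create secret docker-registry levelup-quay-robot \
  --docker-server=quay.example.com \
  --docker-username='openshift_levelup+builder' \
  --docker-password="$ROBOT_TOKEN"
oc secrets link default levelup-quay-robot --for=pull
```

The bridge operator performs the equivalent of these steps for you whenever it creates an organization.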
And if you go in the left-hand navigation pane to that little clock icon, you can see that it actually moved the tag, right? Because now the tag is pointing to something newer, which is that newer SHA. You also see that there's the ability to revert that. Quay has a time machine feature that allows you to revert those tag overrides in case you accidentally overwrote your production image. So Quay keeps track of these things and allows you to roll them back within a user-defined timeframe. Very nice. Cool. So yeah, I can also see some usage logs, like whether this has been pulled, how many pulls per day, et cetera. So that's also stuff that doesn't happen with the default registry. Exactly. And these are governance features that you definitely need as an admin of such a central registry, to have an audit trail of who pulled what image when, right? These are things that get harder to do when you use an internal registry across multiple clusters, and Quay, being an enterprise registry, does that just out of the box, right? But it also nicely integrates, as we have seen, with the flow that you usually have with S2I and build configs, and really puts stuff in there. So yeah, this happens all the time when you do a build that is associated with such an image stream. And as you have seen, it already rolled out that new image. So if you go to your project, you should see that the deployment has actually been refreshed and has now updated. Yeah, let's look at the deployments. Right. So in the events, you will probably see that it has been updated. Yeah. Yeah, so it scaled up a new replica with the new image. So that is also a reaction on the part of the operator, right? Once the build is finished and the push is complete, the operator will actually trigger that deployment to run the new image, because you also want to run the new code that you just built from your Git repository.
So all that works seamlessly in the background on your behalf, managing user accounts, credentials, tags, and images, so you can focus on what's important, which is getting your code into production. That's awesome. Yeah, thanks a lot, Daniel, for going into those detailed features. So basically everything has happened automatically thanks to the Quay Bridge Operator, intercepting those OpenShift events and transforming them into Quay API calls, creating things like the organizations, the users, and the secrets needed to pull and push the images per namespace, et cetera. So what this means is that if I wanted to use an external registry which is not Quay, these are things that I should do manually. Is that correct? I mean, that's exactly the same as with any Kubernetes cluster when you want to configure an integration from Kubernetes to an external registry. These are the things that you need to set up: when I create a new namespace, I would have to create the corresponding repository or organization, if that notion exists in the registry, and I would have to create all those secrets and service accounts, or what Quay calls robots, manually. So this is also possible. I mean, it's possible to use an external registry other than Quay, except that we would not benefit from this automation and we would have to create all of those things manually, as we would with a traditional Kubernetes cluster. Yeah, there's one more thing that you can actually see when you go back to your cluster. Okay. If you go back to your project, where you created that Node.js example. Okay, from the admin perspective or from the developer perspective? Yeah, from the admin perspective.
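To make that manual path concrete: on a generic cluster you would attach such a registry secret to the relevant service account yourself, roughly like this. The secret and namespace names are made up for illustration:

```yaml
# Hypothetical sketch: linking a registry secret to the builder service
# account by hand, which the Quay Bridge Operator otherwise automates.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: builder
  namespace: myapp
secrets:
- name: external-registry-creds   # illustrative secret name
imagePullSecrets:
- name: external-registry-creds
```

The equivalent CLI shortcut on OpenShift would be something like `oc secrets link builder external-registry-creds --for=pull,mount`.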
Yeah, so it doesn't show anything now, but in probably 10 minutes or so it will start to show image vulnerabilities here, because you're running an image that has vulnerabilities according to Quay. So if you switch back to the Quay view, you will have seen that Quay told you that there are like six high vulnerabilities in there. And the total number is probably higher, right? There are 51 medium ones as well. So these are all the CVEs that Quay has found inside that image. These are packages that have known vulnerabilities that can be exploited. And Quay will tell you what those are, and it will also tell you which component has that vulnerability. So you see here, there's obviously a Python library, urllib3, that has a vulnerability, or there's a vulnerable executable in that container image. So this is information that you want to have as an admin or as a developer of these images, to reduce your attack surface. Quay also tells you in which newer version of that package the vulnerability has actually been fixed. This propagation of vulnerabilities is asynchronous, so it will take a little while per image, but you can see it here. Yeah, so maybe we can look at some other ones, like this one, which has been pulled some time ago. And now I'm in the OpenShift UI, and I can still see this information rolled up here within the OpenShift cluster. It will even tell you which pods are affected by this. So this is an image that's vulnerable, but the running pod is what counts, right? There's apparently a single pod here that runs this, and it could be more than one. But what this does is essentially look at all the pods running in your cluster and figure out if they come from images that are served by a Quay registry. Could be Quay.io, could be your Quay here in the cluster, could be some Quay somewhere else.
And when that's the case, it will interrogate that Quay instance for any vulnerabilities that these images might have, as reported as part of the scanning. So this is not scanning in the cluster, it's basically just asking Quay for the scan results, and soon your NGINX image will pop up here as well. So that kind of projects all the vulnerability information right inside your console, so that you can see what vulnerable stuff you're running, where it's running, and who it belongs to, right? As an admin, this would be pretty handy. Again, this is not live scanning in the cluster; everything has already been scanned in Quay. So it's pulling the data from Quay and displaying it here in the OpenShift console, and if I wanted more details, I could still use the manifest link here, and it takes me directly to Quay. Or if you scroll down, there's actually a bunch of details already in this view. You'll also see which packages have been identified, which CVE or Red Hat Security Advisory is associated with them, and in which newer version of the package it's fixed. So this seems to come from a fairly old RHEL 7 base image, and that's why it has so many vulnerabilities. And that really shows you the value of, you know, rebuilding your images very regularly off newer base images. And UBI is really, really nice to use for that, because it gets updated extremely often by Red Hat, and you just need to rebuild your image off a newer UBI version. UBI never moves backward, it always rolls forward, and we have floating tags like ubi8 that always point to the latest UBI 8 version that's out there. If you rebuild on top of that, you will essentially always be able to eliminate a lot of CVEs that are just part of the base image and aren't even part of your application, right? So having that view is super useful. It's just static image analysis, so it doesn't do a whole lot of contextual analysis like our ACS product does.
But you kind of get it with Quay as part of another operator, the Container Security Operator, which is also installed on this cluster and makes all this UI integration happen. Very nice. Thank you very much, excellent, Daniel. So, yeah, that's a very good example of how we can use a more advanced registry, and I would say not necessarily a "production" registry, because that's not the right term, since the internal registry is also used in production in many situations. But again, Quay does bring a lot more features: this notion of HA, the security scanning capabilities, geo-replication. And the fact that the OpenShift experience remains untouched is, I think, really awesome, because as an OpenShift user, even if I switch to Quay, it's going to be transparent to me. I don't have to recreate or rewire image stream tags to point to the outside registry. So, yeah, thanks for these insights on how these things work. All right, so the topic of today being custom registry options: Quay is, I would say, the Red Hat choice when you want to upgrade your registry experience, especially if you're going multi-cluster, et cetera. Of course, as we mentioned, there's the possibility to use any external registry to store and pull the images, except that everything we have shown here would have to be done manually, or maybe you'd have to write some scripts to do it. So, for instance, creating and distributing the pull secrets, creating the repositories in the registry, synchronizing the users between the registry and the platform service accounts, for instance the service accounts that are responsible for pushing or pulling the images. So basically, if you want to look at it, let's go to user management, for instance, and switch to our level-up-prod namespace. And we see here in the service accounts that we have a builder service account.
This is the one that is used by default by OpenShift whenever we kick off a build process. So here, for instance, we see that the builder has some secrets that have been automatically associated with it to be able to push to the external registry, and that's basically one of the things that the Quay operator has done. If I wanted to use an external registry, I could very well do that. I could change my build config to say the output goes to my external registry, but in order to push to that registry, I would need to add an image pull secret, or pull or push secret, actually, and say that this is the secret that needs to be used to interact with that external registry. And I believe I have something like that in the MyApp project, where I have a build config. So maybe, yeah, it's going to be on the builds side. So this is a build config that I manually changed before installing the Quay operator. Basically I created the app in OpenShift, but I had to manually change the output Docker reference to point to this MyApp Node.js repository that I also created manually on the external registry. So basically, if you are using an external registry which is not Quay and not with the bridge, these are the things that you have to do. You have to change those output settings to say this is the tag, or the pull spec, that you need to push to, and these are the secrets that you need to use to authenticate to that external registry. All right, so we've got, I think, everything covered. We have now seen what we can do out of the box in an automated way with the Quay operator, but we have also, I would say, dissected the different steps needed if you wanted to do that with a generic external registry. So thanks a lot, Daniel. Are there any things we missed or anything else we wanted to mention? Thanks, Jaffar. No, I think this was a pretty comprehensive overview.
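The manual change described above, pointing a build's output at an external registry together with a push secret, looks roughly like this in the relevant portion of the build config. The registry host, repository, and secret name are illustrative:

```yaml
# Hypothetical sketch: BuildConfig output redirected to an external
# registry (kind: DockerImage) instead of an internal ImageStreamTag.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-nodejs
  namespace: myapp
spec:
  output:
    to:
      kind: DockerImage
      name: registry.example.com/myapp/myapp-nodejs:latest
    pushSecret:
      name: external-registry-creds   # secret holding the registry credentials
```

With the Quay Bridge Operator, this rewiring is exactly what gets done for you on every build.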
So going from the internal registry to an external one, you know, explaining why that is a good idea, what management overhead comes with that, how you can lower it with something like an operator, and also how to get some of the unique registry features right into the cluster with another operator. All right. Yeah, thanks. So I just wanted to show something from an admin perspective on how you need to manage the cluster configuration to point to different registries. So we have a custom resource, a custom resource definition we have created for OpenShift, that you need to use in order to administrate the registries. And basically, if you wanted to add an external registry, for instance an insecure registry. In my example, it's a non-production registry where I use a self-signed certificate. So in order for it to work, I have to add that registry to the insecure registries list. And the way we do it is that you edit the image resource named cluster, and then you add whatever registries you want in there. If you also wanted to be able to browse the content of the registry directly from the OpenShift UI, you would have to add it to the allowed registries in this section. That way you are able to see the content of the registry directly from the OpenShift console. Right. And to add to that, the insecure part is really only needed if you have a self-signed TLS certificate in place. What I find most customers are doing is they trust the CA that OpenShift uses to generate those certificates; the OpenShift CA is generally trusted. If that's in place and you run Quay on top of OpenShift, the operator will automatically create a route endpoint, and that will have a certificate that's signed by that trusted CA. So it's usually not required to put it into the insecure list; it's going to be automatically trusted by the cluster.
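For reference, a minimal sketch of that cluster-scoped image configuration, edited via `oc edit image.config.openshift.io/cluster`; the registry hostname and port are illustrative:

```yaml
# Hypothetical example: trusting an insecure (self-signed) registry and
# allowing image imports from it, on the cluster-wide Image config.
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    insecureRegistries:
    - registry.example.com:5000
  allowedRegistriesForImport:
  - domainName: registry.example.com:5000
    insecure: true
```

`insecureRegistries` lets nodes pull without TLS verification, while `allowedRegistriesForImport` is what lets users import image streams, and browse content, from that registry in the console.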
And then you have mutually trusted, signed communication going on between the cluster and your registry endpoint. Cool. So, yeah, maybe that's a topic for another time, but there's a cert-manager operator that we can use to delegate the certificate management for some of these components. We do that natively for the OpenShift components and for all the ingresses and routes that we create, using the trusted CA of the OpenShift cluster. But if we wanted to delegate that, for example, to Let's Encrypt or whatever, we could also go the cert-manager path and use that operator to do these kinds of things. All right, so thank you very much, Daniel. That was a very nice session to better understand the capabilities that a custom registry can provide, which is a way of saying: not using the default registry that comes with OpenShift. So yeah, thanks a lot, and great job on what you guys did with the Quay operator and the bridge, because it makes it very transparent in terms of usage and still brings all the Quay capabilities to OpenShift seamlessly. All right, thanks a lot. I think we are just a little bit over time, but we also started a little bit later. So any final words, Daniel? Thanks for having me, Jaffar. This has been fun. And yeah, there was also a stream two weeks ago in the Ask an Admin series on the Red Hat TV channel, so please check that out. I'm talking there with Andrew and some other folks about Quay and the internal registry, what sets Quay apart from other registries, and why it's a good idea to use it as a central piece of your multi-cluster strategy. All right. So thank you very much, Daniel. Thanks, everyone who has been attending this episode. Again, please like, subscribe, and share if you like the content. If not, let us know why and we can work on improving it so you like it. So yeah, thanks again, and hoping to see you soon for the next episode. Thanks a lot and have a great day. Bye.