Hello and welcome to this webinar on Red Hat Quay and OpenShift: how to run Quay on OpenShift and how to use Quay with OpenShift. My name is Dirk, I'm the product manager of Quay, and I have with me Andy Block, who is one of our leading experts in delivering Quay in real-life deployments in customer environments. He also wrote the initial prototypes for some of the things we will talk about in this presentation, and he's one of the experts for both OpenShift and Quay. With that, let me start with a high-level slide explaining why OpenShift is one of our focus areas, looking at it from the Quay side. OpenShift is the enterprise Kubernetes distribution provided by Red Hat, and it's our primary target destination. We want to ensure that although Quay runs on any infrastructure, it runs best on OpenShift. So all the integrations and improvements we made to deploy, manage and operate Quay focus on the Kubernetes capabilities and the OpenShift capabilities built on top of them. The Quay operator is supposed to ensure a seamless deployment and ongoing management of Quay when it runs on OpenShift. We also added two other operators, which we will cover in a bit more detail in this presentation: the Container Security Operator, which brings the Quay and Clair vulnerability data into Kubernetes or OpenShift and exposes it within the OpenShift console, and the Quay Bridge Operator, which we just introduced yesterday and which is supposed to ensure a seamless integration and user experience for all the different OpenShift workflows. By design, Quay has been written to serve content to one or more OpenShift clusters, and serving multiple OpenShift clusters, wherever they're running, is probably the most appropriate use case for Quay. Quay runs on any infrastructure, both on-prem and public cloud, and the same is true for OpenShift. So it's a perfect fit when both are used together.
There are a couple of benefits of Quay running on OpenShift compared with Quay running on standalone container hosts, which is still possible and still supported. Running on OpenShift means running on Kubernetes, and a couple of great benefits come out of that. You can simplify your deployment and effectively start using the product immediately. For scalability, you can leverage all the cluster compute capacity management coming out of Kubernetes and OpenShift to automatically scale up and scale down. You have a simplified configuration for networking and storage, all the entities which are managed by the orchestration platform, OpenShift in this case. You have a better way to deal with configuration, which is centrally stored in etcd. And whatever you do once, you typically do more than once, so repeatability is another important point. We of course also try to leverage the expanded options of OpenShift: all the great capabilities OpenShift provides which are not included in a plain upstream Kubernetes environment. That's why it's worthwhile for us to leverage them. As I mentioned, you can run Quay on standalone hosts, and you can keep using it that way. However, our focus for deploying and managing containerized applications is operators. This is really a company-wide focus; OpenShift is the leading element driving Kubernetes operator adoption, and we are following it. The focus and direction of the product are Kubernetes operators. That's why we built the three operators we have in the meantime, the third one being what we launched yesterday, and we can't afford to maintain another deployment solution for standalone hosts; it's probably not worth the time.
And probably with the next major release of Quay, Quay version 4, it will be Kubernetes only, because we really believe that Kubernetes is the future for containerized applications; there isn't any other option or choice anymore. In the meantime, we have three different operators: the Quay operator, which we introduced with Quay 3.1, the Container Security Operator, and the Quay Bridge Operator. The Quay operator is really for the deployment and ongoing management of Quay, and it's supposed to run on the OpenShift cluster Quay itself is running on. In contrast, the Container Security Operator and the Quay Bridge Operator are supposed to run on all the OpenShift clusters Quay is serving content to, which could be the same cluster in a single-cluster setup. But typically, as I said, Quay is used to serve content to many clusters across the globe, and that's why the Container Security Operator and Quay Bridge Operator are probably running on more than one cluster. It's worth calling out that the Container Security Operator works perfectly fine with Quay.io as well, but the Quay Bridge Operator does not as of today. It's also important to mention that although all the operators we developed might partially work with OpenShift 3, we only provide full support on OpenShift 4, primarily because some of the backend dependencies such as OLM are still in a tech preview state on OpenShift 3. We basically made the decision to develop against the most recent and most up-to-date version of our OpenShift Container Platform, and that's OpenShift 4. This also means that future features we introduce will sometimes depend on the newest capabilities we just added to the most recent versions of OpenShift. So let's have a look at how to run Quay on OpenShift.
But before we talk about the Quay operator, which is supposed to do the deployment, let's quickly talk about some of the prerequisites. There is a dedicated recording available which covers the prerequisites and options, architectural patterns and deployment options, and we will get it out to YouTube pretty quickly as well. From an overall standpoint, Quay can run, as I said, on OpenShift or on standalone hosts, and the default use case is that Quay is serving content to many OpenShift clusters. This is how Quay has been designed and built: to really work at scale across different regions, data centers and so on. There is a clear line we need to draw between the components which are supposed to run on cluster versus the ones which are supposed to run off cluster, and I will dive a little deeper into the details on the next slides. But before I talk about the specifics for database and storage, let's have a very quick view of the Red Hat Quay architecture. Effectively, the product consists of a couple of container images, three or in the meantime four, plus three operators, and those run as containers, on OpenShift in this case. There is the Quay container, the Clair container, optionally the Quay builders, and the mirroring workers, which are additional container instances that run if repo mirroring is used. In front of Quay and Clair you typically use a load balancer; it could be the HAProxy which is included in OpenShift, or it could be your own load balancer which already exists in your environment. All the containers of Red Hat Quay are stateless components, so the backend dependencies are critical, especially for HA, and those are primarily storage and database. All the metadata is stored in a database backend, and only the binary blobs themselves are stored in the storage backend.
And then there's a third component which is worth calling out but less critical, which is the Redis cache. Effectively, the tutorial and the build logs, the logs of the Quay build automation, are stored in the Redis cache, and that's why it's less critical than the database and storage. Underneath there is the infrastructure, and all the client interactions, via UI, CLI or API, typically happen through the load balancer to Quay and Clair; none of the other components need to be exposed to the outside world. And if the destination target for Quay and its content is an OpenShift Container Platform, then typically the Container Security Operator and the Bridge Operator run on that platform and connect in the same way to Quay and Clair, or to Quay mostly. I already mentioned that the Quay builders are supposed to run off cluster as of today. They require the Docker runtime, and they don't work on OpenShift yet; that's a roadmap item which isn't there yet. Technically we got it up and running, but we haven't had the opportunity to document it and have QE test it yet. So we will probably add the ability to run the Quay builders on OpenShift on bare metal with the upcoming 3.4 version. Technically, as I said, it can run, but we don't recommend doing it today, primarily also for security reasons; builders should run outside. The database is somewhat similar. For the database backend we have a bit more freedom for Quay, but Clair is currently limited to Postgres, and that's why we recommend using Postgres for both Quay and Clair. And since Postgres, or any database, is a stateful application, and a stateful application is running on Kubernetes here, we strongly recommend using an operator.
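As a minimal sketch of what the database wiring looks like on the Quay side, all metadata lives behind a single connection string in Quay's config.yaml; the host, credentials and database name below are illustrative placeholders, not values from this presentation:

```yaml
# Excerpt from Quay's config.yaml -- Quay reads all of its metadata
# through this one Postgres connection string.
DB_URI: postgresql://quayuser:quaypass@postgres.quay-enterprise.svc.cluster.local:5432/quay
```

Clair has its own, separate configuration file pointing at its own Postgres database, which is part of why the operator-driven deployment that wires both up for you is so convenient.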
Since we at Red Hat do not ship a Postgres operator ourselves, we recommend using one of the third-party offerings such as the Crunchy Data Postgres operator, which is one of the operators we actively test against during our QE cycle. So it's fully tested by us, and the joint vendor support as part of the Red Hat operator certification provides additional benefits for customers. If you run Quay on public cloud infrastructure, we recommend using the Postgres service provided by your cloud provider, which by the way also applies to storage, the Redis cache and anything else the cloud provider typically offers, because then you automatically get the HA capabilities included. One short comment on disconnected or air-gapped environments: although Quay runs perfectly fine in an air-gapped environment, Clair as of today does not, because it needs to fetch the CVE metadata from all the different metadata sources we leverage within Clair. This hopefully will go away with the next release, because air-gap support for Clair is one of the top priority features for the next release. And in further future releases we will hopefully introduce additional capabilities for air-gapped environments, such as repo mirroring to disk, export to and import from disk, and so on. One of the other prerequisites I mentioned is the storage backend. We support a lot of different storage backends, AWS S3, Azure Blob, et cetera. The recommended storage backend, especially if Quay runs on OpenShift, is of course OpenShift Container Storage. It's worth calling out that we are not connecting directly to the underlying OCS storage backend, but are using the NooBaa Multi-Cloud Object Gateway object service instead. This is an S3 layer on top of the underlying storage technology which provides a lot of additional capabilities we are leveraging, and this is pretty important, especially for HA setups.
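To make this concrete, a hedged sketch of the storage section of Quay's config.yaml pointing at an S3-compatible endpoint such as the NooBaa gateway; the bucket name, keys and hostname are placeholders, and the exact driver name and endpoint for your OCS installation should be taken from the knowledge base article mentioned in a moment:

```yaml
# Excerpt from config.yaml: Quay's blob storage goes through an
# S3-compatible driver aimed at the object gateway's service endpoint.
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - RadosGWStorage
    - access_key: <ACCESS_KEY>
      secret_key: <SECRET_KEY>
      bucket_name: quay-datastore
      hostname: s3.openshift-storage.svc
      is_secure: true
      port: "443"
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
```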
And again, the mission-critical components for HA setups are the database and storage; that's why this is so important. How to use the OCS 4 version in conjunction with Quay is pretty well documented: there's a knowledge base article in the Customer Portal which explains step by step how to set up this configuration. It's fairly easy, because both products are managed by an operator in the meantime. And this brings me to the next step: after you've satisfied the prerequisites, you can start to deploy Quay, and the way we recommend doing so is using the Quay operator. It's worth calling out that we formerly called it the Quay Setup Operator, but we wanted to change that, because the primary purpose of an operator is not only the initial deployment but really the day-two management. With the newest release of Red Hat Quay we introduced a couple of features, which we will explain in a minute, that also caused us to rename it to the Quay operator. And since Andy wrote the original prototype of this operator, let me hand over to Andy to explain what the operator does.

Thanks, Dirk. Like with most operators on OpenShift, it is recommended that you use the Operator Lifecycle Manager and OperatorHub to deploy Quay to your environments. This facilitates the integration of a lot of the role-based access control policies and any dependencies that need to be resolved, as well as the upgrade of the operator itself as soon as a new version becomes available for your cluster to consume. Future versions of the Quay operator will work to enhance more of the day-two operations that are found in a typical Quay deployment, and I will be walking through some of those aspects throughout the course of the presentation today. So, to deploy the Quay operator: number one, the operator itself, like most of our operators, will only run in an OpenShift environment. So the first piece of pre-work is that you need to have an OpenShift environment.
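Under OLM, subscribing to the operator from the CLI instead of the console UI would look roughly like the sketch below; the channel, package name and namespace are illustrative assumptions, so check OperatorHub for the actual values in your catalog:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: red-hat-quay
  namespace: quay-enterprise
spec:
  channel: quay-v3.3            # channel name is an assumption
  name: red-hat-quay            # package name is an assumption
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Once the Subscription exists, OLM resolves the dependencies, installs the operator, and keeps it upgraded as new versions land in the catalog, which is exactly the lifecycle benefit described above.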
For those of you who have deployed Quay off of OpenShift, you may be familiar with some of the challenges or lengthy processes of not only setting up and configuring Quay but also, if you are looking to integrate Clair, getting all of those pieces wired up together. Obviously, once they are all wired up you get the benefits of the solution, but some of that initial configuration can be a bit of a burden. The operator goes ahead and facilitates and streamlines all of that, and it is customizable. I work with a lot of customer environments where they have their own certificate management systems and need to integrate their customer-provided certificates. The Quay operator does provide options for injecting those configurations through OpenShift and Kubernetes native resources like Secrets into the operator configuration. In addition, it will also deploy the Postgres databases that will be used by Quay and Clair if you choose that optional configuration, or you can provide your own database if you already have one in your environment. It makes use of a lot of other Kubernetes native features like health checks and monitors, and it will also set up the appropriate routes, the external ingress points into the OpenShift environment. A lot of good features that the Quay operator automates for you. Now, in version 3.3 we have a lot of new enhancements to the Quay operator. Some of those that I called out earlier were new external ingress points. Obviously we want you to run on OpenShift, but for those running the upstream community version — I know a lot of customers who don't really know anything about OpenShift, they know about Kubernetes and want to get their hands on it — we do provide some more Kubernetes-friendly components like NodePorts and Ingresses as external entry points into Quay.
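As a hedged sketch of that certificate injection, a customer-provided certificate/key pair can be handed to the platform as an ordinary TLS Secret; the secret name and paths here are illustrative, and the exact secret keys and CR field the operator expects can differ between operator versions:

```shell
# Package a customer-provided certificate and key as a standard
# Kubernetes TLS secret in the namespace the operator watches.
# The operator configuration can then reference this secret instead
# of the operator generating its own certificates.
oc create secret tls custom-quay-tls \
  -n quay-enterprise \
  --cert=/tmp/quay.crt \
  --key=/tmp/quay.key
```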
Other enhancements that were added as part of version 3.3: the configuration application now continues to run by default. In prior versions we spun down the configuration pod by default, but now we keep it running so that you have ready access to that part of the ecosystem whenever you need it. User configuration changes can be made after the initial deployment. You can use the config app for entities that the operator marks as read-only, but I have seen some individuals go in and modify the config.yaml file which is embedded in a Secret in OpenShift. We do not recommend that, because you will potentially run into an issue where the operator overrides some of the features and configurations that you set. But you can, after deployment, modify some of the components of the Quay ecosystem through the custom resources. Along those lines, some properties of the Quay custom resources are now automatically reconciled by the operator, such as the image, the replica count, and the CPU and memory requests for the various components of the Quay ecosystem, as well as some of the Quay and Clair configurations. This means that if the configuration is changed, the Quay operator can be configured to automatically redeploy your Quay components. Like all documentation for Red Hat products, the latest and greatest can be found in the Red Hat Customer Portal, and the documentation for the Quay operator covers the installation of the operator itself, how to deploy the QuayEcosystem resource, how to customize it, as well as some of the configurations you can perform once the operator has been deployed and Quay has been running for some time. All right, so now the most important thing for those of you in more of a delivery function: how to use Red Hat Quay with OpenShift.
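For illustration, a minimal QuayEcosystem custom resource might look like the following; the apiVersion and field names reflect the community prototype this operator grew out of and may well differ in your installed version, so treat this as a sketch, not a reference:

```yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayEcosystem
metadata:
  name: example-quayecosystem
spec:
  quay:
    # Properties like these are reconciled by the operator; changing
    # them here in the CR, rather than in the embedded config.yaml
    # secret, is the supported path.
    replicas: 2
    resources:
      requests:
        memory: 4Gi
  clair:
    enabled: true
```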
This is important. I've been working with OpenShift for about five, six years, since before Kubernetes even came out, so I've seen all the great features that OpenShift has provided that Kubernetes didn't have, and now we can show you how to make use of those components with Quay itself. Like any container registry, Quay is just another external registry that OpenShift can consume from. Some of the things you can leverage Quay for: it can be the source and destination for builds that are produced within OpenShift. But most importantly, you're most likely going to use Quay for your runtime content, your operational containers that are going to be running on a daily basis. If you look at Quay.io, it is basically the source for a good majority of the foundational components of OpenShift itself, as well as a number of the images that ship with OpenShift. From OpenShift's point of view, Quay is just another external registry, which means there is a bit of a difference between using images served by Quay externally and using the internal registry. With OpenShift's internal registry, you have automatic RBAC isolation between the different components based on namespaces, driven by OpenShift cluster permissions. That is not true when it comes to Quay; we're going to call out some of the differences as we go through the presentation. Also, if you're leveraging an image stream as a source for an image that's stored in Quay, you're not going to have some of the automated behaviors that you would if you were running an image served by OpenShift's internal registry.
So Quay can be used, with or without the internal registry, as an external registry in front of an entire OpenShift cluster, which means you can leverage it like any other registry: it's a source of image content that you consume from your different resources within OpenShift. What you can also do is make the first steps of a progression towards replacing the internal registry, and part of this can be facilitated by a new operator that was released as part of version 3.3, the Quay Bridge Operator; we'll talk about that in a little bit. It basically tries to automate a lot of the behaviors that are found within the OpenShift internal registry and within OpenShift itself, so they can all be managed within Quay. Next, and this continues to bite me in the field constantly: if it's not proxies, it's going to be certificates at customer delivery sites. Almost every enterprise customer I have has their own certificate authority, or one that is not trusted by a public entity, so you need to tell Quay and OpenShift to trust these entities before you can source content. There are different areas within OpenShift that you need to configure for the platform to trust any external image registry. This goes all the way down to the underlying CRI-O container runtime: to communicate with the external entity over a secure socket, you need to explicitly trust these certificates. And if you're leveraging the image stream feature within OpenShift, you must then configure additional areas within the platform for it to trust Quay when importing content from it.
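On OpenShift 4, the trust configuration described here can be sketched with two commands; the registry hostname and file path are placeholders:

```shell
# Put the CA that signed the Quay certificate into a ConfigMap in the
# openshift-config namespace. The key is the registry hostname (per the
# OpenShift docs, a non-standard port is encoded with '..' rather than ':').
oc create configmap registry-cas -n openshift-config \
  --from-file=quay.example.com=/tmp/quay-ca.crt

# Reference that ConfigMap from the cluster-wide image configuration;
# the Machine Config Operator then distributes the CA so CRI-O and
# image stream imports on every node trust the registry.
oc patch image.config.openshift.io/cluster --type merge \
  -p '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}'
```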
This is all configurable, and you can specify the registry itself as insecure, which means it will bypass certificate validation, but this of course is not recommended. You should go through the steps in the OpenShift documentation to configure the platform to trust Quay. Now, one of the benefits of OpenShift is being able to build images on the platform, and you can leverage Quay as part of this entire process. You can use Quay as a source for images; you can use it as the base for any brand-new image that you are building on the platform; and, most importantly, you can use it as the destination for any build produced by the platform itself. There is great documentation, both in the official OpenShift documentation and in the community, about the different configurations you need to make to leverage Quay, or any external registry, as a source of content or a destination for an OpenShift build. Two areas you need to be cognizant about are the source and destination locations, whether you're using a direct Docker image reference or an image stream. You cannot use a destination image stream with any registry except the internal registry unless you are using the Quay Bridge Operator, so by default you would only be able to use DockerImage as the kind for the output of an OpenShift build going to a Quay environment. Most importantly, if you are leveraging Quay, keep in mind that the Quay registry is protected by Quay's authentication and RBAC mechanisms, so you need to configure the applicable push and pull secrets within your build configuration.
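Pulling those points together, a BuildConfig that pulls from and pushes to Quay might look like the sketch below; repository URL, image references and secret names are placeholders:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git   # placeholder repo
  strategy:
    dockerStrategy:
      # Pull secret for a protected base image hosted in Quay
      pullSecret:
        name: quay-pull-secret
  output:
    to:
      # DockerImage, not ImageStreamTag, when pushing straight to Quay
      kind: DockerImage
      name: quay.example.com/myorg/my-app:latest
    # Push secret holding the Quay credentials (e.g. a robot account)
    pushSecret:
      name: quay-push-secret
```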
Now, if you do want to use Quay as a registry for OpenShift, there are a number of things you need to be cognizant about, and this involves changing some of the configurations in the cluster-wide image custom resource that is found in any OpenShift environment. You can list the trusted registries, and, as I mentioned previously, you can configure it to specify that Quay is an insecure registry. Part of this configuration, after you define it within the custom resource, is driven by the Machine Config Operator, which will go ahead and configure the underlying CRI-O metadata and configuration on each individual node. As I also mentioned, image pull and push secrets must be configured if you are accessing any protected image registries, and you can leverage the Quay Bridge Operator as the first step on that journey of tighter integration of OpenShift's registry with Quay. Now, there are a number of common terms between Quay and OpenShift, and I want to take a moment to do a one-to-one match between the different components, since some of you might be coming from more of a Quay background and some from more of an OpenShift background. An organization within Quay is very analogous to a project or namespace within OpenShift. A repository within Quay is similar to an image stream: a collection of image tags that point to a single source. Image streams are very much like a proxy, a view over a set of related images; within OpenShift these images can actually come from different sources, but in Quay they are obviously all going to be sourced from within Quay itself.
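The insecure-registry escape hatch mentioned above lives in the same cluster-wide image custom resource; a hedged sketch, with the hostname as a placeholder:

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster          # cluster-scoped singleton resource
spec:
  registrySources:
    # Bypasses TLS verification for this host. It works, but it is not
    # recommended -- prefer trusting the registry's CA instead.
    insecureRegistries:
      - quay.example.com
```

After this is applied, the Machine Config Operator rolls the corresponding CRI-O configuration out to each node, which is why the change takes effect cluster-wide rather than per pod.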
For managing access from a non-human perspective, robot accounts are available within Quay for integrating with external systems like a CI/CD system, or for OpenShift itself: for the platform to talk to Quay, you would use a robot account configured within Quay. Within OpenShift, a non-user account is known as a service account, and the Quay Bridge Operator will actually take the service accounts that are configured in OpenShift and automatically create the corresponding robot accounts. A Quay team is analogous to a group within OpenShift. And then builds: both products have very similar build functionality in terms of function. You have Docker-based builds in Quay, and within OpenShift you have various build types, your Docker, S2I, Jenkins pipeline and custom builds, along with a whole new array of build options coming in the most recent versions of OpenShift. I'm going to turn it over to Dirk, who is going to talk more about the Container Security Operator and the integration with the OpenShift console.

Thanks, Andy. I just realized that the order of the presentation isn't perfect; we should have talked about the Quay Bridge Operator right here, because many of the things Andy just explained are at least partially automated by the QBO, which we will talk about later. So consider what I'm talking about here as a small disruption of the flow Andy kicked off. Let me talk about the second operator which was mentioned on the original slide: the Container Security Operator. In case you didn't know, Quay features built-in vulnerability scanning. Originally, the previous version, version 2, was limited to operating system package managers for various operating system types such as Red Hat Enterprise Linux, Ubuntu, Debian, Alpine and other distributions.
With the newest version of Red Hat Quay we just introduced initial support for programming language package managers, limited to Python as of today, as a tech preview feature. We want to run it in more testing environments, get some additional feedback and input, then stabilize it over the next few months and mark it GA in an upcoming version. Clair is effectively a scanning engine which was developed by CoreOS for Quay. It's a very specific implementation of a very powerful scanner which is really supposed to run at scale, because you need to keep in mind that the same software we ship as the product Red Hat Quay is what we also use for the hosted software-as-a-service offering called Quay.io, one of the five biggest registries out there. That's why it really matters to us that whatever we do for both Quay and Clair runs at the scale of our Quay.io deployment; this is something that really differentiates us from other vendors who offer registry products as well. So Clair is a very powerful scanner. It's not only used by Quay, Quay.io and Project Quay; it's also used by several third-party and open source projects. You might have seen that AWS started to use Clair as the backend scanner for AWS ECR at the end of last year as well. It's open source, as Quay is; there is an upstream repository under the Quay umbrella on GitHub. The scan results are shown by default in the Quay UI, so there is a deep integration between Quay and Clair. What we try to achieve is this: within OpenShift, the majority of the cluster admins, the developers and the people deploying containerized applications are mostly using the OpenShift console, and not all of them automatically want access to yet another UI which contains even more information on various images, security and vulnerability data and so on.
So, basically, already with Quay 3.2 we introduced another operator, the Container Security Operator, which runs on OpenShift, on the OpenShift clusters Quay is serving content to. Effectively, it fetches the vulnerability information from Quay and Clair, stores it within the cluster in a custom resource, and visualizes it in the OpenShift console. It's an operator which runs on OpenShift and watches pod objects; each time a pod object changes, it reaches out to the registry the image has been pulled from and tries to fetch the information from a security data API. Currently this is limited to Quay, and it also works with the hosted version, but we are working with several partners on allowing them to plug into the same concept in OpenShift as well. The data is stored in a custom resource, which you can also query via a CLI command I will talk about in a minute, and the information is shown in the console. Deploying the Container Security Operator is fairly easy: it's shown in the embedded OperatorHub as part of OpenShift, you can simply deploy it on your OpenShift cluster, and OLM, the Operator Lifecycle Manager, takes care of the prerequisites and everything else you would otherwise need to do. As I said, it automatically looks at where each image has been pulled from, so you don't need to configure the URL of your Quay registry. As I mentioned, it appears in various views within the OpenShift console. We initially started with the cluster admin dashboard, but with the most recent OpenShift version, which came out last week, we added a couple of additional views to the OpenShift console to show the vulnerability information.
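Assuming the operator's custom resource definition is installed, querying the stored vulnerability data from the CLI looks roughly like this; the project and object names are placeholders, and access requires the RBAC permissions discussed in a moment:

```shell
# List the vulnerability summaries the Container Security Operator
# maintains for the pods in a given project.
oc get imagemanifestvuln -n my-project

# Drill into the findings recorded for one image manifest
# (objects are typically named after the manifest digest).
oc describe imagemanifestvuln -n my-project sha256.abc123
```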
So you can see a subset of the data which is also shown within the Quay UI, and there's a link to hop directly to the corresponding view within Quay. From the OpenShift console you can see the vulnerability data for all the software you are using in your pods, in a particular project and also on a cluster level. There are many different views; it's pretty powerful and it addresses the needs of many different target personas who use the OpenShift console. There are at least two great blog posts out there, written by our user experience team who helped us with the design and the visualization layout within the OpenShift console, so there's quite a lot of information on the new views we added with the OpenShift release just a week back. We continue to enhance the operator and the console integration over time. And as I mentioned, you can query the same information, which is stored directly in the custom resource. Keep in mind that originally we considered using pod annotations, but we changed the approach based on feedback, primarily from security teams, both internal and external ones, that we shouldn't expose vulnerability information to the entire set of users of an OpenShift cluster. That's why it requires permission: you won't have access by default, but once you have the corresponding permission to query the custom resource, there are a couple of commands to query the same information via the CLI as well. One thing I want to call out before I move on to the next slide is that we do not directly interact with Clair, because, as I mentioned earlier, from a network and security standpoint Clair doesn't need to be exposed to the outside world. We connect against the Quay API, and Quay fetches the data from Clair. This is also supposed to make things easier, especially in multi-cluster environments where you really want to limit the allowed entry points into your environment. So this is the Container Security Operator and the console
integration we did as another key aspect so the second operator and with that let me hand over to the third operator and again as I said at the very beginning of the presentation and they helped us helping working with the internal community and the customer community to write the initial prototype so we worked very closely with customers in the external community really looking at what are their target use cases what are the things they really want to achieve and then we started to write the prototype and over time we stabilized and extended this operator and now we just introduced it as one of the top priority feature of Quay 3.3 and with that let me hand over to Andy to explain it in further detail Thanks Derek so for those of you who want to be able to leverage Quay as the internal registry for OpenShift you can go ahead and use the Quay Bridge operator to facilitate a number of the steps that you would have to manually configure within Quay itself to have similar parity from OpenShift to Quay when you do enable this feature any new namespace within OpenShift automatically results in a new organization within Quay each image stream that gets created within that namespace creates an analogous repository within Quay and then the three key service accounts that are created with any new OpenShift project automatically get synchronized as robot accounts within Quay itself and that allows you to be able to push and pull images from the Quay repository automatically with an OpenShift without any additional configurations from your standpoint we do support multi-cluster setups through a namespace mapping feature so based or a cluster mapping feature you basically give a prefix that is added to every new organization within Quay that allows you to separate and segregate the different organizations within OpenShift any new as I mentioned all secrets in each robot account within an org are automatically created in an OpenShift project the service accounts are really being 
are leveraged within OpenShift to facilitate pulling Quay images, whether as the source for a build, the source for a runtime, or as the destination, so that OpenShift can trust Quay and push to Quay as the result of a build. And really, that's one of the benefits of the Quay Bridge Operator. As I mentioned way back early in the presentation, by default OpenShift does not allow an image stream backed by an external registry as part of a build output; the Quay Bridge Operator uses a piece of functionality within OpenShift called a mutating webhook configuration to automatically wire up Quay as the destination for any new build that leverages an image stream in OpenShift. And that's really just the beginning of the tighter integration between OpenShift and Quay.

Now, the Quay Bridge Operator, as we mentioned, is another operator-managed feature within OpenShift. It does require some initial configuration to get going; out of the box there is a little bit of manual setup, but once you complete it you can leverage the full functionality. A simple use case: I create a new project in OpenShift, and a brand-new organization gets created in Quay, with all the robot accounts configured along with the pull and push secrets. If you want to create a brand-new app (I just happen to pick my favorite, the .NET example application, which I always use for a lot of my demos since it doesn't bring many dependencies along with it), you perform the new build in OpenShift, pull the builder image from the Red Hat Container Catalog, perform the build, gather the dependencies for the image, and then push the resulting image to the brand-new organization in Quay, into the repository that was created as a result of the image stream creation, which also happens automatically as part of the sample app in OpenShift. The new deployment generated by the sample application then automatically references the image stream, which points to Quay, which allows the deployment to be triggered automatically at the end of a build, resulting in the deployment of the image from Quay. And once you're all done, having gone from zero to hero, and want to clean up the resources, you can delete the project in OpenShift, which will then delete the associated organization in Quay.

Very much like the Quay operator itself, the Quay Bridge Operator is deployed using the OperatorHub and OLM, which are part of OpenShift. And as I mentioned, there is a bit of a setup process you will need to go through; some of it has to do with the configuration of that mutating webhook configuration, which allows the operator to intercept some of the build triggering and wire it up to talk to Quay automatically. Part of that is really rewiring, and kind of hooking into, image streams in the underlying components of OpenShift, and that's the functionality I want to call out: image streams. An image stream is basically an abstraction of a container image repository within OpenShift. Images referenced within an image stream can reside either in the internal registry or in an external registry like Quay.io, or the on-prem Quay we've been talking about throughout this presentation. However, when you leverage an external registry, you lose some of the functionality found in the internal, native OpenShift registry: the automatic role-based access control configuration that is defined as part of any new project in OpenShift, as well as the automatic notifications when new images and tags are available from the image source. So if I push a new image to Quay, OpenShift won't automatically be able to determine, or even know, that a new image is available; you must tell OpenShift that a new image is available for its use. So one of
the features found in OpenShift is that you can configure an image stream to be scheduled, which means it will go out and poll the remote source at a given interval. That is one way to get around the limitation, or at least one enhancement you can look at if you don't want to manually import an image stream from a remote source. Now, the entire Quay ecosystem, everything from Quay to Clair, can be integrated into a CI/CD pipeline, and as I demonstrate here on this slide, you can integrate Quay into the ecosystem at a number of different points: everything from pulling the golden image you have within your organization as the source for a new build in OpenShift, to using Quay as the destination for the result of a build, to using Clair to scan that image, and then obviously using it at deployment time once you've gone through the appropriate approval processes that any organization will have as part of its software delivery lifecycle through its various environments, whether dev, stage, or production. You can integrate all those steps, checks and balances, and approval processes that are in front of most organizations into a full flow to govern which images are deployed as part of your OpenShift deployment.

Now, on Quay being able to trigger new deployments on OpenShift: the Bridge Operator will automatically trigger a new deployment when a new image is pushed to Quay itself, provided Quay is used as the source for that image on OpenShift. As I mentioned, if you aren't using the Bridge Operator and are just using Quay as an external registry, by default images you push to Quay will not result in a newly triggered deployment on OpenShift. If you're using an image stream, images must either be manually imported, through the oc import-image command, which can be integrated into a CI/CD flow, or you can configure the image stream to be scheduled. Additional configurations and integrations can also be included to notify OpenShift that a new image has been pushed to Quay, as well as of other actions on a repository, through Quay's rich assortment of notifications; you can integrate those directly into a CI process, and you can also integrate with other solutions. I've actually integrated them with Ansible Tower: new images pushed to Quay call Ansible Tower to perform certain actions. So the notification feature in Quay really allows you to integrate with a number of Red Hat's feature-rich solutions. Very similar to how Quay can trigger deployments on OpenShift, it can trigger builds on OpenShift, especially if you're using image streams, via much the same process I mentioned on my prior slide: if you're using an image stream, a new build will not be triggered automatically; you must import the image stream or have it scheduled. And then, obviously, the image that you build on OpenShift can be pushed to Quay itself as the destination. I'm going to turn it over to Dirk, who's going to talk a little bit more about Quay's garbage collection feature and OpenShift. Dirk.

Yeah, thank you. So one of the other great features of Quay is that it offers zero-downtime garbage collection: you have tag retention policies, and Quay automatically cleans up the underlying images over time. The challenge here is obviously that OpenShift doesn't know that Quay effectively deleted the tag and, eventually, the underlying blobs, so there's a certain risk that you will delete something that is still in use by one of the clusters Quay is serving content to, and that's why we started to develop a couple of features to address this.
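Before moving on, here is a small illustration of the notification-driven integration Andy just described: a webhook receiver in a CI system could parse Quay's push notification and derive the image references to act on. This is a hedged sketch, not an official client; the payload field names (`docker_url`, `updated_tags`) follow my reading of Quay's repository push notification and should be verified against the version of Quay you run.

```python
import json

def handle_repo_push(payload: str) -> list:
    """Return fully qualified image references from a Quay push notification.

    Assumes the repo_push payload carries 'docker_url' (registry/namespace/repo)
    and 'updated_tags' (list of tag names); verify against your Quay version.
    """
    event = json.loads(payload)
    base = event["docker_url"]                 # e.g. quay.example.com/myorg/myapp
    tags = event.get("updated_tags", [])       # tags included in this push
    return ["%s:%s" % (base, tag) for tag in tags]

# Example payload, as a CI webhook endpoint might receive it
# (hostname, organization, and repository names are illustrative)
sample = json.dumps({
    "docker_url": "quay.example.com/myorg/myapp",
    "updated_tags": ["v1.2", "latest"],
})
print(handle_repo_push(sample))
```

From here, a CI job could run `oc import-image` for each returned reference, or hand the list to a tool like Ansible Tower, which is the kind of wiring described above.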
Yeah, so as of today, if the image or the tag gets deleted while the image is still in use, it might break an existing deployment. We developed a feature, which unfortunately slipped the current release but will hopefully be in one of the future releases, which is image awareness in Quay. With the operators we already have, we developed something that shows you, if you go into Quay and start to manually delete an image or a tag, where this image, or the layers underneath, is still in use, both inside the registry and in the clusters Quay is serving content to. So it's basically the other way around from what the Container Security Operator does: we have the information within Quay about where an image is still in use, and based on this knowledge we can effectively prevent accidentally, or intentionally, deleting an image that is still in use by any of your running pods or referenced by a CR. This is a pretty powerful feature, which is also part of this large umbrella of features we're developing to integrate Quay more deeply into the Kubernetes platform and to provide superior value coming out of the Quay features plus the integration into the corresponding platform. And this brings me to the last slide of this presentation, which I probably need to update again based on a couple of brainstorming sessions we had just this week; it's a slide I took from the Quay roadmap deck. The deeper integration into the Kubernetes and OpenShift platforms has been, and will be, the top priority for us on the Quay side.
And as you can see on the slide, we already delivered a bunch of great features, and those have been the features we talked about in this presentation; there was a lot of progress made in the past few months. But we still have plenty of things we want to do in the midterm and in our long-term planning, such as integrating more deeply into all the extended capabilities that are part of OpenShift: the full platform monitoring stack, the alerting stack, logging, and dashboards. We have a community contribution coming up on the OAuth integration for OpenShift. I already mentioned the image awareness feature. With Quay 3.3 we introduced, clearly marked as experimental, the OCI artifact support, which allows us to store Helm charts, and obviously Helm charts are a thing on the OpenShift side as well, so there's plenty of room for improvement there to integrate Helm-based workflows deeply into OpenShift. We already touched on the pipeline integration, and we are working with the builds and pipelines teams on OpenShift on what our deeper integration into the pipeline and build automation in OpenShift could look like. And then we have a couple of other things we are working on with both the internal and external community, such as the Notary v2 effort to bring image signing into both Quay and OpenShift. We have a very powerful epic designed, explained in further detail in the roadmap deck, which is quota management and enforcement; one of the blockers for quota management and enforcement and automated pruning has been the image awareness I mentioned. And I also briefly touched on the enhanced support for disconnected environments. So there's plenty of stuff coming up in future releases, but we believe that already today we provide a great integration of those two products with each other, and hopefully this will satisfy the majority of requirements of both our Quay and OpenShift customers. Andy, any final words you want to say before we close the session?
I just want to say: everyone who has the opportunity to work with Quay, especially on OpenShift, go ahead and try it out. I know there are a number of courses the GPTE team has out there on how to use OpenShift and Quay, so go ahead and leverage some of the courses that are available for you to learn more about the Quay ecosystem. And have fun; it's a great tool. I love it, I work on it on a daily basis, I work on it with my customers, and they love it. Go ahead, learn about it, learn about the features; I know there's going to be a lot more tight integration between Quay and OpenShift moving forward, and it's going to be a great ride. So once again, thanks a lot for attending today's session.

That's a great point, and I forgot to mention: for our external open source community and customer community, there is of course a free evaluation available for Red Hat Quay on all the Quay product pages, in the Red Hat customer portal at redhat.com, and at openshift.com. And of course we also have a very strong open source community at projectquay.io, and at the end of each sprint we ship the final builds, as the sprint results, to the open source community; those are available as well. So there are plenty of ways to play around with Quay and all the features we mentioned, including the upcoming features that are under development. So don't hesitate to join our internal and external communities: ask questions, contribute, ask for features, provide us feedback. We really appreciate your input and feedback. Many thanks for watching, and enjoy your day.