Welcome to this overview presentation on Red Hat Quay, Red Hat's industry-leading enterprise container registry. As Kubernetes and container platforms become the new standard for cloud-native application development, container image registries have become a critical part of the modern software deployment pipeline. A container image registry is where images are stored and where orchestrators pull images from; to deploy containers, container orchestrators require a container image registry.

While Red Hat Quay certainly is a container registry, it goes far beyond that. Quay is an industry-leading, trusted, open-source registry platform, developed in close collaboration with a broad community of open-source users, customers, and ecosystem partners. Red Hat Quay runs on any infrastructure, both on-prem and public cloud, and works with any OCI-compliant tool, but it works best with Red Hat OpenShift. One of its key strengths is its nearly unlimited scalability: you can run Quay on your laptop or in your data center to serve a handful or a few hundred images, but you can also use it to serve billions of images to thousands of clusters and thousands of users, as we do with Quay.io every single day.

Quay is built for real enterprise use cases, where content governance and security are two major focus areas. It includes built-in vulnerability scanning, a fine-grained access control and multi-tenancy model, and several other capabilities to federate content across your clusters, data centers, and even regions. You can run it on your own in your own data center or on public cloud, or you can use our hosted SaaS offering, Quay.io, and we will take care of all operational aspects for you.

Red Hat Quay is built to support true DevSecOps environments. It helps you control how content flows into and is then used inside your environment. Quay can serve as a centralized content ingress point for explicitly whitelisted and trusted content.
This allows you to explicitly define which content from Red Hat, other software vendors, your suppliers, or the open-source community can be used within your environment. The fine-grained access control model allows you to configure both Quay and OpenShift in a way that reflects your organizational setup and workflows. The more you shift your security approach left, the more important it is to give developers access to the information they need to address security aspects in the very early stages of the software development lifecycle. With its very efficient and fast vulnerability scanning, Quay helps detect vulnerable images in the early stages of the CI/CD pipeline. Through its integration into the Kubernetes platform and visualization inside the OpenShift console, it helps you detect and address security issues with running images easily. Whether the Quay UI is used or its API, the information stored in Quay can be used by all interested parties in a DevSecOps environment throughout the entire lifecycle.

Quay is built to run in geographically dispersed setups across multiple data centers and regions. Once the content you plan to use as the foundation for your business applications has been explicitly whitelisted and mirrored into the environment, both geo-replication and repository mirroring can be used to distribute that content even further to all your data centers and clusters, running on-prem or on public cloud, in various regions across the globe. As used by many customers with globally dispersed deployments, the content can be exposed through a single, globally shared registry with Quay's geo-replication feature to speed up access to the binary blobs in those regions. Images that are pushed to one Quay instance are replicated to the other instances, so that image pulls are served from storage in the nearby local data center.
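To make the vulnerability data concrete, here is a minimal sketch, in Python, of how a CI stage might query Quay's security scan endpoint for a manifest and tally the findings by severity. The host, repository, and digest are placeholders, and the response structure the helper walks follows the Clair-style report Quay returns; treat the exact field names as assumptions to verify against your Quay version.

```python
# Sketch only: the host, repository, and digest are hypothetical placeholders.

def security_scan_url(base_url: str, repository: str, digest: str) -> str:
    """Build the URL for a manifest's security scan report in Quay's REST API."""
    return (f"{base_url}/api/v1/repository/{repository}"
            f"/manifest/{digest}/security?vulnerabilities=true")

def count_by_severity(report: dict) -> dict:
    """Tally vulnerabilities per severity from a scan report payload
    (assumed Clair-style structure: data -> Layer -> Features -> Vulnerabilities)."""
    counts: dict = {}
    for feature in report.get("data", {}).get("Layer", {}).get("Features", []):
        for vuln in feature.get("Vulnerabilities", []):
            sev = vuln.get("Severity", "Unknown")
            counts[sev] = counts.get(sev, 0) + 1
    return counts
```

In practice you would fetch the URL with any HTTP client, passing an OAuth bearer token for a user or robot account with read access to the repository, and fail the pipeline stage if, say, any High or Critical findings are present.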
Alternatively, or side by side with geo-replication, the repository mirroring capability can be used to mirror all or a specific subset of the content between multiple Quay deployments. This also allows you to have team- or region-specific, registry-wide configurations, such as country-specific setups, or registries owned by dedicated teams within the organization. By having this additional layer of manageability when getting content into and distributing content through Quay, customers gain an additional level of freedom and independence for distributed registries. The fine-grained access control allows you to define which content can be used by whom and where, aligned with your specific governance requirements.

The container registry is an important part of your software development toolchain, used by many different tools throughout the entire lifecycle. A key requirement for an enterprise registry therefore is that it allows you to integrate it into your existing environment and that it helps you automate your workflows as much as possible. During your build stage, you can leverage Quay's build automation, use your own build system, or use OpenShift Builds and Pipelines. After the images have been built and stored in the registry, several capabilities of Quay help you manage, secure, and distribute them in a very efficient manner. When the software finally has been deployed to your clusters and runs in your production environment, the deep integration into the Kubernetes and OpenShift platform helps you leverage Quay's enterprise features directly from within your orchestration platform. Automation and integration are critical success factors, and this includes both built-in workflows and external automation using the APIs. Robot accounts, webhooks, and the API allow you to easily integrate Quay into all the other tools you are using today or plan to use in the future.
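As one example of that kind of integration, the sketch below shows how a small Python service might interpret a Quay repository-push notification delivered via webhook. The payload field names used here (`docker_url`, `updated_tags`) follow Quay's push-notification format, but treat them, and the sample registry host, as assumptions to check against the actual payloads your deployment sends.

```python
# Sketch only: field names assume Quay's repository-push webhook payload,
# and the registry host below is a hypothetical placeholder.

def pushed_image_refs(event: dict) -> list:
    """Return full, pullable image references for the tags in a push event."""
    base_ref = event.get("docker_url", "")
    return [f"{base_ref}:{tag}" for tag in event.get("updated_tags", [])]

# Example payload shape a webhook handler might receive:
example_event = {
    "repository": "myorg/myapp",
    "docker_url": "quay.example.com/myorg/myapp",
    "updated_tags": ["latest", "v1.2"],
}
```

A downstream tool could use these references to trigger a redeployment or kick off an additional scan whenever new tags land in the repository.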
It's very easy to deploy and get started with Quay's default settings on a developer laptop, but Quay runs even better inside your data center, using the Quay Operator to manage the deployment and registry lifecycle. Each Quay deployment is also highly scalable, since Quay is deployed as a container and can scale horizontally as needed. Before the final product version is shipped, it's battle-tested at scale on Quay.io, one of the largest registries out there. Quay.io serves hundreds of millions of images at a rate of tens of millions of requests to thousands of customers daily, and it uses the same code base as Quay.

Quay is designed for and used by many large customers and organizations. The multi-tenancy model allows you to clearly separate concerns and responsibilities, and it supports matrix organizations as well. Responsibilities can be organized using Quay's organizations, teams, and users, and fine-grained permissions can be mapped to your organizational structure to grant access to specific repositories. As mentioned earlier, customers with globally dispersed deployments can expose content through a single, globally shared registry using geo-replication, or mirror all or a subset of the content between multiple deployments using repository mirroring. Lifecycle environments can be separated via organizations or distinct repositories, and lifecycle environment promotion can be automated via the Quay API or directly from within your CI/CD pipeline. Red Hat Quay also allows you to integrate your existing identity infrastructure and supports multiple identity providers, including LDAP, Active Directory, and OIDC. All relevant actions are logged and visualized within the Quay user interface as well. Quay is built on a cloud-native architecture and is deeply integrated with OpenShift and Kubernetes.
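A pipeline step for such lifecycle promotion could, for instance, retag an image from a development repository into a production repository with `skopeo copy`, authenticating with a Quay robot account. The sketch below only assembles the command line; the registry host, repository names, and robot account are hypothetical placeholders.

```python
# Sketch only: registry, repositories, and the robot account are placeholders.

def promote_command(src_ref: str, dst_ref: str,
                    robot_user: str, robot_token: str) -> list:
    """Build a `skopeo copy` invocation that promotes an image between
    repositories, authenticating with a robot account on both sides."""
    creds = f"{robot_user}:{robot_token}"
    return [
        "skopeo", "copy",
        "--src-creds", creds,
        "--dest-creds", creds,
        f"docker://{src_ref}",
        f"docker://{dst_ref}",
    ]

# e.g. promoting a tested dev image into the production repository:
cmd = promote_command(
    "quay.example.com/dev/myapp:candidate",
    "quay.example.com/prod/myapp:1.0",
    "myorg+promoter",          # robot accounts use the org+name form
    "TOKEN",                   # placeholder; inject from your CI secrets
)
```

Running the resulting command (for example via `subprocess.run`) from a pipeline stage keeps the promotion auditable, since the push shows up in Quay's usage logs under the robot account.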
Quay runs on any infrastructure and works with any tools that follow the Open Container Initiative (OCI) standards. However, running Quay on OpenShift brings additional benefits: scalability, since Quay can leverage the cluster's compute capacity to meet expected demand; simplified networking, using diverse ingress options with well-established patterns for any application deployed to the platform; zero to hero, since the simplified deployment of Quay and its associated components means that you can start using the product immediately; and expanded options, additional solutions that are specifically designed to take advantage of an OpenShift deployment. Looking at it the other way around, running OpenShift with Quay brings additional benefits such as multi-cluster and multi-region content management, enhanced access control including support for dispersed organizations, automation of builds and behaviors, and integrated container security features.

Operators, which encode the operational knowledge of the lifecycle and management of a Kubernetes-native application, play an important role not only for the Kubernetes ecosystem, but also for Red Hat and Quay in particular. Currently there are three different Kubernetes operators that drive the integration into the Kubernetes platform. The Quay Operator automates deployment and day-2 management of Quay itself. The Container Security Operator brings Quay's vulnerability scanning metadata to Kubernetes and OpenShift; Kubernetes cluster admins and developers can monitor known container image vulnerabilities at the cluster, project, or pod level directly from within the powerful OpenShift console. The Quay Bridge Operator is designed to streamline the user experience for OpenShift customers who use Quay as their container registry of choice.

Even before Quay was entirely open sourced in 2019, we already developed the product in close collaboration with our customers and ecosystem partners.
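As a small illustration of the operator-driven deployment, a Quay registry on OpenShift is declared through a `QuayRegistry` custom resource along the lines of the following sketch. The registry name and namespace are placeholders, and which components you mark as managed depends on your environment.

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry      # placeholder name
  namespace: quay-enterprise  # placeholder namespace
spec:
  components:
    # Let the operator manage Clair so image scanning works out of the box
    - kind: clair
      managed: true
    # Example of delegating a component to your own infrastructure instead
    - kind: objectstorage
      managed: false
```

Applying a resource like this lets the Quay Operator reconcile the deployment and handle upgrades, while unmanaged components (here, object storage) are provided and configured by you.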
This includes sharing early designs and prototypes with customers and partners to get feedback as early as possible. After we have verified in surveys or workshops that the feature design will meet the customers' or partners' needs, we start developing the code, the documentation, and of course the test cases. After a new feature has passed our QE tests, we make it available to selected namespaces on Quay.io to verify that it works at scale and to get additional early feedback from those high-touch customers. Once it has stabilized, we build the final product deliverables and make them available to our end customers. In addition to that, we sometimes build features which are required for massive-scale deployments of Quay, as we do with Quay.io, one of the biggest registries out there. Those enhancements are then made available as product features as well, given the large number of on-prem customers running Quay at huge scale too. This includes features such as the Quay usage logs in Elasticsearch, but also operational aspects we added to the product based on the experience of our SRE team, which operates Quay.io. Of course, we not only build and ship new versions when new features come out, but also ship regular updates for vulnerabilities and bugs which impact our customers.

The entire development model of Quay is 100% upstream-first. We develop against the Git branch, which is available to the community as well, and we ship upstream builds at the end of each sprint. It's entirely open and transparent, given the usage of one single GitHub project for both the upstream and our commercial product version. Project Quay is the upstream project representing the code that powers Red Hat Quay and Quay.io. More than 150 contributors are helping us build better software which is used by thousands of customers of all sizes around the globe.
Red Hat Quay is trusted by many organizations of all sizes and backed by Red Hat's decades of expertise supporting the needs of enterprise clients. Working closely with partners allows customers who use both Quay and our ecosystem partners' offerings to leverage the best of both worlds.

Red Hat Quay is available in two different flavors. You can use Red Hat Quay as a private registry running in your own data center or on public cloud. This offers more direct control and freedom, giving you direct access to change or fix components as you like. The registry also sits behind your firewall, so its security is easier to audit and monitor. By going with an on-prem registry, you also save on bandwidth and storage, and deployments stay local. But there is a trade-off for that freedom and control, which is having to do the maintenance and scaling yourself. In contrast to other products developed for end users, Red Hat also runs and operates its own product as a software-as-a-service offering. Our hosted version, Quay.io, as I said earlier, serves hundreds of millions of images at a rate of tens of millions of requests to thousands of customers every single day. Using Quay.io means that Red Hat takes care of operating Quay for you. Since the same code base is used for both Red Hat Quay and Quay.io, the feature set is nearly exactly the same for both. The same applies to support, provided by Red Hat for both types of customers. In addition to the monthly paid plans on Quay.io, which grant you access to private repositories, all public repositories are free, especially to empower all the open-source communities who use Quay.io to distribute their open-source software. On all Red Hat Quay product pages, you can start an evaluation period and get access to the product and Red Hat's award-winning customer portal, including its knowledge base and documentation.
For those who don't want to locally deploy and run Quay to evaluate its features, you can also sign up for free on Quay.io. It takes less than five minutes, and you can start playing around with all those great capabilities I mentioned here. Thanks for watching, and stay safe.