Good morning everyone. We have the meeting minutes, so if you're in the meeting, can you actually add yourselves to the attendee list? I'm going to paste the meeting notes in the chat. We'll give it another couple of minutes to see if anybody else joins, and then we can start.

Okay, so I'm going to change the format a little bit today. In the interest of time, I don't want everybody to just say hi. If you have any stand-up items, anything you want to bring up outside the agenda, speak now.

All right, cool. So I have one item for the SIG Runtime roadmap. I reached out to a couple of folks, including Samuel Karp from AWS, about getting more participation in the SIG around Firecracker and maybe Bottlerocket, a couple of projects they're working on, so hopefully we'll get something in the future as far as participation. We also have another project, from IBM. I've reached out to them; it's a research project that looks at a kernel, not quite a unikernel, that allows you to run Linux workloads. So hopefully we'll get a presentation from them, or some participation. Those are the two items I have, so I think we'll just move on to the next agenda item, which is the Quay presentation. Joey and Bill are on the call, so feel free to take it away.

Hello, hi, it's Bill Dettelback. Thanks. Joey and I are going to co-present here. Let me see if I can share my screen. Everybody can see that? Yep. Okay, cool. How much time do we have, Ricardo? The whole meeting is one hour, but we have the other item, so maybe 30 to 40 minutes. Oh, okay, that's great; that's plenty. We had planned for fairly short, even only 15 to 20, so we're happy to take more time if there are questions and more details. We didn't come prepared to PowerPoint you to death, don't worry. So let me hand it over to Joey; he's going to take most of the beginning of this presentation. Without further ado, I'll just hand it over to him.

Thanks. Hi, I'm Joey Schorr. I'm a co-founder of Quay and now the lead engineer on the Project Quay project and product at Red Hat. I'm just going to give a brief overview of where Project Quay is today and where it came from. As Bill mentioned, feel free to ask questions; we didn't plan to PowerPoint you all day, so we're here to answer whatever questions you have.

Project Quay came out of quay.io. We were the first private container registry on the internet; we actually started Quay before Docker Hub was a working product, right after the Docker public index launched. We launched at a Docker New York City meetup in October 2013. At the time it was by my startup, DevTable LLC. That company was then acquired by CoreOS in August of 2014, right after we launched Quay Enterprise, which was our on-premise version: same codebase, just a different version. Red Hat then bought CoreOS in January of 2018, and we open sourced Quay as Project Quay after that. Next slide, please, Bill.

So as mentioned, Quay was the first private container registry, and as such it has a somewhat unique history.
It was independently developed, so it wasn't based on Docker Distribution; it is, even to this day, an independently developed image registry with no external vendor dependencies, but fully open source. We're essentially, for lack of a better term, a clean-room implementation of the registry protocol. We've been implementing it on our own since that very first version, while most of the community has been making use of the Go-based Docker Distribution to form the core of other registries; we've built our own.

Also as mentioned, we use the same codebase for both our on-prem and cloud-hosted versions. It is literally the same container image; we just configure it differently: different secrets, different storage configuration, some features flagged on, some flagged off. And all of our releases go to quay.io first. This means we are testing at a scale that essentially no customer outside of quay.io itself is running before we actually push it to our on-prem customers, and that means we discover problems fairly quickly, because if it works for a million repositories and 100,000 concurrent users, then we're pretty confident it'll work for 100,000 repositories and 1,000 concurrent users.

We're also the only registry product or project that has full push and pull support with Docker clients all the way back to version 0.7. We support the initial Docker v1 protocol, Docker v2 schema 1, and Docker v2 schema 2, and as of two weeks ago, with an experimental flag turned on, full OCI as well. All of this is bi-directional and concurrent: using a version of Docker from 2013, you can push an image and then pull it with a modern OCI-based client, or vice versa, with a few subtle exceptions that don't work, such as obviously not being able to pull a non-amd64 Linux image via Docker v1. But short of that, if you're pushing a standard container image of the kind that's been in use for the last seven years, it'll work with any version of Docker and any OCI-compliant tooling.

This is part of our commitment, both as a product for enterprise customers and as a project. We have a very strong belief that customers should not be forced into hard migrations of their toolchains unless there's no other way around it. So every time our data model changes or the API changes or evolves, we make strong efforts to ensure backwards and forwards compatibility whenever possible.

To that end, we also have the OCI conformance test reference implementation. It's currently in a PR, but it is being added to our CI/CD pipeline, probably later today if not tomorrow, depending on when we get the final review done. This will mean that every code change to Quay will be required to pass OCI conformance per the OCI conformance test suite, which ensures that we aren't breaking against the proposed spec; again, part of our commitment to supporting all of these different toolchains.

In addition, we have early access for OCI media types, particularly as part of the OCI artifacts standard, which I'll reference in a moment. To that end, we've registered, behind an experimental feature flag, support for Helm v3. So if you have the OCI feature flag turned on and the Helm v3 flag turned on, you can push and pull Helm v3 charts into repositories, similarly to how you can push images.
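For reference, here is a minimal sketch of what enabling those experimental flags might look like in Quay's config.yaml, rendered with Python; the flag names FEATURE_GENERAL_OCI_SUPPORT and FEATURE_HELM_OCI_SUPPORT are an assumption based on Quay 3.x-era documentation, not something quoted from the talk.

```python
import yaml  # requires PyYAML

# Hypothetical feature-flag names; verify against your Quay version's docs.
config = {
    "FEATURE_GENERAL_OCI_SUPPORT": True,  # experimental OCI push/pull support
    "FEATURE_HELM_OCI_SUPPORT": True,     # Helm v3 charts as OCI artifacts
}

# Emit the YAML snippet that would be merged into Quay's config.yaml.
print(yaml.safe_dump(config, default_flow_style=False))
```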
And again, this is part of our commitment towards growing standards. Finally, we are actually helping drive the OCI standards for artifacts. I myself am on the working group leadership for the OCI artifacts standard, whose initial document got committed either earlier this week or late last week; I forget exactly when we LGTM'd it, but that's immaterial. We are actively involved in helping evolve the idea of what a registry is, from a container registry to a more generalized artifact registry, while continuing to ensure backwards compatibility and adherence to standards at every level, which Quay is uniquely suited to because we do extra verification checks on everything pushed into our system; that's another strength we have. Next slide.

So, Project Quay at a glance. This is a high-level overview, and it obviously does not cover the full breadth or depth of the project. We have an OCI-compatible registry, and as was mentioned, we are the only registry that is not only OCI compatible but fully backwards and forwards compatible with essentially every container image API and distribution format that has been released.

We have Clair image scanning. Clair is another project that is part of the Project Quay proposal, and it is our open source security scanner. We actually have two versions of Clair currently: the legacy v2, which is currently used in production, and the upcoming v4, which is available today for testing and will be tech preview in the next product release of Quay. You can use it today with Project Quay if you just add some configuration, and we're continuing to evolve and build it as we move toward its first formal release.

We have image builder support, and not just builder support but full integration with build triggers. Via a UI-based wizard you can set up a build trigger on GitHub or GitHub Enterprise, GitLab or GitLab Enterprise, Bitbucket, or even plain Git if you don't want to make use of vendor-specific APIs, and every time a push occurs in that Git repo, a build will be triggered on the Quay side. Those builds are sandboxed in virtual machines run under Kubernetes if you're using quay.io. If you're running on premise or running Project Quay today, you can use the Kubernetes-based driver, or a legacy driver that doesn't have the same security guarantees, so we have flexibility there. We're also, as far as I know, the only build system with full caching backwards through the entire history of the repository: because we are the registry in that scenario and have the repository available, we can do cache lookups and see that a build could benefit from a cache from six months or even a couple of years ago, and pull that tag instead of the most recent one. And we have a bunch of other features built around that integration.

We have Kubernetes operators we're building right now for deployment as well as day-two operations. These operators are also part of our proposed project to CNCF, and in particular I want to call out the first one, what we call the Container Security Operator. This operator is already available today.
You can install it in an OpenShift cluster, and you can also install it in a plain Kubernetes cluster, though in OpenShift the console gives you some additional benefits. Talking to a .well-known endpoint, which could be Quay, as it is today, or a non-Quay registry, it will automatically label pods with their security vulnerabilities. This is very good for actionable intelligence about the security of the pods running in your cluster, without adding the overhead of an in-cluster scanner or requiring that your cluster have network access to anything but the registry, so it solves two very important problems there. We also obviously have the Quay operator itself, for installing and managing Quay, and that continues to evolve into a full-featured day-two operator. Today it focuses on deployments, but we're already adding day-two operations, with the eventual goal, of course, of making installation of Quay as simple as creating a QuayEcosystem CR in your cluster and getting the full end-to-end Quay experience.

We support multiple storage providers, similar to other registries: the standard S3, Azure, GCP, on-prem, and also OpenStack Swift. And we have a built-in feature for geo-replication, built on top of the storage system, which allows registry instances running in disparate geographic locations to copy the registry binary data from location to location in the background, even across disparate storage providers. So you can use Azure in one location and GCP in another, and as long as you've configured geo-replication correctly, the registry will be able to copy the binary blobs from one storage to the other seamlessly without any further configuration, which is very powerful and allows for some very useful cross-cloud collaboration.

We have very fine-grained metrics and audit logging. Our audit logging system logs every action taken in the entire registry at the granularity of repo, namespace, and registry, available at each of those levels, and that includes pushes, pulls, tag operations, anything you can name. This is extremely important for auditability purposes; it is routinely our number one requested auditability feature. And we are launching soon, it's already integrated today but will be launching into the on-prem products, support for not just using the database for audit logging but additional logging providers such as Kinesis or Elasticsearch. This allows for growth of scale: when you're processing not a couple of tens of millions of logs a month but a couple hundred billion operations a month, your logging infrastructure can handle it.

We have enterprise-grade RBAC and auth support. This was kind of the raison d'etre of Quay originally as an on-prem product: as I mentioned, we were the first private registry product available, and so auth and RBAC were key foundational services of our product. We were the first registry that offered robot accounts, and we have very detailed authentication for external applications, including operation at the command line, so you can use OAuth tokens and robot tokens from the command line.
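As an aside, here is a minimal sketch of what using one of those OAuth tokens against Quay's REST API might look like, fetching the repository audit log just described; the instance URL, token, and repository name are placeholders, and the shape of the /logs endpoint is an assumption based on Quay's published API rather than something stated in the talk.

```python
import requests

QUAY = "https://quay.example.com"    # hypothetical Quay instance
TOKEN = "<oauth-application-token>"  # placeholder OAuth or robot-derived token
REPO = "myorg/myrepo"

# Fetch the repository-level audit log (pushes, pulls, tag operations, ...).
resp = requests.get(
    f"{QUAY}/api/v1/repository/{REPO}/logs",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for entry in resp.json().get("logs", []):
    # Each entry records who did what and when.
    print(entry.get("datetime"), entry.get("kind"))
```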
We have integration with various auth providers: OIDC, LDAP, Keystone, and another one we call custom JWT, which means you can write your own auth engine and Quay will just speak to it. On the LDAP side we have team sync, as well as with Keystone. So if you want to back your teams in Quay with LDAP groups, or with Keystone groups or teams, I forget which they're called, but same difference, you can do so, and the system will automatically synchronize them. Again, all of this RBAC and auth support is tied together with our existing audit logging, and finally it's tied together with our OIDC support, so you can actually use OIDC and LDAP, or OIDC and Keystone, and mix and match these options to tailor Quay's auth and ACLs to your particular needs.

Image time machine is a feature that is unique to Quay. When a tag is pushed, rather than overwriting the tag, we actually keep a history of that tag for a configurable period of time; the administrator-configurable default is two weeks. That allows users who have perhaps overwritten a tag incorrectly, or who need an old version for compliance reasons, or a myriad of other reasons, to look backwards in time, roll back their tag if necessary, and at least know how that tag changed. This feature in particular has saved my personal bacon at least a few times, where I found a tag that turned out to be broken and needed to be rolled back, and I was able to do that without having to keep extra copies around.

I saw a few people say they came in late. Do I need to go backwards to address anything they missed, or should I keep going forward? I think we can keep going forward, and if somebody has questions they can ask at the end. Sure. Yeah, we have a recording as well. I just wanted to make sure in case anyone had questions; I didn't want to just keep talking and talking. You can actually ask questions during the presentation if you'd like; if somebody has a question they can jump in. Yeah, please speak up if you want to make it more interactive. No, as I said at the beginning, if you have a question, please ask it while I'm talking; I'll be happy to answer it then. You do not have to wait until the end. I had a question about who's using it; maybe you're covering that later. Yeah, I was going to go into that in a later slide.

I'll go through these last three items pretty quickly. Flexible deployment models: while we encourage our users to deploy Project Quay via Kubernetes, and the work on the Quay operator is obviously toward that goal, it's not required, so you can deploy Quay with a docker run, a database, and storage. If you just want to run a small operation and don't want to use Kube, that option exists.

Notary integration: we do have support in Project Quay today for talking to Notary; you can feature flag it on. But with the work being done on Notary v2, another group that I am a part of, we are actively working toward the next iteration of signatures and scanning, and we will adopt that work as it reaches fruition.
And finally, I mentioned geo-replication, but we also have support for mirroring. In our mind, mirroring and geo-replication are two sides of the same coin: geo-replication is for when you want a universally distributed single logical registry, while mirroring is for when you have disparate and distinct registry instances with one instance copying from another. Our mirroring support today is pretty powerful, but we already have improvements on our roadmap in terms of one Quay talking to another to leverage additional APIs, such as only mirroring images that pass security scans or have particular metadata attached.

So this is the Quay architecture at a glance. I'm not going to go through it too deeply, since I'm sure there will be a lot of questions on it, but at a high level: Project Quay consists of the Quay container itself, which you can see in the middle. The Quay container runs all of the pieces of Quay: our build manager and workers, the registry, the UI. In parallel to that we have the Clair container, which is an independent container, but Quay and Clair talk to one another. You can then run as many mirroring workers as you like if you're using mirroring. Sorry, is there a question? I think it's just noise. Yeah. So you can run as many mirroring workers as you like; if you have a lot of mirroring operations, you can scale those independently. And then the Quay builders themselves run as separate objects; if you're using the Kubernetes-based build system, those jobs actually run under the Kube cluster. These components are then run via the Quay operator, which again is optional; you don't have to use it. Quay then speaks to storage for blobs, the database for metadata, and Redis for caching, and these are all generally configurable. And then other things talk to Quay via a load balancer, including the UI, customers, content ingress, the Red Hat Container Catalog, and things like OperatorHub. Today OperatorHub actually runs on top of Quay via its APIs, and all operators served via OLM are actually coming from a Quay instance, quay.io, today. Okay, next slide please.

So one thing I wanted to mention before we start talking about usage... A quick question on the previous slide, if I may: those Quay, Clair, mirror, etc. containers, are those all replicated, or are there multiples of them? Yes. So both Quay and Clair are stateless; the containers themselves don't store any data, with the exception of local cache. As an example, on quay.io we run approximately 30 Quay containers on an OpenShift cluster. They sit behind a load balancer and are actually auto-scaled based on traffic: as traffic goes up we run that number up, and as traffic goes down we take that number down. All of Quay's state is kept in storage, the database, and Redis, and this ensures that you can scale Quay: as long as your underlying database, Redis, and storage can handle the traffic, you can scale Quay more or less infinitely, which is very important. The other question that I have: does the Quay operator take care of upgrading and managing the different components?
Yeah, so the Quay operator today started as what we called the Quay setup operator; the very first version was focused on setting up Quay but not on upgrades. We are working on the day-two operations as we speak. In terms of updates, there's another operator that we have called the database administrator operator, the DBA operator, which the Quay operator will be using. The goal we are working towards, and I want to caveat that we're not there today, is that you'll be able to change the version of Quay in the QuayEcosystem CR that you have, and the Quay operator will be responsible for calling the DBA operator, along with itself, to do the upgrade in place if possible. If that's not possible, it will take down the cluster, do the upgrade, and put the cluster back up. We hope to get to the point where the cluster never needs to go down and we can always do in-place upgrades, like we do for quay.io, even if that means putting the Quay cluster into read-only mode, which we've had to do once in our history so far. The DBA operator would be responsible for that. So we have the pieces in place today, but they don't yet allow for seamless upgrades. It is still faster today, though: for example, if you're making use of the Quay operator and you make a configuration change, the Quay operator, along with the configuration tool, will redeploy Quay by doing a Kube deployment update, replacing one node at a time with the updated config. So we already have the pieces working, but we're not there yet on the database migration side, because it's a little more complicated.

Another question about the databases: is it a specific database, or just MySQL or Postgres? So you can use either MySQL or Postgres. We generally recommend Postgres because in our experience Postgres is more efficient, and Clair only speaks to Postgres, so if you're going to be running a database for Clair you might as well use the same one for Quay if you want. On the Quay side we support MySQL and Postgres, and the Project Quay test suite also tests MariaDB and Percona, which are of course variants of MySQL. If you're just running Project Quay locally on your laptop, you can also use SQLite, with the caveat that you can only run one instance, because SQLite is obviously a file.

So for production deployments, do you recommend redundant databases here? We generally recommend master-slave. We also have prototypical support today for read replicas, and that has been merged into head; I have a PR outstanding that will add some additional changes to address an issue that hasn't come up yet but may. Our recommendation moving forward will be to deploy the database, Postgres master-slave, and then have one or more read replicas configured as well, especially if you're deploying across multiple geographic regions, where you'll likely want read replicas in those regions just to make read performance better. It's not required; we have customers today with geo-replicated Red Hat Quay deployed globally, all talking to the same database in one region. But having read replicas will certainly make read operations faster and more redundant, which is nice. The other thing I should mention is that the operator today does deploy Postgres for you.
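For reference, a minimal sketch of the database portion of Quay's config.yaml, generated with Python; DB_URI is Quay's documented database configuration key, while the credentials and hostnames are placeholders.

```python
import yaml  # requires PyYAML

# Placeholder credentials and hosts; DB_URI is the documented Quay config key.
db_config = {
    # Postgres is the general recommendation (and the only option for Clair):
    "DB_URI": "postgresql://quay:password@postgres.example.com:5432/quay",
    # MySQL also works, e.g.:
    # "DB_URI": "mysql+pymysql://quay:password@mysql.example.com:3306/quay",
}
print(yaml.safe_dump(db_config))
```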
One of the topics that has come up... sorry, carry on, Ricardo. Yeah, my question is: does the operator deploy it in a master-slave configuration, or does it just give you options? I believe today it just deploys a standard Postgres container. But what we're working towards is allowing people to configure the Quay operator to choose which other operators are used to deploy the database. So our general recommendation will likely be: if you have the Crunchy DB operator, which itself manages Postgres in a master-slave setup and does all of the backup and failover for you, then you could just have the Quay operator create the Crunchy DB CR. That's the great thing about the Kubernetes ecosystem: we don't have to be responsible for how to deploy a database. We're not necessarily database deployment experts, but there are projects and products out there that are Kubernetes compatible and provide these capabilities. So what we're working towards is that in the QuayEcosystem you say, hey, I want to use this Crunchy CR to deploy my database; Crunchy goes and handles all of that and gives us a Postgres endpoint, and we go, great, now we have a database, and Crunchy manages it, as an example.

Yeah, I have a question. I was just going to mention that in the context of a different registry, the topic of high availability came up, because the registry is such a key part of a Kubernetes cluster: when it's down or unavailable, the cluster is essentially down, or to some extent down. So master-slave is all very good, but there's usually some kind of manual failover required in those environments, because the master and the slave don't know which one is down if there's a network partition, for example. I guess it's not really an issue for incubation, and it's the kind of thing people will perhaps ask you to sort out before graduation, but it would be good to have on your roadmap exactly what the high availability story is going forward, because it will be a big question from the community.

Yeah, so our high availability story operates at two separate levels. From our perspective, high availability of read operations is much more important than high availability of writes: if you're unable to push for five minutes, it's not the end of the world, unless you're in a production fire, but if you're unable to pull for five seconds, it can be. So in our opinion, and this is something I've been pushing heavily for, read replicas are key. The way it works today in Quay, as of this moment: if you configure Quay with one or more read replicas, any read-only operation, such as a pull, will go to a read replica first. If the read replica is unavailable, the system will automatically fail over to the master database. So you already have redundancy automatically built in on the Quay side: if you configure it pointing at your normal master-slave Postgres, plus at least one read replica if not more, Quay will automatically check them to make sure it can read from at least one of them. So our belief is that read replicas solve the critical high availability aspect, combined with the fact that Quay itself is a highly available design, running multiple instances.
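A minimal sketch of the replica-first read pattern just described; this is illustrative pseudologic with hypothetical Database objects, not Quay's actual implementation.

```python
import random

class Database:
    """Stand-in for a database connection (hypothetical)."""
    def __init__(self, name):
        self.name = name
    def execute(self, query):
        print(f"running {query!r} on {self.name}")
        return []

def run_read_query(query, replicas, primary):
    # Prefer a read replica for read-only operations such as pulls.
    for replica in random.sample(replicas, len(replicas)):
        try:
            return replica.execute(query)
        except ConnectionError:
            continue  # replica unreachable; try the next one
    # All replicas failed: fall back to the master so reads stay available.
    return primary.execute(query)

run_read_query("SELECT ...", [Database("replica-1")], Database("master"))
```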
So you can have it such that if you have a few Quay containers running, and we generally recommend at least three, and you have at least one read replica backing your database, with Quay configured to talk to that read replica as well as the master-slave, then you have HA on the Quay side and HA on the database side. Storage itself is generally, hopefully, deployed in an HA state: if you're using something like Ceph or Amazon S3 or GCP, those are essentially HA storage systems, and if you're using geo-replication, you have a backup of your storage. While at this moment the storage failover isn't automatic, we do plan to do that as well. The cache is optional. So we're addressing every layer and ensuring that we have redundancy at every layer, and today we already have redundancy in what are, in my opinion, the two most important ones: Quay itself and the database. For storage we hope, as I said, via geo-replication, to have auto-failover added sometime soon; that's on the roadmap, and it would mean that if you have full geo-replication enabled and your primary storage is completely unavailable, Quay can then fail over on that side too, and now you have essentially a primary and a backup at every level. Thanks, very comprehensive answer.

Any other questions on the architecture before we move on? Okay, next slide, Bill. One thing I wanted to talk about briefly, before I hand it over to Bill, who will cover our customer use cases and numbers, is our testing suite. This is something fairly unique to Quay, and I think a major benefit for the community, and that is our registry test suite. Our registry test suite makes use of pytest to create a matrix test. We obviously have a bunch of tests written for various registry use cases, from basic push-pull all the way to "I push a manifest list and I want to be able to pull it via a legacy client." But the key differentiator here is that these tests are matrixed over every version of the Docker protocol as well as OCI. So the inputs are the set of protocol versions, Docker v1, v2 schema 1, v2 schema 2, and OCI, cross-producted with itself. When you run a registry test, say the basic push-pull test, it will spin up a configured Quay, wait for it to come up, and then run the test operation for every single variant of this cross product. You can see here, and I copied this from a test run a couple of days ago, that basic push-pull ran push with OCI, pull with v1 of the Docker protocol; push with OCI, pull with v2 schema 1; push with OCI, pull with v2 schema 2; push with OCI, pull with OCI; push with v1, pull with v1; et cetera, et cetera. Now, this does make our CI a little slower, because running the cross product of what is essentially 50 to 100 tests ends up at, I don't know, two to four thousand tests if I recall, somewhere in there. But it does mean that we have comprehensive coverage for every version of every protocol of every API that we run, and adding tests is pretty straightforward, because you can just write new tests that use these pushers and pullers and mix and match them as you want.
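A minimal sketch of what such a pytest cross-product matrix could look like; the registry fixture and its push and pull helpers are hypothetical, but the parametrization over the four protocol versions mirrors the cross product described above.

```python
import itertools
import pytest

PROTOCOLS = ["v1", "v2_1", "v2_2", "oci"]

# Each test body runs once per (push, pull) protocol pair: 16 combinations.
@pytest.mark.parametrize(
    "push_proto,pull_proto", list(itertools.product(PROTOCOLS, PROTOCOLS))
)
def test_basic_push_pull(registry, push_proto, pull_proto):
    # `registry` is a hypothetical fixture that spins up a configured Quay
    # and exposes protocol-aware push/pull helpers.
    pushed = registry.push("test/repo:latest", protocol=push_proto)
    pulled = registry.pull("test/repo:latest", protocol=pull_proto)
    assert pulled.digest == pushed.digest
```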
And this has caught so many bugs, not only in our own implementation but actually in Docker as well as some of the OCI work, because as we added support we were able to say: oh, if we push with v1 in this certain scenario and then try to pull with OCI, the system doesn't know what to do because there's some incompatibility there, and we were able to submit fixes to various modules as a result. The other aspect I should mention here is that on the far right it says "OCI model." It only says OCI model today because Quay, as of this moment, thankfully, has finished its data model migration. If anyone watched the talk I gave a couple of weeks ago, and if not I'll be happy to send out a link, we actually had to migrate Quay's data model from a pre-OCI model, back when everything was based on images, to an OCI model where everything is based on manifests. We were able to do so partially because of this test suite: we just added another cross product where we ran the pre-OCI model and the OCI model in parallel, and we were able to test all of our registry operations against the pre-OCI model, the one we were currently running, as well as the OCI model, the one we were migrating to. And we were able to migrate all of quay.io, and our on-prem customers were able to use the same migration, in the background, changing our entire data model without any downtime. I suggest watching that talk, because it is pretty fun and I go into the details of how we did that; this test suite enabled it, and it enables some really powerful verification moving forward.

I have a quick question, not about the tests but about release management for the Quay project. What's the best way to map a particular Quay version to a GitHub commit? I look at the release notes on redhat.com; how do I find the corresponding Git release or commit? Yeah, so we are adding, Bill, correct me if I'm wrong, tags for each of the Project Quay releases, for each sprint. Is that correct? That's right, yeah. So there are two different release schedules here, actually three. quay.io gets upgraded routinely: we'll merge into head, test it, deploy to quay.io, see if there are any problems, fix them, and continue. That ensures we have a fast cadence on quay.io and, as I mentioned earlier, that we catch problems really early, because if it works on quay.io, chances are it'll work in Project Quay. Project Quay releases occur with our sprints. Our sprints are three weeks, and right now we're naming them after Star Wars, because we just wanted a cool theme. After each sprint we tag the commit SHA in the Project Quay GitHub repo, and we also have a build trigger, actually a quay.io build trigger, that automatically builds that release with that tag and puts it into quay.io/projectquay/quay. So our release pipeline today is those sprints. And then Red Hat Quay, the Red Hat product version of Project Quay, gets numeric releases, our upcoming release being 3.3.0, and those get tagged with their own tags.
You can actually see that process working today: there is a release branch in GitHub called 3.3-release, if I recall, or something along those lines, and once we do the actual release we will put a 3.3 tag on it. Thank you for the explanation. And with a new version, is there a compatibility matrix for Quay and Clair? Yes. So today, for Red Hat Quay, we release Clair containers along with Quay. When Red Hat Quay 3.3 goes out, a Clair container for 3.3, which I believe will be called clair-jwt because it also includes the auth system, will also be given the 3.3 tag, and that indicates compatibility between the two systems. That being said, for Project Quay we don't generally break compatibility with Clair, and if we do, we call it out in our release notes. As of right now, you can use any version of Clair v2 with Quay, and you should be able to use any version of Clair v4, or more modern ones, with Quay 3.3; but on the Project Quay side we are going to call out when there are compatibility differences and when there's a need to move up or down. Thank you. Good question.

Do you support other image scanner projects? So we don't support other image scanner projects in so much as we do the work: Quay itself does not talk to Clair. It talks to what's known as the Quay security scanner API, of which there are two versions, the v2 for Clair v2 and now the v4 for Clair v4. But it doesn't have to be Clair on the other end. There is a fellow who works for Aqua who actually implemented a proxy that speaks the Clair v2 API; Quay talks to that proxy, the proxy talks to the Aqua security scanner, and Quay is none the wiser. That was a deliberate design decision on our part; we don't want to lock ourselves to Clair. Our hope is that down the road we can drive a community effort towards a standardized security scanner API, and our hope is that with some of the work we're doing on Clair v4, which has a more modern version of the Clair v2 API, one based on manifests as opposed to images, we can start proposing some designs around how a security scanner talks to a registry and vice versa, because it's a bi-directional relationship. If that happens, and I'm very much in favor of doing that, we would then say: as long as you meet the registry security scanner API spec, you can just plug your scanner into Quay, or any other registry, and Quay will be none the wiser, which is the way it should be. The Clair API, both in v2 and v4, is fairly simple on purpose, for that very reason, so that if you do want to write a translator for other security scanners, you can.

We also have other security scanners that have integrated with Quay not via our security scanner APIs but by making use of our Quay API. What they do is call the Quay API to determine what has changed in a repository, scan the image, and then annotate the results in Quay by adding a label. There's at least one security scanner provider, whose name is escaping me at the moment, that has already created this kind of integration via our OAuth integration: you just OAuth over to their scanner to give them access, they scan all of your repos, and then they label them with a link to the results. And we actually added a change going into Quay 3.3 where, if you put a URL in a label, it's clickable, for that very reason.
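A minimal sketch of that label-based integration pattern using Quay's manifest label API; the endpoint shape is an assumption based on Quay's published API, and the instance URL, token, digest, and scan-result URL are placeholders.

```python
import requests

QUAY = "https://quay.example.com"    # hypothetical Quay instance
TOKEN = "<oauth-token>"              # placeholder OAuth token
REPO = "myorg/myrepo"
DIGEST = "sha256:<manifest-digest>"  # placeholder manifest digest

# Attach a label whose value links to externally hosted scan results.
resp = requests.post(
    f"{QUAY}/api/v1/repository/{REPO}/manifest/{DIGEST}/labels",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "key": "scan-results",
        # As of Quay 3.3, a URL in a label value renders as a clickable link.
        "value": "https://scanner.example.com/results/12345",
        "media_type": "text/plain",
    },
)
resp.raise_for_status()
```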
Got it. Anything else on this? Okay, next slide. I think you're up, Bill, but I'm not 100% sure. Yep, why don't I pick it up from here; I know we're also running a little short on time, so let me go through some of the customer material. Someone asked earlier about who's using Quay. This is just a snapshot of some of the names; I'll go into a little more detail on the Ford reference in a second. Quay has obviously been used commercially for a long time. As Joey said, we only recently open sourced it, back in November of last year; Clair was open sourced back in 2015. So as a registry we're fairly new to the open source scene, but we existed long before that.

I also threw down some stats on quay.io. Joey mentioned the scalability, and I think the scale at which quay.io operates is relevant to the discussion around usage. We cater to almost 100,000 users, as well as over 7,000 organizations; an organization and a user are kind of the same thing inside Quay, they're just represented differently in the UI. We've also got close to 150,000 robot accounts accessing the service. So, if we haven't beaten this point to death already: quay.io was built for scale, it runs at scale, and that's something we spend a lot of time on in the engineering team, making sure we don't break that design commitment and that our service runs adequately at scale. Red Hat as a company now depends on quay.io for the vast majority of its container distribution needs.

Let me move on to talk quickly about Ford. We have some reference information that came out just last week at the virtual Summit; there's a PDF linked here you can read up on. I just want to call out their usage of Quay. They're using the Red Hat Quay product, but as Joey said, it's the same bits as Project Quay. They're a long-time customer; they began their involvement with Quay back when Quay was part of CoreOS. They're currently running a fairly old version of Quay, actually, and it's a fairly modest-size deployment, single-digit terabytes of storage; it's not our largest deployment by any stretch. But in terms of the use case, it is a centralized registry handling lots of application needs within Ford, and they also provide a facility for partners to access those images, so there's an external component as well. I won't spend too much time on this; if you have specific questions about Ford we can try to answer them, but I'd encourage you to take a look at that PDF. It is fairly OpenShift-centric, obviously, but Quay is part of that story.

Let me just jump into the community briefly as well. As I mentioned, Quay is fairly new to being open source; we just completed that activity in November of last year. We've seen pretty good uptake already; the numbers are there. We've got 47 contributors, and you can see an extraordinarily large number of commits because of the historical work we did: we basically took the existing Git repo, kept our commit history, and opened that up, so we preserved the historical record.
We've already got quite a few forks, we're starting to get the GitHub stars up, and we're getting increased views and visitors; that's growing pretty much on a weekly basis. Our SIG channel on Google has been getting more traction, and as an engineering team we are, I'd say on a weekly basis, increasing our involvement with the community and vice versa. On the Clair side the numbers are somewhat different: it's been an open source project longer, so there's a larger number of contributors and obviously not as much recent commit activity, but the stats speak for themselves in terms of usage. To summarize who's working on Clair versus Quay from a Red Hat perspective: we have two full-time employees on Clair and four full-time employees on Quay, and that fits into the contributor model there. Let me just pause; are there any questions about that?

Yeah, I had one. Are all of the maintainers of those projects from Red Hat, or is there some company diversity across the projects? On the Quay side it's predominantly Red Hat, obviously, because it was just recently open sourced; we're looking to extend that beyond Red Hat as quickly as possible. On the Clair side, I believe only three of the maintainers are Red Hat employees. Yeah, I should also comment on the Clair side that Amazon is using Clair v2 currently as part of their security scanning system, and they've been actively contributing as a result. We're excited for them to move to contributing to Clair v4 as we shift development resources from the old version to the new one.

Let me just wrap up with the roadmap, to give you a sense of where we're going. Helm v3 is something we've got experimental support for in our next release, which is coming out very soon, like next week. We'll GA that fairly quickly thereafter, hopefully in the next minor release we do for Quay; upstream support will obviously harden over time much faster. We will be getting full certification for OCI compliance; Joey went into detail about the conformance tests, and we feel pretty strongly that we're going to be first out of the gate with that. We're also pushing very hard on artifact support, so in conjunction with compliance we'll look to have early access OCI artifact support as quickly as we can. We're also working on something fairly new, a novel pub/sub model for registry events. This is a proposal we've authored and taken to the community; we're looking for feedback on it, and we'd be interested in community engagement on helping implement it as part of Project Quay. We think it will go a long way towards solving some of the scalability issues around event notification when working with registries at scale, especially the scale of quay.io. Joey mentioned the Notary v2 work; we're staying very close to that effort, and as something becomes available we'll get it into the product. From a feature perspective, and this is coming from a lot of our customers, we've had a lot of requests for enterprise management facilities around large-scale usage, for example quota.
Enforcement and management of quota for images and repos means making sure that organizations don't run out of storage or run out of headroom in constrained environments. We've also got quite a few requirements for working in controlled environments, financial services and public sector institutions, where there have to be air-gapped environments with no direct connections to the internet. Day two occupies quite a bit of our roadmap, around how we want people to be able to run Quay on premise and in cloud environments without touching it. This obviously fits into the model for our operator strategy, but we really see day two and beyond, day two plus, as major functional use cases: backup, resiliency, recovery, any operator-centric and operations-centric use cases. And then lastly, from a development perspective, we see continued deep integration with Kube as really paramount; we exist primarily to serve applications on Kube. That's everything from how Quay gets smarter about understanding what's currently running on a Kube cluster, to prevent accidental issues, say an image deletion or image change that affects a production runtime, to staying very close to CI/CD workflows, making sure that those tools and workflows for development work well with Quay above and beyond the build support we have today. And the last one I'll just briefly touch on is the notion of an image proxy, where Quay can provide an image proxy at the Kube cluster level to give additional resilience in the event that the registry goes down. I know there was a question before about HA and making sure the registry doesn't go away; having proxy support that's intelligent enough to work with highly available Kube and keep things cached at the node level is something we're looking at. So I blew through that pretty fast; are there any questions about the roadmap?

Yeah, I had one; again, seems to be my place in the world. Some of the stuff on your roadmap you could probably either implement yourself or build as plugins to existing systems. Pub/sub, for example: there are many systems out there, and you could just publish to any number of them. Similarly with quota, we have Open Policy Agent in the CNCF, et cetera. Is your general approach to integrate with existing solutions, to build them yourself, or some hybrid of the two? Can I take that, Bill? Yeah, go for it. So on the pub/sub side, I'm the author of the pub/sub proposal, and part of the reason I feel it should be a separate proposal is that the idea is to allow registries to implement it the way they want. I'm not a fan of APIs that are tied to a specific implementation. The purpose of the API proposal is that if you want to back it with an existing pub/sub system, or with RabbitMQ or whatever, you can; I would probably do it a bit differently on quay.io than on on-prem Quay, for example. In other areas we endeavor to reuse community resources wherever possible, but we try to do so in a way that is pluggable, so that we're not requiring our users to adopt a specific piece of technology, at least wherever possible,
and therefore get themselves boxed in. So, as we mentioned earlier, you have options for storage, options for databases, options for log drivers, options for mirroring, and, down the road, for quota and for pub/sub. In all of these cases our plan is to allow our users to determine what the best piece of infrastructure is for their needs, and then hopefully build a sufficiently powerful but also somewhat generic implementation on top that leverages the unique capabilities of each of these systems. It's a very fine balance to hold, I will admit, but we endeavor to wherever possible. So that's the general approach when we add new things: we integrate where we can and where it provides value, and we write our own where we feel there isn't a sufficient community or existing effort that meets the needs of our customers. Thank you, that makes sense. Thanks, Joey.

Any other roadmap questions? I guess I can just ask at this point: are there any questions in general? That's our last slide. In terms of the community, do you have a roadmap for including more contributors from different organizations? Is it specifically a roadmap item? I think you mentioned before that there's some effort towards that. Yeah, it's a good question. We don't call that out as a roadmap item specifically, but there are initiatives we're running to get community engagement around Quay. Diane, who's online as well, can speak to some of those things. A lot of it right now is around making sure that we have not just an outreach program but that we're delivering value to the community. The upstream releases are the first step towards that: we want to make sure people get access to the latest and greatest changes to Quay as quickly as possible from our team. We've also started folding in and accepting PRs from the community. Our first PR was actually quite interesting: a fellow ran a code formatter on our code and helped us get the code looking really nice. So it's sort of implied in what we're doing; I don't think we've called out specific roadmap items to say "get X number of contributors by this date," but it's primarily about making sure that we, as the Project Quay team, are 100% focused on the community. Hope that helps. Yeah.

Any other questions? Otherwise I guess we'll hand it back to you, Ricardo, and I appreciate you giving us the entire meeting at this rate. Yeah, thank you, thank you for the presentation. So I think the next step will be: the SIG is going to create a recommendation document, and that will be publicly available. After that, the TOC will look at the document, and they need to find a TOC due diligence sponsor, because you're going for incubation, right? From there that person will drive the due diligence, and if everything goes well it finally goes to a vote, and then it becomes part of the CNCF. Okay, great, thank you. I think we had one last item, but we didn't get to it; that was CDI, the Container Device Interface, so we've added it to the next meeting and it will be our first item there.
I don't think we have anything else, so thank you very much, and stay healthy, stay home. Thank you. Bye. Have a good day, everyone; stay safe.