Welcome again to another OpenShift Commons briefing. As we do on Mondays, we have Ask Me Anything sessions with different upstream projects or community leaders. This week we have brought together the team from Project Quay. We're going to get a little bit of an introduction from Daniel Messner, and then we'll open it up for Q&A, which Tom McKay is going to lead. We have a number of folks here on the call from the engineering teams for Quay and Clair, which are now part of Project Quay. So please queue up your questions in the chat and we will gather them. If you're on Facebook Live, YouTube Live, or in the Twitch stream, we'll aggregate them and bring them here as well. So ask wherever you're watching and we'll see if we can get them all answered. With that, Daniel, take it away. Give us a little intro to what Project Quay is and where we're at today. Sure thing, Diane. Thank you very much for having us. Welcome to this brief overview of Project Quay, the upstream open source project. If you haven't heard about Quay, here's a little recap of its history, which is pretty interesting. It was actually the first private registry on the internet: it had the ability to offer private container repositories even before the Docker registry and the Docker index of the time did. It was launched as a SaaS service, Quay.io, with an on-prem version called Quay Enterprise, by a company called DevTable, which was eventually acquired by CoreOS, which, as you all know, got incorporated into Red Hat. It has since been open sourced and is also an official Red Hat product. So Quay, as a technology and as a SaaS offering, has quite a history behind it. It remains one of the largest and busiest public registries next to Docker Hub and is also available as an on-prem version for enterprise customers or for upstream users to install in their own environments.
Quay as a project was open sourced late last year, so we actually haven't been that long with the source code on GitHub and all the community collaboration behind that, but it has generated an impressive track record already since then. The mission of the project is quite straightforward: Quay allows you to build, store, secure, and distribute your applications in the form of containers, reliably and at scale. There are two important things here: security, and reliability at scale. These aspects come from the fact that Quay is the open source technology that runs the Quay.io offering. In fact, all the code base is shared between the on-prem product and Quay.io, and all the new features go into Quay.io first. So we basically test at the scale of Quay.io, with tens of thousands of users and hundreds of thousands of API requests each day, before we actually release things into the product. It's coming from the GitHub open source project, and that's where the community now lives. So it's fully open source and available for various footprints including on-premise, public cloud, as well as various cloud service offerings. One interesting aspect of Quay as a technology, given that its history pretty much started at the same time Docker started to become more widely adopted, is that it is one of the few registries that actually allow for a very broad range of Docker client compatibility. We go all the way back to version 0.7, so you can use Docker clients that even precede the v2 protocol implementation to pull and push images with Quay. But we're also looking at the new stuff. We have our eyes and ears on all things OCI. We basically already have an OCI reference implementation with Quay, and we are giving people using Quay early access to features like OCI MIME types as well as the OCI artifacts spec, and maintainers of the Quay project are actually part of that community body.
If you were to sum up the strength of Quay as a project, it's really about providing secure access to container images, which drive containerized application architectures on container orchestration platforms. It does that with performance and scalability in mind. Quay is, again, one of the largest public registries available next to Docker Hub, and it focuses a lot on automation, so it's well suited to integrate into CI/CD systems. I said that Quay itself was open sourced a little bit over half a year ago, but already since then we have seen quite a bit of upstream adoption. So Quay is very active on GitHub, and so is Clair, which is actually the project that implements the security vulnerability scanning. Clair has been open source a little longer than Quay and is also used outside of Quay, but all the maintainers are actually on the Clair project and also on this call today. So if you have any questions for Clair, please do bring those up as well. So what's new in Project Quay? If you haven't been following, let's do a quick recap of the most recent release, which we introduced a couple of weeks ago: Quay 3.3, where we put a major focus on completely overhauling Clair and introducing the result, Clair v4, in tech preview with Quay 3.3. Clair v4 predominantly adds support for programming language packages in vulnerability analysis. Clair has always had the ability to check the operating system package managers for vulnerabilities and use the vulnerability databases to tag images that have known vulnerabilities, including the severity and some information about how to fix them. On top of that, we are starting to add support for popular package managers for programming languages; the first one, actually, in v4 is Python. The other big theme is closer integration with the OpenShift and OKD platforms, so that if you run Quay with these platforms, or even on these platforms, you have a first-class user experience.
That's enabled by various operators, and one of them is the bridge operator, which got introduced as a supported feature with Quay 3.3 and is also making its way into the official repository. This gives you tight integration for workflows across OpenShift and Quay, so that things like namespaces and repositories, or service account tokens and robot tokens, are kept in sync; you automatically have permissions to push and pull to your little corner of the world in Quay from OpenShift, and you can also use image streams and OpenShift builds in conjunction with Quay and get the familiar user experience that you had with the internal registry. We also added a couple of smaller improvements across the board, things like LDAP filtering, which helps users with that kind of authentication backend to be more precise about which users they want to see in Quay. We have some features that actually trickle down from Quay.io, like forwarding larger volumes of logging data to Elasticsearch, and we are also introducing some experimental features. One of them is Helm v3 chart support, which is based on the OCI artifacts spec. So if you enable two experimental flags, one on the Helm side and one on the Quay side, you can actually use Quay to push and pull Helm charts as well. Quick update on what's going to be the focus for the near-, mid-, and long-term future. In general, with Quay now being an upstream community project, we have the same open source, upstream-driven innovation model that all Red Hat projects have. So we are here looking for feedback, like these kinds of AMA sessions with you, in order to get ideas on what we can improve and where we can add integrations and features throughout the product, and then start making those available on Quay.io in a canary fashion and then make them available to paying customers with the productized version of Quay. Going forward, we pursue three main focus areas.
One is that we want to get closer to the platform we are running on and the platform we are most likely to be used with a lot, which is OKD upstream or OCP downstream. There's a lot of potential for very close integration in the developer workflow as well as the administrative workflow: knowing about events regarding container images that happen in Quay, and having those trickle down into OpenShift. We want to make these things much more visible and much more useful to users so they have a very natural workflow between these two projects. In general, focusing on day-two operations of actually running Quay on top of Kubernetes and on top of OpenShift is a big focus for the future too. We are essentially rewriting the Quay operator, which was formerly known as the Quay setup operator, to be much more future-proof for enhancements, and it also gets the ability not only to install Quay but to keep it updated throughout its lifecycle. The third part is enhancements to the registry itself as well as to Clair. On the Clair side we definitely want to add more package managers from programming languages in the future in order to increase the coverage of security scanning of images. We want to finalize MIME type support as well as OCI artifact support, and then enhance some of the more advanced features in Quay like repo mirroring. This is the more detailed roadmap. I'm not going to talk to every point here, but one thing that we are currently working on quite intensely is actually updating the Quay code base to be Python 3 compatible. That's going to take up most of the bandwidth on the Quay team for the upcoming 3.4 release towards the end of the year. At the same time we are also redesigning the Quay operator, with the main goals that it can actually manage Quay through updates and that it can also provide the required databases in a useful fashion to Quay. We are also looking to add builder support for OCP.
That just narrowly missed the last merge window, but it's going to be in 3.4. Clair v4 has two big goals. One is notifications, which are going to be rewritten from scratch as part of Clair v4 being a completely new code base, and the other is having the ability to get vulnerability data from different sources, which is primarily driving the air-gapped deployment case. So if you want to run Clair in a disconnected environment with no network access, you need some sort of mechanism to somehow feed updates to the vulnerability database, and that's what Clair v4 at GA will deliver. I think I want to leave it at that in terms of an outlook and open it up for Q&A. Great. Well, thank you for the update. A couple of other things have popped in. One, I just wanted to be clear about what the difference is between the Quay bridge operator and the Project Quay operator that's in OperatorHub.io. Anyone want to address that quickly? Yeah, sure. So the Quay operator in OperatorHub.io is really the operator that maintains Quay itself. That's the operator that basically gives you a Quay deployment as a service, and we just call it the Quay operator these days, even though we used to call it the Quay setup operator, but it's really the component that is responsible for the lifecycle of Quay on the Kubernetes cluster. The Quay bridge operator was specifically created to integrate a running Quay instance with OpenShift clusters. OpenShift also comes with an integrated registry and has some extensions that are only available with the integrated registry by default, and the bridge operator opens those up to be used with Quay as well. The most prominent features here are support for image streams as well as automated deployments when builds in Quay have produced a new image artifact. So the one operator is used to install Quay; the other is to integrate it with OpenShift.
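As a rough sketch of how the two operators are driven: each one watches its own custom resource. The field names below may differ between operator releases, and the secret name, cluster ID, and hostname are placeholders, so treat this as an illustration of the split rather than a copy-paste configuration.

```yaml
# Quay (setup) operator: this CR asks for a Quay deployment itself.
apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayEcosystem
metadata:
  name: example-quayecosystem
spec:
  quay:
    imagePullSecretName: redhat-pull-secret   # placeholder secret name
---
# Quay bridge operator: this CR points an existing OpenShift cluster at a
# running Quay instance so image streams, builds, etc. sync with it.
apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayIntegration
metadata:
  name: example-quayintegration
spec:
  clusterID: openshift                        # placeholder cluster ID
  credentialsSecretName: quay-integration     # secret holding a Quay token
  quayHostname: https://quay.example.com      # placeholder Quay URL
```

The division of labor is visible in the CRs themselves: the first says "give me a Quay", the second says "wire this cluster to that Quay".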
Okay, and then Mohan asked from the YouTube channel: in Quay 3.3, for LDAP, will it be possible to control the level of nesting of groups? I don't think we have that ability right now. I have seen the RFE for that, but maybe someone from the engineering team can provide more background. This is Tom McKay. There is a JIRA issue where we track our issues, upstream and downstream, for that feature. It is not on our list of things to do in the coming months, but it has been asked for by a number of people, so it has not gone unnoticed. But I wouldn't expect it in, certainly not 3.3, which is in z-stream right now, and probably not the next release, and it's probably not being worked on currently. And Walid is asking: would it be possible to have the mirror act as a pass-through for developers on-prem? That is, a developer asks for an image in a pod manifest, and OpenShift or Kubernetes will pass it to Quay on-prem; if it's not there, it pulls it from the internet, from Quay.io or any other container registry. I'm going to unmute Walid so he can follow up on this too. Quick answer to that, and maybe Daniel can correct me, but I don't think that doing a pull-through cache like that is on our short-term plans. Walid, did I get that right? I just unmuted you if you want to... Yeah, just to supplement that: it's something that we have been thinking about, but it's not scheduled for the next release. But it's something we are tracking and are well aware of. And yeah, it's definitely coming in from all angles. Walid couldn't unmute himself, sorry about that. The gentleman on YouTube asked a follow-up LDAP question about team membership directory synchronization: will it be possible to add multiple LDAP groups? So, Joey, do you remember what the current level of LDAP support is? Well, I remember I can add multiple search terms; I don't know the LDAP terms. Right now, you can only bind a team to a single LDAP group.
Nothing in theory would preclude us from checking multiple groups, but it is not currently a feature. It's a request; we'd have to add it onto the roadmap. Gotcha, yeah. So, if you're the questioner, I'd definitely open a JIRA issue with as much detail as you can, and we'll give it a look. Right. So, Walid had an earlier question at the very beginning of the whole conversation. He queued up one on restricted-network, air-gapped environments and supporting updates to the vulnerability scanner's databases via periodic updates, and allowing other scanners besides Clair. I think that's a bunch of questions mashed into one, but do you want to tease that out there, Tom? I can answer that. The vulnerability updates for air gap are planned for Clair v4. Clair v4 is targeted to go GA in October. As part of that, it is being designed explicitly for the idea that you would be able to load in the updates, as opposed to today, where it's expected that Clair can download those updates itself. As for different security scanners: nothing in Quay is Clair-specific. Any security scanner that speaks the current or upcoming version of the Quay security scanner API, of which today it's Clair v2 for the current one and Clair v4 for the upcoming one, can be used with Quay without any additional configuration. On the Quay side, you just point it at the security scanner. There just has not been any community agreement yet on what a registry security scanner API should look like, so every registry today has a different API. But there has been some work done by various external security scanner providers to integrate with Clair's API. I believe, in fact, one provider has a working prototype of a small proxy server that will actually let Quay talk to their security scanner. I don't recall what it's called at the moment, though, so I don't want to answer incorrectly.
But nothing precludes anyone from using a different security scanner with Quay today, except for API compatibility on the security scanner side, and that can be done via a shim. There's one follow-up question: which other scanners use the Clair API? As I said, today, as far as I know, there was one security scanner that had a shim prototype, but I don't know off the top of my head what it's called. As for what the others do today: nothing precludes them from doing so; it's just that no one has gotten around to doing it. It's also not complicated at all: the Quay security scanner API is very, very simple. And before we go to more questions, Diane, maybe I can share the slide deck, or who's sharing now? Daniel is, but you can just take it away from him. Okay. You'll notice Daniel's slides were red and mine are blue, the Project Quay color. I think you would call me the community champion for Project Quay, and I just wanted to point out real quickly, these are great questions. We do have an email list: it's a Google group, quay-sig. And we are on Freenode IRC. I know there are a lot of chat tools out there, Slack and Discord, but of all of them, IRC seems the one that lives the longest. And then again, as mentioned, we do have the GitHub organization, so the majority of our repos are under that, the quay org. There is one, the Quay operator, which is the Quay setup operator, that's still in a Red Hat Community of Practice repo, but that'll be moving over to quay as well. So feel free to explore there. The repos have varying policies in terms of issues. So for example, the Clair repo: the Clair community communicates most frequently through GitHub issues. We do not have GitHub issues enabled on most of our Quay-specific repos; instead we use issue tracking through JIRA, which I link there as well. We use the same tracker for both upstream Project Quay and downstream Red Hat Quay.
And so if you want to really see the roadmap and what's being worked on actively, it's all there for you to look at. And again, all of our code, for Red Hat Quay as well as Quay.io as well as Project Quay, all lives upstream and is accessible on the various branches, if you want to check it out. The upstream release pattern is that we release every sprint, which is three weeks for us. Which leads me to our next thing: on the master branch of Project Quay, we are converting from Python 2 to Python 3, which is a significant amount of work, if anyone is familiar with Python. But we have the benefit that we have an extensive internal Red Hat QE team, as well as extensive unit tests and integration tests. So the largest portion of the conversion is really checking the updated package versions; where on Python 2 we might have been using an older version of an integration package, we've updated that. And as you can see, there are a lot of integration points: LDAP, all the storage engines, all sorts of flavors of things. There's a lot to be tested and reviewed. We'll put information about our bug bash at the top of the README, along with the rewards for our community's hard work in helping us with this. We hope to see you on IRC and on the email list, fixing and raising issues. It's a significant effort to do this, so any and all help is much appreciated, and we'll socialize this a little bit more as well. I think Bill Dettelback has a t-shirt design up his sleeve as a reward, so hopefully we can showcase that sometime soon and tempt you all to use your Python skills and help us out here, because that's really where the community's at. The other piece of information, if you don't know it already: we've got an effort on to bring Project Quay to the CNCF. We have made a pull request and are working our way through the Cloud Native Computing Foundation's review process for bringing a project into incubation.
That's been in the works for a while, and it's wandering its way through some SIG reviews. I'll put the link to the pull request in the chat; if you're interested in following it or commenting on it, please let us know, or just do so. We'd be happy to have your support and also your feedback on that process as well. So that's where we're at with the CNCF at the moment, and hopefully you'll see us on that beautifully complex CNCF landscape diagram sometime in the not-too-distant future. That'll also help raise visibility for this project and give us lots more options. And so I do want to take a moment and have either Louis or Hank, either one of you is welcome, tell us why there is a Clair v4. As we know, Clair v2 is the version that's out there now, integrated with Red Hat Quay. If you use Quay built from the master branch or one of our recent sprint releases, there is an option to enable v4, and in fact, on the master branch now, Clair v4 is the default. So, Louis or Hank, which of you wants to tell me a little bit about why there is a Clair v4? Yeah, sure. So, Clair v4 is a reimagination of Clair v2's original architecture. There were some issues with the original architecture, most of which revolved around performance, and we've addressed most of them. There were also some data model issues which didn't allow us to really connect the dots between certain package types, like binary and source. All of this is now built into the data model. On the performance side, we really looked at how Clair v2 was pulling down images and how it was analyzing them; it did most of the work in memory. We moved that to actually be buffered on disk, which turns out to be more performant, because some of these layers can get rather large, and you don't want to do all that work in memory. And then we basically rearchitected the actual library, which does most of the work, into a repository called Claircore.
That was both an effort to remodel and also to kind of promote easier integration and easier contribution from the community. So you can actually take a look at that in Claircore. We have had some outside contributions; everything is well documented now, and it should be a little bit easier to just jump in and add things to the code base. That was a little bit more difficult in Clair v2. And there are a couple of other things. Clair v2 didn't really consider content-addressable layers and artifacts. Content addressability is the fact that if I have a hash for an object, it will always reference that object. We made that a first-class citizen in order to really fortify the reduction of work that Clair has to do when scanning the same images and same layers. Content addressability only got brought into the Docker specifications after image IDs, so it's not like Clair v2 ignored it; it just wasn't a thing at that point. So we worked that into the solution. Yeah, there's quite a bit that has changed, mostly from taking a look at Clair v2. When Clair v2 was developed, it was developed at the onset of a lot of these technologies, so it benefited us a lot to take that look back. Can you tell me a bit about the... I know in Clair v4 there's now language support. What does that mean? Yeah, so there's nothing stopping us: the new architecture is basically able to scan anything off the file system itself. So now that we have set up this layout, we were able to implement Python scanning fairly easily, and more language packages will come. We basically just have a core set of components that, when they're implemented, just kind of slot into the flow of the application. So doing that for Python, doing that for pip, it all becomes fairly manageable without too many code changes, which is what we wanted to accomplish. So right now we support Python and pip scanning.
We're working through two rather big features, air gap and notifications, and I hope that once we get through those, we will be able to attack more of the language package scanning. It's actually one of the things I'm most excited about; I think it can add a lot to the solution. So I want it, as soon as possible, whether it comes from contribution or not; I think it should be a major point on our radar. Nice. And as a reminder, again, we build upstream community releases every three weeks, and that includes Clair v4. So if you don't want to build the images yourself, we provide those every three weeks. Diane, any more questions for us? Not at the moment. I'm wondering if we can get some more insights into the roadmap from Joey and from the others and Louis. Again, our focus for the coming summer months is really the Python 3 work in Quay itself, then Clair v4 and all of its nice features, and then really focusing on what we're internally calling the next-generation Quay operator. And maybe if Alec is on: Alec, are you with us today? I don't think he is. Okay, so Alec is our lead engineer on the Quay operator. The current version of the Quay setup operator was written in conjunction with Red Hat Consulting and a customer, who graciously allowed it to be open sourced, and that's what ships now with Red Hat Quay 3.3. There is work going on to rearchitect that to be more operator-like, and that repo, again, exists up on our GitHub. There's some backend tooling we're using called Kustomize, which is very Kubernetes friendly. So there's a lot of work going on there: one, to make sure the transition from the existing operator to the new next-generation operator goes smoothly, but also to ensure that we capture all the user personas that we envision. And this really is focused on the day-two operations as well. So the current operator really installs, but doesn't further manage.
But we're looking to raise the maturity level of the operator to include things like upgrades and day-two operations. So that really is the roadmap for our team. So there was, yeah, there was something on the roadmap that Daniel put up there around extending the operator to support databases, and I'm wondering if you can talk a little bit about what that means. Maybe Daniel can throw the roadmap picture back up and you can talk about some of those things. Sure, I cannot share; Diane, if you want to share? Yeah, I'm going to take over real quick. So I think you're referring to the third and fourth bullet points of the Quay roadmap. So yeah, like Tom mentioned, today the operator actually deploys Quay, and it can also deploy all the immediate dependencies of Quay, which are an in-memory cache based on Redis, as well as a SQL database that is either MySQL or Postgres; we deploy Postgres today with the operator. We want to make that a little bit more, let's say, opinionated, so it becomes on the one hand less customizable, but also more maintainable for us. The more options we give people to change the way Quay is deployed with the operator, the more complex the operator code obviously gets, and complexity is the enemy of all updates. So if we really want to go into a world where the Quay operator essentially gives you Quay as a service running wherever your Kubernetes cluster is running, we need to have very robust update capabilities. And that may actually include updating not only Quay itself, but also the version of Redis that we are using, or the version of Postgres that we are using, in order to enable certain new features. So in order to do that with the operator, and live up to the expectation of the operator actually giving you a robust Quay deployment and maintaining it over time, we are looking at narrowing down the way you can actually deploy the database with Quay to something that is easier for the operator to incorporate.
And that's all in order to help the operator manage the update life cycle, so that when you have Quay 3.4 installed and Quay 3.5 is released, you can actually use the operator to just change a little version spec in the custom resource. The Quay operator will notice that and will actually take over reconciling your release and updating it in the way we do Quay updates. So that whole orchestration is basically taken out of your hands and done automatically, with all safeguards in place, so there's less potential for human error. And that may basically mean that what you can customize in terms of the database getting deployed is a little bit reduced, in order to make that a robust experience. But of course we still have the ability for you to provide a database to us, or use a managed database if you are in a cloud environment, and that will remain available. So that's just part of the bigger plan to make the operator really capable of the day-two operations we are looking for, so that you can really just rely on the operator to not only install but also update your Quay deployment. Cool, thanks. We have another question that came in, asking if you could elaborate more on the repo, organization, and team-level structure. Can it be more flexible, with more levels? He's a new user and he finds it limiting: for example, how should I provide to my end users the images from upstream Quay.io, such as Clair and Quay? Yeah, so right now the hierarchy, the levels that you can have in a name, are limited to that: organization, then repo, and then your tags. So in terms of flexibility, yeah, that's the flexibility you're stuck with. Does that answer your question? I know he says it's not flexible at all, that's what he's saying. It doesn't sound really flexible either, but that's... So the schema, the spec, does allow for multiple levels of hierarchy in names. There aren't a lot of registries.
I can't think of one off the top of my head that supports any number of levels. We do not at this time. I'll comment on that a little further. The reason we don't support it at this time is that it's not supported in the Docker v1 protocol, and as part of our commitment to full backwards and forwards compatibility, we can't support it. If and when we formally deprecate version 1 of the protocol, which will likely happen at some point in the future because its usage has been dropping, then we can investigate adding hierarchical support. But, unless something changes, we will likely not be allowing you to set permissions at different levels of the hierarchy. It will be a naming convention, which is in fact what most custom implementations that do have quote-unquote hierarchy actually do. In terms of how to allow customers to access things, the current promoted solution is that you create a robot account per customer, or, if you don't have a problem with sharing the robot credentials, you give one robot account to all your customers, and you just give permissions to that robot account in the particular set of repositories that the customer will have access to. We ourselves actually used to do this at CoreOS, and it worked really well, and it was really easy to track as well, since we did a robot account per customer. So as customers would sign up for products, our account system would just add that user's robot, via the Quay API, to the particular set of repositories that form that product, and then if their subscription lapsed, same deal: we would remove the robot, and Quay's permission model would take it from there. The other benefit of using a robot account is that the users don't have access to the Quay UI, even to see things; it's purely a pull robot, and it's a credential, so they can't just use it for other things. It's tied specifically to your organization.
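The robot-account-per-customer pattern described here can be automated against Quay's REST API. A minimal sketch follows: the permissions endpoint path matches Quay's published API, but the host, token, org, repo, and robot names are all placeholders, and the sketch only builds the requests rather than sending them.

```python
# Hypothetical sketch of the robot-account-per-customer workflow using
# Quay's REST API (PUT /api/v1/repository/{repo}/permissions/user/{user}).
# Host, token, and all names below are placeholders, not real values.
import json
from urllib import request

QUAY_HOST = "https://quay.example.com"  # placeholder on-prem Quay
TOKEN = "OAUTH_ACCESS_TOKEN"            # placeholder OAuth token

def grant_pull(org: str, repo: str, robot: str) -> request.Request:
    """Build the API request granting a robot read (pull) access to org/repo."""
    # Robot account usernames are namespaced as "<org>+<robot>".
    url = (f"{QUAY_HOST}/api/v1/repository/{org}/{repo}"
           f"/permissions/user/{org}+{robot}")
    body = json.dumps({"role": "read"}).encode()
    req = request.Request(url, data=body, method="PUT")
    req.add_header("Authorization", f"Bearer {TOKEN}")
    req.add_header("Content-Type", "application/json")
    return req

# When a customer subscribes, grant their robot access to every repo that
# forms the product; on lapse, send DELETE to the same paths to revoke.
product_repos = ["frontend", "backend"]
reqs = [grant_pull("acme", r, "customer42") for r in product_repos]
```

In practice you would send each request with `urllib.request.urlopen(req)` (or a library such as requests); the point of the sketch is that subscription events map one-to-one onto permission calls, so the registry's permission model does the enforcement.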
So, Walid, I take it you haven't figured out how to unmute yourself, so you can't follow up on that, but do let us know and we'll get you in. Yeah, he's already tried; there's a bug in the system, a bug in the matrix. So there was one other question, an ask for a little bit more on builders on OCP and what that entails. Yeah, so I'm Bill Dettelback, I'm the engineering manager for Quay. That's one of the things we didn't talk about too much, but I think it's a feature that we've got slated for the next minor release. A lot of folks who use Quay on premise don't realize that there's actually a facility for builds; if you use Quay.io, you're probably very familiar with it. We don't have too much uptake from folks running Quay in-house with the builders. Some of that, I think, is because the setup has been somewhat manual; it wasn't really designed upfront to work directly on Kubernetes. Actually, the builders are the first thing we moved over to Kubernetes when we started to retarget Quay.io over to OpenShift; we've been running the builders on OpenShift for about eight months or so, and so we decided that it was time to productize that and bring it back into the product. So with the next release you'll be able to run the builders pretty easily on top of a Kubernetes orchestrator. We've done some internal changes in conjunction with the Python 3 migration as well: we've taken the opportunity to update the build manager code in the Quay container to Python 3, and while we've got the hood up on the car, we're swapping out some stuff like an old RPC framework that we used before for gRPC, and we're trying to simplify the code as much as possible. We're also bumping up the version of the VM that we use for the actual builds inside the pods to a more supported version; it was on Container Linux, and we're moving that over to Fedora CoreOS, I believe. So anyway, I thought it would be worth it to just call that out as a new feature. I think with the introduction
of builders on OCP and Kubernetes, it should be very simple to stand those up as a new build facility for on-prem customers. And if there are detailed questions, Kenny's actually doing the work, so direct the hard questions to him. Kenny, anything to add? I think we're having some difficulties with the mute and unmute buttons today, so it's a little quieter than it should be on this call. Well then, hopefully by 2022 we'll be changing the vulnerability whitelisting to whatever the new standard terminology is, and have it be allow listing or deny listing or whatever comes up in the nomenclature. Yeah, for sure. One interesting aspect of builders is that once you think about this being something that's available on premise, and you deploy Quay on something like OpenShift or OKD, you get to see how the ecosystem comes together in a larger picture. You may have heard about an upstream project called Tekton, which provides a Kubernetes-native experience for running continuous delivery. In the constructs of Tekton you have abstract concepts of tasks, and groups of tasks building a pipeline, so you could easily see how this ties back into the builder concept of Quay, which allows you to build container image artifacts based on external triggers, either via the API or by integration with source code management systems. In turn, in the long run, Quay could use something like Tekton to actually carry out this task and use another Kubernetes service to describe how that's going to happen. So I think there are some really interesting opportunities there in the future as these services mature, and you may know that our group here at Red Hat is also very invested in the Tekton project upstream. Someone had also asked if there was a deep dive on Clair v4 available from a past Commons briefing, and I don't have one yet, so we'll probably have to get Louis and someone to do that deep dive on
Clair v4 sooner rather than later, if you're willing. If you want something to hold you over, there's a repository, and I'll put the link in the chat too, but we have worked on fairly formal documentation in the ClairCore repository. It's a little bit more of an upstream resource, but it does carry a lot of weight as far as explaining how everything works, so I'll put that into the chat and share it via the screen. So in terms of Clair and community contribution, what are you looking for, Louis, in the short term and long term, for people to give you feedback on or to help you with? Yeah, one of the big things that comes to mind is that we had some great contributions from VMware, who added their own OS support; language support would be great. I think that any contributions, even knowledge-share contributions around security protocols and around how organizations package their applications internally, are all really great stuff that we can benefit from. But as far as code contributions go, I know language packaging is going to be a really big focus coming up, and having some individuals who have worked really closely in, you know, the npm space or the pip space would be really beneficial to us. And do you have a recurring community meeting at all yet for Clair?
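To make the language-packaging idea above concrete, here is a deliberately simplified sketch of what "scanning a container layer for language packages" can mean. This is not Clair's actual indexer code; the function name and the directory layout are illustrative assumptions. It just walks a filesystem tree looking for Python `*.dist-info` directories, the metadata that pip leaves behind for each installed package:

```python
# Illustrative sketch only -- not Clair's real implementation. It shows the
# basic idea of language-package scanning: walk a container layer's extracted
# filesystem and recognize package metadata (here, Python .dist-info dirs).
import os
import tempfile

def find_python_packages(layer_root: str) -> list[tuple[str, str]]:
    """Return sorted (name, version) pairs for *.dist-info dirs under layer_root."""
    found = []
    for dirpath, dirnames, _ in os.walk(layer_root):
        for d in dirnames:
            if d.endswith(".dist-info"):
                # "requests-2.25.1.dist-info" -> ("requests", "2.25.1")
                name, _, version = d[: -len(".dist-info")].rpartition("-")
                found.append((name, version))
    return sorted(found)

# Build a fake "layer" with one installed package to exercise the walker.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(
        root, "usr/lib/python3.9/site-packages/requests-2.25.1.dist-info"))
    print(find_python_packages(root))  # [('requests', '2.25.1')]
```

A real indexer would do much more (hash layers, parse the METADATA file itself, handle npm's `package.json` and other ecosystems), but the contribution opportunity Louis describes is essentially adding this kind of ecosystem-specific detection logic.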
We don't have a community meeting as of right now; we do a lot of collaboration over Slack. But it's something that, as we go forward, is becoming more and more crucial, so we might want to put something together for that. When the transition hopefully takes place over to the CNCF, we'll probably get that done. And then there's the ClairCore link, thank you. I think that we do have that SIG page for Project Quay, and if you want to share that again, that would be great, and we'll add the Clair bits to it as well, to try to figure out how best to onboard new community contributors and folks like that. I think the bug bash which we're setting up is actually a really good way to get started and to start getting yourselves involved in the community, if you're interested. And if there are features that are missing, or if there's a feature way down the product roadmap, definitely make sure that you highlight that in your feedback, and that might move things forward; but also, volunteering to help contribute on any of these things, please do. All right, Hank's having, we're having some unmute issues here. There you go, Hank, go for it. Try that, Hank. All right, we still can't hear Hank; we're having some serious unmute trouble. Another thing that I often get asked about is support for image signing in Quay, especially since Quay.io has that ability built in. And that's a very interesting discussion, because we're actually involved in some of the community work that is underway around making that available in the Quay project as well. I know that Joey's working actively on this upstream, in the community around Notary, and maybe, Joey, you can give a quick update on where this is currently at. Sure. So Quay.io has experimental support for Docker Notary, which was the original signing solution that Docker developed and the community sort of accepted. However, over the last few years there's been a lot of
community discussion about the pitfalls and problems associated with Docker Notary, and to that end the community is currently working on a new version of Notary called Notary v2. It's a working group; there's actually a community meeting in, I think, eight minutes that meets weekly, so anyone who's interested, feel free to go search out the Notary v2 working group and take a look. But the high-level description is basically that, because of this recognition that there needs to be something better, the community is working on something new. So, as a result, we are part of that working group, and as soon as the working group comes up with a proposal and an implementation for that proposal, we will be working to adopt it. But as of this moment it's still very much in the development phase, and so, given that and the lack of universal adoption of the original Docker Notary, we're taking the approach of: let's contribute to the new version as opposed to adopting a flawed older version. So that's the high-level description. It means, unfortunately, that we're going to have to wait a little bit longer to get a really good solution in place, but since the whole community has finally recognized that this is something we should do, I'm pretty confident and excited about the direction everyone's going. So let me just touch on the ways to contact us again. As you've seen, we're new to being open source, and part of that growth pattern is figuring out the best way to collaborate with our community. Right now Clair has a community that, as I said, does a lot of its discussion through the GitHub issues; we're hoping that over time we can sort of consolidate the communities of Quay and Clair. So Clair questions are completely fine, and any operator questions are completely fine, on the Quay SIG. The issue tracker, the Jira that we use, is for both projects, both Quay and Clair, as well as all the operators, so if work gets done, even if there's an issue upstream, it'll often be recreated as a Jira. So
there is a Clair channel on Freenode as well. Both the Clair and Quay channels on IRC are very lightly populated; I would encourage you to use the Quay one, which is why I listed it as the only one, just to sort of get everyone in the same room, so to speak. But if you ask a question on IRC, for sure it'll get attention, and if you send an email to the Quay SIG, whether for Clair or Quay or any of the other projects, it'll get seen as well. There's one question coming in from one of the external streaming channels: is enhanced content coverage included in SCAP compliance scans, like standalone OpenSCAP with Podman? Anyone have an answer to that one? Does that make sense to you, Daniel? I think I get the direction there, and the answer is no, we're pursuing a different standard for getting enhanced content coverage. We are basically looking at our OVAL v2 feeds as well as CVRF data to get more comprehensive coverage, specifically of what vulnerabilities are described for Red Hat RPMs, and that goes beyond the Red Hat server RPM repository. So we're going a different route, using a different standard for data, not SCAP or OpenSCAP. I'm not seeing any more questions out there, and I know you have another call in like four minutes, so we'll wrap up. If you can go to that bug bash slide one more time and make it full screen.
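As a rough illustration of the OVAL-style security data mentioned above, here is a toy sketch of consuming such a feed. The XML fragment is made up and heavily simplified (real Red Hat OVAL v2 documents have a much richer schema, with criteria trees and RPM version tests), but it shows the basic shape: definitions mapping an advisory to affected packages and fixed versions.

```python
# Toy sketch of parsing OVAL-style security data. The snippet and element
# names below are simplified assumptions, not the real Red Hat OVAL schema.
import xml.etree.ElementTree as ET

OVAL_SNIPPET = """\
<oval_definitions>
  <definition id="oval:example:def:1">
    <title>RHSA-2020:9999: example security update</title>
    <affected_package name="example-lib" fixed_version="1.2.3-4.el8"/>
  </definition>
</oval_definitions>
"""

def parse_definitions(xml_text: str) -> list[dict]:
    """Extract id, title, and (package, fixed_version) pairs per definition."""
    root = ET.fromstring(xml_text)
    defs = []
    for d in root.iter("definition"):
        defs.append({
            "id": d.get("id"),
            "title": d.findtext("title"),
            "packages": [
                (p.get("name"), p.get("fixed_version"))
                for p in d.iter("affected_package")
            ],
        })
    return defs

print(parse_definitions(OVAL_SNIPPET)[0]["packages"])  # [('example-lib', '1.2.3-4.el8')]
```

A matcher service can then compare the RPM versions found in an indexed image against these fixed versions to decide whether a vulnerability applies, which is the kind of enhanced Red Hat content coverage Daniel describes.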