April 20th, 2021, and we have another Clair community development meeting. We have some new participants; if you want, we can do a quick introduction. Just let me know — Jan, Lukas, Jakub — if you'd like to give a quick word on who you are, what your interest in Clair is, things like that. I think that would be great. Some of you who took part in the last meeting already met me. As I said, I work on Red Hat's deployment of Clair, where we also have an application that wraps Clair and integrates it tightly with the Red Hat ecosystem of container images. And Lukas and Jakub are my colleagues who are onboarding onto Clair work as well. So, Lukas and Jakub, if you want to say hello. Yeah, so, hi — oh, sorry, if you want to go first, go ahead. Okay. So, hi, I'm Lukas. I've been working at Red Hat for about three years, and so far I've been working on internal projects that deal with metadata and that kind of stuff. Right now I'm getting started on Clair, and I'm happy to meet you guys. Nice to meet you. And my name is Jakub. As Jan mentioned, we're on the same team. I've been at Red Hat for five years, and I'm also looking forward to working with you all. Great, nice to meet you guys. Cool. So we have a couple of agenda items here. I'll just go right down the list and tick them off, starting with a couple of items from me that are mostly informational. Hank and I have worked out a proper Clair release cadence proposal. So far it looks like it's thumbs up across the board as far as the Quay team goes. What it consists of is that we're going to commit to a particular day of every month on which a release of Clair and claircore will take place.
We'll probably stop doing these ad hoc release requests, and whatever gets merged in before the release day is what will be available in those releases. This is all to let other projects treat Clair as more of a dependency. So, something to look out for. I'm going to bring it up today in our engineering meeting just to get a final go or no-go, so keep an eye out for that. We'll probably stop doing requests for releases, and then a release will happen once a month with a three-month support commitment. If you're running any release from within the last three months, you can expect backports; but once you fall out of that three-month window, you'll have to bump your Clair version for any new bug fixes. I would share the document, but I created it with our internal Red Hat Google account, and it's such a pain that we can't share those documents publicly. So I'll just copy and paste it into a public document after this meeting and put it here so you guys can see it. I think that's going to help a lot with predictability — "when is my stuff going to come out?" We've had some turbulence dealing with, you know, "oh, we merged things into claircore but we don't see them in Clair for a while" — now there will be a very defined date when anything merged into claircore and Clair becomes available. Cool. So, enrichment updates: the work is underway. I'm working out the data model changes to support enrichments. If you're not aware of what enrichments are, it's the return of using metadata — most specifically NVD metadata — to write auxiliary data to the vulnerability reports. So, in circumstances where Alpine is missing a severity, the information will be available via an enrichment in the vulnerability report.
If you would like to see more information about that, you can go to the Clair repository, go to Discussions, then Design; the Clair enrichment specification outlines this and all the work that's happening. And if you're very interested in actually following the work: issues.redhat.com, if you go to projects — yeah, I wonder if I can just do "project," will that work? Okay. So, in progress right now, anything with this little enrichment signifier in the ticket description is an enrichment ticket. Right now I'm working on the data model, which is basically prepping the database, and any database-related code, to be able to handle multiple update operation types, since enrichments will be a new type of updater. That's underway. And then April 26 — this is what I've been working on outside of just coding — we're going to do a talk with OpenShift Commons about the Clair indexer service, the internals and how it works. So if you're unfamiliar, or you're getting onboarded — Jakub and Lukas — you probably want to either get a recording of this or log into it live on April 26. I'll add a link to the calendar for that. Cool. So now there's one other thing: we got some correspondence about CentOS. I asked the CentOS development mailing list whether we could predictably use RHEL content to match CentOS, and the answer was "maybe, but probably not" — things might not match up completely. So I'm not exactly sure we should move forward with CentOS support, especially with things moving to Stream and everything being in flux. As of today, I think we're going to take the stance that we're not going to move forward with CentOS support. Or if we do, it will be external to claircore, as in someone can develop the support and we'll find a way to run it in Clair out of tree. Yeah, I think I agree.
An out-of-tree matcher and updater is probably the only real way forward, given that there's no, like, vulnerability database from the project. Yep. Yeah, it becomes a little difficult, and one of the things we want to avoid is — I'd rather not support something than support it badly, have a bunch of tickets come up, and have to constantly explain why it doesn't work well. That's just how I feel about it. All right, cool. So that's all I've got. Yeah, I wanted to talk a little bit about the Go binary scanner. It must be two weeks ago now, something like that, for the hack-and-hustle day, I knocked together a Go binary scanner — basically ripped out what the "go version -m" subcommand does and had it pull out the module and dependency information. So that's in code review right now. There's not, like, a single database, a place to look that stuff up, so it's not going in imminently, because there's no data to match against yet. But the Go project is working on a vulnerability database — a centralized database for Go packages and modules, or I guess just modules. So we should be in a good position to consume that when it comes up. And of course, go look at it if you're interested. The other thing I wanted to touch on is the operator. I've been working on the operator for, I guess, a week or weeks now, and it's getting somewhere; I finally sort of got traction on actually running code, figuring out kubebuilder, things like that. So that work is progressing. There's a design doc in the GitHub discussions. It's a little loose — it's spread across, I think, the initial post and a couple of comments, but everything's up there. The idea is that, since Clair has multiple parts, there will be individual resources for each service — the indexer, matcher, notifier — and then they'll be sort of stitched together with some webhook logic that makes sure everything lines up.
If the resources want to reference each other, the webhook makes sure they're referencing each other correctly, and then there's an overall resource that will create the sub-resources for you, and optionally create a database for you if you need one. Otherwise you just spin up the individual service resources. So yeah, that's coming along, and it's looking good. Very cool. You want to go over the Maven indexer? Yeah, sure. Would you like to? Yeah, I thought of doing a short demo. Shall I share my screen, Louis? Sorry? Shall I share my screen? Yeah, that should be fine. Hope you guys can see my screen. Yep. Sorry, I'm working with a very bad internet connection; hopefully it won't burden you much. Okay. So, actually, I've been working on this Maven indexer for a while. I started it as a downstream implementation, and it was mainly integrated with the CRD implementation; later we thought of upstreaming it. Basically, the Maven scanner is similar to the Python scanner: it extracts the jar file content from the layers. Jar files created through Maven usually have metadata, and that metadata will usually have the package information as well as the version information. That will be part of each jar file created by the Maven tool, and it will usually be in this particular path. If you extract that content, the package will be in this particular form. Basically, in Maven, each and every jar has a unique identifier called the group ID, along with the artifact ID — that's what's used to uniquely identify a jar file — and it will also have a version. I started the implementation from scratch, but later I found there was an implementation from the Aqua Security team, so I just reused that implementation. And thanks to Louis for reviewing this code as well as merging it.
So, currently the change is available in claircore master. Yeah. So, let me give a short demo. Before jumping into the demo, I'd like to add another point: currently we don't have any local updater for Maven. We were able to identify vulnerability data sources for Maven artifacts — I found a couple; one is from GitLab, basically the Gemnasium DB, but the thing is that the licensing model doesn't allow us to access it programmatically. We've been in touch with them and we're trying to get an exception for our product. Other than that, there's another data source from GitHub, but the data doesn't seem as rich compared to the GitLab one. So I'm still looking at data sources. Meanwhile, for those who are aware of the remote matcher: our remote matcher from Red Hat's CodeReady Dependency Analytics already has Maven database support, so the current Maven indexer is integrated with that, and I'll be showcasing a demo with it. The current demo is based on the claircore command-line tool; I haven't really tried to run it with the actual Clair. The image I'm trying to show here is basically a Java-based application used on our platform. I'm just executing this claircore cctool report command to get the vulnerabilities associated with the Java artifacts present in this particular image. So, hopefully you guys can see that the report has found plenty of vulnerabilities in this image. Let me go to the top — it's plenty, actually. I think I remember someone mentioning that the jackson-databind package is, like, the canonical example to test for, because it's the most vulnerable Java package that's ever existed. Yeah, right. I guess it has almost 209 vulnerabilities. If I remember correctly, it's pretty old anyway.
So, yeah, with this integration we were able to analyze the jar files and list the vulnerabilities. That's great. I'm curious — when you were working through this and implementing it in claircore, did you find any areas where our modeling fell down or didn't make sense? Or was it a pretty easy experience to get the support in there? Because we don't get that feedback too often. Actually, I haven't really drilled down into the core level; I was mostly working at the interface level, and no feedback as of now. I'll definitely let you guys know if I find something interesting. Cool. Yeah, I mean, that was part of the goal of remodeling Clair anyway — that hopefully when you're doing things like this, you're not digging around; you're just implementing interfaces and going on your way. So I'm at least happy to hear that. Yeah, another interesting fact is that I haven't changed any existing code to achieve this. So it's pretty well designed — kudos to the team. If you look at the changes, I haven't modified any existing line; I just added new code to complete this integration. Yeah, that's great. Cool. Well, that's great — language support is always great to see. Yeah, I did notice there's a conversation going on about Gemnasium's lag in updating from upstream — whether that six-month period is valid or not. I wanted to respond, but I was trying to respond with some kind of numerical basis. I wanted to say, well, NVD is updated this often, or the security databases are updated this often, so your lag of six months causes this window of time where you're just missing vulnerabilities — but I didn't have enough time to actually pull up those numbers.
I tried to look at the NVD database, but I couldn't get, like, a diff of when the last update was. So I avoided saying anything until I had time to give them a good reason why six months is bad. We can all say "six months bad," but I'd like to do it empirically. Okay, so actually I was going through a white paper related to this particular topic. It has some analysis about when and how vulnerabilities are exposed — the time frame between people getting the CVE and it being exposed publicly. I'm still going through the white paper; I can probably share you a link. Yeah, that would be great, because that's exactly what I was looking for. I meant to reach out to you, especially with your work with Snyk so far, to see whether they could forward you some of that data, just so we can go to them with some numbers. But yeah, it'd be great to get that information, because then we can respond there. Yeah, sure, I'll share it with you. Other than that, a bit of information for Hank: our remote matcher has vulnerability information about Golang packages as well. So for your Golang scanner, you can give our remote matcher a try as well. Nice, that could be pretty cool, just to do a POC. Yeah. Do you have, like, OpenAPI documentation? Because remote matching would actually just hit your API, correct? Right, right. With the existing implementation, just a few lines of tweaking will make it work. Gotcha. I can probably just raise a draft PR so that Hank can give it a try. Yeah, that'd be really cool. Well, thanks for your presentation — that was great. Cool. So we have a little bit of time. What I'd like to do is go over open PRs in Clair and claircore, see if anything's stuck right now, and then if we have any additional time, we'll take a look at bugs.
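As an aside on the Go binary scanner mentioned earlier: Go binaries built with module support embed their module list, which the "go version -m" command prints as a tab-separated table. A toy parser for that text format shows the shape of the data; the sample output and all names below are invented for illustration and are not claircore code:

```go
package main

import (
	"fmt"
	"strings"
)

// GoDep is one module recorded in a Go binary's embedded build info,
// as printed by `go version -m <binary>`.
type GoDep struct {
	Kind, Path, Version string // Kind is "mod" (the main module) or "dep"
}

// parseGoVersionM parses the tab-separated table that `go version -m`
// prints. Each indented line looks like: \t<kind>\t<path>\t<version>[\t<hash>].
func parseGoVersionM(out string) (deps []GoDep) {
	for _, line := range strings.Split(out, "\n") {
		if !strings.HasPrefix(line, "\t") {
			continue // the first line just names the binary and toolchain
		}
		f := strings.Split(strings.TrimPrefix(line, "\t"), "\t")
		if len(f) < 3 {
			continue // "path" lines carry no version
		}
		switch f[0] {
		case "mod", "dep":
			deps = append(deps, GoDep{Kind: f[0], Path: f[1], Version: f[2]})
		}
	}
	return deps
}

// sample is a made-up excerpt of `go version -m` output.
const sample = "/usr/local/bin/app: go1.16.3\n" +
	"\tpath\texample.com/app\n" +
	"\tmod\texample.com/app\t(devel)\n" +
	"\tdep\tgithub.com/pkg/errors\tv0.9.1\th1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=\n"

func main() {
	for _, d := range parseGoVersionM(sample) {
		fmt.Println(d.Kind, d.Path, d.Version)
	}
}
```

The module path plus version per dependency is exactly what a future Go vulnerability database would be keyed on, which is why the scanner can land ahead of the data.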
So let me share my screen once again. You guys are seeing GitHub? Okay, great. So let's see what's open for pull requests. Bound the concurrency — so crozzy has been contributing lately in a good way; he has some good feedback here. He's looking to bound the concurrency for the affected-manifest API calls, I believe — let me see. We reasoned about it a little bit, and I think it's fine. Previously, our strategy for doing a lot of concurrency was to basically chunk the workload: we would spawn a whole bunch of goroutines, you know, like 100 or something, let them all complete the work and come back, and then set off the next chunk. Recently a lot of that code has been moving to using semaphores, which creates more of the pattern where, as soon as one goroutine finishes, another one starts — so you have more of a continuous chain of activity. I guess he must have noticed this, and I have no problem with it. If we want to move away from the chunking pattern — and he's probably right about it being a little better on bursts — it sounds good to me, and it follows the pattern of the other code in there anyway. So I'm into this PR. Hank, if you're interested in anything here — we did chat about a concurrency number for the semaphore, right? So I wanted to go with the number of CPUs, because I knew that if all these goroutines start and they're immediately IO-bound, they're going to get parked, and when they come back I didn't want a lot of contention for scheduling, because then they would just hold on to their connections even longer. But the other option was correlating closer to the connection pool max, which was 30. So we picked a static value in between, basically, to negotiate it: most likely you'll overcommit cores just a bit, which is what you want.
I don't think it has to be one-to-one, but you'll also be under your connection limit, which I think is a balance. No matter what, this is a very hard thing to say definitively without actually measuring — you just need metrics here. So it's mostly anecdotal, but I think at least an educated guess would be somewhere below the max connection limit while slightly overcommitting on cores. That's where we landed. Yeah. All right, cool. So this is open, but I'll do a once-over and probably approve it today. jsonblob: fix copy ops. This is yours — this is pretty simple, right? Yeah, we were talking about something, and you pointed out it doesn't actually honor the interface, and then I spotted a typo, so I just fired that off. And you did actually make it do the sorting, huh — sorted by date? Yep. Okay, I think that works. Let's get that in; it needs a rebase, but we're about done with it. This is your Go binary scanner that we talked about. Yep. Do you want a formal review of it, or do you want it to sit for a bit? I think if you click on it — yeah, I think I've got a question about something or other in there. Do we want to address this real quick, if we have an answer off the top of your head, since both people are here? I mean, the reason for not pulling it in is — it was relatively easy to implement, and I like avoiding dependencies. Yeah, easy enough. There's, in the initial open — yeah, so basically, in our model, I'm not entirely sure how we model this. Like, how do we indicate this is a single binary, and then a dependency of that, or a component of that binary? We don't really. I guess we say the package DB is the binary path. Yeah, exactly.
I would do that for now — basically treat it the same way as an RPM: the packages are actually in the RPM database, and we don't actually find them on the file system. Right. We don't have a great model for that right now. To me, it could honestly just be — let's agree on a string encoding, like "database:package" representing a package inside a database when it's solely housed inside that database. We just haven't conformed on any kind of string encoding to represent that. Right now, I saw that we have, like, "python:" something for pip. That's right, I did do that. And I think Arun followed that too, and did "maven:", if I'm not mistaken, when representing the package database path. So I think it's fine to follow suit with that. We don't really formalize it; we probably should formalize it. Yeah, maybe putting some thought into whether we want to standardize that as a non-hierarchical URI — that might be the easiest way to do it. Okay, I'll just leave a comment there. So, do you want a formal review of this, or do you want to go back with just the comments we made and do another pass? Yeah, I think I've got a couple of changes to make, and then I'll undraft it. Okay, cool. Fedora support: formal review, or do you still want to keep it in draft for now? I think just a once-over — an RFC on whether this is something we want to do. I think one of the big questions and hurdles for this is that it doesn't work with the way everything else got switched over to having, like, explicitly defined distribution names.
I think a six-month cycle, plus new things showing up while they're in beta, means it would be really hard to keep that list up to date; it would require users to install Clair updates at regular intervals just to keep up. That's sort of the open question. The other one I point out there is: how do we encode lagging things — an entire distribution being out of support? That's still an open question. Yeah, currently we just don't. So what are the implications of having an out-of-support distribution inside of Clair — whether in the database, the data model, whatever it might be — versus, you know, just more data that it uses? Well, I guess the question is how we communicate that it's out of support. I guess it could be in a side channel, as an enricher, but the overall idea is: we kind of know, because of upstream policy, that these outstanding vulnerabilities will never get better — they'll never get fixed. Gotcha. So do we want some way to model that, to communicate that, or should we just — I mean, I wasn't thinking about enrichers when I wrote this out, but that could also be a thing that gets put in there. That's true — basically to say this distribution is end-of-life as of such-and-such. So when we parse vulnerability databases, there might be a vulnerability whose fixed-in version will never actually arrive, because the distribution is out of support. Right. Yeah, basically it'll never have a fixed-in version. Because we do represent, for each individual vulnerability, the fact that, hey, this vulnerability is never going to get fixed — we'll mark it as such. The question is: if we can prove that your, sort of, base OS is now out of support, not getting security updates at all, that might be something worth surfacing.
In the report — although, again, I think an enricher might be a good place for this. Now that you've scoped it like that, it might very well just be a new enrichment source that is the list of distributions and when they became unsupported, and if they match, then hey — the underlying OS is out of support. Yeah. Now that we're talking this through, I think that's a good way to solve this — a good way to surface this, at least. All right, cool: rate limiter. This was just waiting on timing tests. Oh yeah, I haven't run the timing tests yet, and that's on my to-do list. Yep — as soon as that's done, I can give a yes or no on it; the code looks fine. I know you worked out something with crozzy. Yeah — well, this is, like, half of it; the actual implementation is in Clair. This bit, I guess, is just plumbing things through: making sure everything actually is configurable and uses the passed-in configuration, and doesn't just pull clients out of the ether. Cool. And then there's this one that's been sitting for a while. I thought I agreed with this, and I did ping you, Hank, to see if you agreed as well. This is basically just a code-organization PR from Arun about where Configure lives — so, drivers versus updater sets. It moves the Configure method to live on the updater set in the driver package, away from the registry, so Configure becomes part of the driver package instead of the updater package. Yeah, that seems fine to me. I'll put it on my list to take a look at the stuff in there. Okay, cool — I'll just assign it to you so it shows up in your email. Okay. Yeah, great. You like that? I think you've mentioned before that assigning it to you is helpful. I don't know if it is or not. Yeah, I mean, it's more helpful than not, I guess. Okay.
So, pull requests for Clair. Notifier DB usage. I haven't been able to look at this just yet — it's in draft, though. Is there anything left for it? It's not done yet; it's some groundwork for digging into it more. It adds some profiling around where connections are, things like that. Yeah, I do remember that. Okay, well, when you're ready to take it out of draft, just ping me. This does bring up the list — I'll circle back to that rate limiter; it's again waiting on timing, so no need to go over that again. CI/CD. Jan, we requested for this to actually move to claircore — was there any movement on that? Yeah, well, I reacted with a request for you to at least have a cursory look at it, so I don't port code that you might fundamentally disagree with. Another thing is that I also don't see the secrets that are available in the repository, and I don't know if I can make use of the Quay token there. I mean, that's fine — we can add the secrets. Okay, I'll do this: I'll take a look at it. I think we did agree on your reasoning around this; we're just waiting to move it to claircore. I'll give it one more look after this meeting today and give you a definitive answer. Okay, thank you. Okay, integration tests. Yeah, this hung around for a little while, but it looks like you picked it back up. Everything I saw looked good — I thought you were doing the correct thing there. So you're just waiting for a final review, basically? Yeah, but I posted the changes, like, a day ago. I really had a hard time finding time for it in the past couple of weeks. Yeah, it's not super high priority, so it's totally cool. It's just very helpful for us, especially since you guys are the consumers of the RabbitMQ and ActiveMQ stuff, so it's helpful to get your opinion about these tests. Okay, cool, I'll take a look at that after this meeting. Thank you.
See how bad I am with tabs. Okay: accept OCI manifest for indexing. Just a back-burner thing, right? This is you, Hank. I think we lost Hank. Oh, wow — he's back; his head's gone. This is a little frightening. All right, well, I know this is just back-burner. It's probably a nice-to-have; we haven't had much request for it, but I think moving to OCI is a good idea all around — I can't see any reason against it. So: config, allow disabling the notifier. Hank, the question I have around this — it's not that I'm against it. It's less of a problem now that we killed those log lines, because that was really the annoyance. But the thing is, with disabling the notifier, we don't really allow anything else to act like that. We don't allow disabling the indexer in combo mode separately from other things. So I'm either for all of it, where you can run combo mode, and if a piece of the config is missing at a high level then that particular service doesn't run in combo mode; or I'm for none of it, where combo mode is combo mode and it all runs together. I don't want the special treatment of just the notifier — and I get it, I know why this happened: the notifier was spamming the logs across most of the services. So where do you want to go with that? But the notifier is also the only component that doesn't have to be configured for you to still have a working system. So it's special in that way. If you have the matcher unconfigured, then you can't run — you can't have a Clair without a matcher. Yeah, but you can have just an indexer — so actually it's the matcher that's the only special case. I mean, you can't have a Clair system without an indexer either. You could run just an indexer if you just wanted index reports.
Well, sure, but then you don't have the matcher part of the API. I guess my point is the notifier needs to be configured to do anything at all. You can have a complete system without a notifier; you can't have a complete system without an indexer or without a matcher. Okay, right. So if you're just running Clair — not Clair plus Quay or something else — you don't need the notifier at all. So, by default, if it's unconfigured, it should just not start; we shouldn't throw errors that bits are unconfigured. Yeah, I get what you're saying; I just don't like the idea of a very special case in our configuration for the notifier instead of it being uniform — one less thing we need to explain across the board. Because combo mode is combo mode: you're expecting everything to run. I mean, I guess. But my argument is: you don't have it configured to do anything, so why do you need it? You don't have it configured to send notifications anywhere. Yeah, okay. Is this in a reviewable state? Probably not; it's at least six months old, I think. Gotcha. Do you still want to move forward with it? I mean, I don't have strong feelings one way or the other. I get what you're saying: in combo mode, if you don't want notifications, leave the notifier section blank. I think that's fine. And I guess you're right that if you're not sending off notifications, why are you running all that code — using system resources, firing timers, utilizing more goroutines than necessary? Yeah, and for simple setups it's one less bit of the config you have to fill out. Okay. Yeah, well, if you do want to get it back into shape, then I get what you're saying, and I think it makes sense. Okay. You want to push this into draft? I don't think I can, since it's your PR — can I? I guess not. Okay, I'll toggle it back into draft.
Yeah, if I can — I've never done that. I don't know if I can; maybe open it with edit. No, that's just the title. Okay, all right, cool. Ten minutes — maybe we can hit claircore issues. Oh yeah, so this one is definitely worth looking at: apparently we're creating duplicate vulnerabilities in the database. I'm not super surprised by the existence of this bug, since we do a lot of exploding of data — you know, we see, like, an OVAL entry and then make multiple vulnerabilities from it. I think some investigation has to be done on whether this is on purpose or just a software bug. I don't find this — I think this is the binary/source thing. Okay. No, it's what the data sources are providing. So you think — well, I thought it was pointed out quite literally. Oh, you mean at the point where we're parsing the security database and we're writing two records? Yeah. Okay, that's a good place to start looking. I don't immediately see this as a critical issue, because we're not actually falsely reporting — we're just giving you things twice. It's not great, but I'm not sure it's a drop-everything situation. Yeah. Clair v4 finds incorrect package versions and vulnerabilities — again. Did you have an opinion on this one, as far as where it might be? No, I haven't really looked at it. Okay, I'll take a look at that. So: there's no way to limit the number of indexer DB connections. This might fall into your recent work, Hank — was this covered in that notifier DB work? That PR goes in and adds a config field and starts using it. Cool, so let me link those — the PR and the issue. This was actually in Clair. Yeah, I only did it for the notifier; I think the other ones we already do, but the notifier was just hard-coded. Yep. So: Clair v4 doesn't find all installed packages. Yeah — these really just need investigation.
So I'll try to knock those out over the next couple of weeks. There we go. "Doesn't find installed packages" — that's really just, you know, finding time to sit down. Yeah, and dig through some layers. Exactly. Yeah, that's cool. Issues, okay. So this one will be handled with that linked PR: incorrect version parsing for dpkg archives. I haven't actually looked at it before answering. Okay. So, just another one — let's close this out. There's also a JIRA; I don't know if you saw, there was a message to the mailing list linked to the JIRA issue about this. So if we're going to close this, we should also close that. Okay, I'll add that information and close it after the meeting. Okay. Unsupported scan results: I'd like to do a POC of a distroless image, as everything is today, with a caveat. Yeah, I talked to someone about this, and they said it could just work. Logically there's the hole where it could uninstall things, but the Bazel system that spits these out doesn't work that way. Awesome. So as long as there's a documented caveat, we should be able to move forward with it. So let's plan this POC — we have 4.1 coming out June 17; let's give ourselves some time and plan it for v4.2, tentatively. Sure. And then see if we can just have the POC. I think it'd be a nice win, and it's not a big ask. Yeah. You want to make a milestone? Should we start using milestones? Sure — milestones, just click here; open, closed, nothing. Okay, I'll do that after the meeting, because we're a little tight on time, but I will make a milestone. I think with the release cadence now, it's going to be a lot easier to say, here are the things we're trying to do in this release. Having our own release is going to make it much easier. Yeah, exactly. All right, well, this was pretty good.
You know, I'll cut it off — I'm not going to hold everyone. We didn't churn through issues too hard, so we'll just grab these next week, and then we'll go over Clair too. Cool. Well, thanks everyone for sticking around. The video will be uploaded to YouTube shortly after this, on the OpenShift channel, and I'll get all the announcements out. Take it easy.