Recording has started. Everyone, this is the Clair development community meeting for April 6th. I have a couple of things on the agenda and a few presenters. I'm going to open this up by going over some of the tickets that were brought up in our last community meeting. We have this filter now — it's actually public. If you go to our community meeting agenda and scroll up a little bit: we're going to track action items in our upstream issues at issues.redhat.com. Things come up during the community meeting and we want to track them. These are also really great tickets for anyone who is watching and wants to begin contributing to Clair. If you don't really know where to start, come to this link and take a look at these issues; they're great places to jump in.

So we had a couple of things come up last meeting. We wanted to look into CentOS support, and there hasn't been much movement on that. Our team at Red Hat is currently a little busy, since we're packaging up the Quay 3.5 release, so a lot of these things we haven't touched yet, but I'm going to review them just in case it piques anyone's interest. CentOS support is still on the table. We're still trying to understand if there is an upstream security database, and whether we can actually link vulnerability data with confidence between the RHEL databases and the package databases in a CentOS image. I think we need to either do a little research on that or just reach out to the CentOS teams for more information.

Support for distroless containers: Hank and I threw this around in the Clair collaboration chat, with a couple of people from StackRox as well, and it seems feasible. I made a comment in this ticket that if distroless containers are actually append-only and stay append-only, we could possibly support it today — with the caveat that if they do start mutating any of the per-package databases, we'll probably miss some things without a rearchitecture, without a look at how to support that better.

This is an interesting ticket that Ivan brought up from customer engineering, and this will be prioritized: basically having the Quay UI inform the client when a container being scanned isn't supported by Clair. It's just a usability thing, but I think it's important. After we finish up the Quay 3.5 release, this is going to become prioritized, so clients will be able to understand whether Clair is saying "hey, I don't know what this image is" or whether it's saying "your image is fine — it was scanned and nothing was present in it."

Integration testing: I came up against a wall with this, because the way we're doing Clair initialization — we moved it to non-blocking, so Clair doesn't wait for all the data to be there — actually makes this testing a little harder. So I'm going to punt on this for a bit, until Hank and I collaborate on yet another brainstorming session on how initialization should work, and then implement it. The granular health check is still on the back burner. It's also a really good community ticket, because it's not extremely difficult to dive into and can be sectioned off from the rest of the code pretty easily.

So those are the tickets in play as far as action items from the community meetings. Let's start looking at agenda items. Hank, it looks like you have a couple of items here to kick off with, so go ahead and take over.
Yeah, okay. So the first thing is rate limiting. I don't know if anyone's actually tried to run Clair, but you'll hit some rate limits when trying to fetch the Red Hat databases. We've talked to the IT department internally, and they're unwilling or unable to change the way that works, so we're just going to implement a rate limiter globally. The plan is to have Clair do a rate-limited 10 requests per second to a given host; that's just what it's going to be. It'll slow the Red Hat requests down, but just about everything else should be unaffected, because we're making a smaller, bounded set of requests. That's basically the only change — the per-host limit. Everything will still run in parallel, and when the Red Hat updaters start running, they'll just sit in the pipeline until they get their token to go do the request.

Quick question: with that rate limiting, where do we expect to actually block? Where are we sitting while we're being rate limited — right on the client.Do call?

Yeah, in the fetching, in the updater.

Okay, cool. So that would just hang out until — is it still context-driven?

Yeah, all the cancellation, everything works exactly the same. There's no magic. The rate limiter is in Clair; it's not wired up to anything yet — I'll get to that in my next item, because I need to go do some wiring in ClairCore — but the rate limiter is there. It's very simple: it just uses a token bucket and waits to be able to pull a token, so the context cancellation plumbs through everything. It should be pretty simple.
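For anyone following along, here's a minimal sketch of the kind of per-host limiting described above — not Clair's actual implementation, just an illustration using golang.org/x/time/rate, where each host gets its own token bucket and Wait blocks (honoring context cancellation) until a token is available:

```go
package main

import (
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

// hostLimiter wraps an http.RoundTripper and applies a separate token
// bucket to each host, so one chatty updater can't starve the others.
type hostLimiter struct {
	next     http.RoundTripper
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
}

func (l *hostLimiter) limiter(host string) *rate.Limiter {
	l.mu.Lock()
	defer l.mu.Unlock()
	lim, ok := l.limiters[host]
	if !ok {
		// 10 requests per second per host, per the plan described above.
		lim = rate.NewLimiter(rate.Limit(10), 1)
		l.limiters[host] = lim
	}
	return lim
}

func (l *hostLimiter) RoundTrip(r *http.Request) (*http.Response, error) {
	// Wait blocks until a token is available or the request's context
	// is canceled, so cancellation behaves exactly as before.
	if err := l.limiter(r.URL.Host).Wait(r.Context()); err != nil {
		return nil, err
	}
	return l.next.RoundTrip(r)
}

func main() {
	client := &http.Client{Transport: &hostLimiter{
		next:     http.DefaultTransport,
		limiters: make(map[string]*rate.Limiter),
	}}
	_ = client // hand this client to updaters' fetch calls
}
```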
Okay, the configuration rework. One component of this is that if we're going to use a rate limiter, obviously we need to make sure everything is actually using it — as in, using the configured client that's going to honor it. So part of the ClairCore changes is that I went back through, and previously we were only calling configuration methods if the user had passed some sort of configuration object in; now they get called unconditionally if they exist, and the configuration mechanism just gets a no-op bit for that. There's a PR against ClairCore that does all this — all of the in-tree updaters work like this now, and support being called like that — so that work is done. I ran it for a couple of hours yesterday, just sitting there. I modified the testing binary so the default transport just returns errors: I configured an HTTP client, then went and set the defaults on the net/http package to return errors any time they got called, and just let it sit. It ran fine; everything was actually using the configured client to make its network requests. So that's promising.

Got you. One thing I have a question about: do all updaters now need a configure method on them?

They don't require it currently, but maybe they should.

Okay. We could follow that same embed-a-no-op pattern I was using in the metrics — or the enrichment — specification. It might be nice, because then it just looks like: if you need to do something under the covers that you don't care about, just embed this thing. I stole that pattern from gRPC: if you are implementing clients or servers, they have this little embed thing that makes sure the type fulfills the interface. I thought that was kind of neat, because it's very out of the way.

Yeah. The useful bit of the Configure interface, the way we've got it set up, is that it provides the HTTP client, so you sort of have to implement it yourself no matter what. But I guess we could make an easy-to-use one that just captures that.

Yeah, exactly — we could think about that. I'm pretty sure I have my head wrapped around what you're doing. Sorry — I think we might want to migrate that interface to make it not optional. The Configure interface, right? On the updater interface — mandate that Configure method?

Yes, the Configure method on the updater should become required, basically. Right now you can just opt in to implementing this extra interface, and then it gets called. Everything in tree does that, and we don't have any out-of-tree ones, so we can make that change pretty easily.

Yeah, I'm in agreement with that; let me make a note of it.

As part of this work, I made the same change for the scanners and the updater factories, so if they have configuration methods, those get called no matter what.

So would you say: make the Configure method mandatory for all of those — updaters, scanners, and factories?

Yeah, probably. It's pretty trivial to stub it out if you don't actually use it, and if you do use it, it's sort of a key bit of everything.

So if they don't implement it right now, will we panic, because the default HTTP client is being used?

No. Messing with the package-level defaults — I don't know if we want to do that. I just did it for testing, because it's a very easy way to figure out if something is going to use them.

Gotcha — so that's how you did it for testing: you basically poisoned the defaults, and anything that used them failed. Gotcha. That's why, when we were talking in Slack, I asked if you were going to upstream that. But I guess you wouldn't, right, because you're poisoning the defaults?

Probably not. I'd have to put a flag into the test binaries so you can re-enable them.

Yeah, that totally makes sense. We wouldn't do that in prod; that's why I had a question mark around it. All right, cool. That all sounds good.
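To make the embed-a-no-op pattern concrete, here's a hedged sketch. The Configurable interface shape below is illustrative — modeled on the "Configure receives the HTTP client" behavior described above, not copied from ClairCore — and NoopConfig and ClientCapture are hypothetical helpers an implementation could embed:

```go
package driver

import (
	"context"
	"net/http"
)

// ConfigUnmarshaler deserializes an updater's configuration stanza.
// Illustrative stand-in for whatever the real mechanism is.
type ConfigUnmarshaler func(interface{}) error

// Configurable is an illustrative version of the interface the meeting
// discusses making mandatory for updaters, scanners, and factories.
type Configurable interface {
	Configure(context.Context, ConfigUnmarshaler, *http.Client) error
}

// NoopConfig can be embedded by implementations with nothing to
// configure, mirroring gRPC's "embed this to satisfy the interface" trick.
type NoopConfig struct{}

// Configure implements Configurable by doing nothing.
func (NoopConfig) Configure(context.Context, ConfigUnmarshaler, *http.Client) error {
	return nil
}

// ClientCapture is the "easy-to-use one that just captures that" idea:
// embed it to record the configured client for later requests.
type ClientCapture struct{ Client *http.Client }

// Configure implements Configurable by saving the provided client.
func (c *ClientCapture) Configure(_ context.Context, _ ConfigUnmarshaler, hc *http.Client) error {
	c.Client = hc
	return nil
}

// Example: an updater with no special needs just embeds one of these.
type exampleUpdater struct {
	ClientCapture
	// ... updater fields ...
}
```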
And then, as part of this work, digging into this sort of bled into some work with the air-gap stuff. I ended up doing some reworking of the RHEL updater and scanner, so that might need a little more scrutiny from people who care about it — which I guess is all of us. The idea is to cut down on the number of side channels the RHEL updater is using; it was spawning things, doing a little bit of code-smell stuff, so that's more in line now. And then I had an idea about how we could maybe do better import and export for the air-gap support with SQLite, but that would involve pulling in SQLite as a dependency, so I don't know if we want to do that — and then have to explain that you really shouldn't use it for anything except this one specific thing.

Yeah, I align with the interest in SQLite, mostly for an offline indexer — you know, no database required — so I'm not opposed to at least exploring opportunities to bring SQLite in. We'll just have to weigh the options, because right now even our scratch binaries build with cgo disabled; that goes away and we can't do that. I guess we could try the pure-Go implementation of SQLite and see if it fits our use cases.

Yeah, that might be where we want to start. I looked at that; I didn't really see how to initialize a database with it, so I think you'd still need the SQLite tooling on the node — inside the container — because I don't think it will create the file for you. I haven't looked that hard, though; I'm not sure. My concern is mostly this: the way the offline export works right now, it just dumps everything it finds into a file every time. If we had SQLite, you could tell it to look at an old version, and then it could use the fingerprinting mechanism instead of starting from a blank slate every single time — which might be of interest.

Yeah, that could be pretty cool. That's all I've got. Cool.
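As a reference point for the cgo concern: there is a pure-Go SQLite driver, modernc.org/sqlite, that registers with database/sql and needs no cgo. A minimal sketch — assuming that driver fits the use case; whether it covers everything the SQLite tooling would be needed for, like the initialization question raised above, would need verifying:

```go
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // pure-Go driver, no cgo required
)

func main() {
	// Opens (and, on first use, creates) the database file.
	db, err := sql.Open("sqlite", "updates.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A toy table standing in for an exported update operation.
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS export (
		updater     TEXT NOT NULL,
		fingerprint TEXT NOT NULL
	)`)
	if err != nil {
		log.Fatal(err)
	}
}
```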
All right, so I have a couple more things, mostly informational. Database metrics are now in — actually, in this 4.1 alpha build, which I'll touch on a little more in just a second. We are now exporting all database query durations and counts, so you can see their rates. If you're interested in finding out more about how we expose those: again, we keep most of our open designs in GitHub discussions, in the design tab. Here's our Clair and ClairCore metrics pass one, and if you come down to the general database section, that part of the specification has been implemented. At this time you can go to Prometheus and explore metrics — anything starting with clair_core. I do have to make a small amendment: this only covers database metrics in ClairCore. There are database metrics in Clair too, because we develop the notifier service inside the Clair repository; I just need to amend the document a bit and add the fact that there are database metrics which start with just clair. They follow the same naming convention, though: clair_core, then the store name — which happens to be indexer, notifier, or vulnstore, where we keep the vulnerabilities — then the database function, and then a moniker: "total", which is the Prometheus idiom for counters, or "duration" for histograms, with the unit being measured.

So if you're running Clair and you've been wondering how we're doing as far as query optimization, you can now actually pick those details out. If you're watching this, or currently at the meeting, and want to start looking at this stuff, you can definitely start submitting tickets about query durations; we'll take a look at them and make sure we're doing a good job optimizing the SQL on our end. I'll be doing that work myself too, obviously — there are just a couple of things going on at the same time, so I haven't been able to sit with it yet. But it's a nice community ask, because performance affects everyone involved with the application.
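As a hedged illustration of that naming convention — the specific metric and function names here are made up for the example; check the design discussion for the real ones — here's how such a counter/histogram pair might be registered with the Prometheus Go client:

```go
package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// Counter: <namespace>_<store>_<function>_total
	queryTotal = promauto.NewCounter(prometheus.CounterOpts{
		Namespace: "clair_core",
		Subsystem: "indexer",            // store name: indexer, notifier, vulnstore, ...
		Name:      "querylayer_total",   // hypothetical database function name
		Help:      "Total calls to the hypothetical querylayer database function.",
	})
	// Histogram: <namespace>_<store>_<function>_duration_seconds
	queryDuration = promauto.NewHistogram(prometheus.HistogramOpts{
		Namespace: "clair_core",
		Subsystem: "indexer",
		Name:      "querylayer_duration_seconds",
		Help:      "Duration of the hypothetical querylayer database function.",
	})
)

// timeQuery wraps a database call with both metrics.
func timeQuery(do func() error) error {
	timer := prometheus.NewTimer(queryDuration)
	defer timer.ObserveDuration()
	queryTotal.Inc()
	return do()
}
```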
So those are in. The enrichment work is about to kick off, so let me show another link: the enrichment spec under github.com/quay. This is our specification for enrichment. If you've watched the previous meetings, you know enrichment is our way of bringing back NVD metadata and other types of auxiliary information to the vulnerability report. For instance, if we wanted to bring in Red Hat grading scores, we can now do that, place it into the vulnerability report, and then clients can use that extra metadata to supplement the data that's there. If you look at this link, with the markdown files, there are two of interest: there's the specification, which goes over how we're actually going to implement this new metadata — we're calling it the enrichment specification — and then, more recently, there are the implementation details, with the nitty-gritty of how this is going to happen.

This week I'm going to start on this work. You can track it at issues.redhat.com — it will actually be on our public board, and I'll send a link to that in a bit; I also think it's on the agenda. We'll be tracking the work for Clair enrichments, and that will be kicking off this week. So if you're interested in that — if you're missing severity details and it's really affecting your use cases with Clair — just watch the work happening at issues.redhat.com. I'll put a link there at the end of this meeting; if you're interested, you can track it, and then you'll know as soon as we get auxiliary data back into the vulnerability report and you're able to use it once again.
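As a purely illustrative sketch of the idea — the field and enricher names here are hypothetical, not the spec's; see the enrichment spec documents linked above for the real shapes — an enrichment attaches named blobs of auxiliary data alongside the core vulnerability results:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Report is a hypothetical, pared-down vulnerability report showing
// where enrichment data could ride along with the core results.
type Report struct {
	Vulnerabilities map[string]string          `json:"vulnerabilities"` // id -> summary (toy)
	Enrichments     map[string]json.RawMessage `json:"enrichments"`     // enricher name -> payload
}

func main() {
	// For example, CVSS metadata from NVD, keyed by a hypothetical enricher name.
	cvss := json.RawMessage(`{"CVE-2021-0001":{"baseScore":9.8}}`)
	r := Report{
		Vulnerabilities: map[string]string{"CVE-2021-0001": "example vulnerability"},
		Enrichments:     map[string]json.RawMessage{"example.cvss": cvss},
	}
	out, _ := json.MarshalIndent(r, "", "  ")
	fmt.Println(string(out))
}
```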
And then, completed as of yesterday: there's a 4.1 alpha build. 4.1 is going to be a pretty big release — we're going to have a ton of reliability fixes and this enrichment metadata coming out — so we split the release a little. 4.1 alpha 1 is released upstream; you can go grab it from the Docker repositories, and all that information is at the Clair repository. It has some particular changes: we no longer block Clair from coming up and running — we don't wait for initialized data. The notifier is more efficient now — this actually touches on some of the things you brought up, Jan, so we've got all those changes in there, and there's also what Jan is going to cover next, so I'll just defer to him to say it's now in the 4.1 alpha build — plus a couple of other reliability bug fixes and doc changes. If you're interested, you can just look at the changelogs on the Clair and ClairCore repositories. So yeah, that's all I've got. Jan, go ahead and take over with your item.

Okay. Hello, everybody. My name is Jan, and I take care of the Red Hat deployment of Clair. I'll try to share my screen, if that's okay with you, Luis.

Yeah, definitely — let me stop sharing.

Okay, do you see my browser?

Yep.

Okay, cool. So, as part of the release Luis was talking about a little while ago, I also implemented a change related to the OVAL data published by Red Hat. If you don't know what OVAL is, you can check out their site; I won't go into depth. I'll just say it's a kind of open standard describing how to track vulnerabilities, and Red Hat produces a stream of data that conforms to this standard. By the way, all of the web pages I'm visiting are also in the agenda document, so you can go over them if you're interested.

Now, specifically to Red Hat OVAL: this is a simple directory which looks like this. You have information for RHEL 5 through 8, and if you go deeper into the hierarchy, you can see there are streams for specific products — for example, Ansible 2.9. I won't open those, because they're archives you need to download and extract, so we won't lose time with that. I'll just show you how a typical vulnerability looks in the OVAL data stream. This is basically the definition of one vulnerability in one stream — there are a lot of them, and they look basically like this one. What's important here is the class: you can see the class is "patch", which means a fix has already been released. It's also visible in the ID that there's a Red Hat security advisory tied to this vulnerability — specifically this advisory, at this link, where you can see more details. Here we have a human-readable description and some other data; I won't go over all of it. What is important, though, is this affected CPE list. Each item in it basically identifies a repository — you can see, for example, AppStream here. If a record in OVAL doesn't have an affected CPE list, then Clair doesn't care about it; that's important for what I'm about to say. But this vulnerability has an affected CPE list, so we process it.

Okay, so now about my change. Apart from vulnerabilities of class "patch", we also have vulnerabilities with class "vulnerability", and there are two types of them. The first is an unfixed vulnerability, which basically means a security problem that has been identified but not fixed yet. As you can see, there is no security advisory related to it, and in the ID we don't see an RHSA — just a CVE. The other type is an "unaffected" vulnerability. This is actually a very strange item: it's basically just a confirmation that a given vulnerability doesn't affect a given package. You can see there are criteria for the vulnerability to match, and an unaffected vulnerability has its criteria structured in such a way that they will never evaluate to true — a package must be installed and not installed at the same time, and as you can surely understand, that will never happen.

These two kinds of vulnerabilities have been around for a while, but in the near future the Red Hat platform security team is going to release OVAL files that associate them with an affected CPE list as well, and from that point Clair will start to process them. For us to be prepared for this change, we need to do two things. Basically, the intent of the changes is to take those unaffected vulnerabilities and discard them as soon as we encounter them, because we really have no way of using them — they bring us no value from the point of view of security vulnerability scanning. As soon as we encounter one, we just discard it and move on; we don't even create an entry in the vulnerability store. On the other hand, for an unfixed vulnerability we do want to create such an entry, and we want the vulnerability to be seen in the vulnerability report. That's the point of my two pull requests; you can go over them — they're linked in the agenda. That was just a high-level overview.

I think I understand what you did and why. Basically, the straightforward answer is that those unaffected vulnerabilities are a pointer to something on the Red Hat side, but they don't really mean anything to Clair — we would just accidentally match them, right?

Yes, that's right. We would process them, and in the end, when the criteria are evaluated, we would find out that they don't evaluate to true — but that would be a lot of time spent on nothing, basically.

Yeah, totally makes sense.
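A hedged sketch of the kind of filtering described here — the type names are hypothetical stand-ins, not the actual PR code. During parsing, definitions whose class and criteria mark them as "unaffected" are dropped immediately, while "patch" and unfixed "vulnerability" definitions proceed to the vulnerability store:

```go
package main

import "fmt"

// Definition is a hypothetical, pared-down view of an OVAL definition.
type Definition struct {
	ID    string
	Class string // "patch", "vulnerability", ...
	// An unaffected definition is class "vulnerability" with criteria
	// that can never evaluate to true; assume the parser flags it.
	Unaffected bool
}

// keep reports whether a definition should become a vulnerability record.
func keep(d Definition) bool {
	if d.Class == "vulnerability" && d.Unaffected {
		// Discard immediately: evaluating impossible criteria at match
		// time would be, as Jan put it, time spent on nothing.
		return false
	}
	return true
}

func main() {
	defs := []Definition{
		{ID: "RHSA-2021:0001", Class: "patch"},                          // fixed: keep
		{ID: "CVE-2021-0002", Class: "vulnerability"},                   // unfixed: keep
		{ID: "CVE-2021-0003", Class: "vulnerability", Unaffected: true}, // discard
	}
	for _, d := range defs {
		fmt.Println(d.ID, "keep:", keep(d))
	}
}
```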
All right, cool. The change makes sense and I'm for it. It does beg the question of the usability of those things in the vulnerability database, but they just might not be usable for us while being usable for Red Hat somewhere.

I guess so. I suppose there's some historical reason for it, and somebody parses it in their own way; I just don't see a way for us to use it.

Yeah. It's annoying that they're making "affected" effectively mean, like, "inflammable" — it means both that it affects it and that it does not affect it.

Yeah, it's quite the OVAL hack: just make an impossible condition. It touches on the fact that at some point — in Clair v5 or something — we should start considering those OVAL conditional trees. It's a different architecture, though, because to do that we literally need to understand the entire contents of the container, build the trees, and then have the full contents when we go to match.

I mean, the original plan was to consider those trees, because you can run logical transformations to turn them into just satisfiable conditions.

Yeah, it's just a bit of a different architecture, because right now we decompose the report into streams, so we wouldn't be able to do that anymore. To consider those trees, you need the full report: you need to know what the distribution is and what's actually installed. You need more than just "here's a package record — does it match this OVAL rule?"

Right, but only for some types of those trees, which aren't common. There are OVAL criteria like "file exists" or "file contains", but those don't seem to get used very much by distribution vendors. I don't think Red Hat uses them at all; all of their rules are written in terms of "this package is installed and signed by this key" sort of thing.

Yeah. If I'm not mistaken, the Ubuntu databases might actually be like "it's running this distribution, and this package is installed, and it has version X" or something like that, and you'd have to compile all of that together. If we support it, we'd basically have to grok through the databases we know, at least, to get an idea of what conditions are in those trees. Then there'd just be an evaluation period when someone wants to onboard a new distribution or security database: we ask them, hey, what are your possible conditions, and do we support them?

Yeah, I think the weird inspect-the-state-of-the-OS conditions are used more by the feeds published by ISVs, which we don't care about as much.

Yeah, okay. It's a little ways off, but it's nice to keep it in focus. It's also interesting because Clair has to play nicely in a world where it's not just OVAL, either. I think that's a reason we didn't immediately take on that effort: we support OVAL, but we want to support other things too, so sinking a whole bunch of time into OVAL didn't make sense right now. Eventually it will; eventually we'll tool it out that far.

Yeah. Cool — sorry, just rambling. I also eventually want to do actual CPE matching, for real.

Yeah, and another thing: we should have a sync-up with StackRox, because they currently do that, so it might be that we can take how they're doing it and massage it into Clair.

I mean, when it comes up, everyone in the room cringes, so I don't know how well it works.

Yeah, but at least they have something working. There are CPEs involved with the RHEL stuff, but I don't think it's actually CPE matching; it's just identifying tokens that match.

Don't get me started. I don't know how Product Security is using it. Very dumb. Anyway.

All right, cool. So Diane Mueller added an item here for a deep dive on indexing. This is a talk I was planning to do at some point — it looks like it'll be either April 19th or 26th, probably at an OpenShift Commons — that does a deep dive on the indexer: how it's implemented as a finite state machine, how content addressability works, and basically its functionality. We'll try to go as low-level as we can, try to explain the data model, everything like that, because it can help you learn the application a little more. So that's all for the agenda items.

Let's go back to those issues for a bit, because there are a couple of things I want to ask about. Oh, by the way, Luis, you're not sharing the screen.

Oh, cool, thanks — I forgot. All right, I think that was screen two. Yeah, cool. Okay, so if we go back to these issues: I'm kind of optimistic about at least taking a crack at this one. Correct me if I'm wrong, Hank, but I see it as: if these containers are append-only, we can support this today — and we can support it better in the future, when we model out deletions on the file system — but I think we can support it today, and if we can, I think we can do it pretty easily, and it just checks a box. I don't know if you have opinions there.
I guess the caveat is that there will be false positives. But can we pin that problem down by saying: okay, you have a false positive because you're not following the distroless specification?

No, I don't think so, because we already know people do dumb things like install packages and then uninstall them in the same container ancestry — in different layers of the same container — and with distroless we will never see the removals, because of the way it works, which is basically that every package gets its own file, its own Debian-database-formatted file. We'll never see the deletions.

But per the spec, you're not supposed to delete between layers, right?

No, there's nothing that I saw that says you can't do that. For container best practices you shouldn't, but we know people do it anyway.

Gotcha, okay. I was under the impression distroless containers mandated that between layers you would not remove those packages.

I think most of the build tools that do this just copy everything in and create exactly one layer, but I'm pretty sure people then build on top of that and do weird things.

Gotcha. I mean, we could support it. It would be nice to do this in a way that flags it as beta, or not ready for primetime, but we don't have any sort of graduation mechanism.

Yeah. We could almost throw it in if we're willing to field some support tickets. Okay, I think we spin on it a little longer. It just feels like a real minimal effort to be able to say — like you're saying — at least we have beta support for this; don't depend on it. Theoretically, in our vulnerability report we show the package databases where the packages were found, so we could document it in some way: hey, if we found Debian or RPM packages outside of — or even in — the directory where distroless package databases are found, consider those results beta. So, all right, let's spin on it; we'll talk about it in chat a little more. I think we're kind of busy, but yeah.

I mean, for a proof of concept of this, it would be pretty easy. Doing it correctly would take more engineering work, and if we just do a proof of concept, we need the documentation in line, good to go, so we're not dealing with that.

Yeah, exactly. When support tickets come up, we can say: okay, look, check your results — if they're distroless, then... And we can talk that through with the Quay team proper and ask: do we want a UI element that says this, or anything like that?
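For context on why the append-only property makes this tractable: distroless images record each package as its own Debian-control-formatted file, conventionally under /var/lib/dpkg/status.d in Google's distroless images — treat that path as an assumption here. A hedged sketch of enumeration under the append-only assumption:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// statusDir is where distroless images keep per-package control files;
// this path is an assumption based on Google's distroless layout.
const statusDir = "var/lib/dpkg/status.d"

// listPackages walks an unpacked layer and collects the per-package
// files. Under the append-only assumption, the union across layers is
// the package set; if layers ever mutated or deleted these files, this
// simple union would produce false positives.
func listPackages(layerRoot string) ([]string, error) {
	var pkgs []string
	dir := filepath.Join(layerRoot, statusDir)
	err := filepath.WalkDir(dir, func(p string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() {
			pkgs = append(pkgs, d.Name()) // one file per package
		}
		return nil
	})
	if os.IsNotExist(err) {
		return nil, nil // not a distroless layer
	}
	return pkgs, err
}

func main() {
	pkgs, err := listPackages("/tmp/layer") // hypothetical unpacked layer
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(pkgs)
}
```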
But that's cool. This one I kind of want to take on, because sometimes I like digging into Quay code — it's a nice little break from what we do every day. I'm just waiting for it to get prioritized; I don't think anyone wants to do a merge into Quay until 3.5 is done. But it seems pretty easy.

Integration testing I'm just going to pause for now. I have to figure out what we do, because I hate just putting in a time.Sleep with some arbitrary number — the "let's sleep to make sure all the data is there" approach — especially for tests, because it's just going to flake.

Yeah. I think one way to do this might be to use the air-gap support, because then you'd be running it as an external process that you can wait on, or kill if it takes too long.

That's a good point.

Okay, so — I never properly dug into those things, so doing this might be a good impetus to sneak in making that mechanism more friendly, with the SQLite sort of stuff, because then we could basically keep the database in the GitHub artifacts, keep updating it as time rolls along, and cheat.

Yeah. Maybe we should — you know, I think there's a lot of usability in what we call air gap. We've pigeonholed it into one particular feature called air gap, but it seems like it could be applicable to quite a few things.

Yeah, that's fair.

So we should consider it — it would be worth a fresh look at that. But yeah, I'd say at least start POCing some of the SQLite stuff you have in mind if you get the free time, maybe hack-and-hustle, because I have a lot of optimistic ideas around SQLite and Clair becoming a bit more embeddable and locally run.

Cool. So this one — I think we just need a day or two of free time to plan it out and get it rolling; it's one of those things that's on the back burner.

Yeah, the health check mechanism that's in there right now is super minimal. We should probably implement a Kubernetes-style one that has a little programmability in it.

It would be very nice to do this in steps, right? What would be the very first thing we'd want to health check? I would assume the database, because that's core — you can't do anything without it.

I think the very first thing we'd want is the ability to namespace the health checks — node health checks versus shared ones. There's a class of health checks where you'll get the same result no matter which node you talk to, and then there's a class that depends on which node you talk to.

Right, I get what you're saying: separate shared information from local information in the health checks. We can make that a first-class citizen in the JSON blob it returns — some high-level key, like "local" or "shared" or something like that; we'll figure out the terminology. Cool. And then we should probably populate the node health checks first, because those are very simple.

Yeah, that'd be things like the database connection, or running out of temp space — so database, local storage, stuff like that.

We don't really monitor disk space at all today.

No, but I think we could do that entirely in the health check if we implement it.

Yeah, that could be a fun one. So we might want a health check that takes things modularly and spits out some kind of string that we display in a JSON message.

Yeah — I want to model it after the Kubernetes API health checks. The way theirs works is: if you hit the root of the endpoint, it returns a bunch of lines, and a 200 if they were all okay, or something else if any one of them wasn't; and then you can use a query parameter to query just one of those health checks.
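For illustration, here's a minimal sketch of a Kubernetes-style healthz endpoint like the one described — named checks, an overall status code, and a query parameter to run a single check. All names here are hypothetical, not Clair's actual API:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

// checks maps a health check name to a function returning an error on failure.
var checks = map[string]func() error{
	"database":      func() error { return nil }, // e.g. ping the DB pool
	"local-storage": func() error { return nil }, // e.g. check temp space
}

func healthz(w http.ResponseWriter, r *http.Request) {
	want := r.URL.Query().Get("check") // ?check=database runs a single check
	var buf bytes.Buffer
	ok := true
	for name, fn := range checks {
		if want != "" && want != name {
			continue
		}
		status := "ok"
		if err := fn(); err != nil {
			status = err.Error()
			ok = false
		}
		fmt.Fprintf(&buf, "[+] %s: %s\n", name, status)
	}
	// Buffer the body so the status code can still be set on failure.
	if !ok {
		w.WriteHeader(http.StatusInternalServerError)
	}
	buf.WriteTo(w)
}

func main() {
	http.HandleFunc("/healthz", healthz)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```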
I'll put it on the ticket but uh yeah sure I'm down to model it after that um I'll have to k8th api health checks okay database local storage we don't care about like RAM and system metrics right I just come up with them yeah uh I'll put that for now just as a starting point I dropped the link in the chat oh great yeah when you're presenting it hides the app I got it back oh great yeah cool so you're spending you know the majority of your time working on that the rate limiting stuff right rate limiting the configuration yeah that's winding down so I can take on the next thing whatever that is probably working on the notifier okay those are the tickets let me see I think the centos one I don't know man there's I don't I don't think there's much to be done especially considering stream is even less rel like mm-hmm yeah I mean I'd like to talk to someone that works around those teams to see even if we just have like a brainstorming session and then come out of it saying hey no it's not gonna work you know like I'd be happier with that but I just need I need to find the contacts I haven't really yeah I guess I guess maybe we should try to follow up with the centos security group exactly if we can get one of one of them in a meeting in our next community development meeting to talk about this with us that would be at least a definitive answer for us yep all right so yeah I'll add that let's see I'll leave that as an update for myself I'll start poking around it can't be hard for us to get those contacts I mean basically under our umbrella so cool is there anything else you're thinking about the notifier work you're talking about making it full dissist is that what we're kind of going with I don't know I don't have a specific plan but it seems like that's what we're gonna have to do yeah I mean do you think that the efficiency work did enough I mean I don't know Jan did the changes make enough of a difference or things still body well there is still this problem that if you exceed the maximum number of database connections then the notifications just get lost yeah okay that's probably a better way to start is making sure that that that configurations are honored everywhere okay and then see if the combination of that and not eating all the RAM and the universe but make it work yeah okay that's where I'll start on that then and then this lock needs to go in just because it's going to get rid of all the locking connections right or connection using locks so yeah I should I actually spent a little bit of time just like confirming like I spent just hours just testing it basically because I got paranoid because it's a big change and it's our locks so but it seems pretty iron proof so I wrote tests that basically just spawn a bunch of random go routines with a random number count and just ran that thing for like hours I didn't get any data races didn't get anything unable to lock no deadlocks so I think it's cool I looked over the code a couple times but yeah probably now I feel a little bit more comfortable with it and caught up with it I'll probably start trying to implement that stuff that should help with database connections at least yeah let's merge that into tree and put everything over to it yeah yeah um so what's I gonna say yeah if we do change the notifier to do more of like a this stuff I think we need a design doc just because that's can get a little hairy and things are getting complex enough and the need to correctly identify how things integrate is becoming like paramount at this point 
So — what was I going to say. Yeah: if we do change the notifier to do more of this distributed stuff, I think we need a design doc, just because that can get a little hairy. Things are getting complex enough that the need to correctly identify how everything integrates is becoming paramount at this point. We could definitely collab on that — or you can take charge of it, if we do get to that point. I think we're both in agreement that, technically, that service should work such that if you spawn more services or nodes, they can each parallelize the work of one notification — of building one update operation's diff into a notification.

Yeah. But I think the first step is making it so you can accurately capacity-plan and not have it blow past your number of connections.

Yeah, I totally agree. Cool. All right, well, we're at the hour, so unless you all have anything else, we can wrap this one up, and I'll do a bit of ticketing and update this doc with what we've covered. Cool. Jan, thanks for presenting — that was great; it's good to get that information out.

Thank you for organizing these meetings; it's really valuable.

No problem. I'll see you all on the next one. Yeah, bye. See you.