Hello. Good morning. Good afternoon. Hey Alex, are you there? I can't hear you. I'm here. Can you hear me now? Yep. Good morning. Good morning. Afternoon, I guess. Let's wait a couple more minutes and then we can start. I was hoping Sugu would be joining soon. Okay, I think we can start. We have two documents to review this time round. The first is the database paper that Sugu has been working on, and the second is the project review process that Theron has been working on. Sugu, do you want to give an update on where we are with the database doc? I know Quinton has put some feedback into the doc, and I've written up some notes that I'd like to go through as well. Sure. So initially there was some confusion about what the definition of a database is, and I think many of the comments came from that. When I first wrote the document, I had categorized pure relational databases as well as the NewSQL ones under "relational", and there was no clear differentiation between the two. Some paragraphs were talking about the pure relational ones and some about the NewSQL ones, and that clarity was missing. So what I have now done is clearly separate the two: pure relational as one category and NewSQL as the other. NewSQL is a slightly broad category and those systems vary a lot in what they support, but I've tried to address them in a generic fashion. I believe that's the main change I made in the document. Understood. Okay. And there was also, you probably weren't there in our last discussion. No, I wasn't, I'm sorry. Yeah, there was a discussion about what a database is in general. Cassandra, for example, was considered borderline, and we decided to include it. But we already have a key-value store section, so I'm thinking maybe we should talk about the fact that this line is blurring and some systems get categorized as both. Yeah, I think that's a fair point. I wrote some notes at the end of the document because I couldn't find the right place to add the comments inline. One of the things we discussed in the past was that the rest of the white paper had defined attributes like performance, consistency, and durability, and I think it's probably worth adding a paragraph or two around consistency and eventual consistency and so on, plus any other attributes that we consider important for databases. For example, one of the other things I raised was whether we need to discuss the topologies of databases, specifically in terms of trade-offs like consistency versus availability versus partition tolerance. The debate I'm having about it is that those considerations are not necessarily a cloud native concern, but I don't have an objection to adding it; it's an important thing that people should think about when deploying databases. Yeah. I think we actually have a section in the white paper around the CAP theorem in broad terms. But what I was thinking is that we should discuss these attributes in terms of the different topologies, or the different database architectures.
So for example, if you're operating in a master-slave type environment, you have particular consistency and availability attributes, whereas if you have a large-scale distributed system which is sharded or whatever else, like Vitess, or even things like Spanner, you have different consistency and different availability. Yeah, and there is actually a third property, which is durability. Basically all these systems juggle these three: durability, consistency, and availability. Yeah, so in previous sections where we ran into these sorts of things, we made a little table which said, for these three types of topologies, these are the relevant attributes you should expect, and it helps people select different usage patterns, or different classes of database for different use cases (there's an illustrative sketch of this idea below). Sounds good. I have actually referenced that section in this doc; it's section 9.4. Right, yeah, okay, cool. I'm in favor of the table too, and in terms of a table, there are just so many potential databases that I don't think you can cover all of the dozens, or maybe even a hundred, that are out there. So first pigeonhole them, with some popular examples in each category, and then cover in the table the topology differences and the factors involved in wanting to split. If a database is implemented by multiple nodes, there are often going to be failure domain considerations, and storage failure domain considerations that might be independent of the compute. So I think covering that in a table would be great. Well, I think that's always dangerous, even listing the popular ones. Could we just figure out what the technical architectural differences are, separate them out, and then have the list lead not with the popular examples but with the architectural classes? Yeah, I agree with that. It's just that sometimes you have a bunch of text, and somewhere in there throwing in a popular example might solidify it in somebody's mind if they're familiar with it. I don't want us to be kingmakers here and nominate whatever examples we give as the foremost examples in the category, though. So, so that we don't rehash the debate from the white paper: we had quite an extensive debate there about what we include and don't include in terms of project names, and we said we should include references to some popular examples that allow people to apply context from what they already know, because that helps them understand the document. For example, it's kind of nuts to describe an object store without mentioning S3, and it's kind of nuts to talk about a key-value store without talking about etcd, especially where some of those projects are CNCF projects as well. So I think having a handful of examples is fine, especially if they're generic household names, and I agree that in general we're not aiming to be kingmakers, but where an example serves the purpose of clarifying the document, I think that's fine.
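Picking up the table idea above, here is a rough illustrative sketch in Python of the kind of topology-versus-attributes summary being discussed; the topology categories and the attribute summaries are assumptions for discussion, not the paper's final taxonomy.

```python
# Illustrative sketch only: three broad topology categories and the
# durability / consistency / availability trade-offs one might tabulate.
# All names and summaries here are assumptions for discussion.
TOPOLOGY_TRADEOFFS = {
    "single node": {
        "consistency": "strong (only one copy of the data)",
        "availability": "low; losing the node means an outage",
        "durability": "limited to the node's local storage",
    },
    "master-slave (primary/replica)": {
        "consistency": "strong on the primary; replicas may lag",
        "availability": "failover possible, usually with brief downtime",
        "durability": "improved by replicas; depends on sync vs async",
    },
    "sharded distributed (Vitess- or Spanner-style)": {
        "consistency": "varies widely, eventual through strictly serializable",
        "availability": "high; designed to survive node and zone failures",
        "durability": "high via replication across failure domains",
    },
}

for topology, attributes in TOPOLOGY_TRADEOFFS.items():
    print(topology)
    for attribute, summary in attributes.items():
        print(f"  {attribute}: {summary}")
```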
Yeah, I definitely agree with leading with architecture, and if, for example, Spanner is clearly the leader in one of those architectures, then if something relevant to Spanner is mentioned there, there's no harm in talking about it. The one thing I'm wondering about is this concept of atomic clocks that many of them use for better serializable consistency, Spanner in particular. That is a big subject, so I'm wondering if we should even mention it. Well, in a lot of these things there's a question mark: do you consume this as a service, or do you build it yourself? Similarly, for example, when we were discussing topologies in the white paper, we said sharded systems are great at balancing out the load, but one of the disadvantages is that you might have operational requirements to rebalance workloads if you got the shards wrong in the first place. Those were some of the pros and cons for each of the different topologies. So I think it's completely fair to say that if you want a really big distributed scalable database, and you want strong consistency rather than eventual consistency, then things like strict timekeeping are a complexity factor, and if you're considering such a system, this isn't something you can just ignore. Cool, so I'll give it a shot. I'm thinking this is probably about two or three more paragraphs; hopefully I can constrain myself to that and still cover all the trade-offs, and then we can have another review of those parts. Yeah, and with regard to your comment about the clocks, I'd say that certainly if the clock requirements for something are so tight that they present difficulty in containerizing it, then it needs to be mentioned. Oh yeah, you're right, we can't necessarily expect clocks to be accurate in a cloud. That's a good point. And then one other small point I had was: do we want to discuss or include a paragraph on caching layers, and does a cache like Redis count as a database, for example? Oh yeah, Redis most definitely counts. It can be used as a cache, but you certainly have the option of putting backing storage on it, and I've encountered many instances where that's done, so there's no doubt to me that Redis is in the database category. I think Redis is more of a key-value store, though. It is, but key-value stores are a form of database; I mean, etcd is a key-value store too.
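As a minimal sketch of that cache-versus-database distinction, assuming a local Redis instance and the redis-py client (the host, keys, and settings are illustrative, not a production recipe), enabling persistence is what moves Redis from a pure cache towards database territory:

```python
import redis

# Minimal sketch, assuming Redis on localhost:6379 and the redis-py client.
r = redis.Redis(host="localhost", port=6379)

# Used as a pure cache: the value lives in memory and simply expires.
r.set("session:42", "cached-value", ex=300)  # TTL of 300 seconds

# With persistence enabled, Redis behaves much more like a database:
# the append-only file (AOF) logs writes so data survives restarts.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")  # fsync the AOF once per second

r.set("user:42:balance", "100")  # now recorded durably in the AOF
print(r.get("user:42:balance"))
```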
Yeah, so that's the thing that makes this weird, because we already have a separate section for key-value stores. I'm even thinking maybe we should merge the two, because many key-value stores are evolving into databases, and many databases are now starting to offer key-value APIs. Honestly, if the right call is to have one section called "key-value stores and databases" and discuss them together, that's probably fine. Yeah, Mongo just announced transactions, so they're completely blurring the lines now. Yeah, and we also have, for example, the TiKV project, which is used as the backing store for the TiDB project, and TiKV is already a CNCF project anyway. So if you think it makes sense to merge, that's not something I would be against. The question is, should we iterate towards that, or should we go for the big bang? Well, it's probably easier to stick to databases for now, and then if we really think they should be merged, we can always do that as a second step. But it's up to you, if you think it's too hard to keep the separation. So I think what I'll do is definitely mention that these lines are blurring and that sometimes things can fall into both categories, which I think is important to mention, and then maybe we can make a separate attempt at putting the two together and see how that looks, and if it looks good, we can merge it in. Yeah, that probably makes sense. Cool, so I have two points: one is to mention that the lines are blurring, and the other is the two or three paragraphs about the trade-offs. There was one more thing I think you said we needed to talk about. Oh, I also mentioned: do you want to cover proxies and load balancers, or things like that? I don't know whether it's much of a factor or not. Actually, I have a specific paragraph for that; I'm actually stating that a proxy is more or less a necessity to run in the cloud. Right, and coincidentally Kelsey tweeted the same thing yesterday. Oh right. Well, I know that things like Envoy, for example, are actually adding database protocol-level support and sharding to the proxy layer; I know they added support for MySQL and for Redis, and they actually do a bit of sharding themselves. Oh yeah, so the section, let me see, the one where I had the comment, says this is specific to MySQL and Postgres, so maybe we can expand that a little. I tried not to mention any specific proxies. I think it's fine to mention something like Envoy, because it's a CNCF project anyway. So I will expand on that. Okay, time to start making notes: I need to mention that the lines are blurring, the paragraphs on the trade-offs, and proxies. Brilliant, very cool. Thanks for all of this. Now let's go through your comments and make sure they are all covered. Consistency and eventual consistency: yes, we are going to cover that; the first and second comments are a combined topic. NoSQL and document databases: for now we'll choose a category and put them there, and then eventually we'll look at merging them. Cassandra: I think last time we agreed it should be in databases; I think it's a clear example of a borderline case. Yeah, CockroachDB and strong consistency: that goes back to the first two. Yep. Databases based on an underlying key-value store: I think that's a good point; we should fold that into the NewSQL category, because that's where they are most relevant (there's a short illustrative sketch after this exchange). In-memory databases and caching layers: I think I will add an edit to section nine to mention Redis and memcached. Okay, so Redis is already mentioned in section nine. Oh cool, perfect, awesome, no need to change that.
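To illustrate the "databases based on an underlying key-value store" point, here is a hedged toy sketch in Python of how table rows might be layered onto a flat key-value API, loosely in the spirit of TiDB over TiKV; the key encoding is invented for illustration and is not TiDB's actual format.

```python
import json

# Toy sketch: a relational-style table layered on a plain key-value store.
# A real system (TiDB on TiKV, for instance) uses a far more careful
# encoding; this dict stands in for the distributed KV layer.
kv: dict[str, str] = {}

def put_row(table: str, primary_key: str, row: dict) -> None:
    # Fold the table name and primary key into a single flat key.
    kv[f"t:{table}:r:{primary_key}"] = json.dumps(row)

def get_row(table: str, primary_key: str):
    value = kv.get(f"t:{table}:r:{primary_key}")
    return json.loads(value) if value is not None else None

def scan_table(table: str):
    # Range scans fall out of ordered keys sharing a common prefix.
    prefix = f"t:{table}:r:"
    for key in sorted(k for k in kv if k.startswith(prefix)):
        yield json.loads(kv[key])

put_row("users", "42", {"name": "Ada", "balance": 100})
put_row("users", "43", {"name": "Grace", "balance": 250})
print(get_row("users", "42"))
print(list(scan_table("users")))
```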
Use of proxies: we have already added that. Oh, cloud-provided databases: that would be a can of worms, because they are all commercial, and in the last year or so there are probably a hundred vendors that claim to offer cloud-provided databases, including, by the way, PlanetScale: we are about to announce our own cloud-provided Vitess database. Right, okay, so maybe let's not open that particular can of worms. Yeah, I'm really wary of that; it's a very contentious subject. I think that's fair enough. Cool, all right. So I'll try to make these edits within the next week or two, and then let's have another pass over this one. That's awesome. All right, should we move on to Theron's doc in that case? So the latest version of the doc is in the meeting minutes agenda; have people found that? The title should be "CNCF sandbox process template (open)", and the reason for that is there were two docs, and one didn't have the proper permissions for people to comment. I'll also put the link here in the chat in case you can't find it in the agenda. Ah, perfect, thank you. So yesterday I was invited last minute to the CNCF closed session to present the doc to the TOC, and they were brilliant: extremely receptive and excited about this. They were going to spend some time going through it and adding comments, but they fully agreed that we need a better, well-defined, deterministic process for people to understand, including how the SIGs interact with the TOC long term. It's still pretty rough, but I wanted people to weigh in; I didn't intend this to be a polished document by any means, just to solicit feedback and come up with a better process. The main points that the TOC agreed on were time-boxing, meaning there should be a responsibility on the TOC to review things in a timely manner and provide a response back, and that there should be a way for projects to understand, if they are rejected, why they were rejected, with the possibility of a re-review. So we talked about the TOC saying: here are the criteria by which we're judging; the project doesn't fit these things, but we would re-review it in three to six months if these things are fixed; or, it's not cloud native by design and therefore it's rejected. I think they have a problem of not saying no, and of just letting things flail because they don't want to say no, and I think that has to stop. They also agreed we don't have a good process now for where this responsibility sits, because I told them I feel like we end up with duplication: now we have SIGs, we review things, we give them a recommendation, and then the projects still end up presenting to the TOC and we start over from scratch. So how do we make it a more efficient process? It makes sense to have the subject matter experts review the projects, do the due diligence, and give a recommendation; so how do we do that better? So, Alex, I don't know that we need to go through it line by line, or whether people should just put their comments in, and I can tell the TOC that we have reviewed it as a SIG, and maybe give it to SIG Apps next and have them weigh in. Yeah, definitely; I think we should all review this. If I'm just looking at this in terms of sections, we're defining
what we expect out of the TOC, the process and the timeline, and also what the process in the SIG itself should be, in terms of when they hand it over to us. Yes. And the other thing they were knocking around yesterday on the call, which I think is worth noting, was having yet another level of entry into the CNCF that's just for neutral IP. To me, that's what sandbox was supposed to be, so I just wanted to point that out; it's under the project rejection recommendation, somewhere projects can marinate in a Linux Foundation public group. We were trying to find out what that level would be; there was no decision made on it, but they talked about maybe needing somewhere a project just goes to get more contributions, or to fix its governance, etc., and to me that was always the intention of sandbox. So I think there's definitely still some cohesiveness that has to happen around what the expectations are at each one of these levels, and it's good that we're talking with them about it. I mean, as we had previously discussed, some of the challenges were this perception that the goalposts were changing, because maybe different criteria were getting applied to different projects, and I think having this process formalized will hopefully remove that issue. I'm not entirely convinced that we need something more than a sandbox, honestly, or something less than a sandbox, because the sandbox is intentionally very broad, and I was a bit surprised by the suggestion. I'm just hoping this helps, because I told them my concern is, one, there's no time bound around any of these things. Keycloak, which we proposed, has been in for over a year waiting to be given a decision, and in that time not only has the entire TOC changed, but it seems like the criteria by which projects are judged have changed. So things have to be time-boxed, and people have to understand the criteria against which they're being judged, because though I know the TOC have good intentions, I hear these rumblings of unfairness: how come this project got in but my project didn't, and why was it rejected? And they also haven't been doing this in public; they do it quietly in the background, but that doesn't help other projects learn what they should be doing. I think it's okay to say a project doesn't fit based on these criteria and have that be public information; I don't think it needs to be done in private. So I'm hoping more transparency comes out of this. Yeah, I think people may be concerned that once they know the criteria, they may game them, so maybe we need a way to verify. Yes, and Joe Beda brought that up: he doesn't want a set of criteria where someone says, well, I've met all of these, so you have to accept me. I tried to make the language in there address that: these are the minimum viable criteria, not the complete set of what a project needs to be. It just needs to be worded in a way that gives the TOC flexibility but also gives good enough direction to these projects. So I don't know; hopefully it's a step in the right direction. I think it's definitely a step in the right direction. For what it's worth, my two cents is that the sandbox is a great place where a project can mature and take its first few steps under the
foundation, right? And I think a lot of the challenge, especially the comments about gaming, is because the sandbox shouldn't be about marketing, or gaining commercial plaudits from the CNCF because you're in the sandbox. It should be about building up the community, the governance, and whatever else to get to the point where you get into incubating. And I think the key thing that's missing here is that the guidelines around not marketing, and keeping sandbox projects as a separate category, are really, really important, because a lot of the question marks we keep hitting around moving goalposts exist because sandbox projects do then get marketed directly, and so people see sandbox projects as gaining an advantage. That's the challenge here: if there wasn't that perception of gaining an advantage, and it was all about the community, then some of these issues would just go away. Well, once you become a sandbox project, then by default you get an intro and a deep-dive session at KubeCon, and I think that's huge too, so even if they say no marketing, once you become a sandbox project you definitely get a lot more traction. Yeah, that is true. And I agree; it was unsettling to hear such diverse comments from the TOC around what people thought sandbox should be. I think some people have really high expectations of what a sandbox project should be, whereas I feel like a long time ago we removed the due diligence for sandbox for the exact reason that a project could grow and flourish and expand its community there. Things like that have to be consistent for every single member on the board in terms of what they expect, or it's chaos. Yeah, and when they were discussing sandbox projects, they were talking about accepting them in the hundreds, which probably won't scale for those KubeCon SIG talks, right? If suddenly a hundred of them want to present, that's arguably not going to work. Yeah, I agree. So feel free to add suggestions; you can put them directly in the doc or add them as comments. I want it to be a community-driven document; I don't feel like I need ownership of any of this, I just would like to improve it for everyone. I think we've certainly run into some of these things, being one of the new SIGs reviewing projects, so your input is definitely desired. That's cool. Did you commit to having this done by any particular date, or having it handed off to the next SIG by any particular date? No, it was a last-minute thing: hey, can you join and talk about your doc? And I said sure, and I didn't commit to dates or finalization; they wanted some time to review. I can bring that up on the next public call and we can figure out a date we could drive towards. Ideally I think we would want it done by KubeCon; we would want to have these criteria set forth and published, so new projects can look at adhering to them. Yeah, I think that's a good goal to have. Okay, that sounds good. Were there any other specific points we wanted to discuss around this process, any particular process points that you
wanted specific feedback on right now, or can we do this offline? I'm fine with everyone just adding their comments offline. Okay. So the last thing on the agenda is to discuss the benchmarking and performance paper, which I've only just started putting together. I want to apologize for letting this slip; I was meant to set up a call a couple of weeks back, and then life happened and it didn't work out as I planned. So what I'd like to do just now is discuss some of the ideas and have a bit of a brainstorm, so that I can put an outline together and share it with the group, and then we can have a meeting to start fleshing out the details of this paper. In terms of scope, what we want is a place where we can offer information on the tools and methods for measuring performance and benchmarking cloud native storage. The goals I wanted to specify are the following three: define the commonly used tools and their test criteria; define the common pitfalls that people come across; and provide the ability for users to use these tools to measure their own environments. The non-goals, the things we absolutely don't want to touch, are that we're not going to publish benchmark numbers, and we're not going to provide our own vendor or product or project comparisons. It's all about giving the end user the ability to run their own tests. Thoughts, comments, questions? Does that make sense? Why would we not publish the benchmark numbers, just curious? So my take on this is that benchmarks are very often a "how long is a piece of string" sort of concept. There are so many factors that can affect a benchmark, from the environment you run it in, the CPU, the cloud instances, and the networking, to the actual physical storage, how everything's interconnected, and how the storage is configured, that published numbers are generally specific to one very particular environment. And I think it always opens you up to being gamed, because it creates an environment where different people might want to publish different benchmarks, each tweaked for a particular use case. I don't want to get into that arena, where we're having to argue the pros and cons, or how you compare apples to apples between different benchmarks and different numbers. What I'd really like to focus on is giving people the ability to run their own tests in their own environment. If they're looking to test two projects, or two tools, or two service providers, or two storage vendors in their Kubernetes cluster, they can use the tools to compare them in their own environment. And obviously what that allows end users to do is publish their own numbers, but those would be their numbers for their environment, as opposed to our numbers in some hypothetical environment. Yeah, and people cheat a lot; they turn off safety features of the databases to get better numbers and so on. Well, yes, and this is why I said I want to document common pitfalls. For example, storage systems will go faster if they're replicating with loose consistency, or asynchronously rather than synchronously, and they will perform amazingly if the entire dataset fits in cache and the test never actually hits physical media.
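As a hedged illustration of those pitfalls, the following Python sketch drives fio with settings chosen so the test measures the storage rather than the operating system's page cache. It assumes fio 3.x is installed and that /mnt/testvol is the volume under test; every job parameter here is illustrative, not a recommendation.

```python
import json
import subprocess

# Minimal sketch: run a random-read fio job that avoids the classic
# cache pitfalls, then pull headline numbers out of fio's JSON output.
cmd = [
    "fio",
    "--name=randread-test",
    "--filename=/mnt/testvol/fio-testfile",  # assumed test volume
    "--rw=randread",
    "--bs=4k",
    "--iodepth=32",
    "--size=64G",        # larger than RAM, so the page cache cannot hide the media
    "--direct=1",        # O_DIRECT: bypass the OS page cache entirely
    "--time_based",
    "--runtime=600",     # long enough to get past warm-up and burst windows
    "--output-format=json",
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(result.stdout)["jobs"][0]
print(f"read IOPS: {job['read']['iops']:.0f}")
print(f"mean read latency (us): {job['read']['lat_ns']['mean'] / 1000:.1f}")
```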
These are things we can document in a few paragraphs, so people know what they're comparing. But I don't want to get into the complex scenarios of trying to justify why a particular system produces a particular number, because if promoting a particular project is a can of worms, describing the performance of a particular project is a gigantic can of worms. I'd agree with that. Even if we give users the tools to run their own benchmarks and then publish them, I'd go so far as to say we should be hesitant to publish links to those results unless we're prepared to validate that the tests were done in a repeatable and provable fashion. Yeah, absolutely. At the end of the day, we don't want to become like an SPC institute or whatever, running paid performance tests; that's not the scope of this. This is more a case of: we've written a paper, people understand the different aspects and attributes of a storage system, and what we're trying to do here is give them the capability of measuring one of those attributes, which happens to be performance. The next thing might well be something like consistency, for example, and we might suggest different tools to test those kinds of conditions. But I don't want to be in the business of publishing marketing numbers, so to speak. Yeah, I agree. So the summary is: we're going to set ourselves up to teach people how to fish; we're not going to catch fish for them; and we're not going to run the fish-mark, basically. Yeah. So do we need a disclaimer that they can't use those numbers to feed their own marketing? I don't at all disagree with the reasoning behind this, but I also don't want it to be used as a weapon against other people, with someone using the benchmarks for their own purposes. Even though we don't publish numbers, how do we prevent other people from taking their numbers and doing comparisons? So are you saying you're afraid of something like some vendor putting out a press release saying, "we ran the official CNCF benchmark process and here are the numbers"? Exactly. Not that I would think anyone on this call would ever do anything like that; I just don't want it to be used as a weapon against us, where we're suddenly stuck in the middle. Well, I think unless we make some attempt to ban people from making statements like that, we can't stop them. No, but we could require that before they use the tool they have to agree that the results are not for public release, that this is just for personal use; some really strong legal disclaimer before they're even allowed access. So, a couple of things. In the first instance, we're not building a tool or a framework; we're describing tools which are publicly available anyway, and most of those have disclaimers of their own. Either way, we're not building a tool that somebody is going to use as "the CNCF storage tool" or whatever. And secondly, I think the CNCF has
fairly well-documented policies around trademarks and things like that: you can't really use the CNCF name and logo without the CNCF's permission, so I'm not too worried about that, and the CNCF is actually pretty draconian about enforcing it as well. Yeah, I agree. And the other thing, in terms of stopping them from a legal perspective: since anything we produce would be under an open source license that allows people to fork it, I don't see how we could stop people from taking whatever it is, declaring that they had forked it, and doing whatever they want. Yeah, okay. I just wanted to note my concern there, but I think you're right: there should be a framework in place, and people will decide how to use it on their own anyway. I can't stop people from using open source stuff to justify things. So I guess we should talk about how to configure the tools, and basically not how to configure the database they are testing against, right? That's what it amounts to. Yeah, exactly. And mention that a given benchmark tests a given type of workload, so make sure the workload you intend to run in production matches what the benchmark is trying to do; that was my plan. It also means people can use the tools to benchmark different configurations; for example, if they want to measure query performance with two replicas versus three replicas, they can do that too. So initially what I was thinking of doing was focusing on volumes and databases as the two things to measure. I know we could also potentially do key-value stores, but I don't have a ton of experience in that space, unless somebody wants to help with that area. Certainly in the volume space there are a number of good open source tools, including obvious ones like fio, where we can document the different types of benchmarking criteria: block sizes, random versus sequential, read/write ratios, caching versus non-caching, compressed versus uncompressed, deduplicated versus non-deduplicated, and all of those obvious things which are fairly well understood. And with databases there's certainly quite a bit we can do with things like sysbench. For key-value stores, there's perhaps the YCSB benchmark suite, which is quite popular, but I would probably need a bit of help to structure that part of the document. So what do you all think about focusing on volumes and databases as a first step? Yeah, we have ourselves run both sysbench and TPC-C benchmarks, and when I was at YouTube we actually ran YCSB against Vitess, though I don't know how much YCSB has evolved since then. I could have the person who ran the benchmarks document how they should be run. sysbench is fairly straightforward: you just point it at a database and it runs its queries.
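A hedged sketch of that sysbench flow, assuming sysbench 1.0+ with its bundled oltp_read_write Lua script and a disposable MySQL schema called sbtest; every host, credential, and size below is a placeholder.

```python
import subprocess

# Minimal sketch of the prepare / run / cleanup cycle for sysbench's
# bundled OLTP workload. Connection details and sizes are placeholders.
common = [
    "sysbench", "oltp_read_write",
    "--mysql-host=127.0.0.1",
    "--mysql-user=sbtest",
    "--mysql-password=sbtest",
    "--mysql-db=sbtest",
    "--tables=16",
    "--table-size=1000000",
]

# 1. prepare: create and load the test tables.
subprocess.run(common + ["prepare"], check=True)

# 2. run: the actual measurement phase.
subprocess.run(
    common + ["--threads=16", "--time=300", "--report-interval=10", "run"],
    check=True,
)

# 3. cleanup: drop the test tables afterwards.
subprocess.run(common + ["cleanup"], check=True)
```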
TPC-C is more complicated, and there's been recent interest in TPC-C benchmarks, though for a long time nobody even talked about them. Are the TPC-C benchmarks publicly available, or do you have to purchase some sort of license to use them? Oh, it's a public standard, and there are many implementations. There's actually a sysbench Lua version, so you can run TPC-C using sysbench itself. Oh, I was not aware of that. Yeah, Percona published that project as open source; that's actually what we used for our TPC-C runs. So, Nick, I see you're on the call, and I know that previously you had reached out to say you might be interested in helping out on this; did you have any ideas? Well, the only thing I was going to say, and I agree with what's been said so far, was about the section at the beginning with the commonly used tools and pitfalls. I wonder whether we also need to define the concepts, and I think there was a reference to that: in terms of latency, what that really means, spikes versus average latency; throughput; and then do we need to go down to the level of SSD write cliffs and that sort of impact, or do we gloss over that? Yeah, so SSD write cliffs would be one of those things I would suggest comes under the common pitfalls. People run these sorts of tests, and if they don't run them for long enough, for example, they only see the cached behaviour of the SSD, and then it slows down over time. And I guess that sort of thing also applies when you're benchmarking in the cloud, because very often the cloud providers give you a certain number of IOPS for a volume, and then you run out of burst credits, which is almost like an SSD write cliff. So yeah, I think that's probably worth covering. So if we're agreed on some of these concepts, I could put a quick outline together, and then it would be really awesome if we could have a separate call for the people who are really interested in helping write some of the content, to get together and split up the work, if that makes sense. Do we have a tentative list of tools we want to cover? I think we can get a long way there by covering fio and potentially sysbench, unless anybody has an idea for another killer tool, but I think those two probably give us most of what we're looking for on day one. Cool. So in that case, I don't know if there's a specific person you think we could work with, and Nick, is there a particular time we could sit down and kick off after I send out the outline? Sure, yeah. I vaguely remember asking you for your email to start a thread, but I don't think I actually started it. Ah, okay, why don't I do that then: I'll ping you guys an email and we'll start the thread. That makes sense, thank you. And then I'll add the engineer who ran the benchmarks for us. Fantastic, that would be brilliant. Okay, so that was the last thing on the agenda. Did anybody have any other items they wanted to cover today? So there was a third document that we discussed, which I'm trying to remember; I think it was the "how to run each system" use-case examples that Luis was working on, right? Oh yes, that one. Luis wasn't able to join today, so I didn't put it on the agenda, but maybe we can discuss it next time. Sounds good.
Yeah, because ideally we want that template ready for KubeCon as well, if at all possible. Anything else? Okay, in that case I think we're done, with four minutes to spare. Thanks for joining, everyone. Thanks, everyone. Thanks, Alex. Thank you, cheers. Bye-bye.