Hello. So as last time, we said we would not be waiting for anyone. Of course, we want to establish that things start on time. We will actually start on time. Let me just — I can't see the channel. Yes, Steve. So let's start. I have to admit this is the first time I'm actually coming in somewhat blank. Of course, I have meetings back to back 24/7. TOC and SIG updates: we literally dropped off the call between TOC and SIG. The vote for tech lead will start officially today. Apparently I made some formal mistake or whatever, I don't know; I thought we started this formally like a month ago, but whatever. We will be having the formal vote on tech lead starting today. Amy volunteered to fix whatever I did wrong. Super nice of her. I'm really happy for all the formal stuff. The other update is the due diligence of — sorry, there's one more. The TOC was concerned about the diversity level of the chairs. So they actually asked us to look for different nominations, which obviously is bad for Steve, who's also here. We reached out both via the TOC and also behind the scenes, but anyone who wants to nominate someone, or nominate themselves: speak up now, or send email, or ping us on Slack, or whatever. We are there. We are listening. So please, please reach out. Now we're coming to due diligence updates. For Cortex, there are some open questions which are currently being addressed. For Thanos, all open questions have been addressed, so Thanos is moving to the public comment phase. That thread is already on the TOC list. Everyone here is encouraged to also voice support if they want to voice support. It's ongoing. It's good. And feedback from the TOC is basically that the due diligence is good. They actually called out the Thanos doc — though, just for anyone on this call, the Cortex and the Thanos docs are actually pretty much the same in structure — as an example of how to do a due diligence well, which is super nice. And I think Bartek deserves a shout-out for driving the main brunt of that work. 
So yeah, that's it. And, one more thing: I informed the TOC that we are doing a little BCP doc and that we are doing the use cases for data analysis, and we'll start working on this. Nice. I think one more thing I got from the call we had literally one hour ago was that CNCF is really encouraging SIGs to use the CNCF blog space. So we are welcome to produce any content we want relevant to our topic, and it will be published on the CNCF blog. I think recently SIG Storage, or something like that, published something on their white paper. So we are encouraged to do that as well. What do folks think, as an idea for a white paper, or rather a blog, to identify opportunities for new contributors to come join our SIG and do stuff? I think we have a lot of potential future ideas identified, and in some cases we already have GitHub issues that are marked with, you know, "good first issue". I bet we can expand that a little bit just by going through the charter and blogging about that — sort of introducing this big scope and just a general call for contribution. That is a pretty good idea. I mean, we could definitely announce, you know, the SIG, and sort of explain in writing a little bit what we do there as well. Yeah, there was a blog that went up, I think, a little while ago — a couple weeks ago — that caught me a little bit by surprise, so I didn't actually get a chance to respond before they printed it. But that was sort of from the outside in; perhaps now it's time to do a blog from us to the broader community. I can check with CNCF if that's content which they would like to see. And if it is, I can just start a doc and throw it in the channel. But I like the idea. It's a good idea. Because the inherent problem we have is that most people who are here are here because they already actively watch what is happening within CNCF, whereas the blog might be a chance to get coverage outside of CNCF as well. 
I suspect that especially end users and such — not all of them, as we see on this call, but most end users — will not actively consume anything which is happening within CNCF in its absolute inner workings. So we can spread this more. Is anyone interested in spearheading this? Or I will, if nobody else wants to. But I'd rather contribute than drive it personally, just so we can fan out. If no one else is volunteering, I can do it. I'm starting to get used to doing all the blog posts anyway. I would be overall interested. I would be interested in contributing to the blog post, I guess — you know, as a writer, maybe, in the beginning. Cool. So do we want to start it as a Google doc and just collaborate that way? Yep, as per usual. Yep, sounds good to me. Just a moment — I just set the action point before that, actually, to reach out to CNCF and ask if they want to see this. By the way, everyone is more than welcome to also chip in on writing the meeting notes, which mainly land on me for now. So the next point of order, the topic, is recommendations for an OLAP system. And there are tons and tons of suggestions within that document. We actually have two documents now. Just a moment. Here are the use cases. Should we just start walking through the use cases to see what people like, what they dislike, and then just start accepting stuff? Okay. You can all see my screen, I guess. I probably need to resize to match a more normal screen later. Okay. Can I assume everyone has read this, or should we walk through it slowly? I think there are literally last-minute changes by Ivan, or people, literally, you know, two hours ago. Okay. So I'll just give everyone some time to read what's on the screen. I'll also resize some more. That's the problem with ultrawides: I don't have a concept of what other people's screens look like anymore. No, I'm in this with the wrong account. Bartek, Matt — if you can add my work email, or else I can close this window. Yeah, let me do that. 
Sorry, what? I just realized I can't. I can't. Yeah, I'm doing this, no worries. Recommendations — it should be shared now. Oh, I see. Now? Yep. Thank you. I suppose it's a little off topic, but do we have the ability with CNCF to have a shared Google Drive just for our SIG, so that we don't have this, like... I can ask; with my Prometheus hat on, this was a rather long process, but with Prometheus we needed probably different stuff. Yeah, let me take this as an action point. Actually, I can take this one. I've been a little bit between vacation and things, a little bit not here as much as I'd like to be. I can chase it down with Cheryl. And there's a service desk request I can make, I think. Yeah, you probably want Ihor, but either works. Yeah, I'll take it. Okay, cool. I put it down as off topic. Okay, so the first addition is this one. In my own opinion, it's valid. The only thing is: should we define "strongly"? I think it's somewhat obvious, inasmuch as you want to carry the same metadata between the different things. And being someone who has argued for years that something like Loki needs to exist, I know where this is coming from: you need to have the same metadata on all the things to make jumping between the different systems easier. So I would tend to just accept it as is — or do we need to put in verbiage about what "strongly" means? Maybe I can explain a bit more. What I would like to see is — and the problem we're facing right now is — there is no way for us to relate all these things together. Like you mentioned, there is no shared metadata for it. Loki didn't exist until very recently. And we cannot offer users even a single way to look at metrics, logs, traces, error reporting in a consistent way. And there's no single dashboard for all of those things. So it goes further than just anomaly detection. Having all this data together and accessible through a single view, for instance, would be a huge improvement. So I absolutely agree. 
I mean, I'm obviously biased, working where I work, but yes, I agree with everything you said. I think for companies that take more of a — I don't know — a lot of people on this call are just super jazzed about the topic, but at the end of the day: time to remediate an issue, time to diagnose an issue, MTTR, MTTD, all that stuff — that's real money. So I think this one in particular, this correlation, and just the time it takes to do this manually without something like this, is one of the bigger costs that makes it an opportunity and then something worthwhile chasing. I know we're not prioritizing, but if we were, this is the one that I see come up a lot. Yeah, no, it's a valid point. That's implicit in, I think, everything we said, but yes, it's probably good to make this explicit. So I would just accept this. I made this addition, but I don't think that's contentious. Three, two, one. Good. So: data-governance-specific use cases. I think it's not really part of analysis. On the other hand, it is a valid point which will probably play into the end results. So I would tend to keep this; the question is just whether it's in the right place — maybe we can find a better place for it, but the statements, in and of themselves, I think are valid. Is Ivan even on the call? I looked through the attendees and I don't think so. Yeah, it looks more like requirements, in the same way as "the system should have this latency" — that's kind of the same — or "it should be secure". Definitely worth remembering, though. Not fully — this is a multi-tenancy sort of issue also, right? And the one thing which is missing — no, it's absolutely multi-tenancy, which is a very valid point. And, for example, deliberately ignored within Prometheus — but it's a problem which people have. In our case, it's a problem we have with logging in the first place. So we use Elastic to cover that, and it kind of sucks to not be able to switch over, for instance, just because multi-tenancy is kind of harder. Yeah, absolutely. 
Like, multi-tenancy is one of the hardest things to put into software if it's not there from day one. Of course — it basically flips everything within the data storage, which is usually one of the most painful things to touch. Okay, so I think we can accept these as valid use cases, but we need to find a different place for them — at least this one. So let's talk about... Just to be clear: is our goal here just to quickly confirm that these are all valid use cases, but to forestall discussion of either priority or extrapolation on the use case? Or do we want to go deep this time today? No — the intention, as per last week, was to just collect use cases and get quick consensus on all use cases: are they valid, are they not valid? And we have quite a few of them. Then, in the next iteration, without needing to be interactive and in real time, basically comment on them. And that means, obviously, prioritization, and: how can those use cases be solved? Awesome, thanks. So basically, here we just have: is this valid? Do we agree? Do we need to be more specific, less specific, cover other things? Like, for example, the MTTR comment or the multi-tenancy comment. This is exactly the kind of honing of the statements which we're looking for, and then we just accept it and work based on this. What does this actually mean? To take a specific example: within Prometheus, multi-tenancy is deliberately ignored. But then it makes sense, with my Prometheus hat on, to get a challenge from CNCF or the wider community or whatever: "what are you doing about multi-tenancy?" And with my Prometheus hat on, the answer is: we deliberately ignore this, for very good reasons. But then at least we make this more explicit. And I know, with my Prometheus hat on, we have this in our documentation as to why we do it this way, blah, blah, blah. 
But then it becomes a checkbox, and either we tick it or we say, well, we don't tick it for reason X — which, with my SIG hat on, is, I think, absolutely valid: projects may deliberately ignore use cases, and then we can say, okay, this project doesn't follow the use case. Unless it's super high priority, and then maybe it becomes a hard requirement. But then again, the SIGs cannot directly impose any requirements onto member projects; that was made very clear during the SIG creation process. Of course, that's how it actually works. Okay, so: "it should be possible to extract a subset of data for specific use cases while keeping others protected". I don't know what to make of that statement. It seems to be a copy of this statement. So for this statement, as currently marked, I would tend to delete it. Or am I missing something? Yeah, I think that's essentially multi-tenancy. Yes. Or, to phrase it differently: extracting is just a specific case of accessing, so we can just kill it. Correct. Okay, cool. And this is what I just put in, because this is also something which will start to bite more and more: data sovereignty laws. Like, "this data must not leave a certain jurisdiction" — Singapore, China, the European Union come to mind. This might be solved by just deploying in a different zone or something, but we should at least think about this going forward. So yeah, I would tend towards accepting those two, then. Comments? Yes, no? I was just — I wasn't sure about this, but I was assuming that this also meant things like ePHI or PII. PII is a very valid point — for health information like ePHI as well. Just to — can we accept those? And then we should jump directly to PII. Anyone against accepting those? Three, two, one? Good. Yes, PII is absolutely — "I need to filter out PII". 
Yeah, and then the last kind of bucket of data sovereignty, at least in the US, is the CCPA — the consumer protection stuff that went into effect — where a consumer can say, you know, "you shall delete all of my stuff". And this is a problem for many log aggregation systems, and that's another scenario that I see. All of these bucket under this data sovereignty, or data governance, compliance. I mean, those two points — or at least this one here — point towards needing mechanisms for rewriting the storage and such, where, for example, Thanos has a really nice storage story, whereas Prometheus and Cortex not yet so much. I mean, Cortex is getting there; Prometheus only goes so far. But yeah, I think those are valid in those cases. I realized I added it without making suggestions — but should we add those? Anyone against it? Are these valid use cases? Very good. Let's just make this a heading one so we don't forget this. Oh, funny. I thought people would have thoughts around this, but it's probably self-explanatory, what I meant with this one. Okay — Rob, are you on the call? I can't see the participant list. Ah, here you can see it. No, he's not. So I think this one is a valid use case, but it's not really an atomic use case, because it's basically just a vehicle for saying "I need to access lots of data", which is kind of the point of what this whole analysis push is about — or am I missing something? I mean, this is doable right now with basic Prometheus queries, but if I assume, and if I infer, that they really mean on large sets of historical data — like over the last year or two years — then that gets to be more into this OLAP or analytics use case, versus just a straightforward "compute a rate" using metrics. Yeah, it looks like long-term storage, essentially, with the same kind of aggregates — well, the same details — across years and months, yeah. So this is basically "batch stats queries over years of data", in a nutshell, if you want to sum it up in a few words. 
It seems like — I just don't want to delete it without Rob getting back to us. I mean, coming initially from the mainframe world, this concept of interactive queries versus batch queries — where batch queries can just run in the background and I don't care when the result comes back, I just care that it comes back — is probably something which also plays into this. The one thing that may be different here is that you might be aggregating over dimensions other than series ID. In PromQL, you can only aggregate over series ID, which matters for aggregates like the median, where re-aggregating makes a big difference. So one of the things that I think this is getting at is aggregating over different dimensions than just series ID. That's a really good point, yeah. I didn't get the point, sorry. He's saying that one of the harder things to do, if I understand, is to aggregate over things that are not series. So there are some aggregations that involve revisiting all of the data, or reformulating it, or — okay. You know, outside of just the PromQL box, where you aggregate on series — what if you have other attributes that you want to aggregate on? For sub-attributes of the series, for example: getting a median grouped by the namespace is different from getting a median grouped by the entire series and then grouped by the namespace. Yeah, but you can do all of that with PromQL, no? Because you have different labels, different dimensions, as part of the metric. As far as I understand PromQL, your initial aggregate will always be on the series, and then you could re-aggregate. But again, for some aggregates, that doesn't give the same result as aggregating initially just on namespace. Yeah, so you can definitely do multiple aggregation layers, and you can do that even across different periods of time, if you have aggregations over time as well. 
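The median point made above can be shown numerically. A minimal sketch, assuming made-up per-series latency samples (all names and numbers are illustrative, not from any real system): aggregating each series to its median first and then taking the median of those medians gives a different answer than pooling all raw samples in the namespace, because the median is not re-aggregatable.

```python
import statistics

# Hypothetical per-series latency samples, labelled by namespace.
# Each entry is (namespace, series_id, samples).
series = [
    ("frontend", "a", [1, 1, 1, 1, 1]),  # 5 fast samples
    ("frontend", "b", [9, 9]),           # 2 slow samples
    ("backend",  "c", [5, 5, 5]),
]

def median_per_series_then_namespace(data, namespace):
    """Aggregate each series to its median first, then take the median
    of those per-series medians (a series-first, PromQL-style path)."""
    per_series = [statistics.median(samples)
                  for ns, _, samples in data if ns == namespace]
    return statistics.median(per_series)

def median_over_namespace(data, namespace):
    """Pool every raw sample in the namespace and take a single median."""
    pooled = [x for ns, _, samples in data if ns == namespace
              for x in samples]
    return statistics.median(pooled)

print(median_per_series_then_namespace(series, "frontend"))  # 5.0
print(median_over_namespace(series, "frontend"))             # 1
```

The per-series path gives the median of [1, 9], which is 5.0, while pooling all seven raw frontend samples gives 1 — the series with more samples dominates, as it should. For re-aggregatable functions like `sum` or `max` the two paths agree, which is why this only bites for quantile-style aggregates.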
This became possible only kind of recently — like a year ago — thanks to subqueries. So you can build totally complex queries, and you can totally mix whatever you want: take a median group and then have different aggregation groupings across different labels resulting from the initial grouping, or things like that. Unless I'm missing something, I think there is no limitation on that in PromQL. Sorry — what I mean is: in PromQL, you can get the median by series ID, and then you could get the median by namespace. But getting the median by series ID and then the median by namespace is different from getting the median by namespace to begin with, and that you cannot do with PromQL. There's always an underlying lower level of aggregating by series. Maybe this isn't important, but for some types of analysis it actually ends up being important. Nice. It sounds like it would be awesome if you could provide some example, like, offline, even here, and then we can look at that. Kind of makes sense, yeah. Yeah — say you're using Timescale or something like that, so it's Postgres underneath, and you're remote-writing metrics to it, right? You might — yeah. That's a good point. I'm happy to accept it and move on, and just take an action to write this up in more detail. If you want to do that directly — you seem to have the best handle on it. So, as an aside: I'm obviously always torn, with twenty different hats on, about how much to interject when we diverge into interesting discussions away from the actual topic at hand, which is always super hard to juggle, and any feedback as to how much leeway versus whip-cracking people want is highly appreciated, because I don't know how exactly people want to have it. Yeah — this one, I think, is a valid use case, capacity planning, and we can just accept it as is, of course. 
You can generalize it a bit to trend analysis, or predicting time series, because it's not only about capacity planning: for any metric, you might like to predict what it's going to look like in, for instance, a year, or a month. It doesn't have to be just "how much do I need to scale my cluster in a year". I agree with the sentiment expressed, absolutely, and that's precisely what this item is about as a use case. I think this is already a valid use case. I mean, we can start deleting words — for example, it doesn't matter if it's a Kubernetes cluster; it's just whatever work environment you have. On the other hand, having examples is not bad either. Yeah, and I'm saying that because I think below there is another comment, or suggestion, from Ivan about pretty much the same thing: you want to do prediction of what you need to do later on. Let me see if I can find it. Oh yeah, you mean this one? Yeah, exactly. Yeah, I had written a similar one too. There's a PromCon talk from Munich last year, from GitLab, about exactly this. Yes, we can already start condensing this down. This makes sense, but also, when we come to here, we can just start removing stuff. No, it makes sense. It can be even wider. I think this is suitably generic. I think I can only accept the whole thing as one. I'd have to accept it, but I'll just mark it as something which is under discussion. Yeah, we'll have to dedupe later and do multiple passes. This one, as marked — maybe we can rephrase it like this. I think this covers the intention; and the comment is gone because I accepted stuff. Okay, and I would actually say that we can move this one to here. Everyone agree? I need to learn to ask if anyone disagrees. So: anyone disagree? Three, two, one? Also, looking at the time, we maybe have like five minutes more, and then we need to flip over to the next topic — or we decide as a group to stay with this document. 
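The capacity-planning/trend-prediction use case discussed above can be sketched in a few lines. This is a minimal illustration under stated assumptions: synthetic daily disk-usage numbers and a plain least-squares line stand in for whatever data source and forecasting model would actually be used on real metrics.

```python
# Minimal trend-extrapolation sketch. The data is made up: 30 days of
# disk usage growing ~2 GiB/day from a 100 GiB baseline.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

days = list(range(30))
usage_gib = [100 + 2 * d for d in days]

slope, intercept = linear_fit(days, usage_gib)
forecast_one_year = slope * 365 + intercept
print(round(forecast_one_year))  # 830 (GiB) for this perfectly linear data
```

On real metrics you would of course pull the series from long-term storage and likely use a more robust model (seasonality, confidence bands), but the use case itself — "given history, where will this metric be in a year?" — is exactly this shape.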
Okay, all of these are valid, but they are not actually use cases. So maybe we can put this into a considerations section, or an examples section. So if we phrase it like this, is everyone okay with this? Anyone disagree? For context: this is one of the tricks from the IETF, where you have the actual intention of the document — in this case, use cases — but then you also have a considerations section, which basically allows you to put non-core stuff into the same document while clearly keeping the distinction between what you actually want to talk about versus what is good to keep in mind. So I would tend towards doing it similarly here, where we can have a section at the end to put considerations and examples. Yeah, makes sense. I like it. Anyone disagree? Okay — I'll just accept everything and we'll walk through it. As I kind of flipped through: did anyone see if I closed any comments? I think I only accepted edits, and I hope I didn't close any comments. No, I didn't close anything. Thank you. Okay, this one we can kill, because we have that. So, how can we phrase this as a use case? I think it's valid, but isn't that anomaly detection again? It kind of is. Which is — I think, honestly, anomaly detection is largely out of scope for this effort, for the simple reason that if your dataset is large enough, you will always find correlations. So you would actually need domain-specific things which can look at stuff and optimize, and those things are usually human. Of course, machines depend on pre-tagged data; even in cutting-edge machine learning, you still need humans to tag data. So all of this is a complicated way of saying: if you do just pattern detection and anomaly detection, you will always find the wrong stuff. Well — yes, someone suggests root cause analysis in the chat. So: to flip this into a use case which involves the human? 
Well, true, but, you know, there might be systems that allow easy definition of how anomaly detection should be run. And if you can at least create some integration that will pull this data from where you want — and this will be defined by humans — this is already part of the system: to make that possible, to allow that, right, to define the API that would satisfy this need. So this is still kind of a valid use case to me. I'm not saying it's not valid; I'm just saying we need to flip this. And for the generic machine learning use case, I'm not convinced this will work anytime soon. But yeah — so how can we flip this into a use case? Is this something which covers all intended meanings of what we talked about? Yeah, I think it covers it, because if you look at the second bullet point below, there's something like, for instance, symptoms versus versions of software, and you could have a human saying: well, for this version we expect this, for this new version we expect that — and suddenly we don't have this "normal state" anymore. That's kind of practically what you wrote in the end. Okay, but this would be covered by this? That's what the second bullet point hints at, at least. Yes, yes — I mean, yeah, yes, but — sorry, let me restart the sentence. My point is, I'm trying to reduce text as much as possible; as per usual, writing more text is easier than writing less text, so I want to really condense this into atomic use cases, and then we can start exploding from there again. So the question I think I was trying to ask is: is this covering everything in here, or did I miss anything? Because this one and this one — these two are, to me, basically the same thing, just phrased differently, and by extension these examples fit here. At least that was my thinking, but I might be wrong. 
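The "human defines the expectation per version, the system flags deviations" idea above can be sketched very simply. A minimal sketch under stated assumptions: the version names, latency ranges, and samples are all invented for illustration; a real system would pull the samples from a metrics backend and let operators manage the expectations declaratively.

```python
# Minimal "human-defined expectations" symptom detector. A human
# declares an expected latency range per software version; any
# observation outside its version's range is flagged as a symptom.

EXPECTED_MS = {
    # version: (low, high), defined by a human operator
    "v1.2": (50, 120),
    "v1.3": (40, 100),  # the new version is expected to be faster
}

def symptoms(samples):
    """Return (version, value) pairs falling outside the human-defined
    expectation for that version."""
    flagged = []
    for version, value in samples:
        low, high = EXPECTED_MS[version]
        if not (low <= value <= high):
            flagged.append((version, value))
    return flagged

observed = [("v1.2", 80), ("v1.3", 95), ("v1.3", 180), ("v1.2", 130)]
print(symptoms(observed))  # [('v1.3', 180), ('v1.2', 130)]
```

The point of the sketch is that nothing here learns a "normal state": the baseline changes with each version because a human said so, which sidesteps the "large datasets always contain spurious correlations" objection raised above.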
So if I look at the next one, there is also a question of integration with things that are not observability-related; for instance, there is a mention of etcd, but you can imagine it could be anything that's not Prometheus, Jaeger, Loki, and so on. Yeah, this one would be going into this section, I think; that's more this system, or just this type of problem. Okay, so to close this mentally: this one — I can delete everything, because it's covered by the sentence above, correct? Does anyone disagree? And you're more than welcome to disagree — I actively encourage you to disagree — because, again, I might just be wrong. I think it's not only detecting anomalies; it's also correlating the causes for those anomalies. The problem is not just to detect the anomalies, but also to say: I've detected X anomalies and they are all on this specific software version — which I don't think the sentence covers. Maybe just "detect the anomalies and their causes in an automated fashion". I'm not convinced you can detect the cause. Maybe symptoms, rather than causes. Yeah — correlation and causation are, like... But then we can just replace "anomalies" with "symptoms", of course. Basically, I'm not convinced we can actually detect the root cause automatically, because this presumes encoding everything into the system, and I don't think we as a field are at the point where we can. I'm making an unrelated remark, though, because on the agenda there are still eleven minutes of topics and there are four left. Thank you very much — very, very much. Let me check: guidelines for new metrics, and the list of projects / landscape document. Okay, so should we leave off here and actually try to condense this down into use cases later — because these are literally bullet points, not atomic use cases — so let's do it like this: anything below the line, or in between, has not been groomed. 
That's cool — that's our grooming line. And once again, thank you for calling out the time; I didn't catch who it was, but thank you very much. Yeah — so, Arthur, are you with us? If not, I can kind of introduce this topic. Arthur? Yes, I am, but I would prefer if you could. Essentially, it looks like you are looking for information regarding observability, and this is actually the topic that, well, we should be responsible for. And we have CNCF projects like Flux which are looking into improving observability overall, and you had a couple of questions, and those questions are totally valid, and I think we are missing some — I don't know — good guides, and maybe tutorials, that would give you this answer immediately. I think I answered everything — you can check below. However, what I would try to talk about during this meeting is how we can make sure you would get answers without our help, but with, I don't know, some content that we provide. So my question would be for you, Arthur: what were the sources that you were trying to read before asking this question? Did you try to reach — I don't know — did you look at any videos, or anything? Because you had some knowledge before that, I could see. I have some previous knowledge, because I have been working with Prometheus for quite some time, but this is the first time I'm implementing new metrics. I've tried looking for any guides on Google, CNCF repositories — but really, I couldn't find anything. Can I ask a question, maybe? Because this is specifically talking about metrics, but isn't the issue also broader than that, in the sense that observability guidelines are not really there — how do you implement tracing or logging for a service, for instance, in a way that's actually useful? I totally agree, because Flux's problem is not only with metrics; it's observability, everything related to it — tracing, events, logging, everything you just said. 
Exactly. And I think you just started with metrics as the first thing, but the plan is to go further, right, with Flux. Anyway — in my opinion, we already have a topic around a working group for blog posts, or videos, or even white papers, so those questions are definitely giving some kind of direction that we should follow. And, I don't know — do you have, for the CNCF SIG team, any suggestions how we could take this and make something actionable out of it? The first thing that came to my mind: at the beginning of this meeting, someone mentioned the official CNCF blog — could that be a good place to start with? Yeah, that makes sense from my side. However, this is one of those things that we could write and it would just be, you know, forgotten after some time. So generally it would be nice to have some kind of, yeah, wiki, or some kind of space — some kind of index of knowledge, right? This is what SIG Observability could try to maintain, I would say: to have a starting page for those kinds of observability features for, like, CNCF applications, right? And then, you know, for metrics you go there, for logs you go there — I would say something more discoverable than blog posts, though a blog post is better than nothing, I think, yes. Now, there are a few pieces of documentation in official projects — if you look at Prometheus, there are best practices on how to build observability and metrics into your service — but it's not enough, or it's very specific to metrics, and that's not the only bit that you want to cover. 
But that's a good point, because you're right that each project has its own documentation, which is growing. So it would be nice — with this feedback, you know, we can be an entry point as SIG Observability, and then we can forward this feedback to the projects and improve their documentation. However, we should be kind of a starting point, which means we should first forward those people to those projects' documentation. But yeah, that's a very good point, because your questions — the answers for those should definitely be part of the Prometheus docs, right? So, that's the thing: when we have questions around this where I'm working, we always refer to the Prometheus documentation when considering metrics, because a lot of those questions are actually answered already; it's just that they suffer either from a discoverability problem, or people just don't immediately think about, okay, how can I figure this out on my own. I think within the SIG space, having — and I think Bartek was looking at BCPs, best current practices — as this knowledge base of what is currently the thing to do or to suggest. I'm not sure if it makes sense for Prometheus and others to basically refer back to a shared documentation; that has its pros and cons. There is also — and I know I'm the one always going on about that one — OpenMetrics, which is finally making actual good progress, and we have tons of considerations in there as well about how to do this, and those are already shared, for example, between OpenMetrics and OpenTelemetry, at least to some extent, blah, blah, blah. So I think having this is good, but there is already ample pre-existing knowledge which should probably be reused, and then we can just take the same thing, make it a blog post, make it a BCP, and basically use the same thing twice and get double the coverage. 
I do miss some sort of white paper describing what would be best to implement. People talk about golden signals, or say "you should have service level indicators", but it's very hard for newcomers to work out what is interesting and what they should expose. Okay, looking at the time, what are the next steps here? Should this be based on what we have within other projects, e.g. Prometheus, so we can get it into a publishable form, or what are the next steps? Well, I would say the entry point would be the next step. We can help Arthur in the meantime on this issue, but an entry point, maybe starting from what Arthur asked about, is a good place to begin. So the entry point would be some index page, whatever, in our repo. Maybe that's a good idea. Okay, so something like what Michael is adding right now. Yeah, I'm happy to start that; feel free to decline, edit, or approve, whatever, these are just free-form notes. I think this is what we want, is that right? Do we agree on an entry point, like an index, to the other projects' official documentation? It also ties nicely with the next point on the agenda, which is the landscape list of projects. Yeah, I agree. Cool, I think this is a good next step, and we have like four minutes for the list of projects. Actually we have 55-minute meetings, but yes, some folks need to run, so let me pull up the next one. I think Bartek, you want to take this one, correct? Actually, I can stop, I guess. Bartek? Yes? No? Yeah, sorry, I didn't put that on the agenda, I was muted. Oh, I thought you put it there, sorry. I can talk about it; I can at least give context on the issue I created initially about the list of projects. I thought we could add a nice overview, like the landscape, which is this interactive landscape map, if we could have some kind of filter based on the SIG relation. And Dan pointed out to me that there is an "observability and analysis"
topic, so you can filter by that. I think we even changed our repo to link to it, and that might be good enough; that was the latest news on this topic. So there is still nothing for a SIG relation, but there is a topic that maps to us basically one to one, and I can share the link as well. So, whoever put that topic on the agenda: is that fulfilling it, or what are the next steps? You mean adding this information to all the other things in the landscape? You can just PR it; I think I already did, but yes, correct. The other question is whether this is really what we want and whether anything is missing, because some of the projects are incubation. I think that is highlighted and you can filter by it. So the questions are: is it up to date, and am I missing something? Basically it says monitoring, logging, tracing, and then chaos engineering, but maybe monitoring is too big a category; I think you could split it further into metrics and tracing. I'm pretty sure there are things that are a mix of those as well, so it's not exactly black and white on those categories either. Yes, I'm thinking about Vector, for instance, which can do metrics and logging at the same time. Correct, and OpenTelemetry is listed under tracing even though it also does other stuff, right? Yeah, exactly; that's a valid point of feedback for the landscape, I guess, but we can totally use it as is, I'm pretty sure. Okay, I will add a PR if it's not there already to mention this. And, sorry, what's your name again? Michael. Michael, can you put a comment on an issue, or even start an issue on the landscape, about this feedback? If you can do that, that would be so awesome, because this really is an issue with the landscape: it's not entirely accurate when projects span shared topics. Yeah, I'll open an issue on the
landscape. Awesome. Go on, we're over time anyway, and we're through. Sorry for not catching the working doc early enough, and thanks again for catching me, I mean, for moving it along with the other stuff. Okay, I highly encourage everyone to keep working on the collab document so we can move forward with it next time, ideally. Yeah, awesome. Thank you, thanks, bye bye.