Hello. Hello. Yeah, happy Tuesday. I think it is; with all of the working from home, the days can blend. Oh, cool, Michael, it looks like you're on the holodeck. Richard, if you're talking, I can't hear you. Yes, I've been talking all this time. Can you hear me now? Yes. Very good. So, for anyone joining the call: please write yourselves into the doc, and also, as Matt just did, ideally log into your browser so we can see who you are; if not, it's probably not a hard requirement. I think we should get started, and by and large establish a precedent of starting on time, to be mindful of everyone's time and not just sit here for five minutes. So, the introductory part first. As you can see, I did my thing and wrote in the bullet points which we'll be talking about. Feel free to start doing this yourselves while we follow the discussion as is; that saves typing for later. So, yeah, remember to write yourself into the attendance list, for all the usual reasons; that also gives us an overview of the long-term development of this call. If you're on the CNCF Slack, you're more than welcome to toss your hello, your name, your whatever into the public channel and say hi to everyone else. If you're interested, ideally also set an avatar for your account, in a perfect world a photo, plus a short biography, so we know who you are, what you're doing, and where you're doing it. That's everything introductory from my side, plus any other generic items. Cool. So, TOC status: last week I poked them again to please vote for the chair and for the tech lead, and we had zero replies. They promised to do so, but I haven't seen any updates, though I didn't check today. On the question of the user survey: 
Basically, it was not fully specified, but we can do user surveys and interviews if we think they're useful, and the liaison can use them if they want to. They will adapt the official process so that the documentation actually reflects the guidance from the last TOC call, but that hasn't happened yet as far as I could see. Still, we are basically okay to do user surveys. Also, as we were having our monthly call with the CNCF yesterday, with my Prometheus hat on, I also poked Chris about taking this to-do back to the internal TOC calls, to please get movement on the chair and tech lead votes. As per the official CNCF documentation we're actually not a real working group yet, because we don't meet the requirements for it; it's blocked on, or by, the TOC, so I think that should be fine. That's the TOC status. Next one is the Cortex incubation due diligence. Just to get a feeling of the room: who actually managed to read this document before the call? Keep your hands up so I can count. Okay, from the people I can see on video, it's four... five. Okay, that's good; hopefully it'll become more and more. As a reminder, we are trying to establish a practice where people read the documentation and working documents before the calls, so we don't have a session of assisted reading but an actual discussion of the contents, and try to get an understanding of what is being talked about. So, let's move to the document; I'm going to put the link into chat for everyone's convenience. Does anyone have any high-level comments, or should I just start walking through this top to bottom? Or maybe, Bartek, you want to walk through it? Both is fine, I don't care. So, hi, can you hear me? 
Yeah, before we start the walkthrough I just want to add that the only main thing that is missing, in my opinion, is user interviews. I wasn't sure how to do user interviews before, but I think I will first create a form with different questions around what users like, what users don't like, which companies they work for; I'll share the form with SIG Observability, and once it's approved I'll share it with the users to collect feedback. Having said that, is there some precedent for how user interviews were done before in other projects, some existing forms I can look at? I don't know that there is a dedicated form; I can ask the SIG chairs if they have one, but I'm not aware of a specific one. I'll craft some typical questions and send them around, then. You can also poke me offline if you want; I have some questions from the Prometheus team which we can reuse, but maybe we can take that one offline. Cool. Now, Bartek, or you can lead the reading. So, um, yeah, okay, let's do a quick walkthrough, seeing as three quarters of the people on this call didn't actually read this document beforehand. So: timelines and status we can probably ignore. The project is self-governing, and I would recommend that, as the SIG, we put in the comment that we agree there is self-governance within the project. I'm doing the same game of consensus as I did last time. Goutham, can you make me an editor, or should I just keep making suggestions? I can do that. Thank you. My Grafana account, not the private one. Should be fine now. Okay, so: SIG Observability comments, consensus as last time. I'm going to write it, I'm going to mark it, and then we can have the game of consensus. The comment of the SIG, as I propose it, would be: SIG Observability agrees. Do you mind sharing your screen? Very good point. Yes, I do mind, for religious reasons. 
Can you see my screen, or one of them? Hopefully only one of my windows. Yeah. So, the project is self-governing, and my comment as the SIG would be that the SIG agrees. Is everyone agreed, or is there anyone who doesn't agree? Perfect. The next point is about having a code of conduct, and I would propose that we also put in the comment that we agree. The next question is whether the project has any production deployments which are high quality and high velocity. Given that at least two companies which I'm aware of, Weaveworks and Grafana Labs, are doing actual business with Cortex, I would also propose putting in that we agree. Anyone disagree? Am I allowed to disagree, given that I run one of those? Sure, go ahead. No, I'm joking. Go grab one of your pretty good kitchen knives. So, it's usually a good idea to link to the adopters file if they have one, because they should actually have a publicly available one; I think Cortex has one. That's what we did for other projects. Yeah. I usually point out which ones are using it in production, that's the easiest. That's part of the document. Yeah, so we only list projects and companies, and the adopters list stays empty unless they're using it in production. Further, we are working on adding case studies: we did one case study with Gojek, and this week I have my second case study scheduled, with REWE, which is a German grocery chain, and slowly we're going to expand the case study section. Also, on a higher-level note: I'm trying to walk through this document, ideally, as if everyone had read it. That being said, we will obviously take the time for anyone to get up to speed, but the contents are literally covered here. So, yeah, does everyone agree with section three, that there are production deployments which are high quality and high velocity? Yes. I'm sorry, what does high velocity mean? That was one of the questions I had when I read this. 
I would say it's adoption speed and also growth within the deployment. With my Grafana Labs hat on, I can confirm as a first-hand witness that this is the case. Is Bryan here from Weaveworks? Hopefully. Because he's publicly said multiple times that they literally continuously deploy master; that's as high velocity as you get. Yes, of course. Again, I wasn't disputing that this is the case, I just wanted to make sure we understood what high velocity meant in terms of definitions. But that's a good point: we could actually have velocity of adoption by net-new entities, or velocity of commits to production or test, or velocity of new code appearing. Amye is here; maybe she could comment, because all of these questions, if I'm not mistaken, come from the template for the due diligence. I think part of what we'll be doing as the SIG is maybe to help hone down those questions a little, because some of them seem a little bit ivory-tower-ish, and they can probably be clarified a little bit, or maybe there can be some consideration or guidance section on parts of this. Probably in a working document; maybe not so much live discussion for the first round. But I agree, that makes sense; of course, we will be revisiting the same questions again and again. Just one more comment on this, and I don't want to derail so we don't have to discuss it, but I'm curious what the process is for listing adopters. Do the companies agree? Should they be the ones that submit the PR? Do we get their approval and then add them to the adopters file? What does that process look like? So, at least we ask companies to submit the PR, but in case we open the PR, we explicitly need someone from the company to say LGTM on the PR before we merge it. Got it, thanks. Yeah, there are some companies that don't want to talk about the projects they use as well, so you have to be pretty sensitive. Exactly, that's why I was asking. 
I think for the SIG, the default is that we trust unless there is a reason to mistrust, because otherwise we have issues anyway. But it's a good point; I think that's actually within the policy of the project, how they get to list people. The requirement is just that they confirm that it's happening, and ideally this happens with names and not just with claims. You can also check the history of the adopters file, and you will see that either the company submitted it or we have an explicit approval. Interestingly, Sysdig said in one of their recent blog posts that they're using Cortex. We should ask them to... Well, they phrased it differently, like they're using part of the same Cortex code. Yeah. We're derailing the document a little bit. All right. I actually want to deliver this, because it's the one deliverable which we have for the next TOC call, and I want to make sure that we have it in. Do you want to keep action items in the document, or would you prefer to keep them in the meeting notes? Absolutely in the meeting notes. Okay. And also, after we are done with the due diligence document, I would copy the consensus items back into the meeting notes. So: is the project committed to achieving CNCF principles, and do they have a committed roadmap to address any areas of concern raised by the community? So, I had no idea what the CNCF principles are, and googling didn't help, so I guessed that they could be talking about this. Okay, I read your answers to this. For everyone's benefit, the top-level bullet points in the section are copied from CNCF documentation. I would agree that all of those are met except for the last one, but that is basically the to-do item for the TOC. And given Cortex operates cloud natively, that piece of the question is also answered positively, as per the written answer. 
So, same for this: I would propose that we have consensus on "SIG Observability agrees". I think for future projects it would definitely be good to know exactly which CNCF principles they're referring to, but yeah. Yes, and arguably Cortex helped shape part of this. I would give more leeway here, because it's more or less rubber-stamping for something as advanced as Cortex anyway. That being said, I would take an action item back to the TOC to ask them to please clarify what they actually mean with those things. Have we gone and asked the maintainers if they're excited? I don't know, you're a maintainer. Ken, Ken's here; are you excited, Ken? I'm very excited, Tom. Goutham, you're one of the maintainers; are you excited? I am. I think so; that's three of the, was it eight or nine, maintainers that are excited. Is there anyone else I missed on the call? So, again, I would propose that we have consensus on point four, also that SIG Observability agrees. Any other opinions? Point five: document that the project has a fundamentally sound design, without obvious critical compromises that will inhibit potential widespread adoption. I would argue that we are already seeing widespread adoption. Anyone who read this, or who's currently reading it, which is also totally fine: are we all agreed that we basically agree with the reply by Goutham, by the Cortex project? This is another super open-ended question; like, how do we respond to this? The second part is easy, right? We're definitely architected in a cloud-native style. "Useful" is subjective. I would also argue that the question itself could be improved a lot, but that's the question we have and which we need to answer, so we need to have some interpretation of what the question means within the context of what the CNCF is, how it operates, and what they graduate and let incubate. 
I would say that both my knowledge of the project and what we're seeing documented here are actually positive. I left some comments to maybe tighten up that section a little bit, but I would still suggest we agree with the section as is, with the request for Cortex to tighten it up a little. And one more point: all the adopters-file users are running on Kubernetes; they're monitoring their Kubernetes deployments with Cortex. And I said that we're also working with a new adopter who is running it on bare metal, so it's not tied to Kubernetes, but it works really well with Kubernetes. And, as a function of it basically being the Prometheus model, it's designed for the underlying premises of Prometheus. So, call for consensus: SIG Observability agrees, but requests tightening up the above section. Yeah, I mean, having done a fairly deep dive through the architecture, I would certainly concur that it's obviously architected in a cloud-native style; that's clear. I do agree, though, that the useful feedback for the TOC is that the "useful" moniker is not terribly useful in terms of due diligence. It almost seems like there's an implicit request for marketing or selling of the solution. I don't know if I'm the only one that reads it that way: is this useful, why would you care? You know, if you're running Kubernetes, or running a data center architected in a cloud-native sort of style, or buying time in somebody else's, it's up to the project to put in here how this is useful and why this is useful. It's almost a sell. 
What do you all think, is that what we also read here? To me it reads as if most of this document and the questions have been written from the perspective of "hey, that might be a good question", but not from the perspective of someone who is actually in the position to answer those questions. Are those well-sculpted and useful questions? And "useful" is subjective, but I think they're just a little bit ivory tower and not really battle-tested yet. But we can feed this back, and we absolutely should. We should have a small document and basically send a PR back to the TOC: hey, please adapt your template like this. I think "useful" can also be measured by: we have users, right, so clearly. Yeah, but again we're in discussions about the meaning of the thing there; we're tidying up the question a little bit. More tidying up would make it easier for all sides to actually come to agreement. So, to cycle back: SIG Observability agrees, but requests tightening up the above section. All agreed? Just a point of order, Richie: are you asking whether we agree, or whether the SIG Observability people agree, or everyone on this call? Okay, obviously I agree. Absolutely everyone on this call, because that's the decision-making body we have; everyone who's in this call is part of SIG Observability, by definition. A great part of why we need to write down who's taking part is so that, if we ever need to, we can see who actually was on the call and agreed, or might not even have seen it. Consensus: either they hum, or they implicitly agree. So, next one: document that the project has an affinity for how the CNCF operates and understands the expectations of being a CNCF project. That is, again, a political and touchy-feely question, but I would agree with the gist, because obviously Cortex already operates in a very cloud-native way. 
I mean, we could also add the overlap between the Prometheus maintainers and the Cortex maintainers. We're already used to, you know, the weekly or monthly calls with Chris, presenting at KubeCon, and all of the things that go with it; we have been through those motions and understand, I think, what this entails. Yeah, good point; essentially, you know, expectations, right? Assuming that's what they're getting at. You know, we've done the Prometheus keynotes and, you know, talking to press about Prometheus launches; whatever the CNCF needs from us, I think we've got the experience doing it. To even reinforce that: if you just throw a rock on the internet and look at the number of talks, meetups, and other various things just on YouTube alone, I think the project has not only understood the spirit of that but has been demonstrating it for a couple of years now, at least in my judgment. Thank you, Matt. I realize that's dangerously close to blowing sunshine up a tailpipe. I mean, just on the question of, if one wanted to get started and dive into what Cortex is and how it's architected: there are, I think, something like 12 or 13 talks, and you can go spend a day or two just listening to talks; that's been my experience. Yeah, absolutely. Call for consensus on number seven: SIG Observability agrees and requests tightening up the above section; also put an action item for Goutham to refer to what Tom said. Anyone disagree? Next one: document that the project, that is Cortex, is being used successfully in production by at least three independent end users which are of adequate quality. I would also say that we have agreement. The last thing that we just said requires tidying up of the above section. Yes, that doesn't make sense. What would you suggest instead? Expanding. 
I mean, "tighten up the above section"? The above section is exactly one sentence, very open-ended in my opinion; they can expand on the question. I think it's the opposite of tightening that we need: expanding. We can tighten up that one sentence if we want to, but we want to expand. It's a fair point, but that's precisely why I'm doing it this way, because I fully agree, and I just copy-and-pasted it from above; that's where it's coming from. Thank you. So let's invalidate it and do it again: SIG Observability agrees and requests expanding on the above section. All agreed? Anyone disagree? So, next one. I think we have documented, or there is documentation, that there are more than three successful users who have quite some quality requirements. So, again, I would put in the comment that SIG Observability agrees. All agreed, anyone against? I'll just add that there is a space between "Grafana" and "cloud". Thank you. Thanks. Have a healthy number of committers. Hang on, just to be clear: I think Michael made this point separately. The three independent end users, in terms of using it, does that mean, since Weaveworks and Grafana Cloud are kind of obviously driving the project, that they don't count towards those three? I think there are still plenty that do, but should we just call out that if you're paid to work on Cortex, you're not an independent end user? That's all. Yep, it's fair enough to say that. I'm not disputing that this requirement is met; I just want to make sure that we're super clear, so that there's not the appearance of a conflict of interest. If you're taking it that far: are end users the ones running the software, or using the software? 
So, maybe as an interjection: we had the very same question about a project that we reviewed, and had a discussion with the TOC, and we're still on the final wording. But the interpretation that we are kind of stuck at, which is not the final one, is that an end user is obviously an actual user, not something like a provider, because we had exactly that discussion. So you would not consider, say, Weaveworks, when they're selling it to customers, an end user, because an end user is really somebody who's not building the software but really using it. And you would also consider a company an end user if they build on it for themselves; the CNCF is not entirely clear there, but it's really about using the software, not selling the software. Would you count customers of Weave and Grafana Labs as end users? So, this is where it gets really tricky, because you would actually count somebody as an end user if they're an end user of the open source project, and not of a commercial offering on top. But that's where it's a bit blurry, honestly. In particular for something like Cortex, which is specifically meant to be multi-tenant, and something that someone could use to provide as a service to others, in addition to someone like our company, where we're using it directly as an end user. Yeah, again, I don't want to hold this up, and I think the requirement is clearly met; I just wanted to make sure that we're not double-counting Grafana and Weave in the list of adopters as counting towards end users for this. Yeah, I think that's perfectly reasonable, so let's delete the sentence about Weave and Grafana; we still meet it. Yeah, again, it's just the administrative side of this; I want to make sure we're super crisp about what's in or out, to Michael's point. I think a few days ago in the document... Yep. Would this be correct? 
I went ahead and, rather than commenting, deleted the service providers from the upper section, and put under Weave and Grafana Cloud that we have end users, because there are still two interpretations. I would delete that; I think it's contentious. I would just say that, while these two are running Cortex, we don't consider them end users; you can just add a comment like that. Yes, but the customers of Weave and Grafana are, by my definition, end users. Contentious. We agree on the essence, but the questions are not very good and they need to be improved. Yeah, that's fair. I mean, I can actually say that, for example, right now, in addition to rolling out Cortex internally this coming quarter, we're also an end user, by that definition, of Grafana Cloud's hosted offering. But I don't know if it's actually fair, on the merits, to say that when I'm using that, I'm an end user of the Cortex project. You know, to me it looks like a remote-write endpoint, and I have no idea what's behind it; I don't even need to know that it is Cortex, it's just a Prometheus remote write. So, I agree, this is really yet another piece of slightly ambiguous language from the CNCF. On the other hand, it's stimulating this discussion, which is probably the useful part. So, yeah, I'm happy to move on. I would actually prefer to have specific guidance in the questions, and again, I think we should actually submit something back to the TOC, which then demands the specific argument if it's required, as opposed to basically having a lot of happy accidents and talking about this. That's a great point, too, but just in the context of the due diligence for Cortex, I don't think we need to serialize it; I don't want to wait for the TOC to define it if, again, we have way more than the requirements to exit sandbox for the actual project. 
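The "it's just a Prometheus remote write" point above is worth making concrete: from the user's side, pointing Prometheus at a Cortex cluster is plain `remote_write` configuration. A minimal sketch, where the endpoint URL, tenant ID, and credentials are placeholders and the exact push path depends on how the cluster is deployed:

```yaml
# prometheus.yml fragment: ship scraped samples to a Cortex cluster.
# The URL, X-Scope-OrgID tenant header, and credentials below are illustrative.
remote_write:
  - url: https://cortex.example.com/api/prom/push
    headers:
      X-Scope-OrgID: my-tenant   # tenant ID in a multi-tenant setup
    basic_auth:
      username: my-user
      password: my-password
```

From Prometheus's perspective this is just another remote-write target, which is exactly the blurriness being discussed: the sender cannot tell whether Cortex, a hosted service on top of it, or something else entirely sits behind the URL.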
I see them as slightly orthogonal. That makes sense. Yeah, I agree with that. Just to make sure, because I think I had it as, it wouldn't be green otherwise: we did have the call for consensus. So, the next one: have a healthy number of committers, committers being defined as someone with the commit bit, i.e. someone who can accept contributions to some or all of the project. If you can see that tab, that looks pretty healthy to me. So, call for consensus: I would say that the SIG agrees. I think we agree, but it would be nice to actually measure activity at some point. I know the people who are contributing, so I know that most of them are active, but, you know, just for future projects it would be nice to maybe check that. I'll go ahead and delegate that to the tech lead... Well, we don't have one. Too late. So far. Our next call. So, do we agree with the section, or do we not agree? My suggestion for a consensus is: SIG Observability agrees; Bartek will get numbers, or more detailed numbers. I agree. Anyone disagree? Yeah, I can say it like that; I think it's fine, but yeah, we can get numbers for sure. So I agree with the statement. Very good. Demonstrate a substantial ongoing flow of commits and merged contributions. I think that one is obvious. So, again, call for consensus, the suggestion being: SIG Observability agrees. For those on the call, you can also open one of the dashboards; I was trying to lure people into doing this before. There you go. I can also make this broader, by the way, if you want; I just realized it's the last six months. Yeah. Wow. I mean, I can also do this, but this will probably break all of your screens and all of your shares. But if you just open the commits, or even the PRs, you can see that we're doing 15-ish, 20-ish commits every week, and we do squash merges, so that means we are merging 15 to 20-ish PRs every week. Which I think is at least on par with most everyone else, except Kubernetes. 
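The squash-merge arithmetic above can be checked directly from a clone of the repository: with squash merges, each commit on the mainline branch corresponds to one merged PR, so counting recent mainline commits approximates the merged-PR rate. A rough sketch; the time window is illustrative and assumes the clone's default branch is checked out:

```shell
# Count commits on the current branch from the last week.
# Under a squash-merge policy, each mainline commit ≈ one merged PR.
git rev-list --count --since="1 week ago" HEAD
```

Run inside a clone of the project, this prints a single number; around 15 to 20 would match the rate quoted on the call.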
So: SIG Observability agrees. Next one: documentation of CNCF alignment. Bartek suggested to just copy in the incubation stuff, basically. I would agree with this, as long as there is also a statement by Cortex that all of this still applies. I would be content if anyone from the Cortex project would just verbally confirm right now and then do the legwork after. Yeah, I think it's exactly the same as the template for the PR that we had to do. I mean, we can also require... Let's modify this: instead of referring to other resources, how about we just request that Cortex writes it down, because that basically makes it a self-contained document. So, my consensus suggestion is: SIG Observability agrees and requests writing it down specifically. I have one question: what do they mean by existing sponsorship? That would be us, SIG Observability. But the same for communities and any existing sponsorship? Put that maybe as a to-do. Of course, we, as in the SIG, can take this to the next TOC call and ask for sponsors, and let them know that the due diligence will soon be done. Well, we're actually running into timing issues, but at least I can stay until the full hour. So, call for consensus: SIG Observability agrees and requests writing it down specifically. All agreed? Anyone disagree? I agree. A good technical and architectural design and feature overview should be available. That is obviously the case; I'm going to open it quickly. We have an architecture diagram: the famous picture, the famous Twitter picture. You should make it uglier, tweet it, and say, hey, we updated it. Anyway, call for consensus: I would say we agree. Anyone disagree? What are the primary target cloud-native use cases, and which of those can be accomplished now or are on the roadmap? You can read this one quickly. You will all be shocked and surprised that my suggestion for consensus is: SIG Observability agrees. I would actually say downsampling is on the roadmap. 
Yes, but then so is everything else on the roadmap. I would say, when I wrote this down, this meant, like, the next six months. Okay. Yeah, downsampling is not in the next six months. I think for the first piece of this, the primary target cloud-native use cases, you can actually add a fourth one: most of the backend services, for object storage as well as for the chunk and index stores, work on major public cloud backends today. There's no need to run your own object storage or Cassandra or something like that; you can use Bigtable or DynamoDB or GCS or S3. So, in terms of being readily usable in public clouds: agreed. Yeah. And as an operator, it's one less thing I have to run. As everyone is probably noticing, I'm taking the liberty to just write this in and not wait for the Cortex team to write it in. Yes, thank you; that's fine. I'm just trying to move through this, because we're, like, halfway through this document and I want to actually approve it, or disprove it, before this call ends. So, all agreed? Shit. Okay, next one: what are current performance, scalability, blah blah blah; you can read it yourselves, a little quicker. I would also tend to agree with this one. All agreed? Yep. What exactly are the failure modes? Are they well understood? Have they been tested? Do they form part of continuous integration testing? Are they appropriate given the usage, e.g. cluster-wide shared services need to fail gracefully? Even as someone working at Grafana Labs, I was interested in reading this, and I really liked it. If I do say so myself. I mean, the failure modes will never be well understood; that's an unachievable aim. Like, does anyone understand the failure modes coming from networking? Failures are endless, but you can contain them. And I think it's always the network's fault. That's also correct. And DNS. And DNS. Yeah. The main cause of all plane crashes is gravity. 
Time to innocence: one of the main metrics of success in networking. Anyway, I think we've got over three years of production experience running Cortex; we've experienced all of the obvious failures and fixed most of them. There are new and exciting ones to come, for sure. As written, I would just agree with what's written. I mean, as a bunch of engineers, we can sit around and say, how could we make this better? Like, for example, one could have CI or CD testing negative failure cases, something that simulates or mocks out S3 barfing or having operational issues on the cloud provider side. With these, I'm not sure how deep we need to go. The point is basically: are we, with our SIG hats on, happy with the current state of the art to let someone progress from sandbox to incubating? That's the exact scope of what we're currently debating, and within that scope, I think it's well achieved. I agree that they can do more, and I think they should be doing more before graduating, but for incubation I think that's fair. Personally. So, next one: what trade-offs have been made regarding performance, scalability, complexity, reliability, security, etc. pp.? And that's empty. Yes, sorry, I didn't fill anything in, but, like, there are trade-offs, right, and I commented on one of them. But yeah, what do you think? If none have been made, that should be written down explicitly. I think I agree with what you said, Bartek, except, of course, we put a lot of work into simplicity recently. I'm not saying it's super complex, but you prefer complexity, from what I've seen, versus, you know, lower performance. And I think something we could also mention is that obviously you prefer consistency over availability, right? That's one clear trade-off here. No. 
The simple one-dimensional trade-offs are never entirely accurate. The consistency-versus-availability one isn't; there are cases where we prefer the other. Each of these is tuned to the use case and configurable. I guess if we wanted a trade-off generalization here, it would be configurability: we favor being able to configure the system to behave how the user wants, and we leave that decision to the user. But I don't think that necessarily introduces complexity anymore, because we ship default configurations and we make recommendations. So you don't have to understand these trade-offs if you don't want to, and we've helped many users deploy it without having to explain every single configurable. I agree, but still, people look at the thousands of flags and are crying and screaming, so there's still some kind of complexity there. There are definitely ways of mitigating it, but there is some trade-off here, which may make sense. Well, if we define complexity as the number of flags, then sure, we have complexity — and so does Kubernetes. I think, again — and I'm going to do something very chair-like here — that section has currently not been answered. There should be something in there, but I think we can expect a reasonable answer, so as a call for consensus, maybe say that we expect sufficient feedback, even though it isn't written yet. Sufficient feedback. Yeah, I mean, we've got to actually answer this before we can agree. Sorry, this wasn't answered. Yeah, no, it's not an issue. Also, this isn't graduating to graduated; this is exiting sandbox. Yeah, and I mean "expect" in the positive sense. How would a native speaker phrase it?
You know, I would say the sentiment is: we've already met the expectations, but we need to document them. Yeah. Call for consensus: SIG Observability agrees that expectations have been met, but it needs to be written down. And we've got it. Next one: what are the most important holes? No HA, no flow control, inadequate integration points. I mean, what's a hole? The absence of a thing. We haven't bundled a window manager with it yet. But if we're going to look at the biggest pain point of operating a Cortex cluster, then I agree it's the statefulness of the ingesters. That wasn't how I read the spirit of this question, though. I think it's more like: when the project is leaving sandbox for the next phase, say there was no HA story, or say this works but there's no way to deploy it unless you already know how it works — then good-quality Helm charts would be the biggest hole. As someone looking at Cortex from the outside, we're taking a lot of time to figure it out; it's not immediate. But again, I wouldn't call that a hole in the architecture or in the offering, and I don't see those types of holes preventing it from leaving sandbox. The point of the question is to make sure we don't have things leave sandbox that are fundamentally not done, or not ready to be in that middle tier. But I don't think that's the intention here; I think it's just: be honest, what are your biggest openings? And if it's that type of stuff, it doesn't block moving from sandbox to incubation. Yes, I don't think anything's blocking. Yeah, maybe we'll add this to the growing list of feedback for the TOC around ambiguity. No, but I think the question actually makes sense: honestly, where do you have an open flank?
It's not there to block; it's just to honestly surface things. And I think that has been met, so: SIG Observability agrees, not a blocker — that would be my suggestion for a call for consensus. All agreed. I agree. Code quality: does it look good, bad, mediocre, and so on. So, have we talked about tabs versus spaces? Although I feel like we don't make enough use of global variables. That's a very good point; we should take it as an action item for Tom personally to migrate. We've got to laugh at some of these questions, right? Yes. I have a call right after and we have six minutes, so I would say SIG Observability agrees. All agreed. I read your comments and they're valid. Yeah, I tried to assess it from a high level, but it's definitely good, and there are even proofs that you're striving for even better quality, so yeah. And just to make it explicit, with the chair hat on: I am fully okay with Cortex not answering and the tech lead answering instead; it's basically worth even more from the SIG's point of view. What documentation does exist, and does it seem justified? Maybe not "agrees" but "SIG Observability is happy with the documentation". So, call for consensus: SIG Observability is happy with the documentation provided. Everyone agree? Yeah. What's the release model, versioning scheme, cadence of stability, and so on? Again, I would suggest the same for the consensus, and I would move Bartek's comment inline to make it part of what we're happy with. Yeah, definitely solid versioning. Yes. Sorry for picking up speed a little; otherwise we won't make it through. I would also be happy with the documentation provided. Quick question: can other folks run CI/CD for Cortex? I noticed there are CircleCI jobs and whatnot, but if I wanted to run the same tests in my own environment, on my own staging, is that something users of Cortex can do today?
Or if I wanted to provide additional, augmented data — say I'm running it in these scenarios, is there a way for me, as a user, to feed that data back to make the project better, or does the CI only run within a certain set of places and isn't necessarily reproducible? Well, you can run the unit and integration tests yourself, wherever you want; that's pretty straightforward. Then there is what we call the test exporter, which is something that exports a sine wave and then queries for it. You can run that in your own environment — that would be the integration test to check the thing is working. Beyond that, that's about the limit of what we've got. If you wanted to extend it, my encouragement would be to contribute back to master and have that benefit everyone, but there's no reason why you can't build your own tests on top of your integration environment. Does that answer your question, Matt? Sort of. And again, maybe we take this up in a subsequent chat, but to reproduce all of the tests, or just to debug something as a user — and maybe I need to educate myself a little more — can I run the full suite of tests if I have my own CircleCI account? Oh yeah, for sure; in fact, that's how I run them. Okay, cool. That's how all of the non-maintainers have to run them. I'm happy; I just haven't gotten to that point yet with Cortex as a developer. Yeah, the CI is configured to support that. So, call for consensus: SIG Observability is happy with the documentation provided. All agree? Yep. Very good. What licensing restrictions apply, et cetera — Apache 2. Yeah, can confirm. In addition, we're happy because it's the default license of all of CNCF, but just to make it explicit: call for consensus, SIG Observability is happy. All agree? Yep. Good.
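The sine-wave trick described above works because sin(t) is a pure function of the timestamp: a checker can recompute the expected value for any sample it reads back, with no state shared between writer and verifier. A minimal sketch of the idea in Python — the period, tolerance, and function names are illustrative assumptions, not the actual Cortex test-exporter code:

```python
import math

PERIOD_SECONDS = 3600.0   # one full sine cycle per hour (illustrative choice)
TOLERANCE = 1e-6          # allowed absolute error per sample (illustrative)

def expected_value(ts: float) -> float:
    """Value the exporter would have written at Unix timestamp ts.

    Because the signal is a deterministic function of time, the checker
    can recompute it independently of the writer.
    """
    return math.sin(2 * math.pi * ts / PERIOD_SECONDS)

def verify_samples(samples):
    """Check (timestamp, value) pairs queried back from the store.

    Returns the timestamps whose stored value deviates from the
    recomputed sine wave by more than TOLERANCE; an empty list means
    the write/read path preserved the data.
    """
    return [ts for ts, v in samples
            if abs(v - expected_value(ts)) > TOLERANCE]
```

A real end-to-end check would also assert that no samples are missing in the queried window, so dropped writes are caught as well as corrupted ones.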
What are the recommended operational models — specifically, how is it operated in a cloud-native environment such as on Kubernetes? It was made to be run on Kubernetes, so: call for consensus, SIG Observability is happy. We actually had to do a lot of work to make it not require Kubernetes. That's the case now, yeah. I don't mean to rush this, Richard; let's do it properly. I do believe I'm doing it properly — I just looked at the clock and realized we wouldn't be able to get through all of the rest. Very good first go at it, though. I'll send an email to the list requesting people to form an opinion on this by the next call at the latest, ideally within the next week, so we have a consensus, or at least a rough and implicit consensus, on everything within the week. That way we don't have to read another four or five pages interactively; everyone will have already read it and we just say yes, or we've already put comments where we have concerns. Maybe some feedback for the TOC should be that this doc should be reviewable in one hour. Yes, and not as verbose and not as underdefined. But again, it's an ivory-tower document, not a battle-tested document — the people writing it, in my opinion, have never had to actually answer all those questions and walk through everything. Or maybe I'm wrong and I'm not seeing something. Yeah, thanks a lot; it was really well prepared. Thanks. Same from me. Also, just a point: this took one full day or more to write. I expected to do it in two weeks, but in the future we should set expectations that preparing the doc is going to take some time. Am I right in thinking Cortex is the first project to go for incubation from the sandbox? I don't think so. You mean CNCF-wide? Yeah.
I don't actually know offhand; I was trying to think of one. Does every project have to go through this checklist? It seems quite onerous. And I love that word, "onerous". I can't speak CNCF-wide. I do know that the sandbox graduation process has undergone some changes in recent quarters, so it could be that we're trend-setting here. Just to be sure: there are no rules for how to create this recommendation from the SIG. We just grabbed the due-diligence doc and used it as a checklist, but I don't think that was required. I could see other SIGs composing their own decision recommendation based on something else, with the rest done by the TOC. Yeah, me too. Sorry. I will say, too, that much of what we're doing is paying it forward toward graduation, because many of these questions are actually from the Jaeger doc — and they graduated. So we're probably setting a much higher bar here, but go big or go home, right? Thank you very much, everyone. Thank you, everybody, and have a great day. Bye-bye.