Hello. Hello, welcome. Thank you. Hello, Ben. Let's, um, yeah, I'll put the meeting notes out in the SIG channel. We'll give everybody a minute or two to file in here. We have a full agenda for today. I don't know that we'll be able to do it all, but we can certainly try. Hello. Hey, Bartek, welcome. Oh, hi. So while we're still waiting for Richie and a few others to file in, maybe a quick round of introductions for once. I can see Brian, Michael, Dotan. This meeting was being a little weird; I had to join via browser rather than, yeah. Oh, that's strange. Well, that's the first time, Brian, you've had to do that. And Richie is coming as well? Yes, he should be. At least he's had a reminder in chat a minute or two ago, so he might be running into the same issue. Yeah, I used to create my account with Zoom. Yes. Okay. So while people are filing in, and this is recorded, I'll just open the meeting and say this is our SIG Observability meeting, right before Thanksgiving in the U.S., on the 24th. As this is a CNCF meeting, the CNCF rules of conduct and all that apply. Everybody be cool. It's never been an issue, but we always should say it. If you've not been here before, do you want to take a quick second and do an intro? We have a full agenda, but I see some new faces; I'm familiar with the names, but it's the first time I've seen them here. So feel free. I guess I could jump in. I'm Ben. I work for GitLab, I'm also a member of the Prometheus team, and I work on observability at GitLab. Awesome, welcome. Yeah, I'm Brian Brazil. I'm one of the Prometheus developers and also involved with OpenMetrics. And for anyone who's not familiar, check out Robust Perception; there are a lot of good tips on the blog there. Anyone else before we reach you? All right. Anyone else? Okay. So I put the meeting notes in the channel. We've got a pretty full agenda already. So I guess, Bartek, why don't we start with you, in order? Sure.
It should be a super quick thing. Essentially, we talked about collaborating better with the many CNCF SIG Observability-related projects, but not only with those; we definitely want a stronger connection and communication between them. I already spoke with a couple of teams, and you, Matt, spoke as well. And we had this idea of introduction talks. So I would say, let's go ahead and craft some agenda. And I know for the next meeting, in two weeks, we will have a talk from the Keptn project, which is impressive; they are working on improving the Prometheus scaling and configuration problems. So that's pretty nice. But we would love to have all the observability-related projects introduce themselves and tell us about near-term actions or plans, so we can understand better and maybe hop in on the problems; essentially, to not reinvent the wheel, because we all have the same challenges, let's say, as CNCF projects. So that's the idea. I think we had quite a nice introduction to OpenTelemetry a couple of meetings ago. But yes, I'm announcing this, and hopefully we can have some volunteers for the next meeting. So why not? What do you think about that, Matt? Personally, I love this idea. I would love to hear more about different observability projects, and I think this is a great forum for that. No worries. We're just thinking about the idea of introducing each project. I think, anyway, it's up to us, I guess. If you have a strong connection to any observability project, please go to them.
Or, if you are from such a project, please — actually, I should probably start some doc where we can, or maybe we put that here on our agenda, to schedule this and have, let's say, a 10-15 minute introduction talk, covering relevant issues you are facing right now as a project, or things like that, in the following meetings. I think it would be amazing to maintain this table here. But anyway, this is for your awareness. If you are a maintainer, or if you contribute to a project, please let them know, so we can learn, essentially. So that's it from me. Sweet. I would suggest, if this makes sense to everyone, we just make a GitHub label to track these kinds of things, and we can use GitHub issues on the Kanban board to keep track of all these different projects, because there's a whole bunch of them. Litmus, whom we talked to a couple of weeks ago, are planning to make an intro as well. So does anyone object to that? If not, I will just take an action item to make it. And to reinforce the point: again, I maintain that we need to move more stuff out of the call and not more stuff into the call, because there is limited time in the call and we should focus the call on stuff where we actually have to communicate with each other live. But I only have half the context because I just got my computer. That being said, I will now officially recuse myself as chair of the SIG for the rest of this meeting, or at least until we are no longer talking about OpenMetrics. Yeah, down the agenda, where we're going to do a review today, we'll take up a good chunk of the time on the OpenMetrics due diligence they've applied for. Yeah, but first we've got a couple of things before that to buzz through. So, Arthur, you're up. Yeah, sure. The first topic I've added to the agenda is just a report on how we are doing with the white paper for beginners on observability.
We've organized all the topics that we want to talk about. And right now, each one of the members of the working group is — I don't want to say randomly, but based on what we are most familiar with, what we feel comfortable writing about — picking one subject and writing and elaborating on it. That's how we are doing it right now. And of course, we are open to anyone who wants to help us write, or just review what we are writing. The link I'll post in the chat. I don't know if Michael or Simone has anything to add. No. Cool. Yeah, just for the others that joined now: what we did was basically to meet outside this meeting here, just to agree on what we want to write about. We have an idea now, but I at least need to start offline before I can have text in a shared document that is worth reading. Yeah, and we basically created a task on the board, so to say, and we set up a deadline as well to have something delivered. I think it was the beginning of January, right? I have at least the first two weeks of January as my mental deadline — I mean, not for the final submission, but before that we have to iterate. This is great. I'm sorry I couldn't join on Friday; we were dealing with the Docker apocalypse fallout, and I'm sure I'm not the only one who knows what that means. I noticed that there are two issues. Are we using 16 and 19 on the board for that? Or were you consolidating to one of them? Yeah, I think they're different things. One is the white paper and the other is an index page. Oh, I'm sorry. Yes, I looked at this wrong. So we closed the other one, 28, and we're going to be using 16. Great. Yeah. Just one other question. Are we still meeting on Fridays for the white paper working group, or not? Are you planning to meet every Friday to sync, or every two weeks? It doesn't need to be every Friday, but just to follow up, to see whether the project is going forward or not?
I think we can use issue 16 to coordinate on that. I don't think we need to use everyone's time for that. That's fine. We have a Google Doc, we have our sections assigned; we can use issue 16 to coordinate. Cool. Great. Thank you. Is that it for the white paper? And on to — if I'm moving quick, it's because the OpenMetrics item is going to be a long document, and we're trying to maximize time where we know we're going to have to spend some. The index page proposal is next, right? Yeah. Let me just get the pull request; I'm adding it to the chat as well. I just started a Markdown file — table-driven, sorry, a set of tables — where we can add useful information for those who are kind of lost and don't know where to search for information. There are a lot of questions on the PR, and we don't need to go over everything right now, but it would be cool if people just gave feedback on the PR; then we could move on to the OpenMetrics item. I was curious about this one too, just to plant the seed here: we could probably use either a radar — maybe not a radar with the adopt/assess/hold categories — or maybe some other sort of graphical visualization of the different domains of observability. That might make sense once we've got the raw table compiled; maybe as a follow-on or something like that. It might be pretty cool. Cool. Okay, any other comments on this one? No, we definitely need to spend some offline time to make it pretty, but we should incrementally add to and work on the PR, essentially. Yeah. Do you want to talk about the Prom-migrator design doc first, briefly? I'm guessing to introduce it and then set the stage for maybe two weeks from now, when we could take a deeper dive through it. Otherwise I'm concerned that we might not get to yours if we start in on the OpenMetrics stuff before the end of the hour.
I'll try to make it less than two minutes. So, pretty much, we started the project. The idea is really simple: to allow you to migrate Prometheus data between the various Prometheus long-term stores. It just uses the already existing remote-write and remote-read APIs in Prometheus, and is essentially a pipe between reading out of one and writing into another. The idea here is that you can move things from existing data in Prometheus into Thanos, Cortex, Promscale, what have you, or the other way around, or between any of the long-term stores. So the basic design doc is up. I don't think we need to go through it in this group necessarily; I just wanted to put it on people's radar, so they are aware that this tool is being developed. And maybe, if people have expertise in remote read and remote write, they could go into the doc and put their thoughts in. Right. So that's cool. Thanks for raising it here. I did have a quick question. It looks like this is primarily aimed, per the doc, at historical data and backfilling, versus... So we've been going back and forth about that, and it would be great if people could chime in. As it is, the first version will probably be for backfilling. But we are wondering whether we want to do an online version of this as well. We don't know, because Prometheus kind of already has an online thing with remote write. But thoughts about this are definitely welcome. Yeah, I think it would be amazing if you could share this with the Prometheus mailing lists, or at least hook into the discussions we have about backfilling and the storage ideas behind that, because it looks like we are solving similar challenges here. Maybe whatever you learn will be useful for future backfilling, and maybe export as well. It's all connected a little bit. Yeah.
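The read-then-write "pipe" described above can be sketched roughly as follows. This is a conceptual sketch only: the function names are hypothetical, and the stubs stand in for real Prometheus remote-read/remote-write calls (which in practice are snappy-compressed protobuf over HTTP). The point is the shape of the tool — walk the time range in bounded blocks, reading from the source store and pushing to the destination.

```python
def migrate(fetch_block, push_block, start_ms, end_ms, step_ms):
    """Walk the time range in fixed-size blocks so memory stays bounded."""
    migrated = 0
    t = start_ms
    while t < end_ms:
        block_end = min(t + step_ms, end_ms)
        samples = fetch_block(t, block_end)   # remote-read stand-in
        if samples:
            push_block(samples)               # remote-write stand-in
            migrated += len(samples)
        t = block_end
    return migrated

# Tiny in-memory demo: a "source store" with one sample per second.
source = {ms: ("up", 1.0) for ms in range(0, 10_000, 1_000)}
sink = []

def fetch(a, b):
    # Return (timestamp, series, value) tuples in [a, b).
    return [(ts,) + source[ts] for ts in sorted(source) if a <= ts < b]

def push(samples):
    sink.extend(samples)

count = migrate(fetch, push, 0, 10_000, 2_500)
```

With the demo store above, all ten samples arrive at the sink regardless of the block size chosen, which is the property a backfill tool needs.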
Yeah, I know that Harkishen, who is the main developer on this, has brought it up on Prometheus IRC before, but we do intend to send it out to the dev mailing list as well. Yeah, I'll just be brief: there's another adjacent idea that we had talked about over the summer. I know Thanos has done some work around it, and I would love to spend some time on it as a dev — I've not had that time over the past quarter — but a sort of mirroring proxy, a smart, L7-if-you-will, remote-write-protocol-aware proxy that could do buffering, mirroring, and things like that. Right now, for example, we run a dozen or so clusters, big ones, and we have Prometheus doing remote write to both staging and production environments, and that puts the load on the sending side. And it might make sense for some scenarios to have — almost like PgBouncer is to Postgres, or ProxySQL is to MySQL — something like that for the remote-write protocol that would sit in the middle, so to speak, and would support scenarios like canaries, buffering, some degree of fault tolerance, retry semantics, things like that. So that's why I was asking whether your design spec here is more of a bulk backfill tool, or meant to be more like a bump-in-the-wire, inline smart proxy — a protocol-aware proxy. Now that you mention it, it makes a whole lot of sense. We'll think about it. So thank you. Cool. Yeah. And again, thanks for bringing this up; we'll look forward to talking about it in the future. All right. So with that, before we dive into the OpenMetrics due diligence, is there anything anyone wants to mention before we spend the remainder of our time there? Okay. I guess, Bartek, take it away. Sure. So, okay, the goal of this section of the meeting is to go through the due diligence template.
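The mirroring-proxy idea floated above — fan one remote-write payload out to several downstreams, each with its own retries, so one slow mirror doesn't block the others — can be sketched like this. Everything here is hypothetical: `send` stands in for an HTTP POST of a remote-write request, and no such proxy exists with this API.

```python
def fan_out(payload, endpoints, send, max_retries=3):
    """Deliver one payload to each endpoint; return endpoint -> success."""
    status = {}
    for ep in endpoints:
        delivered = False
        for _ in range(max_retries):
            if send(ep, payload):   # stand-in for POSTing remote-write data
                delivered = True
                break
        status[ep] = delivered
    return status

# Demo with a flaky "staging" mirror that succeeds on its second attempt.
attempts = {"staging": 0, "prod": 0}

def send(ep, payload):
    attempts[ep] += 1
    if ep == "staging":
        return attempts[ep] >= 2   # fails once, then succeeds
    return True

result = fan_out(b"samples", ["staging", "prod"], send)
```

A real proxy would add buffering and backpressure per downstream; the sketch only shows the independent-delivery shape that takes the mirroring load off the sending Prometheus.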
I asked the OpenMetrics team to get it all in together, and all members, you are welcome to comment, to point out any gaps, or actually to praise it — and essentially to go through it and agree whether we want to recommend OpenMetrics as a project that should be in the incubated stage, not in the sandbox as it is right now. Right. So the OpenMetrics team spent a lot of time on this. I read through it, and it kind of has everything, in my opinion, but — I mean, the plan is to go through all of it and essentially review it together. Maybe we can do this in a similar fashion to what we did previously with Cortex and Thanos. So please feel free to comment, and we'll go through most of the sections here. Maybe we could use the reactions feature, or maybe put stuff in the doc itself — but just to gauge, roughly, how many people have actually read through this doc and have had sufficient time for feedback? I know that I would like to spend some more time with it, but that's not to say we shouldn't walk through it and see where we're at. Yeah, I've skimmed it; I haven't had an opportunity to add comments yet. I would definitely like to take another pass as well, but I've at least read it a first time. Okay, then, just to set expectations: as much as we would like to just do it, it sounds like we should spend the time today to walk through it and then see where we're at as a group, but I do want to make sure that we have sufficient time for review by the community. Maybe as a reminder of how the Cortex and Thanos due diligence was handled: we basically read out one section and then had everyone either agree, comment, or disagree — because unless we divide and conquer, I don't think it's realistic to get even through half the document.
And I think reading it live is the only way forward, but at that point we are basically back to how we did it with Cortex and Thanos: read the section out, and then everyone agrees, comments, or disagrees. But I'm explicitly not saying that we have to do this — I have taken off the chair hat. It's just a reminder. Sure. Well, do you want to do that, Bartek? Do you want to read through this and talk through it section by section? I can help scribe or take notes in the doc or comments, or we can just move through it as a body? Well, we can do that, definitely. I think with Cortex and Thanos, people had at least a week's notice, but I think this is much smaller as well, as the project is essentially about a protocol — an open protocol. Okay. Yeah, it's an established protocol as well. So most of this should not be controversial or new for folks. So yeah, let's dive in then. Do you want me to drive us through section by section and call for votes and all that? I have to say, the way Richard did it last time was quite ruthlessly efficient. I definitely won't be that efficient, but we can try. So first of all, governance — let's go. The OpenMetrics governance was established, I think, fairly recently. Maybe Richie and the maintainers can walk us through a TL;DR of it. Does that make sense? It's basically — it is literally — the Prometheus governance with three changes: (a) it allows for subprojects, (b) it replaces lazy consensus with rough consensus as per the IETF, and (c) it has a project lead. And those are the only changes relative to Prometheus. Yep, that's what I got as well. There are 14 members and you are the project lead. That's the TL;DR. Correct. But the same voting and consensus rules apply — majority vote, supermajority vote — the same thing we do on Thanos, Cortex, Prometheus. There is even an onboarding and offboarding list.
So yeah, that would be it. Does anyone have any comments? Any questions around that? Can I request — oh, go ahead, Matt. No, after you. I was just going to ask: is this one of the ones that we can revisit? Because I did not have a chance to read this link. I read the doc, but I didn't read the linked material yet. So I'd definitely like to revisit this one if we could. Yeah. A question I have in a similar vein is: where are the meetings, and are they open? Are they on a cadence going forward? They are on a fortnightly cadence, though we had a ton of meetings recently. They were closed, as everyone is aware, for the simple reason that even for people who left for just one or two months, it was impossible to re-onboard them. Of course, we had everything in our heads and needed to frantically get it out. That being said, obviously, there was documentation as code, with inline Python, and within Prometheus proper itself. Going forward, everything will be public. We basically thought we would have a short period of crunch, and sadly it was longer than we expected or anticipated. But obviously, the intention is to make it all public going forward. Okay. In this case, I think we can give some time for review of this, and then we can get back to it later. And let's move on. Is there a document — one last question I had around the governance and the go-forward plan: the application to the IETF and the various back-and-forths that will be happening there — is that expected to be done in the open as well, so folks can listen in, even if they're a fly on the wall and not a contributor? So, within the IETF — this is way outside the scope of governance, but within the scope of the IETF — you have the OpsWG mailing list, where everything is discussed in public. I did send the version which we have to the OpsWG chair to do one read-through, in case we have any obvious mistakes.
And then it will be submitted to the IETF RFC tracker, or Internet-Draft tracker, where everything is public. It will be discussed on the mailing list, which is public. And as there are currently no in-person IETF meetings, it is done through video, which is also public. Oh, super. Great. Yeah, I do realize it's out of the scope of governance, but I wasn't sure where else it sat. So let's move on. Regarding governance, I think there was a mention that this is basically a copy of Prometheus's. I don't see Prometheus's governance linked — can someone add a link to that? It looks like there was one, but it's been removed. Yeah, I would like to... If you type "Prometheus governance" into Google, that's the first hit; we can also edit it in there. Yeah, we can link it in the doc. It's on the website. I see — that's why: I was looking at the GitHub repo, and it looks like it was there, and it's no longer there, or it's moved somewhere else. Perfect. Thank you. Yeah, I think it's in the docs repo. I see. Got it. Thank you. Okay. Any other questions? Should we leave that for now, and then go on? Yeah, I suspect for most of these sections we're going to have to revisit for a final go-ahead. I mean, again, I could be wrong, but Richard, Bartek, what do you think? Speaking as a project member: within the scope of the question — is it self-governing? — I think we can answer this as a yes, and as such, I don't think there's much more to be discussed outside of just this yes/no question. So with my not-the-chair hat on, just the project hat on, I don't see any reason for needing to follow up. That being said, if that is the group consensus, obviously the project will yield to it. Sure. So why don't we do what you did last time, Richie, and have a call for consensus? So, a call for consensus: does anyone object to this being considered done? I agree — it's certainly self-governing, in line with the CNCF principles around project governance. Any objections? Cool.
Bartek, I can't edit in this, so you're going to have to scribe. I can suggest, it looks like. All right, yeah, I'll do it. Great. I'll speak for everyone and say: Richie, we assume that for the remainder of this call, you have your project lead hat on — and I applaud your being specific. I will always be specific when talking about anything procedural, but yes, let's just move on. Indeed. Cool. So the next one is very straightforward: code of conduct. Is there a documented code of conduct? Team members linked the CNCF code of conduct, and I just checked — it is linked in the main repository. So to me, all is good. Any comments? I think, let's call for consensus. Any objections? Once? Twice? No. Okay, so it looks like we are happy. Next one. Does the project have production deployments that are of high quality and high velocity? Yeah, this relates to us here in this case. So yes, two of the most used Prometheus client libraries support OpenMetrics — and I know that's true because I maintain one of them. Oh, no — yeah, actually, Go doesn't yet. So it's only Java and Python at the moment. That's true. We should probably link them in the document as well, if possible, just so that we have a reference for what Prometheus does, for overview. We can link to it after the consensus, so we can move on. Exactly. Cool. What else about clients? Also, the Prometheus server and Datadog have been supporting OpenMetrics. Yeah, we are there already, and we are integrating OpenMetrics as well. And there is exemplar support added as well — a very powerful feature of OpenMetrics. So, cool. Apart from links, any comments? And a call for consensus, I guess, straight away? Is that enough as production deployments? For me, it's enough. I personally think this is enough, given... Okay, please feel free to just speak up if you have comments or questions — that's totally okay.
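For readers not steeped in the format, the features mentioned above — the _total suffix on counters, exemplars, and the exposition's # EOF terminator — look roughly like the sketch below. This is a hand-rolled illustrative formatter, not any client library's API; the help text and the choice to attach the exemplar to the sample line are simplifications of what the OpenMetrics text exposition actually specifies.

```python
def render_counter(name, value, labels=None, exemplar=None, exemplar_value=1):
    """Render a single counter in a simplified OpenMetrics-style text form."""
    label_str = "" if not labels else "{%s}" % ",".join(
        '%s="%s"' % kv for kv in labels.items())
    # The sample line carries the _total suffix; the TYPE metadata does not.
    sample = "%s_total%s %s" % (name, label_str, value)
    if exemplar:
        # An exemplar (e.g. a trace ID) rides along after a '#'.
        ex = ",".join('%s="%s"' % kv for kv in exemplar.items())
        sample += " # {%s} %s" % (ex, exemplar_value)
    return "\n".join([
        "# HELP %s Total requests seen." % name,   # simplified help text
        "# TYPE %s counter" % name,
        sample,
        "# EOF",   # OpenMetrics expositions end with a mandatory EOF marker
    ])

text = render_counter("http_requests", 1027,
                      labels={"code": "200"},
                      exemplar={"trace_id": "abc123"})
```

The exemplar is what lets a scraped counter point back at a concrete trace, which is the "very powerful feature" referenced in the discussion.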
Next one: is the project committed to achieving this? Briefly — I know that, originally, getting vendors to support the format for devices and other things that are not just a client library was talked about. Are there any references we might want to add for folks who have actually implemented OpenMetrics in something other than client libraries — in other words, embedded devices that are using those, or embedded devices that are writing their own client libraries? How broad is the current user base? Offhand, outside of the client libraries, I think there are only three or four things that have directly implemented it. For example, I know that Dovecot has, although I think that PR is still open. But I'm not aware of any embedded hardware devices that have done it yet, because it's too early in the standardization process. Sure. Okay. I mean, my goal here would be: if there are even others that have started to use it, even if it's not finalized or whatnot, it might make sense to add them. Because, again, when our liaison reviews this, and then the TOC reviews this — the process here is that the SIG makes a formal proposal that, yes, this should be advanced, and then a whole bunch of people who are not as familiar as the folks on this call are going to read this document. So, however we can widen the base, or make a more compelling argument that's not disputable, that would make sense, I think. All right. Let's move on. So: is the project committed to achieving the CNCF principles, and do they have a committed roadmap to address any areas of concern raised by the community? The answer is that it's committed. There is no roadmap, but that's natural for a standard. A sister project to specify a wire format for logs is next, and will happen under the umbrella of OpenTelemetry — I guess this is the planned thing in the roadmap.
We're not aware of any concerns from the community. Steve, do you want to maybe... Yes, I put one comment on here — actually, I have two comments. One was: are there any big-rock things that could be noted? Given that most of it hasn't been open to date, the community as a whole probably hasn't had a chance to really comment on it, so that will probably change over time. But is the committee today at least aware of big things that need to happen, and could those potentially be documented? My second comment would be that this one specifically calls out the CNCF principles, and I don't see any explicit mention of them here — like no kingmakers, no one-size-fits-all, standards body. It might be good in this section to explicitly call out the CNCF principles and ensure that the project is adhering to them. So, for the first one: the one thing which we are aware of — but which is firmly out of scope for 1.0 — is high-resolution histograms. The reason it is out of scope is that we as a project made a hard promise to retain compatibility with Prometheus, so we just couldn't make that work. But this is something which we see as one of the highlight features of a future version. Beyond that, there's nothing super pressing, because otherwise we wouldn't be comfortable releasing this. As to the CNCF principles: that, with my project hat on, is how we replied to the official TOC checklist — that is basically the question in its entirety. I fully agree with everything you mentioned; I do think that we adhere to all of these, obviously. But giving this feedback to the CNCF TOC to improve the checklist makes absolute sense. Maybe this would be a good one to review next time we meet too, right? Because I did not compare it to the CNCF principles. Yeah, we had the same experience with Cortex and Thanos — for Cortex and Thanos, we had the same process. Of course, again, this is a TOC thing. But are there specific concerns or not?
I'm not familiar enough with the CNCF principles to comment. I have a question about the alignment with OpenTelemetry. I mean, you mention it somewhere at the very bottom, and I'm a little surprised that you mention the wire format for logs here, but not the wire format for metrics. That is certainly something that I find quite worrying; it's something I would have expected the roadmap to be very clear on — that this is your goal. Could you rephrase your question? Which part? The part where you said that we talk about logs and not metrics — because the opposite is true: we talk about metrics and not logs. The sentence here says, "a sister project to specify a wire format for logs is next." Okay, just a second — I thought you were talking about the section where you left your comment. I didn't leave a comment in that section. No, not in that section — down below. I was looking for your comments, so I was scrolling. Sorry. Yes. I have a follow-up question about the OpenTelemetry alignment, given that we're approaching CNCF incubation: on the roadmap, and also maybe in mapping to some of the principles — are there any more explicit notes on the roadmap about alignment with OpenTelemetry? So, as a wire format, there's nothing directly to be aligned with. That is the one hardcore technical question: because it is a wire format, and as OpenTelemetry supports a variety of wire formats, there is no direct support in this direction. Steve, correct me if I'm wrong. Yeah, I think there are two parts to that, though, right? One is OpenTelemetry supporting sending to OpenMetrics destinations. In theory, that should be possible. I don't know if any due diligence has been done to compare the OTLP protocol with OpenMetrics and confirm that it actually can be fully translated.
But I think the bigger question here, which might be being asked, is: why does OpenTelemetry have its own metrics format? Why is it not using the OpenMetrics format? I think that's a good question. So, as someone who's been sitting in on quite a few OpenTelemetry metrics calls over the last year or so — but definitely not all of them — there are some incompatibilities, especially around how histogram buckets are bounded: less-than-or-equal versus greater-than-or-equal. And also about having deltas instead of cumulative counters, which makes it easier to discard state immediately within OpenTelemetry — but then you have a more or less mandatory sidecar which rebuilds that state, to be able to talk to all endpoints, not only Prometheus-type endpoints. Those points have been raised repeatedly. I think another was the suffixes: doesn't the OpenMetrics standard dictate a little more firmly the _total suffix for counters, which elsewhere is sort of optional? I don't know whether OpenTelemetry adheres to that or not. But those are all questions which come not from a point of view of due diligence on OpenMetrics, or of getting something wrong — these seem to be concerns for OpenTelemetry, and I fully expect that some of them will come up, and I absolutely commit to working with OpenTelemetry to keep everything on the happy path, 100 percent. That's the only thing which makes sense, personally speaking. But again, this does not seem to be directly related to due diligence on OpenMetrics. I have a suggestion, actually — oh, sorry. The thing that confuses me — and I see myself as a proxy for many of our customers who are equally confused — is: is there an intention from OpenMetrics to enable any kind of OpenTelemetry support or not? And what confuses me when I look at it is that you clearly spell out the wire format for logs, but not for metrics. I'm trying to understand: is that intentional or not? And if it's not, then we should spell it out here in the roadmap. Does that make sense? Okay.
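The delta-versus-cumulative point raised above — that OpenTelemetry may report deltas while Prometheus/OpenMetrics counters are cumulative, so a sidecar has to rebuild per-series state — can be sketched as a tiny converter. This is an illustration of the idea only, not any component's actual implementation; the series-key string is a hypothetical label-set identifier.

```python
from collections import defaultdict

class DeltaToCumulative:
    """Fold delta reports into the cumulative totals a counter endpoint expects."""

    def __init__(self):
        # One running total per series; this is exactly the state a
        # delta-emitting source gets to discard and a sidecar must keep.
        self._totals = defaultdict(float)

    def push(self, series_key, delta):
        self._totals[series_key] += delta
        return self._totals[series_key]

conv = DeltaToCumulative()
# Four delta reports for one series become a monotonically increasing counter.
values = [conv.push('http_requests{code="200"}', d) for d in (5, 3, 0, 7)]
```

The cost hinted at in the discussion is visible here: the converter's memory grows with the number of live series, which is why the sidecar is "more or less mandatory" rather than free.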
I think I understand where you're getting at. So again, as a wire format, it's next to impossible to "support" an instrumentation library, which itself supports a myriad of wire formats. That is the wrong way around: I cannot, with a wire format, support an instrumentation library; the instrumentation library can support a wire format. Obviously, it is the case that OpenMetrics is modeled after what client_golang and client_python and such do. And obviously it's also along the lines of what Prometheus has been doing, which has basically been in existence since 2014, based on Prometheus exposition format 0.0.4. All that being said — again, that does not seem to make sense... We can definitely put some more verbiage around the metrics part. The logs part is more of an outlook on what the logical next steps are for OpenTelemetry as such. Too many "opens" — too many "opens". So, to position it another way: this is the roadmap section, and metrics aren't mentioned in the roadmap section because metrics are, to some extent, done. However, logs are a potential future thing, which is why they are in the roadmap section. So this is all very future-looking, whereas metrics — I get that, I get that. But if I understand correctly, then the argument that Richard just used for metrics and the wire format would also apply to logs, no? Why are logs different? No, they're not — again, it's the outlook section. For the outlook section, it would hardly make sense to talk about "we will specify metrics" — of course, we just did this. Right. And does it make sense, in the same way that you say here "we want to specify a wire format for logs under the umbrella of OpenTelemetry", that you also — if you plan to do so — say you are working under the umbrella of OpenTelemetry to get OpenMetrics in there as a wire format? I think that's a typo. It should say OpenObservability. You're right.
It's, I know, there are many open, open things there, right? Yeah: OpenMetrics, OpenTelemetry. Now it is much clearer to me. That does make more sense. As a point of order, I have a suggestion. I know some of us have been steeped in this noun soup, but we might want to jump to the context section, after this section on the roadmap, at the bottom of the document, particularly for folks that might be joining our call for the first time. That would just sort of set the stage for the rest of the document, and it might be a better ordering for discussion and review. Sorry, what do you propose? I propose that after we finish with the discussion of section four, and we come to a call for consensus, we jump to the bottom part of the document. There's a section called "context", the last section, and in my mind that would be one of the first sections, because it just sets the stage and lays out some of the nouns and such. It might help the rest of the sections of the document go a little quicker. Okay, a quick time check: do we go for the full hour, or stop at the usual time? I'm fine to go to the end of the hour. But yeah, let's finish out section four here, for me. Still, we have some questions around what exactly those CNCF principles are. Yes, but again. So: are there actual, active concerns, or is this a case of not having read up on the CNCF principles, and/or of the TOC checklist not being as good as it could potentially be? Because this is literally the same checklist we went through with Cortex and Thanos, so it's... Yeah, so let's break this into two parts. Number four asks two questions; one's around CNCF principles. I think if you're all comfortable that the answer is yes, that's probably sufficient. The second part here, though, I think is probably the one where we might be having more of a hang-up.
Even in this forum, I think multiple people are raising potential concerns that speak to the need for a roadmap. Perhaps that is worth talking about next meeting, and trying to address some of the concerns that have been bullet-pointed here. So, as a reminder of how we did this for Cortex and Thanos: the SIG gave homework to the project, and the project basically fulfilled this in parallel. And then consensus was spoken out on the condition that this homework was fulfilled. I think we could adopt the same process here. All right, so I think you're both saying the same thing in slightly different ways. All right, so we've raised the concern, the project in parallel can elaborate the roadmap, and we can call for consensus that's contingent on that happening in good faith. And then we make a final determination, because just based on time, we're not going to get through this. So in two weeks, we can revisit and close it off and move on. Anyone object to that methodology? Yeah, can we have a call for consensus on this, that we're happy with the section above, essentially with OpenMetrics elaborating the roadmap more? Does that make sense? Because we are either saying that's fine, or it is open and we need to revisit it again. So, are you raising the concern that depending on the roadmap, there may be concerns? Right. And we would just address them as they come, right? I mean... Yeah, I think this isn't the final sign-off, right? Yeah, we're nowhere near that. Sorry, but I just reread the thing: "Do they have a committed roadmap to address any areas of concern raised by the community?" So this is not about concerns about a roadmap; it is about current project concerns, which are then being addressed by a roadmap. So if we're being exact, then we need to be really exact. And then the question is: are there current concerns?
And if there are current concerns, are they addressed by the roadmap? Yeah, and I think there are several listed in the bullet points above, right? So the ask is to comment on which ones are applicable to OpenMetrics, and what the roadmap for those items would be. With my project head on, I don't believe that there are any on OpenMetrics as a project, beyond working in good faith with the other projects. But if that is not the group consensus, then obviously it can't be the consensus. But with my project head on, I would at least ask for a consensus on this. Okay, I think let's move on. And I don't think there is consensus, in my opinion. Yeah, we'll move on. And we can also work offline in the next two weeks, in the doc and in comments in the CNCF Slack. Yeah, I think we've made some progress on this one, at least on the first part. And we have some action items for the project to come back and clarify. Yeah, let's clarify this. To briefly jump in: it would be good to know specifically what outcomes would be desirable for that work, if possible. The only concern with doing it completely asynchronously would be that some comments get made and some work gets done, but we come back a few weeks later. Yeah, we're spinning into December. Yeah, I'd like to ask exactly what the concerns are here, because my reading is that there merely needs to be a roadmap. And in this context, the roadmap is getting into the IETF. So I don't understand what the concerns are about the roadmap. But the line says "committed roadmap to address any concerns raised by the community", not a roadmap for the project as a whole. And so I think some of the concerns are around, and I think we'll see this in some of the other items too, how it potentially impacts OpenTelemetry. Okay, then please write out specific concerns, and we can take this question offline and pivot this into specific concerns, which we can address specifically. Right.
Well, that's what I'm worried about: it seems very vague right now. And I would love to specifically address concerns, but I don't even know how we get to those specific concerns. Or do we even have the right people? Like, how do we fast-track this to making people feel like we're addressing the concerns that need to be addressed? And to me, it's not 100% clear that we have a process for getting there. And I think one of the problems is we're kind of doing it live right now. So I guess, Matt, maybe a question for you: do you want to put some sort of timeline around when people have to provide comments on this doc, so the team has something tangible? I would propose: so I think, Rob, what you're saying, as well as Brian, is completely fair. And in my mind, a specific enumeration of concerns is what would be requisite to actually commit to a roadmap to address them. And if you don't know specifically what's being asked, then there's ambiguity, and then we're into December and next year, God forbid. So why don't we time-box this for two weeks, if that makes sense? For the folks that are passionate about this, you know, that's plenty of time, and we could probably even do it faster. But I would suggest that in this document, whoever has those concerns just takes some time to articulate them in a clear, unambiguous way, so that the project can address them with a roadmap that is sane and rational. And I would imagine that there are more people on this call that the project might want to bring in, or what have you. So two weeks from now, at our next meeting, I would like to close on this. On this being two weeks: if it's two weeks, then in the extreme, we could basically get one final concern right before, or even during, the call.
Okay, well, sorry, I meant two weeks for the entire activity. So that's a fine point as well. So do we want to say, like, by Monday of next week, or whatever? I mean, no one needs to write soliloquies or sonnets. But, you know, we could enumerate the concerns in the next week, or a week from today rather, Tuesday of next week, and then the project could have another week to prepare. Because, yeah, we don't want all of this to drop the day before, and then we're effectively out another two weeks as well. I think a week is reasonable. Anyone object to that? I want to try to strike the right balance here, you know, so that we can address all of these concerns but also be time-bounded. Because I'd like to move this forward, and I think the community needs this to move forward as well, if I could be so bold. So cool, cool. That makes sense. I think then spending the last four minutes rushing through the points is not very... Yeah, unless there's some super non-controversial low-hanging fruit that we could just bang, bang, bang through and at least get a couple more. Let's try just reading out point number five and see if we have consensus: "Document that the project has a fundamentally sound design without obvious critical compromises that would inhibit potential widespread adoption." That might already be controversial. Okay, what about adoption? I think we already have pretty widespread adoption. I think so. I guess the only concern I'd raise is, again, OpenTelemetry is going to come up: if OpenTelemetry is the client library for all data sources, and it's not fully compatible with OpenMetrics, is that not a huge inhibitor to potential widespread adoption? The question is which way the incompatibility goes, if you go from Prometheus, which even predates the CNCF, and which is supported by literally all projects within the CNCF and by thousands of others.
I would strongly argue, and did argue in the OpenTelemetry metrics calls, that there is an installed base to take into account. Yeah, totally. I mean, I totally agree. I think the counter-argument, though, is that the client library is just for metrics. If I want logs and I want traces, I need to go use something else. If OpenTelemetry provides a single client library, then I think it's pretty critical that it be fully OpenMetrics-compatible, or there's an adoption problem. Now, maybe you're right, maybe this isn't part of this. I have to have a good reason, because I already have a wired-up kit that has metrics, logs and traces, and I see no value in switching to OpenTelemetry for my company. Sure, it definitely can go both ways. The question is: does it potentially inhibit widespread adoption? I think it actually inhibits OpenTelemetry's widespread adoption by not supporting Prometheus. Everyone's data pipelines are already set up to ingest Prometheus metrics. All the major vendors support it. It's widely deployed, as Prometheus obviously is, by CNCF projects and end users. Not supporting OpenMetrics is a huge inhibitor to OpenTelemetry being deployed in any real deployment. As a vendor, I can't recommend people use OpenTelemetry until it supports it. That's generally what I have seen in the community: the advice for any company of any real size is "do not adopt OpenTelemetry just yet", because you fundamentally can't get it to work with the current vendors and Prometheus installations that are out there. That's the viewpoint that I've seen with most major end users, as well as with people that are interoperating with vendors. It's an issue, but it depends. I think it 100% can be resolved one way easily, which is OpenTelemetry being compatible with this. It's just a wire protocol; it doesn't try to enforce anything on how you use the wire protocol. At least that's the way that most folks around the project have been looking at OpenMetrics and how it relates.
That's my assessment as well. Prometheus is a CNCF graduated project and sort of the de facto way to monitor all sorts of things, not the least of which is Kubernetes. The installed base is effectively Prometheus, and at least my understanding from reading the OpenMetrics background and this due diligence is really exactly what you said: it's just getting agreement on the wire protocol so there's a lingua franca, and it's modeled after what already exists, as a drop-in with a few tweaks. If it's scoped just to that wire protocol for metrics, does that alleviate some of your concerns, Steve? I mean that by fundamentally agreeing on what that wire protocol is, that enables OpenTelemetry to actually implement it, in a go-forward way that's deterministic. I'm just raising the potential concern, that's all. I think this falls in both camps, both the OpenTelemetry and the OpenMetrics side, but it feels like it could potentially inhibit adoption of one or both projects. It's at least worth raising the concern. To end on a positive note maybe, and we are way over time: my feeling from taking part in the OpenTelemetry metrics calls is that there is a high interest in making all of this compatible. And as OpenTelemetry is a client library with the explicit goal of supporting a myriad of wire formats, I think maybe we are just having a little bit of a tempest-in-a-teapot discussion, because it is the fundamental goal of OpenTelemetry to be compatible with pretty much everything. At least as far as I understood the calls, that is the goal anyway.
It seems to be just another check mark on the list for OpenTelemetry to implement an OpenMetrics bridge and be done with it, because you're not talking about how StatsD or Cortex or Prometheus do stuff either, and those are all on the list: Cortex, Thanos, Prometheus, StatsD are confirmed, at least when I last looked and talked about this, as confirmed targets for OpenTelemetry. And if you have Cortex, Thanos and Prometheus, that directly translates to OpenMetrics, because if you look at it, the one thing which is really breaking is a different timestamp, and you're not supposed to be doing timestamps anyway, and that's it. The rest is compatible. Yeah, and I think maybe that's the biggest call-out, right? As long as there's commitment to collaboration on both sides, maybe that just addresses this: there is a potential critical compromise, in that if OpenTelemetry does not support it, then adoption could be impacted. The mitigation there is that OpenMetrics commits to working with OpenTelemetry, and OpenTelemetry commits to working with OpenMetrics, thus we believe this is not an issue. That seems absolutely to be the way forward, and that is what I've done in the past; I actually already set my calendar again for the next call, and I also want to be at them. Oh, by the way, I sent you a note in the meeting notes of the last OpenTelemetry metrics call with my calendar link, to do a group explainer of OpenMetrics, if you haven't seen it. I haven't seen it, no. I know you weren't at that meeting, but I highlighted you, Bogdan and Josh. Point being, I wouldn't have sat in all those meetings starting at 21:00 local if not for an honest desire to be interoperable with OpenTelemetry. People are yelling at me; of course, I need to get onto another call. Yeah, I think we've made good progress here today. Thank you, everybody, for staying a little bit late.
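To make the bridge idea discussed above concrete, here is a minimal sketch, not any project's actual code, of the state-rebuilding that a delta-to-cumulative converter (the "sidecar" mentioned earlier) has to do: it keeps a running sum per series, so delta samples can be re-exposed as OpenMetrics-style monotonic counters. The class and series-key shape are hypothetical illustrations.

```python
from collections import defaultdict


class DeltaToCumulative:
    """Rebuilds cumulative counter state from delta samples.

    Hypothetical sketch: a real bridge would also track series
    identity via full label sets, handle process restarts, and
    detect counter resets.
    """

    def __init__(self):
        # Running cumulative total per (metric name, labels) series key.
        self._totals = defaultdict(float)

    def observe_delta(self, series_key, delta):
        # Accumulate the incoming delta into the stored cumulative value.
        self._totals[series_key] += delta
        return self._totals[series_key]

    def expose(self, series_key):
        # The value an OpenMetrics counter would expose for this series.
        return self._totals[series_key]


bridge = DeltaToCumulative()
bridge.observe_delta(("http_requests_total", 'path="/"'), 3)
bridge.observe_delta(("http_requests_total", 'path="/"'), 5)
print(bridge.expose(("http_requests_total", 'path="/"')))  # 8.0
```

The point of the sketch is only that the conversion is stateful: each scrape must see a monotonically increasing value, so the bridge cannot simply forward deltas as they arrive.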
Let's continue this online, and we'll see each other in two weeks. And for folks in the U.S., or wherever, have a great Thanksgiving if you celebrate it. Yeah, thanks for having us on the call. Oh, the calls are open to all, please join at will. Thank you, everyone. Bye-bye. Thank you. Hey, nice to meet you, by the way.