Hey Steve, in hilarious apocalypse news, the landlord has said that they need to do an inspection of my apartment for fire safety issues. So I am wearing an anti-COVID mask, sitting at my desk waiting for the person to show up, because after that my air will be contaminated for a little while. Yep. Yep. So that's why I'm watching the clock a little bit today. No, I mean, sometimes these things come up unexpectedly and you definitely need to take precautions. So we understand. Hi. Hey folks, Richie and Bartek and others should be joining soon. So we'll just give it a few minutes for folks to join in. Hi folks. Good morning. Hello. Hello. I would like to start on time, but Bartek isn't here, and I do suspect he's relevant for the discussion. Looks like he's on now. Oh, there he is. Perfect. Perfect. Okay. So Arthur. Yeah. Perfect. So Arthur moved his thing, so we can start right in. Just one point of order. I read recently that most people are at the point of being locked in the pandemic for one year now, or close to it, and that a lot of people are hitting mental and emotional walls. I've had a little bit of a fight with someone on the Prometheus team last week as well. Long story short, there are humans on all sides of this. So please be gentle and assume good faith. That being said, let's start. Also, you need to write yourself into the attendee log, if you want to; of course we have way more people than are currently in the list. I'll share the doc once more and we will be starting with the due diligence. If you saw the channel, there are two new documents, or two new statements, work pieces, whichever: one by Alolita on the progress of OpenTelemetry signals, which we'll be looking at in a second, and one by Bartek; basically I had a discussion with him to try and unblock this call a little bit.
So the thing which we'll be trying is to detach the discussion points into a distinct statement, which will allow us to walk through the rest of the document hopefully at pace, and just keep resolving issues and comments and finding consensus, or finding that we cannot reach consensus. And then it's basically up to the TOC to make a call. But at some point, SIG Observability should be done with the due diligence document. By one call, we are now at the record number of calls to get through due diligence. So let's try to not make it two. Oh, no, actually it's just the third call, then we're on par. Doesn't matter. So Steve, do you want to share your screen again for the document? Sure. Happy to. And also just to make sure, Bartek, my assumption is that basically you will make a statement or some such, and then this unblocks the detailed discussion, correct? Yeah, that would work. OK, so please put that in at some point so people can see. For now, unless there's something which needs to be actively discussed which is not covered in that statement, then obviously speak up; else we're just referring to that statement. So the first comment which is still open is also from Bartek, about forked distributions versus core. I think we can close this unless fish is on the call and wants to actively object, but I don't think that is the case. OK, so Bartek, you're fine, because you wrote "perfect". Are you good, Bartek? Can we resolve this one? Yeah, so Bartek, can we close this? Yeah, I'm just digging in and clarifying. Sure. Thank you. OK. OK. Let me click "resolve", please, so we have one singular thing. OK, I hope everyone read the update by Alolita, which is linked in the next comment. Should I pull that out? Do we want to review that live or hand it over to you? I do hope that people read it, because it was shared before. But if people need time to read it, then please pipe up. I read it. I think Bartek also read it. I think quite a few people read it.
Yeah, so I'm happy with this. We just refer to this document. The one question which I would have is, let me mark it here. Steve, can you go into the other document for a second, please? Yes. I think two calls ago, it was Steve or Morgan, if I remember correctly, who told us that the completion is expected not for 2021, but for early 2020. So the only comment I have is whether there is some link to supporting documentation, or how that half-a-year speed-up came about. Richard, I can address that, Steve, if it's OK. Again, this is based on that specific phase of the data model and the specification being completed. And for that phase, work is already in progress. As you can see, drafts are available, but the final is not done yet. And again, there are different teams, AWS included, who are adding more engineering to this effort. So hence the expectation of an earlier completion. OK. So for the intents and purposes of the main due diligence document, basically we can assume that where stuff was written before saying logging is currently not a focus area for this year, this is retracted and replaced with "more resources are being put against the logging signal and component". Correct? That's correct. So I just put a comment in, so this is persisted. Maybe at a later point you can also put it into the main text. But else, I think this is a decent document. So sorry. Anything more on this topic, Richard? Sorry, what? Anything more on this topic? Not from my side. I would propose we just acknowledge it and accept it with the SIG hat on and move on. So let's actually write something in. The beeping should stop in a second. Sorry. And keep the beeping. It's a nice background. No, it's not. It really is not. It really sets some urgency here. Your toast is ready. All right, carry on. Yes. So OK, I just linked it in here, Alolita. Thanks, Richard. Thank you so much. OK, so we can close this, I presume.
OK, we have this dance of finding partial consensus points and working our way forward. Should we still try and do this? Or should... oh, OK, if we go above, Bartek copied his thing in. I have copied the statements we were working on in the very beginning, just to have clear, I don't know, expectations at the very beginning of the document here. Oh, very beginning. Sorry. Where? Oh, right here. I see. That is the very beginning indeed. So I guess one initial comment I'd make is, especially when it comes to things like not reinventing the wheel, that goes counter to the CNCF principles. So I'm wondering how we resolve some of that conflict. Like, it's OK for people to have different opinions. That's fine. But blocking when it goes directly against the CNCF principles, I'd have a bigger concern with. OK. There's no kingmaking, that's... Yeah, well, there's no kingmaking. Yeah, the no-kingmaking rule and the CNCF principles state that you can have two conflicting things, and that's OK. If I'm reading this suggestion right, it seems to indicate that, like, they can't. Like, you have to use OpenMetrics, you have to use Fluent Bit, you have to use something else. That seems counter to my interpretation of the CNCF. Sorry, which exact point are you referring to? I'll highlight it right here. But you mentioned that this is like a founding goal of OpenTelemetry. To support the specification, but using it internally, or not having another alternative implementation of it, that seems counter. Like, I guess I'm trying to understand how that meets the criteria for incubation. I can understand that that's kind of an opinion. But the CNCF principles don't allow for kingmaking, in the sense that there doesn't have to be one solution to a problem. Yeah, yeah, I get that. But as you noticed, the kind of using-OpenMetrics-directly suggestions, or maybe some suggestions like this, those are "prefer". So it's not mandatory for me.
However, that compatibility with Prometheus use was something that was mentioned by OpenTelemetry, and overall would be nice for users. So that's why it fits in this rule. But also, the no-kingmaking rule is a rule that says that conflicting solutions can be passed to further stages. But it doesn't mean they have to be conflicting and that they cannot work together. Definitely. Yeah. I guess I could jump in real quick and just mention, in case it hasn't been clarified, that OpenTelemetry is super, super intent on being fully compatible with Prometheus. And we're actively working with Tom Wilkie and Richard Hartmann and others to ensure that the metrics work we're doing is going to be something that Prometheus users would want to use. So yeah, and in fact, I agree. And Bartek, as you know, again, I'm inviting you again to actively participate in the work group and the work that is being done, including the designs and the implementation. So again, please get involved and enable this to be done. Right. But you asked me to get involved in adding yet another API to an already overly complex collector that I personally would not recommend. I'm not sure if that's what I can do. I can make sure to not... you can import Prometheus code, so we are not reinventing everything. But yeah, I'm kind of conflicted here. I'm not sure how I should behave. And that's why I'm putting this very, maybe bluntly, sorry for that, but I don't feel this is a good moment for incubation. And by the way, all the things you mentioned: yes, you started working with Prometheus and OpenMetrics to make it compatible. So I think that's great progress. However, I would say, let's meet when that is done, and then we can discuss incubation, maybe. That's my proposal. By the way, this is only a recommendation. Like, we will... I mean, the final decision is on the TOC side. So that's only a recommendation. So don't be too mad. Like, maybe that's only my opinion. Liz, you're next? Yes, sir.
The workers are not here yet, so I can take my mask off so you can actually hear me. So I wanted to represent two perspectives here. Number one, with regard to the collector; number two, with regard to OpenTelemetry Go, because I am one of the OpenTelemetry Go approvers. So I think with regard to the collector, part of my remit on the GC is to represent the interests of smaller players in the OpenTelemetry ecosystem. And what I can tell you is that the collector design is something that is widely adopted by end users, because end users favor having one agent that they can run, one agent that can route things. And then it offers significant benefits to end users in terms of... oh God, the noise has started. Okay. Sorry. So it offers significant benefits to end users as far as being able to route telemetry easily. And yes, I do have concerns over the code size and complexity as far as memory footprint goes. But the collector team has shown willingness to create performance metrics, right? To make sure that people are not adding too much stuff that causes the memory footprint to increase. I think that addresses, to me, items number four and five, in terms of thinking about who the collector is serving, what purpose it serves, whether it is fit for purpose. I feel like the end user adoption, and people successfully using the OpenTelemetry Collector to route telemetry to multiple providers, to internal open source backends as well as to vendors, I think that's a sign of it doing what it's supposed to do, rather than something that is not fit for purpose. Just a quick question of clarification: when you say the collector is what people want, you mean the model of a collector and not this specific collector, right? I just wanna make those two things distinct. Sure. But I think it's hard to disentangle these things, right?
People want a collector that is configurable via YAML, that they can basically configure with a config file rather than having to make code changes, right? That is the weakness of the SDKs: you have to make code changes to change where the telemetry routes, right? And the collector addresses that fairly well. Yes, we could probably nitpick the design of the collector to death, but at the end of the day, the collector is the thing that is widely deployed and seems to broadly work. The model, I agree with; the implementation, I don't. But just a quick question: are you going to insist that we rewrite the collector as an incubation criterion, right? Like, I appreciate the feedback, right? Like, number one, right, like, yes, if there are reports from users saying, we had to move back to OpenTracing and we can't use OTel Go because of memory leaks and performance issues, right? Like, okay, this is kind of the first that I've heard of this, right? Like, it's something that I will take back to the OTel Go team and that we'll investigate seriously, right? Like, if you can link us some specific reports and issues that have been filed, I'm happy to look into this, right? But that's at least a little bit more specific feedback than kind of the "we don't like the OTel Collector design", right? Well, you know, we need something to fill that gap, right? Like, it works in practice at production scale for, like, you know, dozens of users, if not hundreds of users, right? Like, it's harder to say, like, you know, that we should rewrite the OTel Collector design just because, you know... Right, so I think I'll just say this and I'll butt out completely. I think the one thing that's underlying a lot of these conversations is this more core and abstract thing, which is that I think a lot of people think the scope of the OTel Collector, and really all of OTel, is simply too large, and that anything produced in that scope is not gonna be ideal for anyone, right?
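For readers following along, the collector model being debated above, a single YAML-configured agent that receives telemetry and routes it to multiple backends without code changes, roughly looks like the following sketch. This is an illustrative fragment added for context, not something from the meeting; the receiver/processor/exporter/pipeline structure follows upstream OpenTelemetry Collector conventions, and the endpoints are made up.

```yaml
# Hypothetical OpenTelemetry Collector config: one OTLP receiver
# fanning metrics out to a Prometheus-style endpoint and a vendor
# OTLP backend. Endpoints below are placeholders.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"   # scrapeable metrics endpoint
  otlp:
    endpoint: "backend.example.com:4317"  # made-up vendor backend

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus, otlp]
```

Rerouting telemetry to a different backend is then a config change (edit the `exporters` list and restart the agent) rather than an application code change, which is the point being made about the SDK weakness above.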
And so everything flows out of that. If you don't believe that, then nothing makes sense. If you do believe that, then yeah, that's the core disconnect here, I think. So I think the thing is, as you said, it's not gonna work for anyone; but based on the fact that there are adopters, it does work. So that's a separate question. I mean, I'd also separate out opinions, though, from due diligence. Like, for due diligence, does the answer to that question matter? Like, there's not a criterion for it, unless I'm missing something. Well, it's like, what direction should the CNCF be going in? Right, I think that is completely valid, right? I think part of the job of this process is to say this is or isn't, architecturally, conceptually, what we wanna point towards. I think that's valid. It doesn't have to be motivated by specific use cases and memory leaks and stuff like that. I think the time for doing this kind of work is now. So I just wanna jump in, and then I'll stop raising my hand. So like, it's a little ironic in my mind. Well, actually, I don't wanna be too dismissive of that. I see your point, Peter. And actually, in some ways, in a past life, I agreed with it, in the sense that creating OpenTracing specifically was meant to be a narrow scope. So we did that. And then, ironically, the CNCF brokered the process, literally, to create a broader scope, because that is absolutely what we heard from people out there, not just vendors but end users, that they wanted to have a single surface area for telemetry integration. I would absolutely agree that the scope of OpenTelemetry is extremely broad. And I think one of the founding principles of the project is to try to be modular about it as much as we can. I think the modularity runs into some fundamental issues in the collector, but I would also argue very strongly that the collector and OpenTelemetry themselves are not, like, super tightly coupled.
Like, it's possible for OpenTelemetry to succeed, and to use OpenTelemetry, without adopting a collector. And it's not a mandatory thing as far as the charter of the project is concerned. So I guess I would argue that people who don't like the fat dependency in the collector shouldn't use it. The OpenTelemetry project does not require the use of the collector. And I think that seems to be the focus of a lot of these discussions. The idea of a single instrumentation API that just describes the process of what's going on in the software and then emits relevant telemetry is what actually really led to, I think, the realization from folks on the OpenTracing side that something like this was necessary. So I'm not saying that your opinion (which I assume is your opinion; I mean, you phrase it as a question, but I think you've made your opinion clear) is invalid. And actually, I think the point I'm trying to make is that there were times I've agreed with it. I think that what I learned from the OpenTracing project, primarily, was that it actually isn't what people want. They actually don't wanna think about instrumentation, which means having the fewest number of projects they depend on in order to instrument their software. And that's what led to OpenTelemetry. So I just wanna help people understand the history of the project, including going back to 2015. So that is important context. Am I saying that SIG Observability should absolutely, unequivocally approve this? No. And I think it's fine if you don't, frankly; we'll just keep on working. I mean, this project is going to be adopted regardless. The CNCF actually encouraged us to incubate the project because it would be a good thing or whatever. And that was kind of the goal. But honestly, I'm pretty convinced this is the right overall direction: a broad telemetry project that's decoupled. Those seem like the two important pieces. Tightly coupling it is bad.
Making it narrow will make it hard to adopt. And that's what led us to the current design. I think a lot of the issues I'm hearing here have to do with interop with other observability projects, certainly including Prometheus. I would second what Ted said: we're extremely committed to figuring that out. It's just gonna take work. And then the collector per se, which... I personally like the collector; you might not; that's totally fine. I just wouldn't wanna throw the baby out with the bathwater if you don't like the collector, because it's one part of a much larger project. And I don't think it makes sense to split it off into its own thing, because it has so many dependencies itself on OpenTelemetry; nor do I think it makes sense to reject OpenTelemetry because you don't like that part, because you just don't need to use it. So that's my two cents. I don't know if that's useful, but I feel like we're talking about existential charter stuff, and that is the context of how the project was created. And the irony, in my mind, of spending two years to basically comply with the CNCF guidance to create a broader surface and then being rejected because it's too broad is sort of beyond description. But like, if that's how it is, that's how it is. But I just wanna share that, because that is how I see it. And on the flip side, right? Like, Honeycomb sat out, right? Like, we sat out OpenCensus and OpenTracing, right? Because there was not convergence, right? Because there was not a clear roadmap for which of these two projects was going to, you know, be the future way that everyone is going to want to instrument with, right? And when OpenTelemetry came along, right, like, that was a sign to us that we should be developing and converging our SDKs in that direction, right?
Like, I think that that has a huge amount of power, as opposed to people duplicating work, you know, writing their own proprietary SDKs, in terms of people developing against OpenCensus or OpenTracing, right? And having to maintain these two parallel implementations, right? Like, I think that there is a lot of value in what Ben is describing as kind of the unified SDK for instrumentation. And then, right, like, the problem the collector is solving is a separate one, which is: how do you configure that, right? How do you route the instrumentation once it's generated, right? And that's kind of the separate problem that is being solved: once you start doing this at scale, you want to configure multiple points that you're writing to, you want to be able to do it at runtime, and suddenly that becomes a new set of considerations, right? That is unlocked from having the first set of, you know... let's get the baseline instrumentation standard taken care of, let's have a protocol for communicating that instrumentation and that telemetry, and then let's have a routing layer, right? Like, those are kind of the various layers of the stack that arise from each other, right? Like, they are empowered by each other, but you don't necessarily have to deploy all of them. If, you know, you have a simple app that only needs to write to one telemetry sink, you don't need the OpenTelemetry Collector, right? I just want to jump in and agree with a point Liz made, from Datadog's perspective. We also sat out previous attempts at a lot of this, and see OpenTelemetry as a wonderful path forward in convergence, in telemetry and instrumentation. And just to reiterate again, you know, from AWS, with the distribution that we've rolled out based on, you know, all the work that we are doing in OpenTelemetry and the whole community: there is a tremendous amount of interest and adoption from the customers, you know, large and small.
So it really is an exciting time, and that's what, you know, customers are looking at. Yeah, I would like to point out the project does have clearly defined boundaries. Like, we're not attempting to create a backend or database specifically; we're entirely focused on solving this tricky problem of how you would let open source libraries natively instrument themselves, and how you let operators have control over that telemetry pipeline. And so, I don't know, from my perspective the project is pretty well scoped. And we have had to do a lot of design work upfront to make sure that we're actually meeting these requirements, because on the telemetry side there are deeper requirements than if you were just trying to talk to one system. But that work feels like it's actually been going very well. So I just wanna point that out: it feels like it's been going well to me, having been involved in all this stuff from the beginning. Maybe also some feedback on project reviews in general. I think we're mixing two things here when it comes to incubation. One thing is whether you like the design of a certain project or not. I mean, that's personal taste, and people take different directions on how they build things. I mean, I have reviewed other projects as well where I'm not necessarily 100% on board with the design, where I might have taken different design decisions. Not saying I'm doing this here, but like, in general. One topic, though, that comes up, and a thing that can be addressed by the OpenTelemetry people, is: what performance tests exist? What are the current performance considerations, and the considerations regarding stability, the tests, and so forth? I think that's a valid point to make, and also something to be answered by the project team, which I assume exists or can be made available. So that's the fair point. But also, you can't force a project to interact with other projects necessarily.
You don't have this situation where there's, like, one approach to rule them all. You can make a recommendation, honestly, that you should work with those other projects in the industry, the rest of the wider industry. I think that's also why I look at some of these, especially the first one: if the Go libraries are supposedly not stable, then I think it's up to the OpenTelemetry team to show proper tests, proper performance testing of those libraries; if they exist, fine. But that would have to be phrased differently; it's an entirely different question. So, for everyone who's not following the chat, please follow the chat, because there's some synchronization usually going on. I was trying to find the balance between letting this one run, because once again we have new people and I don't want to shut anyone down, and, on the other hand, trying to focus back a little bit on the due diligence document. So the intent behind trying to get Bartek, or anyone else, to write his concerns out is to detach the concerns from the rest of what we need to be doing as part of the due diligence, so we can finally have a statement and a finished document which we can submit to the TOC. And I... yeah, no, doesn't matter. So I agree with this point, that mandating from outside of the project that any particular data model must be used internally seems overly broad. And I'm saying this as someone who is, like, literally the founder of OpenMetrics, but yet I agree, with my chair hat on, that this seems overly broad. So I would suggest simply removing this, or moving it to a different place, I don't know, to your technical assessment or wherever, but not in the scope of a "should", or maybe, in an IETF sense, a "must". That should be revisited from your point of view. Are there any other points? Steve, or anyone else, do you actively disagree? Of course... I'm on mute, I'm sorry.
Oh, God. My intention is to get the statement out, have it locked in, and then the rest of the discussion can just build on that. And then we make progress. Yeah, I think my general comment would be: many of these make claims, but I don't see anything to back them up, like this one. I don't know where the statement is coming from. Or even this one, like, "the Go SDK right now is not stable": if that was said about Java or .NET, which both are stable, maybe that would be a bigger concern. And some of these, I'm assuming, are not tracked in, like, GitHub issues or what have you, where most of the work is happening. So I'm not necessarily sure where all the side conversations are occurring, but it's really hard to address feedback that is kind of secondhand or thirdhand and that's not being tracked openly in the community. So I guess we're just asking for more transparency. We're trying to do this all in open source. Everyone's welcome to participate. If you find that's not the case, then there are code of conduct violations; let's go address those. But otherwise, ideally these should all be on GitHub, tracked in issues. And if there are beliefs that, like, vendors are dominating over users or what have you, like, let's have that conversation and go address it. But otherwise, I'm not sure what we can tangibly do with some of this feedback, given that there's nothing to back up the claims. But a real quick question on framing here: this process of incubation or not, is it default pass or default allow? Neither. Sorry, default pass or default fail? It's neither. I can tell you from experience, from getting projects both into graduated and incubating stages, that it is long and tedious, but there is no predetermined outcome, unless Amy starts disagreeing violently, because she's on the phone.
I mean, I am on the call, but I was mostly just giggling quietly. As far as, like, there being no set path for when a project is ready for incubation and what the outcome is... is that more your question? It's more like: what conversation are we having here? Is, like, the one team trying to convince the other ones to let them in, or is the CNCF trying to, like, prove the thing isn't ready, right? Like, what is the argument being made? Like, where is the conversation going? Because it seems to me that a lot of people believe it is default pass, and a lot of other people believe it's default fail, and there's some tension here. Like, who's doing the convincing? That's interesting to point out. I think the issue is more that this was seen, or at least told to us, as a technical review of the project. And as Steve is mentioning, it feels like a number of the things here are more like opinions, or things that don't feel actionable from a technical perspective. That's why we're kind of circling back to that. I think that we can definitely have a broader discussion, but it feels weird to have that broader discussion in the context of a technical review. Sorry, no, that statement feels a little bit unfair. So let me just... I don't know if you saw this document; let me link it. This is the actual long form. I would agree if you say that the summary is not phrased neutrally; that is a fair point to make. I would not agree with any statement implying that this is all basically happening in a vacuum. So I don't know, Ben, if you saw that statement or the source document. Maybe it has iterated since I last looked at it. My main goal was to just make sure that if there's a technical review of the project that's going out, it can certainly have opinions about the project, I don't wanna say it shouldn't, but I wanna make sure that it also reflects what the project is, or is trying to do.
When I last looked at it, it had things saying, like, there were lots of forks of the project, and things like that. So my main goal is just to make sure that it's actually reflective of a judgment of what we're actually trying to accomplish. Yeah, but so just... Yeah, I think the key thing is... no, I have a comment. On the fork thing here, sorry to just reply directly to this: the fork thing, which we literally just closed, I don't see in this list of remaining concerns. So I do think that, like, again, I would agree with Steve that this needs some sharpening and sourcing. So, for example, Liz's comment is absolutely correct that there should be a link. I know by happenstance, of course, that Frederic from Kubernetes SIG Instrumentation, and also the Prometheus team, mentioned that's why he stopped using the Go library: because he had too many memory leaks. But I don't know if he ever filed an issue or not. Yeah, we can fix them only if we know about them. So like, what's happening to fix that? We want to fix it. I agree, I agree. So I know this has some backing in reality, but I have never taken a look myself. I'm just trying to get back on track. So for the intents and purposes here, should we just put an action item for Bartek to actually point to a specific thing? Then you can just tick it off, and we can also remove it from this overall list, assuming that the rest of the issues or concerns or whatever are resolved. Richard, can I request that Bartek file an issue on the OpenTelemetry Go repo? Yes, exactly, that was the suggestion. So I just second that. And also, for looking at stability, we should be looking at Java, .NET, or one of the SDKs that has actually declared that it's 1.0. Go hasn't done that. Yeah, yes. Oh, we'd like that feedback. Yep. But yes, oh yeah, we'd still like to get that feedback. Oh, okay. Okay. I think one other thing that might be helpful, Richard, is, going back to Peter's original question: what are we doing here?
Is this a pass-fail? I think maybe an open question is: what is the goal of the due diligence document? So my understanding is there are certain criteria that the CNCF TOC has dictated, or specified (maybe dictated is the wrong word), that are in bold; there are different section headers in this doc, and the goal is to respond to them. My understanding is that SIG Observability's goal is to review those and either say, yes, we agree, or no, here's our feedback in that area. So I believe the goal is to address it section by section and try to point out where there might be concern or non-consensus. Yes, the issue which I was trying to solve, and I thought we had agreement last time that we are trying this approach, is to just detach the bulk of concerns out of the normal document. Of course, as happened just now, we are basically going in circles on a lot of discussions, which are super interesting on a technical level, and in particular as new voices come in, voices I personally value, that just makes it more interesting; but that doesn't solve that we don't make progress in the actual document. So the attempt... Yeah, my proposal on production stability: because the project is that big, it would be fair to ask for more production users than just three, because, like, three could be specific to maybe a very small subset of the project. And if you're concerned about certain areas, we could have specific interviews covering more of those components; that could address one of the concerns here. Compared with the standard, a lot of the CNCF rules assume projects that are obviously smaller in scope historically, so it would be a fair proposal that production use of all of the major components would be a recommendation, which can then be put into the interviews. I think that might be helpful here. I didn't get your point, sorry. Hello. So, one of the criteria for incubation is just three production users.
Like, three production users for a project that is that big, like just three for Java only. If you want it for all of the major language instrumentations, that would be a fair recommendation and a fair statement, to say you want to have production users for all the major components, say, tracing for all the major languages, or all that have reached 1.0, which is what's actually declared stable. That's a fair question to OpenTelemetry, right? Like, the scope is so big that if I mention a sampler, it's already part of OpenTelemetry. If I mention anything about telemetry, it's scoped to OpenTelemetry. And that's kind of the problem with the scope, and it's just hard to work with. And I mean, this is only showing this kind of problem. And again, one recommendation we discussed is that maybe we should put the incubation kind of label on the tracing part, but I heard, like, yeah, you are not agreeing with that approach. Or maybe we should put separate kinds of stages, you know, on the collector, separate on the SDK, separate on the spec. We can definitely work with that and improve this process. It would be much, much easier, right? To think about, even for users. But on the other hand, are we doing that for other projects? Like, we didn't do that for OpenMetrics. That's not the case for Prometheus. Like, there are client libraries in OpenMetrics right now that are not stable. We didn't decouple the project and say OpenMetrics is not ready for incubation. So I guess I'm having a hard time following that argument. Like, the project is going to continue to evolve and it will continue to become more stable in different areas. And for larger projects like a Kubernetes or an OpenTelemetry, there will always be a certain amount of components that have not reached that maturity level yet. But there's a well-defined path to get there. There is a definition for it.
There's a GC, there's a technical steering committee, like there are processes in place to mitigate all of those concerns. Yeah, and as we saw, I mean, even in the update that I compiled together, there is a path forward, very clear planning, as well as, you know, progress being made in all of these areas, right? So... Sure, but this is just small progress. This is essentially just movement, showing that this is moving. Yes, but it's still not the incubated kind of high bar of... And incubation, again, means a step towards graduation. And I talked about that with Chris. We talked both with Richie and kind of Amy as well. And, you know, there was this strong statement that we are not here talking about only incubation and sweeping all of the other problems off for graduation to a later stage. No, there should be a certain pause, a certain break, where we stop and think, hey, is this going in a good direction? And I'm super surprised that you are saying there are so many successful users, because I don't know any. And maybe we are in bubbles, but I just want to point out that from my perspective, from our perspective, from the people who I talked to, it is not that clear that this is going in the best direction. And... I think that this is highlighting something that's really interesting around kind of who is adopting this and who is being brought into the cloud native community. Like, one of our, you know, I understand this is being recorded, but one of our customers is a large financial services company, right? I can't name them by name on a recorded call, but they are coming into the cloud native community and adopting OTel. And they're not necessarily going to be in these kinds of conversations with SIG Observability, right, but they are very successfully adopting OTel as their strategy for getting instrumentation, right?
Like, I worry that there's a disconnect between the community base that we're serving, which is people who are looking to migrate legacy workloads onto cloud native workloads, and the sort of people that this SIG is talking to, right? Like, from a Honeycomb perspective, right? I would easily say at least half our deals now are people who are interested in using OpenTelemetry to compare several vendors, to compare against Jaeger, great, and it's really exciting, right? So I think there's this weird kind of disconnect where we just have communities that are not talking to each other. Maybe, maybe, right? But like, I know Alex, who is, I think, a VP at JP Morgan, on the engineering part, the observability part. And he was actually adopting tracing with Jaeger and that kind of technology back then, about a year ago, at American Express I think. And maybe you've seen his comments on the assessment doc, and he mentioned that at this point there is no good path to use OpenTelemetry, because of those reasons. So it's not like, and yeah, I will be curious what makes those people that you refer to very happy about this path, and what makes, you know, the people I heard from not very happy. And, okay, I just want to make one more point. Maybe it's irrelevant. So, but I want to kind of leverage that, okay, I agree that the production stability of the OpenTelemetry tracing part in Go can be kind of controversial. But what I mean behind that is that, you know, we had an OpenTracing client already for like, I don't know, three or four years, and we suddenly moved to something worse. And this is kind of showing the direction, right? That it is not going in the right direction. That's my point here. It's this background and context that also matters.
And I also think that, because one of the things that incubation means is that you have users in production running this, if you go to incubation without having the logs part, or anything around that, around like the metrics still being in beta, I'm kind of saying that the logs part alone would not meet the bar for incubation. And like, which parts of the project need to meet the bar for incubation? That's more the question, I think. So that was going to be addressed, actually that was addressed last week, right? It is very common for projects to have things that are experimental, right? Like, how else do you build components into a new project that isn't ready yet, right? Like, you don't just deploy to production magically. But actually, just because we have Amy here as a resource: Amy, is there anything you want to chime in with to help us make sure we understand the scope of this review? The next steps here, like, I like what Alolita is saying about being able to have the production users that the TOC can interview, because really the next steps here are making sure that the project proposal comes in officially into CNCF. Right now we're kind of a little out of order, a tiny bit out of order, but being able to make sure that the official paperwork is in, so that we can get someone from the TOC to be able to come and look at this, that seems like the valid next step here. I think the recommendation at this point is at least kind of orthogonal to being able to help answer the questions that are currently on deck for incubation and graduation. And Liz, I see a bunch of things in chat. I'm happy to pass back to you as far as other pieces to go on. And also just to synchronize, of course, I basically gave up on being able to keep this on schedule. We are already three minutes past officially. We'll probably run over as per usual to the full hour, but afterwards I have a hard stop.
Just to onboard the people who haven't been here before: the initial suggestion to do a partial due diligence of OpenTelemetry tracing came from Steve, I think in October, but then that was taken back, just for completeness, to the matrix overview of different adoptions. That's actually one of the action points which we had in the first call, if I remember correctly. And then basically it was too complicated and intricate to explode the n-dimensional matrix along all orthogonal dimensions. As such, we dropped this. I still think it would be super useful to have this. As to the experimental parts argument that keeps coming up: for Kubernetes, there's a lot more stuff which is in production than which is experimental, is the inherent thing. And I'm just reiterating the discussion which we had, I think at least two times in the context of this call for the due diligence: unless the signals are stable, it's hard to declare any components, the collector or anything else, really stable. Of course, if the signal is changing, you automatically must have some sort of more or less breaking upgrade path, or you're locking yourself in a lot. So you need to have at least a solid beta for a signal before the rest of the thing can be considered as super stable. This is just a summary of what we walked through in the last two or three meetings which we had in total, from the end of last year, just to get everyone on board. Steve, should I take this as a request to basically not make that statement from Bartek part of the document, but basically have those discussions in the detailed bits and pieces of the due diligence document? Is this correct, this interpretation? So I think there are two options, unless others have other suggestions. I think one is direct feedback in each of the sections, because I believe that's what we're supposed to be doing here, addressing the individual sections.
The second option is what Bartek did previously, which is create a separate Google doc and attach that to this as well, like with a link at the top of the doc that says additional information over here, if it can't be folded into each of the sections. What do people think? I think both are fine, as long as Bartek is okay with the form of representation of this, because it also shouldn't just be hidden. I think that would also be unfair, but I'm fine with either. I deliberately don't have an opinion. I would deliberately like OpenTelemetry, as the applying project, to say how they prefer it, but we can also try and do a call for consensus while in this call, or, I don't mind. So Steve, as you submitted it, I take you as the person who leads this, of course. Yeah, I mean, I think personally, in my mind, what is better is if it's possible to add direct feedback on how the bolded things for due diligence are not being met. That would be ideal in my mind. If that's not possible, or there's an overarching bit of summary or feedback or what have you, having a separate doc that we can then reference, I think, is also acceptable. So that would be my recommendation. Okay, I'm totally fine trying to do this. In this case, I would reject Bartek's, or no, I would not reject it. So people can comment on it and ask for clarification, have an argument about the underlying things, so we get some substance to it. Of course, the underlying concerns won't magically go away. So maybe we have this as a discussion piece, or maybe Bartek can move it into his other doc. And then next time we try, and, I mean, we still have seven minutes, but we try to go through the rest, and we can just start doing this now, then, if there's consensus on this. Sounds good to me. Do we want to use the seven minutes and try to make progress in some of the areas? Yeah, I think that would be a good thing. Okay, cool. Bartek, are you fine with this? I mean, yeah, yeah, we can move on, but I would kind of want to keep the statement.
So it's kind of verbose and visible. You know, it's just my recommendation, just what I had to do, and I spent lots of time on this. So, yeah, we can discuss. I think I would like to propose how we do this in Prometheus team meetings. And obviously I'm explaining how we do this: we try to find consensus. If we don't find consensus, we find smaller pieces of possible consensus, find consensus on those smaller pieces, and then build a complete consensus, which you can currently see in point three of this document, me trying to achieve this. And if we find that we cannot find consensus on something, we note the dissent on the consensus. Or if there are too many voices which speak against it and there is no chance of consensus, then it's just kept as: there couldn't be any consensus. So this is how it works within the Prometheus team, and it has served us well in the past. It seems reasonable, especially for what we did here for number three; like, I like the breakdown and trying to see if we can reach consensus. So, fine with that. Okay. So let's try and, I don't think we'll find consensus in this short time. Should we just walk through one or two comments so we get through those, maybe? Sure. So I'm assuming this is related to the doc, so I'm not sure we can address this one. Bartek, are you okay with moving on to the next comment? No, I was already down in section four, but we can, I mean, it's, sorry, I'm just kind of, I would have, yeah, it doesn't matter. So maybe let's walk through the actual concerns, get Bartek to mark what needs work and what doesn't, so we can have this on firmer legs next time. Do I understand correctly that we should not consider Go as part of the complete thing, and we should only be looking at .NET and Java within the context of open implementations? If you're talking about stability specifically, only Java and .NET have announced stability for tracing at this time. Python and Erlang, I think, are in RC, both of them are in RC, someone keep me honest.
I believe that's true; Go is shortly behind. So when Go declares 1.0, which I think will be in a month or so, then we could look at it for stability, but I wouldn't do it until they claim that it's ready to be reviewed for that. Yep. Okay, okay, then in this case, I think it's fair to just, like, we still should get, or Bartek and I can take this, we can ask Frederic to file an issue. I hear he has something going on with Conprof, so maybe that can help quite nicely. Yeah, and then for the intents and purposes of this, Go will not be considered. I think that's a fair statement to make. For this, I don't see any, Steve, can you go up again a little bit? So we keep jumping up and down in divergent directions. Okay, no concerns here, actually no concerns here, no concerns here, no concerns here. So we get to number four. Number four in this, so Morgan, your statement is that the collector should not be considered part of the main OpenTelemetry usage, and that the specification, as in the signals, and the SDK are the actual focus area, correct? No, I was just responding to a comment that appears to be saying that the collector is demotivating and distracting work from the spec, which is not true. Yeah, that is not true. Okay, so your reply, okay, then your reply is: you have enough resources to do everything. Yes, yeah. Yeah, which is what is in progress right now. I didn't realize we were here, Richie. Sorry, I was in the wrong spot. Yeah, no worries, I mentally jumped back and forth. Of course, I just wanted to at least get those out of the way, because we will not be making major progress in the document anyway. So at least we can try and get some meat onto the concerns, so we have a better starting point next time to do in pairs. Okay, Liz's comment is fair, but will not be resolved right now. Putting on my non-chair head for a second, I also would prefer a more different distribution of complexity, but chair head on again.
So yeah, the comment here is: there's only one data model that we use for each signal type in the collector. There is nothing inside the collector that is vendor specific, right? You have exporters, which are cleanly done through a separate API, that are in separate repos, that can come in to export data to an endpoint, but those exporters adapt from the core OpenTelemetry data model to whatever data model the exporter author wanted to use. Ah, sorry for opening the Pandora's box with my side comment. No, I'm referring to this next one, Richie. Oh, okay. Yeah. Okay, Ben. Okay. Sorry for the confusion. I'm talking about the inherent system complexity. No, no, no, no, no, this was the API state. Yeah, it was the API state. Networking person, so I always care about the state. Okay. Yeah, I mean, oh, I flipped around. What I meant here, right: like, I know how the collector works nowadays, right? And that inside the collector there's a single API that all receivers and exporters have to implement, right? But the problem is, I want to use a certain vendor, for example a big one, Splunk or something like that. And, you know, a user now has to use some kind of vendor specific API, because those probably don't yet support the OTLP that core provides, right? Or maybe, I don't know, that's my impression, right? That there is this specific, so the specific vendor plugin has to be installed, so you need to use the contrib distribution. And then it's complex stuff that the user has to do, instead of the vendor adapting to OTLP or whatever standard. So yeah, so, so first off, there are many vendors that do just natively use OTLP, and my sense is a lot of them are migrating in that direction. I know Amazon uses it. That's amazing. Yeah. Yeah. And my hope is actually that more and more of them do it. I think when I was leaving Google, there was talk about using it internally for their cloud ops products and others.
So I expect that to become more common. The exporters are there mostly to adapt. So if that is happening, do we need the distributions, you know, all of this around the collector? It's a fair question. No, the distributions question is a fair question. I suspect that we're going to need distributions long term, just because vendors will want their own fork, even if it's just a snapshot, like even if there are zero changes to it, just so that they can make a quick change and meet their customers' SLAs for support. For reference, like, I know we don't know each other super well: I was at Google until January, now I'm at Splunk. At Google, the plan there is not to have their own distribution; they're using the pure OSS one, no special distribution. That's really good. And I was part of that. Splunk has their own because we're pulling in some older stuff until OpenTelemetry replaces it. Eventually ours will be pure OpenTelemetry. We'll probably still have it just because we have customers with, like, very expensive SLAs. If we have to fix something to meet that SLA, we're going to do it there and then backport it. But it's also a pure OSS distribution; you know, what is offered is support. And there's no need to have vendor support here to do this. So OTLP is supported end to end: all the client libraries support sending it directly to the collector today. And open source standards like Jaeger, Zipkin, and hopefully eventually Prometheus entirely will be supported as well. That's the goal. So you don't need a vendor receiver in the collector. The primary reason for vendor things is exporters, for a backend that hasn't adopted OTLP yet. There are lots of vendors that have been around for a decade; they can't move that fast. So if you don't provide a path forward, they can't consume this data, which is not a good experience for anyone anyway. Yep. I think the design principle of OpenTelemetry is extensibility.
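[Editor's note] The adapt-at-the-exporter pattern discussed above can be sketched roughly as follows. This is an illustrative toy, not the real collector API: the actual interfaces live in the collector's component and pdata packages, and every type and method name here (`Span`, `Exporter`, `ExportSpans`, `vendorExporter`) is made up for the sketch.

```go
package main

import (
	"fmt"
	"strings"
)

// Span is a stand-in for the collector's single core trace data model.
// In the real collector this role is played by the pdata types; these
// fields are purely illustrative.
type Span struct {
	TraceID string
	Name    string
	Attrs   map[string]string
}

// Exporter sketches the idea of one common interface that every
// exporter implements: it receives data in the core model and is
// responsible for adapting it to its own backend's format.
type Exporter interface {
	ExportSpans(spans []Span) error
}

// vendorExporter adapts core spans to a hypothetical vendor's
// pipe-delimited text format before "sending" them (here: buffering).
type vendorExporter struct {
	sent []string
}

func (v *vendorExporter) ExportSpans(spans []Span) error {
	for _, s := range spans {
		// Adaptation step: core model -> vendor wire format.
		var attrs []string
		for k, val := range s.Attrs {
			attrs = append(attrs, k+"="+val)
		}
		v.sent = append(v.sent, fmt.Sprintf("%s|%s|%s",
			s.TraceID, s.Name, strings.Join(attrs, ",")))
	}
	return nil
}

func main() {
	exp := &vendorExporter{}
	_ = exp.ExportSpans([]Span{{
		TraceID: "abc123",
		Name:    "GET /users",
		Attrs:   map[string]string{"http.status_code": "200"},
	}})
	fmt.Println(exp.sent[0]) // abc123|GET /users|http.status_code=200
}
```

The point of the sketch is the direction of the dependency: the vendor code depends on the core model, never the other way around, which is why nothing inside the collector itself needs to be vendor specific.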
And having vendor specific exporters is completely viable, as well as having distributions, because from an AWS standpoint we want to provide performance and security guarantees to our customers. And we can only provide that while pointing to a specific set of bits that we tested. And just to point out, Kubernetes has distributions, and you know, CNCF has been happy with that for a while now. All right. I think we're out of time. So maybe out of respect for everyone, we should probably call it here and circle back next week. Yeah, we're actually way over. Usually we should stop at 50. But interesting discussion, I really liked it. Yeah. Let's try and actually finish the document next time. See you all in a week. See ya. Thank you, Richie, see ya. Thank you, Richie. Thanks guys, bye. Bye all.