Great. So I'm on my phone without my computer in front of me, sadly, because it's died, but I can try to do my best to MC anyway. Just remind me if I'm going off track, because I can't be looking at the agenda and talking at the same time. But good to see everybody again. Please write down in the agenda document if you're in attendance, so we have an attendee list. And for the agenda today, I have two items that I think are really important. One is a proposal to add the trace parent header values as accessors on the span context object in OpenTracing. And the other is a proposal for a more formal RFC process for versioning the OpenTracing APIs and specification. So that's what I've got on the agenda for today. I'm not sure if other people have other things they'd like to talk about. But kicking it off with trace context: we have Sergey here from the Trace Context W3C working group. And I was wondering, Sergey, if you're on the call, if you wouldn't mind starting this discussion with just a little bit of background around the project and what its current state is.

So this W3C specification was made for a modern world where you have a bunch of services, and some of the services are owned and hosted by you, and some of the services are hosted by a cloud, or maybe maintained by some other vendor. Or maybe it's a service mesh that already implemented some tracing in some form. So we want to enable all these scenarios, where you have different components and different ownership of those components, and still correlate the telemetry across them. In order to do that you need a standard, and we thought about what the right place for the standard would be, and W3C seemed to be the right place for it. With regards to who supported it: basically all the big vendors, including all three clouds, supported this idea, and we built this W3C standard. The main idea here is not to make a standard so generic that everybody can use it for anything.
Not so generic that everybody can squeeze whatever they want into it, but really specific about what scenarios we want to achieve. And it feels like everybody will need to put in effort to migrate to this standard. Some people need to shrink their span IDs; some people need to compromise on the size of this header. So there are all sorts of compromises that need to be made. But the idea is, again, that if you want to comply with this header, you need to change something. It's a little bit more prescriptive; it's not meant to be a catch-all kind of header where you can put whatever you want. So yeah, that's the context. Right now we are a community group in W3C, which means the result of this community group is just a basic recommendation. We're working on making it a working group, and the result of a working group is typically a specification that W3C endorses, so it will have more power. Yeah, I think this is enough for an introduction. Let me know if you have any questions.

So this would kind of be the only place you can get a trace ID out of OpenTracing. Yes, it would be. That's kind of revolutionary to call out there, because that was sort of assumed to be a vendor specific detail until now. Interestingly enough, if I'm an OpenTracing vendor, and let's say my trace IDs are, you know, eight-byte integers, and I don't have 128 bit values, then I'm going to be at the mercy of whoever is providing me this trace parent header. And if I'm trying to provide a feature like, hey, use this ID for logging, and then you can go and search in your ELK or Splunk or whatever for this trace ID, I'm now going to be giving people this accessor for a trace ID that I didn't create, you know, if I'm accepting a trace parent from an upstream service that's not using my OpenTracing implementation. Does that make sense? That's... not quite.
So you're saying the scenario is when you get a trace parent header handed to you over the wire. It could come from someone else's choice about how trace IDs are formed, in this case 128 bit hex strings, right, and basictracer for example uses 64 bit integers. So basictracer's IDs aren't 128 bit strings. So if I were trying to say, hey, the trace ID for my company's tracing system is this value, but actually the accessor that they get access to is this value chosen by some other vendor, because it came in the trace parent, then I'm not actually able to give my trace ID. It's sort of someone else's trace ID.

That's why I made a comment in the doc that I think referring to trace parent is misleading. What we're really doing is adding accessors to span context, and that means you're talking about your own trace ID. The only relation to the W3C trace parent is the fact that the industry seems to be agreeing that, yes, we want the trace ID and span ID concepts to be present, and therefore there's a link there. But it's not that we are taking that header and making it available in a span context.

Yeah, I would agree with that. The intention was simply: because we appear to have industry alignment on exposing span ID, trace ID, and a sampling bit over the wire, and those are things people have been asking to expose in OpenTracing, it seems, given the convergence around this trace parent header, there should be general buy-in on exposing accessors for those concepts on the span context. But when I designed the values that they produce, I chose a more generic representation, indicating these should be string values for the IDs, or something similar, possibly multiple different formats you could access them in, but not overfitting to the specific 16 byte hex ID that the trace parent header defined. It could be a looser definition.
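To make the formats under discussion concrete: a W3C traceparent header carries a version byte, a 16 byte (32 hex character) trace ID, an 8 byte parent/span ID, and a flags byte, all lowercase hex, joined by dashes. The following is just an illustrative parser for that wire format, not part of any OpenTracing proposal:

```python
import re

# Sketch of the W3C traceparent layout: version "-" trace-id "-" parent-id "-" flags,
# e.g. "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01".
TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header: str):
    """Return (trace_id, parent_id, sampled) or None if the header is malformed."""
    m = TRACEPARENT_RE.match(header.strip().lower())
    if m is None:
        return None
    # Bit 0 of the flags byte is the sampled flag.
    sampled = int(m.group("flags"), 16) & 0x01 == 0x01
    return m.group("trace_id"), m.group("parent_id"), sampled
```

This is exactly why a 64 bit tracer's IDs don't round-trip through this header unchanged: the trace-id field is fixed at 32 hex characters.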
Okay, so like Yuri was saying, the aim of this isn't to provide access to exactly what's in the incoming trace parent header. It's related to the fact that those concepts exist in the W3C standard, but it's not guaranteed that's what you'll get, for example. Yes, that's correct. These are for correlating between your tracer and some other system that isn't a tracer. So it's not even necessarily useful, I would argue, to expose a very specific value type, because the thing you're trying to correlate with isn't going to care: your structured logging system doesn't have a value type for a 16 byte trace ID, you know, it has a value type for bytes or string or something like that. So it's doubly not useful to concentrate too much on that format. And also, I would not expect people to be using these accessors to do trace header injection or anything like that; I would expect them to use the tracer inject calls for anything that's tracing related. So it's just for other systems.

So let's say we get an incoming trace from Amazon X-Ray, and it's got, I assume, some kind of X-Ray stuff in the trace parent header, but I thought we were also going to be using trace state to store some of that vendor specific stuff. So I would imagine... go ahead. I was going to say, I would imagine your tracer would be the thing extracting that header, and so it has first pass at deciding what it wants to do about all of that. By the time you have a span context object and can ask it for its trace ID and span ID, your tracing system has already sorted out what it wants to present there. And I don't want to rehash the entire Seattle workshop or whatever, but I mean, the long term goal for the W3C project would hopefully involve genuine interop. Cool. Thank you. I looked at the specification in the GitHub issue.
Do you plan to expose trace state at all in the spans? It feels important for some customers to be able to get vendor specific telemetry, vendor specific information, from headers. Which field do you mean? Do you mind explaining a little? So I mean, trace state will be coming as a separate header, and if you didn't parse it in extract and didn't associate it with the span, then the customer can't access this data from the span object itself. So I wonder, are there any plans to put this trace state information somewhere?

I think, again, the assumption is that both trace parent and trace state really represent the parent span in the caller. And then, once you're inside the application, your tracer will already have read that parent information and created its own span internally, and what OpenTracing is really going to expose is the span ID of the current span. You don't even really have a span object for the inbound span headers. Well, they're in the span context, so theoretically you could access them, but I would say... I recently opened an issue in that regard: when we do an extract from the wire and we return a span context, sometimes that span context may not even have a trace ID, because your system decided, oh, I'm not going to trust the inbound thing, I'm just going to put it aside as a correlation, but for myself there's nothing really, until you create a real span; then it will get a real trace ID and span ID. And so I think the main idea of the accessors is really that you want to use the current span's IDs for things like logging or observations, and then it doesn't really matter what came on the wire. The only relation to the wire format is the fact that we all in the industry agree that span ID and trace ID are the things that make sense to expose. Okay.
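The distinction being drawn here, between adopting an inbound ID and setting it aside as a mere correlation hint, can be sketched as follows. All names here are made up for illustration; this is not the OpenTracing extract API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: extract() may return a span context that carries no
# trace ID of its own, keeping an untrusted inbound ID aside purely as a
# correlation hint until a real span is created.
@dataclass
class ExtractedContext:
    trace_id: Optional[str]             # our own trace ID; None until a real span exists
    inbound_correlation: Optional[str]  # the raw inbound ID, kept for reference only

def extract(inbound_trace_id: str, trust_inbound: bool) -> ExtractedContext:
    if trust_inbound:
        # Adopt the caller's trace ID and continue their trace.
        return ExtractedContext(trace_id=inbound_trace_id, inbound_correlation=None)
    # Don't adopt the inbound ID; a real trace ID is assigned later when a span starts.
    return ExtractedContext(trace_id=None, inbound_correlation=inbound_trace_id)
```

Either branch is a legitimate tracer policy; the point above is that the span context accessors describe the tracer's own decision, not the raw wire data.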
Yeah, sorry, pardon my ignorance, I may not be that familiar with all the details, but if we send an application ID as part of trace state, will I be able to get this application ID and group by or filter by it in the SDK, or won't I be able to access this app ID? I don't think you will, because it's a vendor specific thing, and OpenTracing is not vendor specific. Can I put in some extension? So if I want to access it, I should be able to implement my own extract method and then put it in the context, and then I will be able to.

So I mean, what I would say is: in general, the goal of OpenTracing is to try to stay away from exposing wire format to the end user. I think, really, if you're talking about some tracer specific internal state, honestly your best bet would be to do an attempted typecast, you know, catch a ClassCastException or whatever, and then once you have your tracer specific span context, just access the fields that you want. That would be the moral equivalent of what you're asking for in the OpenTracing world. I don't think OpenTracing has that as a goal, with this change anyway. I mean, this is my opinion, and I'm curious whether you all disagree, but I don't think it would be a goal to create a generic iterator for trace state and things like that that are intended to be tracer specific. I think the idea was that OpenTracing has tried to remain incredibly neutral about how tracers are designed, but it seems like there's enough of a consensus around trace ID, span ID, and sampled that we can expose those without anything vendor specific. The thing you're asking for is literally vendor specific. So I think some form of typecasting is probably more in line with the philosophy than trying to expose all that information through the OpenTracing API per se. But that's my opinion; I'm happy to hear others.
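The typecast approach described here, the Python moral equivalent of catching a ClassCastException in Java, looks roughly like this. The class names are invented for the sketch; only the pattern is the point:

```python
# Hypothetical sketch: downcast the generic span context to the vendor's
# concrete type to reach vendor-specific state (e.g. values carried in
# tracestate), without widening the vendor-neutral API.
class SpanContext:
    """Generic, vendor-neutral span context."""

class VendorSpanContext(SpanContext):
    """A vendor's concrete span context, carrying parsed tracestate entries."""
    def __init__(self, trace_state: dict):
        self.trace_state = trace_state

def get_app_id(ctx: SpanContext):
    # isinstance is Python's equivalent of an attempted typecast.
    if isinstance(ctx, VendorSpanContext):
        return ctx.trace_state.get("app_id")
    return None  # not our tracer; the field simply isn't available
```

Code written this way degrades gracefully: with a different tracer the vendor-specific field is just absent, and the neutral API stays untouched.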
I would say I understand your desire, Sergey, around using those values as indices into other systems, for the reasons kind of mentioned already. I didn't include anything like that in this proposal, instead just trying to focus on what we know is really, really necessary, which is just the main identifiers: the span ID for correlating things at the span level, and the trace ID for correlating things at the trace level. There's a lot of value to be had from those two things, and there's a lot of consensus; it seems that people would be comfortable exposing some kind of value there. And then likewise, the third piece in the span parent, the sampling bit. There have also been a lot of requests for that, and it seems broadly supportable, and it would again be useful as a basic on/off switch to determine whether other secondary systems should be running or creating overhead. Though actually, most of my questions are around that sampling bit. So that's why the focus of this proposal is just on those three fields. I have some ideas for how people could expose some of the other fields through baggage, if we didn't want to add another interface, but I wanted to leave that aside because I was concerned it would be more contentious if we started getting into that other stuff. So those are my two cents.

Thank you. Is the implication that a tracer implementer should be using an internal ID length and format that matches the 16 byte stuff that trace parent specifies? I don't think so. No, for the purposes of this accessor it's intentionally left as a variable length format, to allow you to put whatever you want in there. The assumption is simply that whatever is coming out of there, if it's a span ID, it's unique within the trace, and if it's a trace ID, it's globally unique. And I should also point out that if you don't support this, returning an empty value is also acceptable.
And that means it's not necessarily a supported feature, so there's also backwards compatibility. I see. Unique across all the processes in the trace, in the entire distributed trace? When someone's trying to use these for correlation, I think, yeah, that would be the expectation. And across vendors. That would be good to mention. Yeah, just that different tracer implementers strive to make it match, even if their internal tracing IDs don't follow this convention. Yeah, because our trace IDs are five integers, and TraceView, or AppOptics now, is 160 bits, which is annoying.

I mentioned in the proposal that it's deliberately specified to be backwards compatible with existing ID systems, and also forward compatible, because this header is versioned. The industry is saying they're probably going to standardize on this, but it could also version again, right? Even the trace parent header has a version field in it. Which is the other reason why I don't want to overfit to what's currently being proposed as an ID there. I think we should have a looser definition: simply that we support these fields, but we don't say too much about the size or shape of the value within them, just the properties it has around uniqueness.

So, say again: you'll attempt to put it into trace ID, but if it doesn't fit, you put it in trace state, right? Is that the idea? Well, again, this spec is not about the header, right? It doesn't make any statements about what the format is. Okay, just how we... makes sense. Let me find a better way to say what I was trying to say earlier. Maybe one way of describing it is that, you know, if someone is using the W3C header, these three aspects of that header should be available in OpenTracing. But as for the guarantee that's being made: it doesn't mean that if you're doing something other than W3C, you wouldn't also be able to provide your own version of the trace ID.
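The accessor contract being described, opaque string IDs of variable length, with an empty value meaning "not supported", can be sketched like this. The method names `to_trace_id`/`to_span_id` are illustrative here, not settled API:

```python
# Hypothetical sketch of the proposed accessor contract: IDs are opaque,
# variable-length strings, and a tracer that doesn't support the feature may
# return an empty value rather than being forced to invent identifiers.
class MinimalSpanContext:
    def __init__(self, trace_id: str = "", span_id: str = ""):
        self._trace_id = trace_id
        self._span_id = span_id

    def to_trace_id(self) -> str:
        # Globally unique within this tracing system, or "" if unsupported.
        return self._trace_id

    def to_span_id(self) -> str:
        # Unique within the trace, or "" if unsupported.
        return self._span_id
```

Callers correlating with logs or metrics would then do a simple truthiness check before using the value, which is the backwards-compatibility story discussed below.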
So that is to say, it's not attempting to conform precisely with the W3C header; it's expected to support the key concepts that people have asked for for years. Yeah, makes sense. And I'll redraft this proposal to make this a lot clearer; I can see where the confusion is coming from now.

I just have one other question, sort of: the is-sampled bit. I was raising this question, like, do we want to clarify what the sampled bit means? Yes. Because I have not done a whole lot of instrumentation with sampling based systems, I personally am not very familiar with the use cases for having that bit exposed. I know it's been asked for repeatedly, but it's not clear to me, as a Boolean, what kind of logical switching secondary systems would be expected to do based on that value. I really wonder about the semantic meaning for it: why, what am I going to do or not do when that thing is on or off? It's not clear to me.

Yeah, I guess one use case I remember: people were asking, oh, I want to add a lot more profiling, but I don't want to bother if this trace is not being sampled. So that could be an example, but again, that doesn't match with, for example, the W3C definition, because there it says if it's zero, it doesn't mean that it's not sampled; it just means we're not telling you whether it's sampled or not, right? If it's one, then yes, it is sampled for sure, but zero means we haven't made a decision yet. That's the main idea. And we do a lot of sampling in our systems, and we will treat this flag like: if somebody set it to one, then yeah, we will try our best to sample it, to collect it, but if it's zero, we will still be on the edge of making this decision. So mostly, even if it's one, we will try our best, but we are not guaranteeing that it will be collected.
It's mostly a flag, just a way to communicate between layers. The first layer may say something like, yeah, I really want this, because I think it's important. Yeah. And one nuance that I think is a bit different between the wire protocol and an in-process accessor for this bit: part of why I think the protocol spec is vague about whether you should respect it is that it can be spoofed. You can't necessarily trust what's come over the wire, so you have to treat this sampling value as an indication, right? It might be coming from another system; it might be fake. So it's kept partially vague, I know, for that reason. With an in-process accessor, by the time you've gotten a span context object, the tracing system has already created that object for you and is running, so it knows whether it's sampling or not. So you don't have the trust issue in this case around whether you should trust that value, right? It's a little more definitive. But there are still some semantic questions about what it means to be sampling, or recording; it seems like that's a vague concept in some systems. For example, we may be recording now but not storing it permanently; that would be the way LightStep would do this. And so our question would be whether we say sampling is always on; that's how we would use this header. Yeah.

There's also another option: you might be recording, but you're really waiting for the downstream to send you back a signal saying, oh yeah, that was interesting, so please keep what you recorded. So normally, even though you're recording, you're not really... Yeah. Yuri, back to what you were saying earlier about how people would use this, in terms of the OpenTracing accessors around span context, not sampling in general:
Perhaps the most important use case is indeed just an optimization: you can avoid doing anything if something isn't being sampled. This doesn't line up particularly well with the W3C thing, but I almost wonder if it shouldn't be a sampled bit; it could be a "this is not sampled" bit, more of a no-op bit. I mean, that's actually actionable from a code standpoint in a way that I think could be pretty meaningful in a performance context. Unfortunately it's totally at odds with the way people tend to think about this field in the W3C world, but it just occurs to me that that's really what you want to know: is it not sampled. And then whether it's sampled now or sampled later, in the way that was being asked about, becomes irrelevant.

And my proposal was to actually separate the span ID accessors from the sampling, because it seems like it isn't very clearly understood what people really want to use sampling for. So I would rather go back to the last question and say, let's define the use cases first, where it's very clear, and at least for logging they're very clear. Yeah, I understand. I'm also fine with pulling that out of the proposal, if the value for it is not clear and immediate. I just know it's a thing that does come up repeatedly and get requested. Sure, and we can still work on it; I would just prefer to move on the span ID accessors, because that seems like a very easy thing to just roll out that gives immediate benefit to people, whereas with sampling, not everyone asked for it, and we can work a bit longer on that. That makes sense to me. Is there anyone on the call who is concerned about exposing span ID and trace ID, as far as their system goes, and thinks this would be onerous for them?
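The optimization use case just described, skipping expensive secondary work when a trace isn't being kept, reduces to a simple gate. The `is_sampled()` accessor in this sketch is assumed, not a settled part of the OpenTracing API:

```python
# Hypothetical sketch: gate expensive secondary work (extra profiling, verbose
# capture) on the sampled bit so unsampled requests pay no overhead.
class SketchSpanContext:
    def __init__(self, sampled: bool):
        self._sampled = sampled

    def is_sampled(self) -> bool:
        return self._sampled

def maybe_profile(ctx: SketchSpanContext, expensive_profiler):
    # Only pay the profiling cost for traces that are actually being kept.
    if ctx.is_sampled():
        return expensive_profiler()
    return None
```

Note that under the W3C semantics discussed above, a false value means "no decision yet" rather than "definitely dropped", which is exactly why the meaning of this gate is contentious.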
I won't take the bait on "onerous", but I will say that we've given plenty of talks over the last few years about OpenTracing being an instrumentation API that can be used for things like metrics and so on, and those use cases really don't need this stuff. Which is fine with me; I think it just means that we have to make sure the documentation is clear about whether these trace and span IDs are sort of, quote unquote, required, or if it's okay to return an empty string or something like that if you just don't have one. I mean, if you're just doing a tracing-to-metrics exporter via some kind of tracer implementation, I wouldn't want people to have to stress out about these requirements.

Yeah, and for me, as someone who doesn't use 128 bit trace IDs, it's a little bit onerous, because in the long term, for cross vendor correlation, it would be better if I did, since that's what's in the Trace Context spec. But I don't think that's a different challenge than New Relic or Datadog also kind of have to face, where none of those three companies do 128 bit trace IDs today. For anyone to support Trace Context and have this correlation, you'll need to be able to get a unique identifier that's cross vendor compatible; Amazon, for example, also isn't doing 128 bit IDs this way. So, you know, I think we could start giving you these incompatible trace IDs and then add support for Trace Context later, for example, like you're saying, with the strings not having a fixed length. Well, is it incompatible? I mean, if you're using one tracer in the systems, then... Yeah, then it's fine. Right, it's only this cross system thing. Exactly. Yeah, you're exactly right. It's only important if you want to mix two different vendors.
And even if you mixed vendors, really, I would expect people to be able to configure their tracing libraries saying, I don't trust incoming IDs, or, I'm fine with just reusing them. Exactly. Like, if you look at the notes from the workshop: even the local generic tracers, which technically don't need the custom vendor section, right, they can just use the trace parent header, but even then it's very easy to do a denial of service on them by sending the same IDs over and over. Yeah. Some people will say, at the edge, I'm never going to trust this thing.

Yeah, and that's the part... I mean, certainly if you had trusted, authenticated APIs across vendors, where you could trust the sampling decision and the trace ID, and you just happened to be using X-Ray in one thing and some other company in another, and there was this magic way you could be sure to trust their IDs, that would be cool, and then you'd have this magic cross vendor reference value. But in practice, people are going to have to be very careful about writing rules to filter, or use links, you know, to do the references. It's not going to be a globally unique value across all the components in the distributed trace until you have that functionality at the edge of your service, like in some nginx configuration or service mesh, right? That part seems tricky to me. Unless you're using the same single vendor, that's pretty much the only way it's going to work for a long time. I guess that could be addressed by being more clear in the documentation. Yeah, I can see making it clear that these expectations are only expectations within the tracing system you're talking to. Whatever tracing system is running in your process, when you're asking it for its span ID and trace ID, your expectations really only apply to that system. You're not asking it to know about other systems you might be using, or putting into whatever you're correlating with. Yeah.
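The "never trust this at the edge" policy mentioned here usually amounts to stripping the tracing headers from untrusted peers before they reach the application, so a fresh trace is started internally. A minimal sketch, assuming a simple allowlist of trusted peers (names are illustrative):

```python
# Hypothetical sketch of an edge trust policy: drop inbound W3C tracing
# headers from untrusted peers so the tracer starts a fresh trace instead of
# adopting a spoofable inbound ID.
TRACING_HEADERS = ("traceparent", "tracestate")

def filter_headers(headers: dict, peer: str, trusted_peers: set) -> dict:
    if peer in trusted_peers:
        return dict(headers)  # pass the trace context through unchanged
    return {k: v for k, v in headers.items()
            if k.lower() not in TRACING_HEADERS}
```

In practice this kind of rule would live in the edge proxy or service mesh configuration rather than application code, as the discussion suggests.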
Yeah, and I think that would be helpful in tempering expectations that this is going to be a magic value you can go looking for everywhere. I noticed OpenCensus also has accessors like this on the span context, and trace options as well. Is that part of the motivation? It's not particularly the motivation. The motivation is the ability to add span and trace observers in a sort of generic fashion. If you try to do this, the lack of any kind of identifier really makes it awkward: you end up with a lot of machinery trying to track these things around objects, basically using pointer addresses as your ID, in a weird way. And it's awkward. But there's a huge amount of value that will come from being able to correlate this information with other systems. What people are doing right now is simply vendor specific, where they're typecasting to get at whatever identifiers. So if we can save people from doing that by giving them the accessors, then there's a bunch of code that could become shared. So that's the real hope. I shouldn't say hope; a lot of clear value has been identified on that front.

We've had users asking for trace ID to use in their logs from, like, day one. Yeah, definitely trace ID. Yeah, I get a lot of requests for that too. I don't have a span ID; I'm going to have to make one. It'll be the start event ID. I'll give you the start; that's the safest one to give you. But also, I think one thing to look at in the spec is that it does say you can produce a trace ID, and when something asks you for a span ID you can just give back an empty value. And I don't think we can realistically add anything to the OpenTracing spec that wouldn't let you do something like that; there needs to be some kind of backwards compatibility, and we can't automatically force everyone to now magically produce these. It's just that some of these secondary systems may not have as much utility.
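The day-one logging use case mentioned here is simply stamping the current trace ID onto structured log records, so logs can be joined against traces in a system like ELK or Splunk. A minimal sketch (the function and field names are made up):

```python
import json

# Hypothetical sketch: emit a structured (JSON) log line carrying the current
# trace ID so log search can pivot from a log record to the full trace.
def log_event(message: str, trace_id: str) -> str:
    record = {"msg": message, "trace_id": trace_id}
    return json.dumps(record, sort_keys=True)
```

Note this only needs the trace ID as an opaque string, which is the argument above for not over-specifying the accessor's value type.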
And they might have to do value checks before they do something, which is a mild inefficiency, I suppose. But I think that's a worthy price to pay for backwards compatibility, and for not adding some new required feature to support OpenTracing. And again, in practice, I think it's the kind of thing that's not too concerning: if you're using a tracing system that doesn't support this stuff, and you would like to use something that wants it, you're just like, oh well, my tracing system and this cool other library don't work with each other. It's not a disaster, or something that would be unexpected for the developer; presumably the person gluing these things together is at least somewhat aware of the kind of tracing system they're using. So I don't see the compatibility issues as being too dangerous for people who don't already support this kind of stuff. And by stuff I mean span ID and trace ID.

So I feel like my big takeaways are: first, to really clarify that this is not header specific. It's not specific to the trace parent; it's supporting fields that are similar to the fields in the W3C spec, not explicitly what's in that spec. Second, to really emphasize that this is not for automatically allowing cross tracer compatibility or interop; that would not be the intention of accessing these things. And thirdly, to drop the sampling bit from this proposal and move it to a separate proposal, because the debate around that is a little more vague. Great. Thanks, everyone. I feel like I know enough to move this proposal forward another step. And it's 9:15, so unless other people have more questions on this, I would suggest we move on. Thank you for inviting me; I'm going to drop off. Great, thank you so much, Sergey.

So the other thing I would like people to have a look at, and I don't really know if ten minutes is enough time to have a conversation about it...
And I also want to leave room for anything else anyone wants to talk about. But I did create a pull request around a more specific RFC process for moving the OpenTracing spec forward, where each proposal to change the API is first drafted in a document called an RFC that is committed to the specification repo. It has several states that it moves through, from draft to testing, where we then implement a version of the proposal as an API change in a quorum of major languages. And if that looks good, then we mark the proposal as accepted and add the language in it directly to the specification. So I'm curious, just sort of hot takes from people who have had a chance to look at that proposal: is this something that looks generally like the right direction to them? I think it's generally something we need, even if some wish it followed a different format, et cetera.

Yeah, is it that no one has any opinions, or no one read it? It's okay if you haven't read it yet. I would just ask, please take a look at it, because I do think we definitely need a formal process for change. Right now it's just too cowboy. I mean, it has been working, because people are nice and care and are trying hard, but it would be, I think, more open to people outside of the sort of smaller inner OTSC group when it comes to making proposals if there was a more obvious, clear format for how that worked. And it would also help coordinate getting these changes out across several languages at the same time, rather than what we've been doing so far, which is to sort of test drive it in Java and then, if that looks good, roll it out to the other languages. That kind of takes a long time. And we now have a cross language working group of people who are interested in making these changes in their various languages and domains.
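The lifecycle described here (draft, then testing in a quorum of languages, then accepted into the spec) is essentially a small state machine. This sketch adds a hypothetical "withdrawn" terminal state for illustration; the proposal itself only names draft, testing, and accepted:

```python
# Hypothetical sketch of the RFC lifecycle described above. The transition
# table is illustrative, not quoted from the proposal.
ALLOWED_TRANSITIONS = {
    "draft": {"testing", "withdrawn"},
    "testing": {"accepted", "draft", "withdrawn"},  # testing may bounce back to draft
    "accepted": set(),    # terminal: merged into the specification
    "withdrawn": set(),   # terminal: abandoned (assumed state, not in the proposal)
}

def advance(state: str, new_state: str) -> str:
    if new_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move RFC from {state!r} to {new_state!r}")
    return new_state
```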
So we've got sort of the open source staffing available to decide we want to add something like span ID and trace ID, and then go test and roll it out in, you know, three to five languages at the same time, and then add it to the spec. So I think we have the ability to make these changes faster and more coordinated, but we do need some structure to explain to everyone how we're going to do it. I have had a look, and I do like this format. I'm not so passionate about the actual structure of it, but having an RFC is a good thing, and your example is also a good first example for that. Thanks. Oh yes, and I should mention: this trace parent proposal, if you are looking at that, is written as one of these RFCs, per the RFC proposal. So it's sort of a prototype.

Great. Well, I don't think we can have too much of a debate on this until people have read it and thought about it a bit. So I would ask people, especially specification council members and members of the cross language working group who are on the call: if you can, look at this next week and weigh in on it, in particular if you think it's just radically missing something. Yeah, it would be good to flesh this out, because now that we've got more people involved, it would be great to tighten up the structure sooner rather than later. Should we set some timeline to have it approved? Sorry, I couldn't quite hear that. Should we set some timeline to have it approved, to vote on it maybe? I'm not sure how to do that. We could say that by the next OTSC meeting we need to have this approved, or something approved, or at least a clear idea of why we're not going to move forward with anything. Yeah. So I'm kind of trying to draw parallels with the CNCF TOC meetings: they usually hold votes just online.
And I think the meetings are used just to present things and maybe ask questions; people can also ask questions on the pull request. But my concern is that if we say, oh, let's just wait a month and then try to approve it, in a month we can be in a similar situation where no one has actually read it. Yeah. So I would prefer to just set a tighter timeline: let's give people a week or two, call a vote within two weeks, let's say, and then just close this. That makes sense to me. Does anyone have objections to that? Okay, I'll write it into the notes. There's nothing terribly original in the proposal. I think the thing for people to think about, that's maybe a little unique about OpenTracing and why we can't just adopt some pre-existing thing entirely off the shelf, is that the shape of what we're trying to create is slightly different from most projects. Most projects are either a single implementation of something, like Prometheus is an implementation of a metrics system, or a standard like a wire protocol, where it can all fit into a single RFC or a single document that you're iterating on. We're a slightly funny-shaped project: a cross-language API standardization effort. And there just aren't a whole lot of those. But it does mean the process has to tie together multiple code bases with multiple versioning schemes and things of that nature. So that was why I felt the need to sort of invent a proposal process. It was based on some of these other things, but wasn't just literally the W3C RFC process. Yeah, I mean, even libraries like gRPC have this cross-language nature, but they don't have the multi-vendor nature that we have. Right. It's a cross-language implementation, not an interface. Right. And then a bunch of CNCF projects, which are also standards, have a cross-vendor nature but not a cross-language nature.
So we're kind of in a unique position here. Yeah. I think it's better just to have something written; it doesn't have to be set in stone, and we don't need to be pedantic about this thing, but a general outline is good to have. Yeah, and we can always iterate on this process, right? If this seems like a decent proposal process now, we can start with it, and if it seems lacking in some way, we can always just modify it. We don't require backwards compatibility on a proposal process. It's okay. Do we have any other agenda items? I actually had one question; I don't know if Carlos is still on. Right. Can we talk about... Yes. About Python. Yeah, so basically I was checking your comments from the last few days, so I guess your questions are regarding the examples, right? It's the examples, and I mean, unfortunately, when I was doing instrumentation with tornado: you want to write your instrumentation in a completely framework-agnostic way, but at least with tornado you do have to use this special context manager called stack context. Without it, nothing really works as far as propagation, and I don't know what other frameworks are doing. That's why I was kind of curious, because simply having a thread-local span — sorry, scope manager — doesn't work, at least for tornado. Yeah, so in the examples I am not using this stack context, because I was kind of porting them from Java. So I will probably update some of the examples later today to use it. Regarding the other frameworks, both the folks from Datadog and me, we have been looking into a similar approach for other event loop frameworks like asyncio. And basically what they provide is some kind of thread-local-like storage, where you can have different storage for each coroutine, but there's no automatic propagation.
So in that case, what you basically end up doing eventually is that you provide a specific scope manager for each one, and then you also have to include some helper function, or even patch the library, so these scope managers end up collaborating with the patching code, and you get the propagation that tornado has out of the box. So, yeah. And as I said — for those of you who don't know — part of doing release candidate one for the new API and scope manager integration was to bring more eyes to it. So yeah, if somebody has more experience specifically with asyncio and gevent, that would be great; I don't know if anybody does, but just let us know. I don't know if that answers your question; I think we will still have to go through the examples and discuss a little bit more there. Yeah. Yeah, I think I agree that we will have to use different scope managers depending on the framework. I guess the question is really whether the API itself is reusable. For example, you may be using an event loop framework, but you may also use some standard library — some people do that. Let's say you have urllib instrumentation; we typically monkey patch that, and when you monkey patch it, you want to get your context, or the scope, from somewhere, and you don't really know what kind of scope manager you're using at that point. So the instrumentation has to have some sort of unified way of saying: okay, I should still be able to get the current active span, regardless of how it's propagated to me. So I'm not saying that this is a problem; I think we just need to try different frameworks in the examples.
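To make the scope manager point concrete, here is a minimal, hypothetical sketch (plain stdlib Python, not the actual opentracing-python API, though it mirrors its Scope/ScopeManager shape). The idea is that instrumentation only ever asks the manager for the active scope; a thread-local manager works for synchronous frameworks, and an event-loop framework like tornado or asyncio would swap in coroutine-local storage behind the same interface:

```python
import threading

class Span:
    """Stand-in span; a real tracer would carry context, tags, timestamps."""
    def __init__(self, operation_name):
        self.operation_name = operation_name

class Scope:
    """Pairs a span with the bookkeeping needed to deactivate it."""
    def __init__(self, manager, span):
        self._manager = manager
        self.span = span
    def close(self):
        self._manager._deactivate(self)

class ThreadLocalScopeManager:
    """Stores the active scope in thread-local storage. Monkey-patched
    instrumentation calls `active` and never needs to know which
    framework is doing the propagation."""
    def __init__(self):
        self._tls = threading.local()
    def activate(self, span):
        scope = Scope(self, span)
        self._tls.active = scope
        return scope
    def _deactivate(self, scope):
        self._tls.active = None
    @property
    def active(self):
        return getattr(self._tls, "active", None)

manager = ThreadLocalScopeManager()
scope = manager.activate(Span("handle_request"))
# Instrumentation code asks the manager, not the framework:
current = manager.active.span
print(current.operation_name)  # handle_request
scope.close()
```

The class and method names here are illustrative; the point is only that the "get the current active span" call is the stable, framework-agnostic surface the discussion is asking for.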
Yeah, I actually remember talking about that, I think a few weeks ago, because exactly — I found there are enough differences here and there that maybe we need to clarify eventually. So in general, for example, I think that we can create a scope manager for each one of these frameworks, but they have slightly different semantic details and we need to be aware of them. For a start, based on what you told me, I think alongside the examples I'm going to start writing a summary of what the exact implementation differences are, or the expectations of each scope manager. So we can get a better idea instead of just passing examples around for each framework. Okay. I have a question that will only take one minute; it's on the agenda. I was trying to make a trace context implementation for the basic tracer for Go. And I was talking to Sergey about it and understanding the draft he and Alois updated, but I ran into... well, parsing was fine, parsing was easy, but then I got to the place where I realized I need to decide whether to change the tracer. It currently uses 64-bit trace IDs. Do you think it should be changed to use 128-bit trace IDs, or should it be done in the pattern we discussed in Seattle, where you could put custom vendor stuff in tracestate and allow a 64-bit trace ID internally — the incoming value stored in some kind of span context place and spit out again when you propagate out later, but internally you're still using a 64-bit trace ID inside the tracer itself? And whether the complexity of that is worth keeping the prototype wire protocol the same, versus just changing the basic tracer altogether to use a different internal trace ID size. I haven't done an audit of the basic tracers, but it sounds like, when it comes to the wire protocol that they use and header types and stuff, it would be great
if we pick something across languages that we intend all the basic tracers to converge on. I'm not sure if they're even all compatible with each other at the moment, because they sort of just started as example code, right? I never looked at the other ones. I have no idea what the size is in Python, for example. Yeah, that would be my only request: whatever we pick as a default here, they can all interop with each other. So Python's is int64, just like current Go is. I mean, but yeah, that just opens the question: to demonstrate compatibility with receiving a trace parent that's 128-bit, you need to put that 128-bit identifier somewhere and then spit it out on the way out, right? And is it worth doing all that, doing the custom vendor thing, or is it worth just changing all the basic tracers to 128-bit IDs? Either one — one's an interesting demonstration of the custom vendor thing, you know, where hey, this tracer doesn't support 128-bit trace IDs, but look how we still made it work with the W3C standard. And the other option is: hey, this is a conventional, you know, W3C-supporting tracer, and it's the straightforward implementation. I could do both; I was actually thinking of doing both. My suggestion, and it's just a suggestion, is that — well, in general, I think we need to revisit the basic tracers and decide what their purpose in life is. But independent of that, I do like the emphasis on basic, and I like the idea that they're simple code that you can read that does the canonical thing, whatever the current standard is, right? So having them by default, in a very basic manner, support 128-bit IDs, sort of doing everything correctly, would be my suggestion. But then Ben would be mad. He likes 64 bits. Yeah, I think you're right about the canonical thing.
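For reference, the W3C traceparent header being discussed is four hyphen-separated hex fields: a 2-char version, a 32-char (128-bit) trace ID, a 16-char (64-bit) parent span ID, and 2 chars of flags; all-zero trace or parent IDs are invalid per the spec. A sketch of parsing it, plus the zero-padding trick a 64-bit tracer could use to emit a conformant header (function names here are illustrative, not from any basic tracer):

```python
import re

# version - 128-bit trace-id - 64-bit parent-id - flags, all lowercase hex
TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header):
    """Return (trace_id, parent_id, flags) or None if malformed."""
    m = TRACEPARENT_RE.match(header)
    if m is None:
        return None
    if m.group("trace_id") == "0" * 32 or m.group("parent_id") == "0" * 16:
        return None  # all-zero IDs are invalid per the spec
    return m.group("trace_id"), m.group("parent_id"), m.group("flags")

def format_traceparent(trace_id_64, span_id_64, sampled=True):
    """Emit a 64-bit internal trace ID as a traceparent by
    zero-padding it into the 128-bit field."""
    return "00-{:032x}-{:016x}-{:02x}".format(
        trace_id_64, span_id_64, 1 if sampled else 0)

hdr = format_traceparent(0xDEADBEEF, 0x1234)
print(hdr)  # 00-000000000000000000000000deadbeef-0000000000001234-01
```

Zero-padding makes the outbound side easy; the hard case the speakers are debating is the inbound one, where a full 128-bit ID arrives and a 64-bit tracer has to carry it through somehow.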
So that would be my suggestion: that they are canonical, that they match whatever the W3C expected headers are. And it would then, I agree, be an interesting experiment if there's a way to add some optional plugins or something that allows you to do some switcheroo with these headers, or with these IDs, if that turns out to be a useful thing to explain to people. But it seems a little secondary. Yeah, I would suggest starting with what they call level one: you can accept an incoming trace ID, but you don't use it as your own — you just store it as a tag somewhere. Because then you don't have to change all the basic tracers at once, and you already have some compatibility with the spec. And then the next phase could be, okay, maybe we can support 128-bit. But then, as we said, if people are using the basic tracer at the edge, they may still want to go back to level one, saying, okay, yeah, I'm just going to record it. Yeah. That's interesting. Yeah, okay. We should have a broader conversation about the basic tracers at some point, though, because right now some people, including myself, see them as example code that's supposed to be very simple, just so you can kind of get a handle on what the basics are. And maybe you can clone it and do something with it. But then there are also people who would like some basic thing that handled the wire protocol propagation for them, so they can then just write some little plugins to spit out the span information somewhere. And then there's backwards compatibility or no backwards compatibility. There's just a bunch of questions around them, and what their point is. I think we should clarify that at some point, especially if we're going to be changing them. Questions of stability come up; people actually run them. Yeah, I just thought it was like a reference. Okay. I mean, for example — go ahead.
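The "level one" idea described above can be sketched in a few lines: keep minting your own 64-bit trace IDs, but when a foreign 128-bit traceparent arrives, record its trace ID as a tag so the two traces can still be correlated later. Everything here is hypothetical — the tag key `w3c.trace_id` and the dict-based span are stand-ins, not anything from the actual basic tracers:

```python
import secrets

def start_trace(incoming_traceparent=None):
    """Start a span with a native 64-bit trace ID. If a W3C
    traceparent header came in, stash its 128-bit trace ID as a
    tag for correlation instead of adopting it as our own ID."""
    span = {"trace_id": secrets.randbits(64), "tags": {}}
    if incoming_traceparent is not None:
        parts = incoming_traceparent.split("-")
        if len(parts) == 4 and len(parts[1]) == 32:
            # Hypothetical tag key; any searchable key would do.
            span["tags"]["w3c.trace_id"] = parts[1]
    return span

span = start_trace("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(span["tags"]["w3c.trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```

This is why level one is cheap: no internal ID sizes change, yet a backend can still join the foreign trace to the local one through the tag.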
Like, we built our tracers, at least in some languages, to even reference the basic tracer. Like, that's how lazy we are. But again, I would say if the basic tracers are changed, then we would just clone the code we wanted, and life would move on; we would be fine. I just don't know what people's expectations are around them. I think we should clarify them one way or the other. But we can do that later, in another conversation. Okay, we're out of time already. Cool. Okay, thanks everyone. Happy Friday. Ciao.