I thought you were. I have more. Some people need to mute. How many cats do you have, Chris? Yeah, just the one. He's just got a big castle. Need more. And tell to my housemates. Okay, let's, I think we should start. It's 8:05, so we'll get started. Hello, Ted. Hey, how's it going? Good, how are you? We're good. You want to run the agenda today? Yeah, let's go over the agenda, see what items we have there, and start discussing them. We have quite a lot of items. So the first item is: change the processor and exporter interface to pass the resource. I think I just responded to that. Who added it? Probably Rahul. Yeah, I added this. So while I was working on the resource, I realized that resources are tied to the tracer provider and the span processors do not have any idea which tracer provider they are attached to. And as part of that, we pass the resources to the batch processor. If you incorporate it in the span data, it's probably not going to be efficient; we want a single resource passed for the batch of spans. So I think if it's passed as an argument, then the batch processor can either cache it or something. I put some solutions there, and it can pass it to the exporter as a single common resource for the batch of spans. I see. By the way, talking about batching and stuff, probably we should read more carefully what you have there. I do not understand exactly why it's not necessarily optimal, but I do see that in my OTLP implementation, I have to construct a map of resource to spans, and that may cause some trouble. So I do agree that there may be other, better solutions. Anyone have a comment? Yeah, I think, Bogdan, we were just chatting about this on your PR. I think your response to the question that I asked was: could we have multiple SDKs installed in the current system that share a span processor between them? And hence, we need to have different resources, but it still can be passed in to the export method then.
Yeah, so one of the limitations that we have is that there is only one resource per tracer provider, but there is no limitation that a running binary cannot have two or three instances of the tracer provider. And there is also no limitation that a span processor cannot be attached to two of them. Yeah, that's also highlighted in the issue: there is no limitation there either. So I mean, one alternative is to associate the resource with the exporter, right? That basically makes it easy as well. You cannot associate it with the exporter, because the exporter is associated with the processor, so you can receive spans from multiple resources in the exporter. So one thing that we can probably do is to change the exporter pipeline to have a map of resource to span data instead of just span data, and that will solve the problem. And we do the mapping in the processor that we implement. Yeah, that's one of the options. Yeah, you can do that: create a mapping of the resource to the span data, basically group the spans by the resources. Yes, and also with the latest OTEP, I think it's 83, I'm adding the concept of instrumentation library information, which may share the same concern as resource, because essentially inside the resource, you have multiple instrumentation libraries that are producing spans. So you do need to create the same mapping. Okay. Given the beta timeline, which is right around the corner, do we re-pass the resources along with the span in the span processor interface? Do we make that change? So currently, as I said, I'm passing it as part of the span data. It's not necessarily inefficient, at least in some of the languages, because that's an immutable object, so I can share a pointer to that object. In Go, you can't have that, I think. Why? You can have a pointer to the resource. But I mean, if we have it be immutable, it'll work. This is not how the span data was originally.
There was a resource field and it could be null, in which case the tracer was expected to fill it in. I think that's where we were, like in May of last year. Yeah, so if it's immutable, I think it should be possible to share the same memory, I mean, to have the same pointer there. Let me take it offline and look at it. Okay, that's an alternative. I think it will be good to do a better design on this, but just for the beta, we can live with that for the moment. For what it's worth, that's how we're handling this in JavaScript right now, by having a resource reference on the span data. Yep. Okay, so we're going to put a comment here. Okay, I added a comment that we will use a pointer reference to the resource in the span data and we'll reconsider this after beta. We'll redesign, maybe.

PRs. Who added this, and what is needed on this list of PRs? I added the first two. Okay, and for at least the second one, I think you still haven't reviewed it and you said you would last week, so that's why I put it there. For the first one, we discussed it in pretty good detail and I thought we came to a conclusion, either last week or the week before, but there doesn't seem to have been any comment or approval on it to say so, so it kind of got stuck. I'm going to ping everyone. For the first PR, I left some comments, and I think that probably my only small concern was about removing the section about having start-active-span as a separate operation, but we can discuss that on the issue itself. In the discussion in the SIG meeting like two weeks ago, people said that they wanted it to be two separate operations. Yeah, sure, but we need to add that as part of this PR or another PR, because at this moment, I mean the current text before your changes, it says that there are two operations but it's not very clear. If we keep them as two operations, which is fine, we need to make that super clear.
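The resource-to-span grouping discussed above can be sketched roughly as follows. This is a simplified illustration, not the actual SDK: `Resource`, `SpanData`, and the processor/exporter shapes are hypothetical stand-ins, but they show how a batch processor shared by multiple tracer providers can hand the exporter one resource per batch instead of one per span.

```python
from collections import defaultdict

class Resource:
    """Simplified stand-in for an SDK Resource: an immutable, hashable
    set of attributes, so it can key a dictionary and be shared by pointer."""
    def __init__(self, attributes):
        self._attributes = tuple(sorted(attributes.items()))
    def __hash__(self):
        return hash(self._attributes)
    def __eq__(self, other):
        return self._attributes == other._attributes

class SpanData:
    """Simplified span data carrying a shared reference to its Resource."""
    def __init__(self, name, resource):
        self.name = name
        self.resource = resource  # shared reference, not a copy

class BatchProcessor:
    """A processor possibly shared by multiple tracer providers.
    On flush it groups buffered spans by resource, so the exporter
    receives a single common resource per batch of spans."""
    def __init__(self, exporter):
        self._exporter = exporter
        self._buffer = []
    def on_end(self, span_data):
        self._buffer.append(span_data)
    def force_flush(self):
        grouped = defaultdict(list)
        for span in self._buffer:
            grouped[span.resource].append(span)
        self._buffer.clear()
        # One export call per distinct resource.
        for resource, spans in grouped.items():
            self._exporter.export(resource, spans)
```

Because `Resource` is immutable and hashable, every span holds only a pointer to it, and the grouping cost on flush is linear in the number of buffered spans.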
Anyway, let's definitely discuss that on the issue itself. Yeah, I'm not going against the agreement, just trying to be more clear. Yeah, no problem. I'm pinging everyone on these two issues, just to make sure we get their attention. Library to go ahead. No, that's good. Thanks.

The next one is clarify telemetry library. Okay, who filed this? Who put this on the agenda? Yeah, that's mine. I just wanted to make sure that we are all aligned and agreed, and maybe I can get some more reviews as well. There were some concerns from Sergei. I see that he's in the meeting; maybe he wants to comment on that one as well. Yeah, this is about library and resource, right? Yeah, so my only concern there was: how do we distinguish instrumentation adapters and the instrumented library itself? Maybe I'm lost between the instrumented library and the instrumentation library, and what telemetry.sdk will represent. Currently there is no way to specify the instrumented library. The named tracer only allows you to specify the instrumentation library, the one that does the instrumenting. So it can apply something. I was thinking about this and I tried to kind of resolve this with the component thing, but it didn't get too much traction. Maybe one thing that we can do is add a property on the named tracer, a key on the instrumentation library called instrumented library, and then we can have it from there. Sure, I just want to make sure this PR is clear about what we put in the telemetry.sdk property. Do we put the instrumentation adapter name, or the name of the library itself? Either of these, I think. In the resource, I think what we put is the name of the library itself, which is OpenTelemetry. In the resource, what we will put is the telemetry library, not the adapter, not the instrumentation library, not the instrumented library.
We will put the telemetry, the producer of the telemetry, which most of the time is OpenTelemetry, and the version and the language. Yeah, that's exactly what the PR that I linked, 494, is about, and I think we should be careful not to mix things up here. So you have OTEP 84 about adding the tracer name, aka instrumentation library, and we should look into that one separately. I already approved it, by the way, because now it seems pretty fine and clear, to me at least. Then there is the telemetry library resource attribute, which I renamed from library, which was highly ambiguous, to telemetry SDK. It is quite clear; if someone comes up with an even more precise description, I'd be open to edit it, but I think that at least with the examples two lines below, it is really clear, because I say that for the default OpenTelemetry SDK, if that one is used, one should use opentelemetry, and so on. And for the instrumented library or application or service, I think we should open an issue and continue this as a separate discussion so that things don't get mixed up. Yeah, so I think currently we've identified three libraries in this world. We have the telemetry library, the SDK that produces the telemetry. We have the instrumentation library, if we do auto-instrumentation or if we, for example, write a plugin for gRPC and that plugin belongs to OpenTelemetry. And we have the third one, which is the instrumented library, which is something that we haven't defined yet, how to represent. Yep. Yeah, we also have exporters, if we care to point that out. Can you repeat? Sorry, I couldn't get it. We also have exporters that are being used to send the data. Yeah, so you might want to specify this one as well. Yeah, we don't have something for that yet. Maybe we should add it.
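The three-way distinction above can be made concrete roughly like this. The `telemetry.sdk.*` keys are the resource attributes from the PR under discussion; the instrumented-library key and the specific names and version values are purely illustrative placeholders, since the spec has not defined that part yet.

```python
# 1. Telemetry library (the SDK producing the telemetry):
#    recorded as resource attributes.
telemetry_sdk_resource = {
    "telemetry.sdk.name": "opentelemetry",   # the default OpenTelemetry SDK
    "telemetry.sdk.language": "python",      # illustrative
    "telemetry.sdk.version": "0.4.0",        # illustrative
}

# 2. Instrumentation library (the code that creates the spans, e.g. an
#    OpenTelemetry-owned gRPC plugin): identified via the named tracer.
instrumentation_library = ("opentelemetry.instrumentation.grpc", "0.4.0")

# 3. Instrumented library (the thing being instrumented): not yet
#    defined by the spec; this key name is hypothetical.
instrumented_library = {"instrumented.library.name": "grpcio"}
```

The point is that each of the three lives at a different level: the SDK identity is per resource, the instrumentation library is per named tracer, and the instrumented library currently has no defined home.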
Although it's, I mean, it's a different discussion, but I think the exporter is always specific to the data it exports and can add its own name on the data it produces. But that's a different discussion. Yep. Okay, I will read the PR again from that perspective, and yeah, I will let you know. Thank you. Perfect. And there was one thing that Yuri opposed: the fact that I don't make it a compulsory attribute. I actually think it should be required, but I didn't want to, I don't know what the term is, sneak the change into a PR that is called clarify something. So I will do that in a follow-up PR. Okay, just respond that you will address this in a follow-up PR and that's good. I didn't quite catch that; you have some noise in the background. I said, just make sure you comment on the PR that you will follow up with the next one to fix that part. Ah, yeah, I did. Okay, thank you.

Next one, messaging attributes. Yeah, that's also me. It has been open for a bit more than one and a half months now, and it would be great if people could look into it. By the way, it has three reviews, so it should be good to go. I think that the latest, from what I remember last week, was that Ludmila had some questions about it, but she's on holiday or something. So yeah, I suggest we just merge it: three approvals and one suggestion, and just mark a lot of these conversations as resolved. Let's just not be rude to people, by the way, because we know Ludmila is on vacation at the moment. Maybe file an issue to follow up on that when she's back. Sounds reasonable. It will be a while, and one of the suggestions I have for semantic convention PRs, or documentation in general, is that if you start tracking all the implementations of the semantic convention, it will make life easier for everybody, because then we know the semantic convention is already implemented and this is how it's used in practice.
And I think the concern some people have is: it looks great on paper, but can we actually implement it? So maybe if we start tracking implementations of the semantic convention, it will make life easier. Yeah, I think what Ted also proposed last time is that we need some basis to start implementing stuff; then we will find possible problems and then be able to revisit it. Yeah, absolutely. And if this list is empty, then it will be a sign that something has yet to change, if we find problems in real life. But what do you mean by track implementations of it? So maybe at the top of the documents, like we were saying, we have: implementations of this semantic convention, for, I don't know, Azure Event Hub, and reference implementations. Okay. Yeah, just a suggestion. It's nothing actionable at the moment, I think. Okay. So yeah, anyway, whenever you have filed those issues around Ludmila's questions, let's definitely merge it. Yeah. I mean, I responded to all of them. I just don't have any verification from her yet that this is fine and addressed now. Well, I suggest creating a meta issue or something, just to say, hey, after merging this PR there are a few remaining questions, feel free to either close this issue or keep the discussion going here. You know, some meta issue, so we don't forget. All right. Okay.

Then the next one is OTLP result code. Probably Tigran did? I added that, and thanks for digging it up, actually. It's been there for a couple of months already, I believe. We need a decision on this. We can't move forward with the protocol implementation until we decide on this. Basically, the question is whether we use gRPC result codes as the indication of whether the request should be retried or not, or we use, as it was previously in the protocol, explicit flags to indicate retryable or non-retryable errors.
So if we are using gRPC as the transport protocol, we have to use their codes, because that's what they are going to return to us. For example, in the case of a deadline exceeded, there is no other signal that we receive; we're just going to receive a status code of deadline exceeded, and that's all we have. Exactly. By the way, we need to clarify: we're not going to use gRPC codes in our pipeline. In our pipeline, we have the three values that you propose. This PR is mostly: if you are using gRPC as a transport between the client and the server, here is how you transform the gRPC codes into what we call retryable or non-retryable errors. How do you handle the error, right? gRPC returns an error code. What does that mean? What do you do? Yeah, but this applies only to OTLP. That's something that people need to understand. We do not force everyone to use gRPC codes or anything like that. We just say: in OTLP, where we do use gRPC, this is how we're going to treat this error. That's very explicitly called out in the PR. It's only about OTLP and only about gRPC. Yeah, I felt from Christian's comment that it was not that clear. But anyway, as I said, I think this is the only way it's possible, because otherwise, a lot of these errors come from the framework. For example, the auth thing: if you use gRPC and you set up auth, or whatever auth mechanism you have, you'll get a permission denied or unauthenticated error without your code ever running. So you don't have a way to send back any information. Right, yes. That's right. So please go have a look at the PR. We need a quick resolution on this. Otherwise, I mean, we're probably already late. Unless, I don't know what the plans are with the beta release: are we still doing the March 16th thing, or may it get delayed? Based on our discussion, March 16th is the target to have a candidate release for beta. The final beta is the 20-something. Okay.
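The mapping being discussed can be illustrated roughly like this. Which exact gRPC status codes land in which bucket is whatever the spec PR settles on; the split below is only a plausible sketch, and the three-valued result mirrors the "three values" mentioned above.

```python
from enum import Enum

class ExportResult(Enum):
    SUCCESS = 0
    FAILED_RETRYABLE = 1
    FAILED_NOT_RETRYABLE = 2

# Codes that typically signal a transient condition worth retrying.
RETRYABLE = {"UNAVAILABLE", "DEADLINE_EXCEEDED", "RESOURCE_EXHAUSTED", "ABORTED"}

# Codes the client cannot fix by resending the same request. Note that
# PERMISSION_DENIED / UNAUTHENTICATED can be produced by the gRPC
# framework's auth machinery before any server handler code runs.
NON_RETRYABLE = {"INVALID_ARGUMENT", "PERMISSION_DENIED", "UNAUTHENTICATED",
                 "UNIMPLEMENTED", "FAILED_PRECONDITION"}

def classify(grpc_status_code: str) -> ExportResult:
    """Translate a gRPC status code into the exporter's retry decision."""
    if grpc_status_code == "OK":
        return ExportResult.SUCCESS
    if grpc_status_code in RETRYABLE:
        return ExportResult.FAILED_RETRYABLE
    # Default to non-retryable: resending a request the server has
    # already rejected for a permanent reason only wastes bandwidth.
    return ExportResult.FAILED_NOT_RETRYABLE
```

The deadline-exceeded example from the discussion is exactly why the mapping is needed: the client receives only the status code, so the code itself has to carry the retry decision.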
So I think that we are not going to be able to do the OTLP by March 16th, based on this and other things that are still in progress. Let's aim for that and we'll decide if... Right. Yeah. The implementation is in progress, but we only have six days remaining. I doubt we'll have a release candidate by then, but let's try. We'll see. Let's give it a shot. And if we have to slip, we slip. But let's try and put a crunch on for this last week. By the way, the reason I dug into this is that I was implementing the OTLP client in Java, as an exporter, and I was like, oh, I remember I saw something about when to retry and when not to retry. And this is how I found it. So you don't have much choice: when you write the code, you get the error code at some point and you have to do something with it. Yeah. You literally have the code, it depends on that, so what do you do now? That's probably why we haven't found these issues before, because nobody was writing this exporter. Now that everyone has to write this for beta is the time when we find these kinds of issues. Of course. Thanks for filing that. Anyway, let's continue the discussion there. Please, everyone, please focus and read this and provide comments. This will be very useful. Thank you.

Next one. Metrics View API. Do you mind, Chris, if I ask you to postpone that discussion until next week? Yeah. This has been explicitly excluded from the beta milestone. Yeah. Let's not discuss it. I will add a couple more. Let me do this. There are some new issues we should discuss; I don't know if they deserve to be listed, like the ones about character sets and encoding, 504 and 501. By the way, I'm adding the label set OTEP. Please, please review. Okay. All right. The label set OTEP has quite a lot of support. I need one more approver on that, so please, somebody do that. Yeah. For the label set, we already have five or six approvals, including external people. Right.
We just don't have three of the official approvals yet. By the way, I want to add a couple. There's also one on metrics, which is more of a philosophical statement, which I updated last night, but it's not quite ready, so I'm not going to plug it right now. That's number 88. And I think that most people have agreed to it; it's just not clear. So I'll keep working on it. Okay. Perfect. So, yes, we need to review those on removing label set. The one adding instrumentation library also has a bunch of support, three official approvals and an extra one, so I would like to have that also under review, please.

Okay. Next topic. I think we discussed PRs; it's mostly about people reviewing them. Who added this, and please elaborate. I added that. I still need some feedback on it. So, John, I wonder if you managed to get the folks you mentioned, the function-as-a-service experts, to review that PR. And second is your comment, Bogdan, about whether the execution ID belongs in the correlation context or not: should we track that as a resource attribute, or should we add it to the correlation context? Yes. So in general, my idea was, I think your proposal was that the execution ID changes between different executions, correct? Exactly. And one of the properties that we try to have is that the resource describes an instance in a way that does not change. So for me, that's a runtime property, a property of the request. Every request has a different execution ID. So applying that logic, for me that's a property of the request, so it probably belongs in the correlation context. And then, hence, it will be associated with all the spans that are produced during that execution. And we can associate it, for example, with... Okay, I see your point. Thank you. Bonnie, the New Relic folks chimed in, and all their comments have already been responded to, I think, and they thought it looked great. Okay. Thank you.
So that's why, by the way, with the correlation context, that's why I was pointing to it: because you also proposed to add it as a span attribute, and I think we should add it to the correlation context, and then it will get automatically added to all the spans. Okay, I see your point. Otherwise you'd have to manually add it to every span that we create. Okay. I will look at it and then I will comment on the PR. Okay. Thank you. Thank you.

Okay. I just grabbed the latest stats from the milestone on what's slated for spec version 0.4, just putting out what's there right now. So I see a lot of the language SIGs trying to follow the lead of the spec SIG, and with the current direction of the 0.4 milestone there are open things to include in the beta launch. I think they're looking for things to solidify for possible inclusion in the beta launch, based on the timeline of March 16th as the complete date. So I think something maybe needs to be cleaned up or adjusted in order to make it reasonable, either to achieve this or to push it out to another milestone. I do agree with you. I think there is a meeting about beta later today or something; I think we should address all these issues and we should follow them and decide which ones go into beta and which ones do not. Yeah. We're meeting at 2:30. Andrew, you have an invite. So do you, Bogdan. Okay. Yeah. I also heard the date of March 20th, so we can talk about that at the 2:30, about the timelines and such. Yeah, at least in my head, we're still aiming for the original date of the 16th as code complete. The other date that was thrown around was for the actual announcement, but that's more like March 30th. That's not an engineering date; that's just blog posts and things. Okay.
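Returning to the execution-ID point above, the difference between setting the ID span by span and putting it in the correlation context can be sketched like this. The `CorrelationContext` and `Span` shapes, and the `faas.execution` key, are simplified, hypothetical stand-ins, not the actual API.

```python
class CorrelationContext:
    """Hypothetical, simplified correlation context: key/value entries
    that are automatically applied to every span created under it."""
    def __init__(self, entries=None):
        self.entries = dict(entries or {})

class Span:
    """Simplified span that inherits the active context's entries,
    so nothing has to be set manually on each span."""
    def __init__(self, name, context):
        self.name = name
        self.attributes = dict(context.entries)

# One FaaS invocation: put the execution ID into the context once...
ctx = CorrelationContext({"faas.execution": "exec-12345"})

# ...and every span created during that invocation carries it,
# without per-span bookkeeping in the instrumentation code.
spans = [Span("handler", ctx), Span("db-query", ctx)]
```

This also shows why a resource attribute is the wrong home: the resource describes the instance and stays fixed, while the execution ID changes on every request, which is exactly the lifetime the correlation context has.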
So I don't know whether this meeting could at least adjust the due date of the version 0.4 milestone, because having a due date of February 20th just doesn't make sense. Oh, February 20th. Yeah, no, I agree. Let me adjust that. Yeah, Bogdan can adjust that right now. I did. I put March 16th for the moment, as everything is March 16th. That's good for now. It's better than a date in the past. Yeah.

Just to jump in: I was the one asking about beta blockers, and one thing I want to emphasize is that for me, getting the beta started, ideally of course we don't want to make breaking changes after we start the beta, but the main point of starting the beta is to make sure we have something that works end to end, that people can start using and we can start getting feedback from, building instrumentation and checking out the results in a back-end system. So if we do have some design changes, we can push them out past March 16th, where people are trying to make trade-offs to figure out what they should work on to get the beta out the door. It's really about getting that basic end-to-end pipeline out the door. So you've got an API: some API for metrics, and the Trace API is, I think, very locked down at this point. An SDK for those things, exporters, a way to talk to the collector, and then collector exporters to talk to the back end. Those are really the core things, so that we can start doing user research and getting our feet wet with getting people onboarded onto the project. Yes, I think you're correct, and I think we are making great progress. Since we set this deadline, we're making much better progress than before. Yeah, agreed. Even if we may not hit the deadlines, I think it's great progress. Yeah, I see a lot of real hard work going on with everyone, and it's normal: as deadlines get closer, it gets more stressful, but people are honestly doing some really awesome work.
But we have to be realistic about what is remaining for the timeline, because as it gets closer, there is less and less that can be done. So we have to understand what the definition of done is and what a realistic timeline is for everyone. Yeah. So, Andrew, for the moment, let's keep it with tougher deadlines. We found in the past that if we don't have these deadlines, we are sometimes reluctant to make decisions. So let's have some deadlines, and maybe, as Ted said, we should revisit a bunch of these, and we should not be worried about breaking some things during the beta phase. Yeah, even though we are declaring a beta, we can still break some things if we find critical mistakes in our design choices. Yeah, it's quite clear to me we've passed the good-enough mark with our designs. I have some questions around the remaining metrics work, because I think that's a newer API, and I know we definitely have more ideas about what a better metrics API would be. But it seems like on that front we've also hit the good-enough mark for launching. So I actually wouldn't mind an update on the current state of the metrics discussion, because I know that's ongoing. I think the last item that I want to push is the removal of the label set; that will introduce a bigger change in the API. Everything else, especially Josh's work, is very good. So we're in a situation right now where we've agreed upon and negotiated and discussed a number of changes that haven't landed in the spec language yet. So we're kind of in a backlog; I can't merge things quite fast enough. So I agree that label set is the last significant change in terms of what's going to break if a user starts using this. But we already have plans to introduce some new instrument types, and those can be done after the beta, for example. And by the way, the Go SDK is completely out of sync with the current state of the specification, and I'm working on that as fast as I can. It's not going to be ready.
Like, it's not going to match the spec on March 16th, unless we all start working 24/7 and I get really fast code reviews. Good. Which I don't expect. Thank you. Yeah.

Define restrictions for attributes and label keys. Armin. Yes, I brought that up. I just noticed that we don't have any restrictions defined there in the spec, like, at all. There are some implementations that impose restrictions on their own. For example, the Java SDK limits attribute key length to 255 characters, and so on. But I think we should really find a common restriction definition in the spec, so that people can rest assured that what they are doing conforms with the spec, because the limits that Java imposes, for example, are not documented at all and not really expected to be stable. Yes. This is something the spec is definitely missing. A bunch of the restrictions in Java come from my previous work with OpenCensus, and I'm not sure all of them are good ones. Sorry for that. But I do think, and I discussed this with Josh as well around the character set question, that we need to have an encoding for all these strings. If we say something is a string, we need to define an encoding for it, because otherwise, by default, a lot of people will assume UTF-8, but that may not be the case. If we do truncation or such, we may break the encoding. So we need to specify this for all the strings that we support in our library, not only for some of them. Even the span name has to have an encoding defined. Yeah. I want to add: Armin, thank you for posting that issue, or that PR. I was actually about to write the same type of issue, because we talked about it last Thursday in the metrics SIG. I fully support that issue and the restrictions that you proposed. Thanks. As I said, we just need a general rule for all the strings, and then if there are other parts where we are more constrained, we comment on that.
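The truncation hazard mentioned above is easy to demonstrate: cutting a UTF-8 byte sequence at an arbitrary byte offset can split a multi-byte character and leave the string invalid. A minimal sketch of encoding-safe truncation, assuming the spec settles on UTF-8 and a byte limit:

```python
def truncate_utf8(value: str, max_bytes: int) -> str:
    """Truncate a string so its UTF-8 encoding fits in max_bytes,
    without splitting a multi-byte character."""
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value
    # Decoding with errors="ignore" silently drops any trailing partial
    # character left by cutting mid-sequence, so the result is always
    # valid UTF-8 (possibly a character or two shorter than max_bytes).
    return encoded[:max_bytes].decode("utf-8", errors="ignore")
```

For example, "café" is five bytes in UTF-8 because "é" takes two; a naive four-byte cut would end mid-character, while `truncate_utf8("café", 4)` yields the valid string "caf". Limits stated in characters rather than bytes (like the 255-character Java limit mentioned above) avoid this particular hazard but make byte budgets on the wire harder to reason about.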
But I think we need a definition for all the strings in our system. Last week I commented to my colleagues that we were talking about encoding issues and Unicode and stuff, and that meant we were near the end. So that's the nice thing here. I know from working on logging systems that people are going to dump every encoding under the sun into a log, so eventually this solution won't hold up, I think, but I'm happy with it for now. I think we can always relax it. If we start with more restrictive rules, we can always relax a restriction, because that's backwards compatible. Or you could extend it: you could have a way to indicate the encoding somehow, which sounds terrible. Okay. I think this is important to discuss, and I do agree that it can happen after beta. I don't think it's a blocker, in the sense that we can have something working end to end and we can clarify this immediately after the beta.

Any beta blockers? That was me. I think we already discussed this, but I'm just curious if anyone on the call does have anything they're hyper-concerned about. We've discussed metrics already. Are there any other concerns? Yeah, the OTLP, as I mentioned, but we'll see if we can make it happen. Great. Tigran, is there anything that we in the group here can do to help you speed that process up, or is that just a man-hour sort of thing? I think we already have a few people working on this, mostly full-time. I don't think there is a way to actually split it further for others to help. I don't think that we need help on the other issues, and for this particular thing, I don't think having more people will actually help deliver it faster. The thing that would probably help us is clarifying that OTLP retryable-errors question; that's something that, as a community, we can help Tigran with. Besides that, I think he has a lot of support from his colleagues, and I also help him. We can probably have something working there.
Speaking of help: even though KubeCon Europe has been punted, and that was where we originally wanted to do a kickoff with OpenTelemetry workshops, we are planning, we don't have exact plans yet over at LightStep, but we are planning on trying to continue to organize OpenTelemetry workshops and training sessions, both to get an understanding of what needs to get worked on in OpenTelemetry and how easy it is to get started with, and also hoping that we'll naturally get more interest in the project and cause more organic growth in people wanting to contribute back. By the way, it would be nice if we could organize a meet-up here in San Francisco. We're a bunch of people in San Francisco, and the reason I'm proposing it here is because we are a lot of people, but everyone can do this in their own areas. I think we can have a meet-up with contributors and users, more informal than a workshop, just chats and stuff. That would be great. Are you guys talking in-person meet-ups and all sorts of things? With travel restrictions, it's unclear, right? Yeah, masks, arm's length, less than a thousand people. Yeah, and sharing coronavirus, so it's a meet-up where we share knowledge about telemetry and coronavirus. Are we sharing knowledge of coronavirus, or just sharing coronavirus? Maybe a bit of both. In this global pandemic, really what everyone should be doing is staying locked in their basements and hacking on OpenTelemetry. The problem is, if you lock yourself in a bunker, you're not going to have internet, so I don't know. You just haven't seen everyone else's bunkers yet. Yeah, come up to the Pacific Northwest, I'll show you. Yeah, that's right. You need a better bunker. Okay, I think we're done with the agenda. Anything else we should discuss, or can we get 10 minutes back and go hack some more? I think that's it. Yeah, that's it. Thank you guys so much. Ciao. I'll see you later. Bye.