So welcome, everyone, to the monthly OTSC meeting. We've got special guests today, and, BHS, if you wouldn't mind doing introductions, take it away.

Sure, yeah. So Loïc and I have been communicating over email for the last couple of weeks or so. He reached out to ask questions around combining distributed traces, OpenTracing, adapters, that sort of thing, with kernel tracing information. And we met up in Copenhagen at KubeCon, I guess just last week, and had a bunch of great conversations there. So I invited him to present here. I think the work he's doing is really interesting, and it's combining different disciplines in tracing, which is always fun. So yeah, Loïc is a master's student at Polytechnique Montréal and knows a lot about the kernel side of things and also the distributed tracing side of things. So I'm really excited to hear the talk, and I should mention he has a hard stop at the hour, so we'll have to be disciplined about asking our questions quickly and so on. But take it away, Loïc, thank you.

Yep, well, thanks Ben for the introduction. I think you said it all. So today I'm going to discuss how we can combine different kinds of tracing. I think now is the time for hybrid approaches if we want to understand the performance and the behavior of our systems better. I'm going to have a special focus on kernel tracing today, because I'm assuming, of course, that you know better than me what OpenTracing is and what it can do, with all its advantages in terms of understanding what your distributed system is doing using traces. I hope you don't mind if I discuss a few of its limitations, because that's actually the subject of the talk. I'm also going to assume that you don't know a lot about kernel tracing, so I'm going to start with a short introduction about what it really is, okay?

So you can think of it as the parallel world of tracing, because if I tell you that I'm doing tracing, probably all of you are going to assume that I'm doing something related to OpenTracing, but it's actually a bit different. Kernel tracing is an efficient solution if you want to analyze how your systems behave, and it's not entirely new, okay? We have different tracers for Linux available, so probably some of you are familiar with ftrace, or even eBPF or SystemTap. In my lab we're focusing on developing LTTng, which is probably the most efficient in terms of kernel tracing. And the main focus of kernel tracing is the high precision of the events. This is actually one of the first things that you can see in comparison with OpenTracing. OpenTracing focuses much more on the causality between the events rather than the precision of the timestamps, but as far as kernel tracing goes, we want to infer the causality of the events afterwards, during analysis time, which is why we need that high precision of the timestamps. There's also a big focus on the overhead, okay? We don't want to have a big overhead when we collect all these events from the kernel. The very reason we don't want that is because if we want to reproduce some tricky bugs, then we don't want the system behaving differently because we're tracing it. This is called the observer effect in physics, but it's the same kind of thing in computer science here. And there's also a focus on the fact that we can do offline analysis using the trace that we just collected, okay?
So if I take you through the common workflow of kernel tracing, the first thing people do is insert static tracepoints into the kernel, okay? That's already been done; we have a lot of tracepoints available. We can also do that in user-space applications, but my focus today is on kernel tracing. Then we choose which tracepoints we want to activate at run time. So we use our preferred tracer to say: I'm going to collect all the sched switches, or all the system call entries and exits, or the interrupts, okay? And then the tracer collects the events. Each time your kernel hits one of the selected tracepoints, it emits an event, and these events are written either into a file or they can be sent over the network too. And then these trace files can be analyzed by specialized analysis tools. Trace Compass is one of them, and these tools take these events and create nice views so that you can understand what goes on in your system.

Okay, so that was for the facts. Now, I'd like to analyze what I call a tricky bug with you, so that you can see what kernel tracing can do in terms of debugging applications or understanding what goes on in your systems. So here's the situation, okay? I'm going to demonstrate all that in a moment, but I prefer to explain it first. We have three processes, okay? Two of them are running on the same CPU, low prio one and high prio one, okay? You can see the priorities on the left, by the way. We have a third process that is scheduled on CPU zero, high prio zero. There is a shared resource that both low prio one and high prio zero want to lock, and you have the control flow on the left. So low prio one starts first, okay? Then high prio one starts, which means that low prio one gets preempted because it has lower priority, and then high prio zero gets scheduled in, and at this moment it requests the lock, okay? And the question is: what happens next, okay? Is high prio zero able to take the resource that is currently held by low prio one? The second situation is roughly the same, okay? But in this case, high prio zero has a lower priority than high prio one, okay? So we want to know what happens next, and if we try to reason about it, we can think that high prio zero should be able to get the shared resource, okay, because it has higher priority than low prio one, which currently holds the resource, okay? So logically, it should be able to get the resource. So let's find out, okay? There's nothing more important than practice.

So we have this little example that's been coded. Trust me, it works just like I told you, but if you don't trust me, I can obviously publish the code for that. I hope all of you can see the terminal; I can increase the font if you want. If someone can't see, you can just shout. I'll try to adjust. Okay, no shouting. So it's been freshly rebuilt. Then I've got a bunch of lines here to initiate the tracing session, which means, basically, like I told you, I need to activate the events that are going to be interesting for the analysis. So basically I'm taking all the kernel events. And then what I did is that I started tracing, I executed my little program, and I stopped the tracing right after that, okay? You can see all that happening. And if I change directory to the parent one, I can see that here I have a directory that is called cat trace, and that actually hosts the files that were written by the tracer.
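(The demo code itself isn't shown in the transcript. As a rough idea of the kind of program being described, here is a minimal, hypothetical sketch: two SCHED_FIFO threads pinned to the same CPU plus a third one on CPU 0, contending through a priority-inheritance mutex. The names, priorities, and timings are illustrative only, it uses threads rather than separate processes, and running it needs SCHED_FIFO privileges, typically root.)

```c
/*
 * Hypothetical sketch, not Loïc's code: reproduce the two scenarios from
 * the demo with a priority-inheritance mutex. Error handling is omitted;
 * SCHED_FIFO and affinity changes typically require root.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

static pthread_mutex_t shared_lock;

/* Pin the calling thread to one CPU and give it a SCHED_FIFO priority. */
static void pin_and_set_prio(int cpu, int prio)
{
    cpu_set_t set;
    struct sched_param sp = { .sched_priority = prio };

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}

static void spin(void)
{
    for (volatile long i = 0; i < 200000000L; i++)
        ;   /* burn CPU so the scheduling is visible in the trace */
}

static void *low_prio_1(void *arg)
{
    (void)arg;
    pin_and_set_prio(1, 10);
    pthread_mutex_lock(&shared_lock);   /* takes the shared resource first */
    spin();                             /* long critical section */
    pthread_mutex_unlock(&shared_lock);
    return NULL;
}

static void *high_prio_1(void *arg)
{
    (void)arg;
    pin_and_set_prio(1, 30);
    usleep(1000);                       /* start after low_prio_1 and preempt it */
    spin();                             /* CPU-bound, never touches the lock */
    return NULL;
}

static void *high_prio_0(void *arg)
{
    (void)arg;
    /* Priority 50 gives the first scenario; 20 gives the second one. */
    pin_and_set_prio(0, 50);
    usleep(2000);                       /* ask for the lock while it is held */
    pthread_mutex_lock(&shared_lock);
    pthread_mutex_unlock(&shared_lock);
    return NULL;
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_t t[3];

    pthread_mutexattr_init(&attr);
    /* PI mutex: the holder inherits the priority of its highest waiter. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&shared_lock, &attr);

    pthread_create(&t[0], NULL, low_prio_1, NULL);
    pthread_create(&t[1], NULL, high_prio_1, NULL);
    pthread_create(&t[2], NULL, high_prio_0, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

With high_prio_0 at priority 50 this should show the first behavior described below (low_prio_1 boosted and preempting high_prio_1); dropping it to 20 should reproduce the second.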
And if I want to be convinced, I can use something called Babeltrace, which just dumps all the events to the terminal, okay? So this is what a kernel trace looks like. I know it's not appealing. We have our timestamps on the left, the name of the event here, and here a bunch of key-value pairs, which you could relate to OpenTracing tags, actually, okay? So I know, for example, that each of the events happened on a specific CPU, and I have the identifier of that CPU, okay? So what I did, if I show you the code: in my main function, almost everything is commented out except for these two runs. The first thing that I ran was the first example that I showed you, okay? The one where the process high prio zero has higher priority than all of the other processes. And then the other case, where high prio zero has higher priority than low prio one, but lower priority than high prio one, okay? So this has been run, but if I want to understand what goes on, I'm not going to use Babeltrace, okay? It's just a dump of the events. What I'm going to use is a trace analyzer. So here is Trace Compass. I already loaded the trace into it and ran the analysis; it just takes a few seconds. You can see that here I have what is called a control flow view, which looks very much like what you can find in OpenTracing traces, with spans being just the states of the different processes that you have in the tree on the left side, okay? And what I'm highlighting here is the first execution that I showed you, okay? So back to the slides, it corresponds to that execution, that first situation where I have high prio zero with higher priority than the two other processes, okay? And then this is my second run, which is the other situation where high prio zero has a priority between the two other processes. And the colors here can tell you that when it's green, the process is running in user space, so it's actually executing code. When it's yellow, it means that it's waiting for something to happen, so it's blocked, either for a mutex to be released or for I/O to be available or something. When it's orange, it's waiting for the CPU. So it's not surprising that you have two processes here fighting for a CPU, because we forced them to be scheduled on the same core, okay? And if we zoom in, we can understand even more of what happens at this critical moment. This is where things get really interesting, okay? So here, as I said, we have low prio one executing first, and then high prio one starts. And when it starts, obviously it preempts the process. So I have to go back a bit. This is why high prio one executes first, okay? So it preempts the process low prio one. But then high prio zero comes in and it requests the shared resource using that system call, okay? And then high prio zero has to wait for the resource to be released, okay? So it's in a state where it's blocked, waiting for the shared resource to be released. And what goes next is a bit surprising. High prio one gets preempted by low prio one, even though high prio one has a higher priority than low prio one. And the reason is that high prio zero requested the shared resource. So it means that, temporarily, low prio one gets the same priority as high prio zero, and then it's able to preempt high prio one. The Linux kernel offers that feature because low prio one has to terminate its work in order to release the mutex really fast, so that high prio zero can resume its execution. That's what happens here.
Low prio one releases the mutex and then high prio zero resumes execution, okay? So this is what we expected: because high prio zero comes in, the shared resource has to be given back to high prio zero. So high prio one gets preempted, low prio one terminates its execution, and everything is fine. However, this is not what happens here. This is the second execution, okay? So I have to zoom in here. By the way, the backend is decoupled from the UI, which means that the analysis can be run from basically anywhere and you can use any kind of UI to analyze the traces. We have that backend available for everyone. So here we have the same thing: high prio zero tries to get the shared resource, okay? It makes that system call and then it blocks. But this time, high prio one doesn't get preempted by low prio one, because low prio one temporarily gets the same priority as high prio zero, but that priority is not sufficient to preempt high prio one. So we are in a situation where the process high prio zero, which has higher priority than low prio one, has to wait for low prio one to release the shared resource, okay? So if I go back to the slides, here is the explanation, okay? In the first situation, we had that behavior where low prio one temporarily gets the same priority as high prio zero in order to release the shared resource as fast as possible, okay? So low prio one preempts high prio one, terminates its execution, releases the lock, and then gets preempted by high prio one. But it's not a problem, because at this moment high prio zero can resume its execution, because it finally gets the lock. In the second situation, high prio zero's priority is not sufficient to preempt high prio one. And so we are in that situation where high prio zero is blocked, okay? Because it has to wait for a resource to be released by a process that has lower priority, which is a bit weird. So we have two distinct behaviors, but we thought we should have just one, and this is actually a big advantage of kernel tracing: you can just run the tracing, see how your application behaves, and then deduce things that you could not deduce otherwise, because sometimes the behavior of your application doesn't meet your expectations.

So what can we do with that? Back to my project: we can analyze what's currently missing in OpenTracing. I hope it's clear to you by now that kernel tracing can give you fine-grained analysis, whereas OpenTracing focuses on the tasks that are being processed as part of the transaction. So you can actually detect a few design issues that your application has. For example, if you have a long diagonal of spans, maybe it means that you didn't try hard enough to parallelize your application. So this might be a design issue. But in the case at the bottom of the slide, we have task two, which is lengthy, but it could be long for several reasons, actually. It could be long because it waits for a CPU, it could be long because it waits for a shared resource, or just because task two is supposed to be longer than the other ones. So this is where we'd like to have a finer-grained analysis, a better insight into our system, so as to deduce more things from that situation. What we'd like to have is a solution for debugging these problems that involve some kind of contention. Usually it's mutexes or the network, but it can also be the CPU that's contended.
We'd also like to debug other bottlenecks, for example ones that come from the interaction between your transactions and the actual machines on which they run. And we'd also like to be able to perform an analysis of multiple transactions at the same time, if it makes sense, because the transactions are not independent: they can fight for some mutexes, they have interactions. We have to understand that in order to understand why a given transaction is currently blocked. And we'd also like to understand how our transactions interact with the host system. So basically what we want is the best of both worlds, which means the view that OpenTracing provides in terms of aggregating all the information and presenting it as a single trace, as a logical transaction really, plus a layer that comes from kernel analysis. It would tell us, basically: your task is actually very I/O intensive, maybe you should investigate in that direction. Or maybe you'd like an analysis that tells you: well, task two is super long, but it actually executes in user space for just 10% of the time because of a certain mutex contention.

Fortunately for the interests of my project, we're not there yet. We have a few problems to solve first. The first thing is that, as you may have noticed, LTTng traces, or kernel traces in general, and OpenTracing traces are not the same, right? OpenTracing focuses on tasks represented by spans, whereas LTTng traces focus on events, which means that we have to recreate a lot of the causality between the events during the analysis. Another thing is that kernel tracing is able to understand how your threads get preempted and to describe everything in terms of threads, but OpenTracing focuses much more on tasks, and tasks can be executed not on threads but on things like goroutines, which is another kind of abstraction. So we have to take that into account as well. We also have to find a way to synchronize the traces, because OpenTracing does not have the same precision as LTTng; since it's focused on causality, there's not the same need for precision. So we need to synchronize the traces so as to make joint analysis possible afterwards. And the other thing is that we'd like to keep the same workflow that we have when just monitoring a system. What we usually do is have some kind of dashboard that is able to tell us: hey, you have a lot of transactions going bad, or you have high CPU usage, or you have high latency. You can investigate that using OpenTracing as a second stage. And then, if that's not sufficient, we can go on to a third stage, which could be kernel tracing. So we'd have to integrate all that so as to keep something that is really smooth. And another thing is that if you have systems in production, you don't want a 10% or 15% overhead because of kernel tracing. Usually kernel tracing is around a 2% or 3% impact, but depending on how many events you're collecting and how often, activating kernel tracing can have a higher impact. So we'd like to keep that as low as possible for it to be a really useful tool.

Okay, so that's it for my presentation. I tried to be short so as to leave time for discussion, questions, ideas. But if you want to reach out, you can just ping me on Gmail and I'd be happy to discuss with you.

Thanks very much, Loïc. It's really interesting. I have one question to kick things off.
I know we spoke about this a little bit in person, but one thing I think is really interesting about the kernel stuff is that you have a chance to understand the contention between different transactions, and you illustrated that in one of your slides. But I was curious, do you have any sense, even just a ballpark, whether it's feasible to get some form of trace and span ID into some kind of kernel data structure, even with a patch on the kernel or something like that? Do you have any sense of how much overhead is required to make that work? I know that there's a bunch of things like user-space schedulers and things like that that you alluded to as challenges, but pretending all that stuff can be solved, do you have a sense of how expensive it is to do this, from a throughput standpoint and/or from a latency standpoint?

Well, that's a good question. So what you're saying is that you'd like to have that kind of span ID information directly in the kernel tracepoints, right? If you want to do that, you need to propagate that information from the OpenTracing traces into the kernel. And the only way you can pass information from user space to the kernel is by executing a syscall; I mean, there's no other way of doing that. So basically it means that each time you want to create a span, you need to transfer control to the kernel with a system call, and usually you want to avoid that overhead, because it involves a lot of context switching each time you create a span. So if we can do that just in user space, by finding a neat way of synchronizing the traces, it's going to incur less overhead. But on the other hand, it's totally feasible. I mean, there's been a paper by Google recently saying that they did that by doing fake syscalls, okay? They just take a random syscall, like getpid, and as part of the arguments they just pass the span ID. So of course the system call fails, because the arguments are not consistent, but at least in your kernel trace you have one event corresponding to: oh, there's been a span created, and it's on that physical CPU, which means that it's been triggered by that specific process, which is what you want. I think we should try to avoid that, but it can be a good idea to start with, because it's probably the easiest thing to do. And I know that Raja has students working on that. So it can be a good solution too.

Yeah, other questions from people? I know you're almost out of time, I wanna... Yeah, no worries. Well, I have a question that we don't need to fully answer, but I was just curious, playing devil's advocate: how much does getting the timing to line up precisely matter? Like, is there a good-enough form of this, if you avoid syscalls, where the timing roughly lines up, and that would still be sufficient in the majority of cases to diagnose what's going on?

So you mean you'd like to have a lineup that is created during the analysis, okay? Between OpenTracing and kernel traces, that's what you mean?

Yes, like if in your OpenTracing, say, your scope manager, that's the part of OpenTracing that knows when contexts are being switched around... Yes. ...recording something out in user land at some level of granularity that's presumably lower than what the kernel is doing, you can still kind of staple them back together out of band. It wouldn't be as precise, but is that good enough for most use cases?
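(As a rough illustration of the fake-syscall trick described above, and not the exact mechanism of the paper Loïc mentions, here is a hypothetical sketch: encode the trace and span IDs into the path argument of a cheap, always-failing syscall such as access(2), so that the syscall entry event, with its arguments, lands in the kernel trace on the correct CPU and thread. This assumes the kernel tracer records syscall arguments; the helper name is made up.)

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Hypothetical helper: emit a correlation marker into the kernel trace by
 * issuing a syscall whose arguments encode the current trace/span IDs.
 * The call is expected to fail (the path does not exist); we only care
 * that its entry event, including the path, is recorded by the tracer.
 */
static void mark_span_in_kernel_trace(uint64_t trace_id, uint64_t span_id,
                                      const char *phase /* "start"/"finish" */)
{
    char path[128];

    snprintf(path, sizeof(path),
             "/nonexistent/span/%s/%016" PRIx64 "/%016" PRIx64,
             phase, trace_id, span_id);

    /* Cheap, always-failing syscall; ignore the -1/ENOENT result. */
    (void)access(path, F_OK);
}

int main(void)
{
    mark_span_in_kernel_trace(0xabcdef0123456789ULL, 0x1122334455667788ULL, "start");
    /* ... work that belongs to the span ... */
    mark_span_in_kernel_trace(0xabcdef0123456789ULL, 0x1122334455667788ULL, "finish");
    return 0;
}
```

Whether this is acceptable in production is exactly the overhead trade-off discussed above: one extra failing syscall per span start and finish.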
I don't think it is, because if I take you back to Trace Compass, it gives you a sense of how many things can happen in just a few microseconds in the kernel. For example, this system call takes roughly 10 microseconds, and you can have shorter than that. So it means that in just a few microseconds you can have several sched switches. So if your OpenTracing trace tells you that a span has been created at a timestamp that is precise to the millisecond, and then you have the kernel trace, you're not going to be able to say precisely that the span was created while that process was running on the CPU, because you could have a sched switch right before or right after that, which basically makes your analysis invalid. So I think we should have some kind of explicit synchronization between the traces. As Ben said, we could do that through the kernel, but it can also be done by instrumenting OpenTracing tracers using other kinds of tracers, like LTTng. This is something that I did just to try it out: I instrumented Jaeger so as to have information in LTTng traces about when a span is created and when a span is stopped, actually.

Awesome, is that code out there and available?

Yeah, sure. Going to my last slide, you can just go to my GitHub account; it's one of the latest repos. I think it's called Jaeger-something, for instrumentation.

Awesome.

But if you send me an email, I can tell you how to run it, because I just did it during KubeCon and obviously I did not try hard enough to have a proper README telling you how to use it.

Great, that's really great. Thank you so much. Any other final questions before Loïc has to get off the call? All right, sounds good. Thank you so much for your time.

Well, thank you very much for your time too, and if anyone has a question after this, you can just reach me through email and I'd be happy to help you. Awesome. Okay, have a nice one, bye.

Great. Well, that was awesome. I don't know if people want to have any continued discussion about that. I would recommend people go try to play with that repo as sort of the next step. I'd love to hear a report back on how that could get cleaned up, because that sounds really cool.

Yeah. I know Loïc had a hard stop, so I didn't wanna just add a comment and waste his time with it, but one thing that we talked about in person in Copenhagen is that a lot of the kernel tracing ends up... I think you can think of it as a way to decorate traces with a lot of extra detail, which is fine, and I think that's all well and good, but a lot of the most powerful applications of it have to do with a credible way to do an analysis of how different transactions interact with each other, because the kernel is the best place to see the sorts of contention situations that some of his examples focused on.
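(Loïc's actual Jaeger instrumentation lives in the repo he mentions; purely as a hypothetical sketch of what emitting span start/finish events into an LTTng user-space trace can look like, here is a minimal LTTng-UST tracepoint provider. The provider name, event names, and fields are invented for illustration and are not taken from his code.)

```c
/* span_tp.h -- minimal LTTng-UST tracepoint provider (hypothetical names) */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER span_provider

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./span_tp.h"

#if !defined(SPAN_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define SPAN_TP_H

#include <lttng/tracepoint.h>

/* Emitted when a span is started: carries the IDs and the operation name. */
TRACEPOINT_EVENT(
    span_provider,
    span_start,
    TP_ARGS(
        const char *, trace_id_arg,
        const char *, span_id_arg,
        const char *, operation_arg
    ),
    TP_FIELDS(
        ctf_string(trace_id, trace_id_arg)
        ctf_string(span_id, span_id_arg)
        ctf_string(operation, operation_arg)
    )
)

/* Emitted when a span is finished. */
TRACEPOINT_EVENT(
    span_provider,
    span_finish,
    TP_ARGS(const char *, span_id_arg),
    TP_FIELDS(ctf_string(span_id, span_id_arg))
)

#endif /* SPAN_TP_H */

#include <lttng/tracepoint-event.h>
```

A tracer's span lifecycle hooks would then call tracepoint(span_provider, span_start, trace_id_hex, span_id_hex, op_name) and tracepoint(span_provider, span_finish, span_id_hex) from a translation unit that defines TRACEPOINT_CREATE_PROBES and TRACEPOINT_DEFINE before including this header, typically linked with -llttng-ust -ldl; the resulting user-space events can then be enabled in the same LTTng session as the kernel events.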
So I think that, in my mind, part of the power of that stuff (and this is not an OpenTracing comment, this is a general tracing comment) is about using the kernel as a way to understand interactions between transactions without having to do any kind of special instrumentation in the source code. And if you could find a way to make that cheap, so that you could see that these two different transactions depend on the same file descriptor, mutex, whatever, that's really profound, I mean, an incredibly powerful thing. I have no idea about the actual overhead of doing that, but if that's the only thing you did and you didn't record anything else, I think that alone would be a really powerful thing.

Yeah. My one other comment is that I can't help but think this is an issue that really requires, probably, maybe doesn't require, but it seems like a language-level thing, right? Like if you're operating in a language that doesn't think about this, and then on top of that you're doing some kind of user-level context switching, it's gonna be really difficult on top of that whole sandwich to come back in and efficiently staple all this stuff together.

It's still possible, though. With the Go runtime you can know when the goroutines get switched, but yeah, it's trickier than with a plain threaded language. Yeah. And Go is a good one. The execution tracer coming in Go 1.11 looks very interesting, and I wonder how much that could assist with this sort of thing. Yeah, I just took a look at his repo and I think he did it mostly in C, with Go bindings back to C, mostly because that's what his tracing library is in. But yeah.

Yeah. I have to say I appreciated that he was talking about OpenTracing, but I don't really think of this as an OpenTracing project. I mean, to do this properly right now you kind of need to hard-code some understanding of in-memory representations of things; even if we decide to add a bunch of accessors on span context or something like that, you're gonna have to get pretty down and dirty to make this stuff work. So I mean, that's why his repo is called Jaeger et cetera, which I think is totally fine, but I don't know what other people think, but I don't really see this as being an OpenTracing project. I think this is a tracing project. I don't think of it as something that benefits from a shared instrumentation library as much as, you know, I think you need to understand in-memory representations and things like that. Yeah. It seems like making it work with OpenTracing is the easy part, provided you can make it work, period. Exactly.

I mean, Loïc is a really awesome guy. He's very approachable and fun to talk to. We had a bunch of conversations offhand in Copenhagen, so I encourage people who are curious about that stuff to reach out to him. He's a very collaborative fellow. Awesome.

All right. Well, we've got maybe 20 minutes left on the call. Happy to continue this discussion. There were a couple other agenda items that were thrown on that we could work through pretty quickly. Two were just report-backs, and then maybe we can go back to talking about this or get off the call. The first one is: someone asked for a report back from the W3C trace context working group meeting in Germany. I think the primary report back there is that trace context is coming along. There are really two parts of it that are in flight.
The part that is hopefully getting to a sort of V1, a testable V1, not a final V1, is the propagation headers, specifically traceparent and tracestate, which is everything you would need to glue multiple different tracing systems together and be able to correlate them, so that you could propagate a trace from one tracing system into another, and then on the back end hopefully be able to export data from one of those systems into the other, so you can get a complete trace. There wasn't any discussion of the baggage header, which is called correlation context, but there was a bit of a nascent discussion around what I'm calling trace data, which is: okay, fine, let's say we go ahead and do this work, and now we're able to staple these traces together in terms of the correct span IDs and trace IDs and things like that. But if we're going to do all that work, presumably you're now going to have to export data from one of these systems into another one. And when you do that, you're back to a sort of N-to-N problem of all these different tracing systems and different data export formats. So that's an issue that it would be nice to smooth over, and go from N-to-N to one-to-one. And maybe even more importantly, if you define some kind of trace data format, could we use that as a vehicle for moving forward with a more semantic definition of the content of that data? So basically, the work on standardizing tags in OpenTracing: could we do that work in a slightly broader fashion? So regardless of whether you're using OpenTracing or not, can an HTTP call just be an HTTP call, and is it possible for us to standardize on that? There was definitely interest in doing that, because it seems like the other half of the problem. The main takeaway there was to do a review of existing trace data formats, and a compare-and-contrast of what's currently out there, to see if there's some easy subset that emerges from that, and use that as maybe a basis for going forward. So that's sort of the next step on that project.

In general, there was some discussion... I had this feeling, and I wasn't there quite for the end, but I understand other people had this feeling too, that we should really be meeting more frequently in more focused working group sessions. This was called the trace context working group, but really it was a more general distributed tracing meeting. And would it be better if we just had more frequent meetings that were focused on solving specific problems in trace context, so that we could get that over the finish line? So hopefully that will occur. And I do wonder something similar about OpenTracing as well: when we have some kind of thorny design issue, can we have more frequent meetings over Zoom or otherwise to drive through those problems? Seeing as GitHub issues for all these standardization efforts seem to just fall down when you actually have some kind of discussion-based problem you're trying to work through; it just doesn't seem like a great medium for coming to any kind of consensus. So when we go away from these meetings and then go back to GitHub, it just seems like velocity drops, and I feel like we should do the same thing in OpenTracing as well. So that was my general takeaway: meet more frequently. Any questions about the trace context stuff? Not everyone may be familiar with that. I wish I'd asked that first.

So there was a meeting at QCon for the CloudEvents working group.
It's related to the serverless working group. And they are also thinking about implementing some sort of context, and it's very, very similar to the trace context. I think they might also join the W3C, or at least ask them to take a look at this spec and see if there is something there that they need which is already out there. So perhaps, if they don't talk to you guys about that spec, it would probably be a good idea to talk to them. I already talked to guys that are participating in that W3C group, asking them to try to unify the context into one spec.

Good to hear. Yeah, and it would be nice if you guys, if you know somebody working on that group, could also reinforce this idea. Do you have a link for that? I'm going to post a link to this W3C trace context group. Yeah, so it's cloudevents.io, is that right? Yeah, I just pasted it there. And in their meeting notes, there are a couple of extra items about tracing, actually. I just saw that in their May 10th meeting they do have an action item for this W3C tracing spec: add a proposal for integrating this W3C tracing spec for tracing multiple mutations. Interesting. Yeah, this does look like it has quite a bit of overlap, just from glancing at their one-pager. Cool. I look forward to the emerging five standards on this subject. Yeah, exactly. So perhaps dropping the name "tracing" would be more appropriate right now. But yeah, a standard for another standard. Yeah, hopefully we can find time not just to meet as individual groups, but some amount of time in front of or around some of these conferences to have intergroup meetings, just because, to a certain degree, I think there's just maybe a lack of awareness sometimes about what the other groups are up to.

Great. Well, I feel like that dovetails nicely into the next action item, which was a report back from KubeCon. So there was one report back from KubeCon in Denmark. My report back is: Copenhagen is beautiful. We also gave a couple of OpenTracing Q&A talks. I know, JP, you gave a talk as well that seemed to go over well. And all of that is online now, I think the videos have been posted. Do you have a link you want to post, JP, to your talk? Have you got it handy?

Yeah, I have it. But it's mostly Jaeger, so not OpenTracing really. Oh, yeah. I tried to make it as little OpenTracing as possible, because you guys also had the same talks, the project intro and deep dive. So I just referred to those talks.

Great. I'll try to find links to the other talks we gave. One of them was just a sort of Q&A session that I think was helpful. And that's another thing that I think would be useful: I don't know if we could schedule Zooms, but maybe more like office hours or something. People have questions, and they often find it easier to ask them using their voices, human to human, and get an answer that way. So providing more space for people to be able to ask questions and have Q&A sessions might be helpful to the project. In Node.js, we did this sort of office-hours thing at a certain inflection point where there was a big influx of new users, and that was kind of helpful: just letting people know when they could log in to Gitter and a Zoom meeting, and core members of the project would just be available to answer questions. Might be a good action item for us to start soon. So let me know if you're interested in that.
Yeah, I mean, just in terms of the conference in general: I think I was supposed to give one talk, but I ended up sort of giving two and a half, because Donald Trump wouldn't let Bianca leave the country, so I had to fill in for some of her stuff. But it was good, and I felt like there was a really nice reception. I tried to give a talk that wasn't really even about OpenTracing; it was a similar talk to the type of things that Eric has posted about, just trying to detangle the open source tracing ecosystem in general. That was really well received, and I think people got a lot out of it. I think we should continue to do that and clarify the positioning of the various projects in the space. And then both the intro and the kind of expert sessions on OpenTracing were well attended, and there's a lot of interest in everything. In terms of just being out on the show floor, it was pretty obvious that, well, KubeCon is pretty biased towards people understanding the CNCF stuff, so it's not like this is a random sample of the population just walking around Copenhagen. But it was also pretty obvious that they all understood what OpenTracing was at a basic level, and a lot of them were actually applying it within their organizations. Not just companies like Uber and so on and so forth, but also people from MasterCard and giant German banks and stuff like that. So it was interesting to see that kind of proliferation.

Yeah, great. Any other KubeCon-related business? Not really. Although I did think that, for the next one of these that we have people talking at (I don't know how to accomplish this), we had these salons before, and then they gave us these half-hour slots. I really would have preferred to have the thing that you ran, which was basically a Q&A with a quick 15-minute presentation ahead of time. I would have preferred to do a really long Q&A; it would have been great to have a one-hour Q&A or something like that. I wonder if we can make that happen. I felt like that Q&A session was maybe more valuable than any of the presentations, because the questions that were coming from the audience were very good, and it just felt more interesting to me. So I'd like to try and get the CNCF people to give us that sort of situation. I don't really care what they call it, but I think that would be valuable.

Yeah, I'm a firm believer in unconferences, where the people who show up get to define what gets talked about; that's really immensely powerful for people in terms of getting their questions answered. But yeah, that's for us to bother the CNCF about. We also sometimes try to run workshops there, and I wonder if we should focus more on these Q&A sessions, because the workshops really do seem to conflict with the kind of thing these conferences tend to want to provide space and time for. Not to mention, conference Wi-Fi and tutorials are a bad combo.

Anyway, one final item on the agenda that I threw up is the docathon. We've got a new website coming for OpenTracing. It looks like it's almost in a state where we can get content put into it. I tried standing it up this week; it looks almost there. Luke from the CNCF has been working on it for us, the same person who redid the Jaeger site. So that's very exciting. I'll send an announcement out once that's in a state where people can start pushing documentation by making PRs against a branch on the opentracing.io GitHub repo. So that's almost there.
That'll hopefully be there next week. So that's a thing that's coming. And to then help push everything out the door, we'd like to do a docathon. A poll is gonna go out today around potential times for having it, so be on the lookout for that. But we're thinking we'll do this in the month of June. The idea behind a docathon is that we want to get enough rough stuff out there that there's sort of a trellis that we can grow the rest of the documentation on, and then have a big push to get everyone together at the same time, all of the experts, all the people who work on the different languages, combined with people who know how to write docs and edit and clean them up, and see if we can do a sort of big, day-long push to clean things up and get everything out the door. So that'll be happening in June. People who want to get involved in that, or in organizing it, please let me know. Contact me on Gitter or send me an email, and I hope to see you all there. And that is everything that we have on our agenda. So I'd like to open the floor to any questions, comments, or bike sheds people would like to go over. We've got a couple of minutes left. All right, well, I'm gonna paint this bike shed black and say the meeting is over. Great seeing you. Thank you. Thank you.