Okay, we're now recording. Welcome, everyone, to the monthly OpenTracing Specification Council call. Today we've got two presentations, and I'm excited about both of them. To kick it off, Johannes is going to talk to us about service maps. Johannes works on the platform lab team at InVision. He's been working with OpenTracing, and he's also been playing with LightStep. In the past he worked at New Relic, so he's familiar with APMs and tracing in general. Johannes, good to see you virtually in person. Good to be here. Yeah. If you have slides, feel free to screen share and take over. Great. Why don't you give us an introduction to what you're going to present? I know it's service maps, and it's something new that you think is kind of a novel approach to service map visualization. Yeah, I think so. I'd be excited to hear whether anyone has seen this before. Let me pull up my presentation and share the screen. How do you do that again? Let's see. Okay. Can people see that? Yeah, looks good. Cool. There we go. Okay. So my little presentation is called service maps with OpenTracing. As Ted said, I'm an engineer on the platform labs team, so we do various little tech things, and this is a fairly minor one. Unfortunately, I haven't been able to spend too much time on it, so what you'll see here is more an idea than a full implementation of anything. The problem I'm trying to solve is that at InVision we don't really have a service map, meaning a graph showing the interdependencies between microservices. We have a ton of them already, and no one has a full picture of all of them and how they're interconnected. There are ways to build these, and I'm going to build one with OpenTracing. But I have two bonus goals that I don't see too often. The first: can we get a real-time one?
So if dependencies change, or if there's some kind of outage, will the service map update? Can we get that? And also, normal service maps just say this service depends on that service, but they don't show the chain of microservices that makes one service depend on another through a bunch of intermediaries. It'd be cool to get that too. I've seen some tools out there, as mentioned, but I don't know of any that use OpenTracing, although I wouldn't be surprised if some do. Anyway, that's what I used. One constraint that was important for this project is that it's maybe 5% of my time, so I definitely can't go to other teams and say, hey, why don't you just add this library and then we'll get service maps. I need to be able to do this without anyone else's involvement. The good thing is that we had already instrumented most of our services with LightStep, which, as you know, is an OpenTracing provider. LightStep can be configured to send HTTP Thrift messages to the LightStep satellite, which collects them. So what I've done is build a proxy that forwards all the data and also dumps it onto Kafka: for every Thrift message, it proxies it and also produces a Kafka message. That's how I gather the data. Now, LightStep is proprietary — I mean, parts are open source, but it's just one provider of OpenTracing. So I guess this is where I'll make a little side point, which for this group is probably the most important point: I would love, love, love for OpenTracing to be not only the API but also the format, and maybe the transport. Like I said, we have instrumented everything with LightStep, but it would be so sweet if I could just say we have instrumented it with OpenTracing.
Then we could plug in LightStep, or this new thing I'm making, or whatever other OpenTracing utility. I think that would be a huge change, because right now — and you probably already know this, you're probably already on top of it — I either have to build a proxy like this and decipher those messages, or I have to involve every team and say, hey, use this new library in addition to what you're using. It would make it easier to build one-off services like the one I'm making, and also to change providers or run multiple providers. I think it would be a cool idea, and I'm interested to hear whether this is something that's already in the works. Cool. So now that I have this data, I have a service that I'm actually running on my laptop. Instead of deploying a service and everything, I'm just running this little Kafka consumer on my computer, slurping in all the spans as they come in on Kafka and aggregating them in memory, which is probably how most OpenTracing tools work, I assume. To answer the first question I posed at the beginning — which services talk to which services — I basically just count edges. I'm looking at the component name, which I'm not sure is LightStep-specific or part of OpenTracing, and then the operation name. Actually, at first I'm only looking at the component name: which service is it? And so I just map which services talk to each other. Then, rendering that with Graphviz, I get something like this. I'm not sure you can see it on my screen — something is blocking my view. Yeah, we can see it. Okay, let's see if I can remove it then. We can see your graph. Oh, yeah, yeah. Okay. I just had a Zoom window blocking it. Okay. So here you can see this is just a subset of the services we have, and this is running in testing as well.
I wasn't quite ready to deploy my one-off service to production yet. Here you can see a little group of services that depend on each other. One thing that immediately came out of this for me was that we have this users API service and teams API service, and they depend on each other — which sounds like it should be one service and not two. It's always cool to look at the visualizations to get a different view and different ideas. One thing that's cool about this is that it also answers my first bonus question: this feeds off Kafka, so it could spit out the graph every five minutes or whatever, and you really get an updated view. There's no need to update some YAML dependency file or figure it out manually; it just feeds off the OpenTracing data. I also built these kind of Markov chain graphs. For anyone not familiar with Markov chains, the idea is that for each node you see arrows pointing out with percentages, and that's the likelihood that a call will go in that direction. Of course, it's not really a random choice here — it's defined by the code — but if you just want an overview, you can see where calls go. I also put in an enter and an exit node, so you can see how traffic enters our system and how it leaves. And this spaghetti is what happens when, instead of just the component names, I also add the operation names — so you see things like teams API, get user teams, or teams API, get user open enrollment teams — and you can see how the data flows within a service. The reason I'm only showing a little bit of this is that even for the small subset of data I had, it very quickly becomes unreadable; you have to scroll around and it's tough. So that's the first approach.
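A sketch of that first approach — counting caller-to-callee edges between components and deriving the Markov-style outgoing percentages — might look roughly like this in Java. All of the names here (the class, the method names, the component strings) are invented for illustration; the real aggregator consuming spans from Kafka would feed `record` with the component names of a span and its parent.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative edge-counting aggregation for a service map.
// record() is called once per cross-service call observed in the spans;
// outgoingProbabilities() gives the Markov-chain-style edge labels.
public class EdgeCounter {
    // caller component -> (callee component -> number of calls observed)
    private final Map<String, Map<String, Long>> edges = new HashMap<>();

    public void record(String callerComponent, String calleeComponent) {
        edges.computeIfAbsent(callerComponent, k -> new HashMap<>())
             .merge(calleeComponent, 1L, Long::sum);
    }

    // For one caller, the fraction of its outgoing calls that
    // went to each callee -- the percentages on the graph's arrows.
    public Map<String, Double> outgoingProbabilities(String caller) {
        Map<String, Long> out = edges.getOrDefault(caller, Map.of());
        long total = out.values().stream().mapToLong(Long::longValue).sum();
        Map<String, Double> probs = new HashMap<>();
        out.forEach((callee, n) -> probs.put(callee, (double) n / total));
        return probs;
    }
}
```

Feeding the resulting map into a DOT file for Graphviz is then a straightforward loop over the entries.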
What I've been doing a little bit of, which I think might be more novel — at least I haven't seen it for service maps — is that instead of this kind of shallow map, where you can see the likelihood of where a call goes out of a service but can't really see the whole path it took, I'm counting complete paths through the system. What I'm doing is basically looking at the tree of calls and sorting it in a specific way to make a canonical version of it. I think there are interesting variations where you could involve the tag names and so on to get richer data, but so far I'm just using component and operation names. So for each complete path that shows up in my aggregation, I make a little hash of it and increment a counter. A side note: this is running on my laptop, and it runs just fine there, because I only need these hash-to-integer mappings — it's not a lot of data at all. The output is also tiny: for every canonicalized path, I store it along with the count associated with it. Feel free to interrupt me with questions, by the way; I realize there's a lot of text and not many illustrations here, so if it's hard to follow, let me know. The idea with these paths is that you can answer questions like: what are the entry points into our services that end up producing a certain call to a service? Why are we getting so many of these? Everyone calls this endpoint, but what's the origin of those calls? Another thing that I think is interesting for something like LightStep or other OpenTracing providers is outlier analysis: if you have an outlier, say in response time, you can see, okay, this thing takes 10 seconds.
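The canonicalize-and-count idea described above might be sketched like this. `CallNode` and all the naming are mine, not from the actual prototype, and for brevity this version keys the counter on the canonical string itself, where a real implementation would hash it to an integer as described.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative canonical-path counter: two call trees that differ only
// in the order of sibling calls map to the same key, so identical paths
// through the system accumulate in one counter.
public class PathCounter {
    public static class CallNode {
        public final String component, operation;
        public final List<CallNode> children = new ArrayList<>();
        public CallNode(String component, String operation) {
            this.component = component;
            this.operation = operation;
        }
    }

    // canonical key -> how often this exact path shape was seen
    private final Map<String, Long> counts = new HashMap<>();

    // Sort children recursively to produce an order-independent form.
    public static String canonicalize(CallNode node) {
        List<String> kids = new ArrayList<>();
        for (CallNode c : node.children)
            kids.add(canonicalize(c));
        kids.sort(String::compareTo);
        return node.component + ":" + node.operation + "(" + String.join(",", kids) + ")";
    }

    public void record(CallNode root) {
        counts.merge(canonicalize(root), 1L, Long::sum);
    }

    public long countFor(CallNode root) {
        return counts.getOrDefault(canonicalize(root), 0L);
    }
}
```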
For an outlier like that, it's interesting to know how common that path through the system is. If it's completely uncommon and very slow, it's a one-off thing you can ignore or deal with, depending; if it's very slow and super common, then you know there's something about the data or your system at that point. Those are very different situations, and the count of the paths tells you which group an outlier belongs in. Oh, right — I have some other graphs, for which I think I need to switch to my browser. Okay, can you see this one? Yeah. Okay, so this is one where we have the component name and also the operation name. One thing you might wonder, looking at this, is that we have a bunch of calls to this teams API, get team. The question is: how did it get here? We can follow this arrow up, so we know it comes from this users API, which came from this one — that's just by following the only arrows pointing in that direction. But here, once we're at this node, we can't say whether the calls that came through here came from the teams API or the conversation service, or whether they entered our system directly. Those are the three paths that could be taken to get here, and in just the diagram view we can't reason about them. But with the paths, I can generate those counts. So now I've highlighted this — I had to do it very manually; I think this would be way better interactively — but now we can see that there are zero calls coming through here, and all of the calls from the conversation service go through this path. I guess all of them either go through here or come in directly from outside. So this is a very early prototype of what I think could be very useful as an interactive diagram. I have a quick question. Yeah.
Have you done anything with breaking out those calls by any sort of metadata — like whether an error occurred, or using the semantic conventions to categorize what percentage of those calls happen for this reason or that reason? I have not, but I think that's a really good question. I think that's somewhere you could surface a lot of interesting data, with errors and tags. If I built this service for real, I think it would be interesting to try to dynamically figure out what the big groups of attributes are that make a difference, and break it down by those. But so far I haven't done it. Oh, thank you. Yeah. What I really wanted to build, but didn't have time for, is something like Google Analytics' user flow diagram — that's a Sankey diagram, if people are familiar with those — which I think is a great visualization for this. Unfortunately, I didn't have time to make that for this presentation. I also think Netflix's Vizceral — this is a screenshot from their tool — could be a cool way to visualize it. It's actually more cool than useful, because it's again one of those shallow mappings, so it doesn't really tell you the story of the path through the system, but it does give you a very nice live view: these dots on the diagram would actually move if this were a live instance. So yeah, that's what I have, and I'm super interested to hear whether any of you have seen this before, or whether there are products like this. That's what I've got. Thank you. Thank you, Johannes. I wanted to mention, because you were discussing standardized wire protocols and data formats: we're not doing that work directly in OpenTracing, because that project right now is trying to stick to the language API level, but it is going on, with many of the same people, through the W3C.
I just posted a link in the chat to the W3C Trace Context working group. We actually met last week in Lyon, France, for the big W3C meeting. That work has two parts. One part that's nearly at a v1 is the wire protocol for in-band context propagation — that's what you would be injecting into and extracting from carriers in OpenTracing. It's a set of standardized HTTP headers for carrying tracing metadata. That work is interesting also because it's focused not just on standardizing for individual tracers but on interop. So if you had a trace that actually went through several tracing systems — the canonical example is an infrastructure provider that has tracing data, plus your own tracing system running in your application — you'd like to link that information together. Say you're running New Relic or LightStep in your app, and you're running on top of Microsoft Azure or Google Cloud and using one of their services. They're not going to run LightStep or New Relic on Spanner, but perhaps they could give you some kind of interesting trace data out of Spanner or a similar service. So there's that going on. And more relevant to the service map discussion: we're thinking about trying to standardize an export format so that you could do black-box tracing like this. If you had a standard wire protocol for trace data propagated in-band, and then a standard way of attaching to a process and getting a standard export format — or a push model like syslog — everyone could consume it without having to know which particular monitoring service was running inside the app. I think that's something you would probably have liked to have had when you were running this experiment. Definitely. That's super cool. Yeah. I think this is interesting.
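For reference, the in-band header that working group was drafting, `traceparent`, has the shape version-traceid-parentid-flags (2, 32, 16, and 2 lowercase hex characters respectively). A minimal parser sketch — the class and field names are mine, and the example value is the one from the spec's own examples:

```java
// Minimal sketch of parsing a W3C Trace Context `traceparent` header:
//   version "-" trace-id "-" parent-id "-" trace-flags
public class TraceParent {
    public final String version, traceId, parentId, flags;

    private TraceParent(String v, String t, String p, String f) {
        version = v; traceId = t; parentId = p; flags = f;
    }

    public static TraceParent parse(String header) {
        String[] parts = header.split("-");
        // Basic shape check: four fields, 128-bit trace id, 64-bit parent id.
        if (parts.length != 4 || parts[1].length() != 32 || parts[2].length() != 16)
            throw new IllegalArgumentException("malformed traceparent: " + header);
        return new TraceParent(parts[0], parts[1], parts[2], parts[3]);
    }
}
```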
I've certainly seen service maps, but they tend to be service-level, and I like the idea of digging down into path history. I can see this being interesting for outlier analysis, and the percentages you're showing could be very interesting too — if you're trying to figure out generally where the traffic is coming from, getting a sense of the various flows that are generating something. It seemed like you were finding a particular point of interest and then working back from it to see the various things that were causing it. Yeah, correct. Yeah. I can share some experience with this. We implemented deep service maps years ago — I actually showed them at the last KubeCon. One thing about them is that the way you show them works fine if your architecture is small. As soon as you go to large scale — at Uber we have three-something-thousand services — those maps, if you render the whole architecture, become completely useless and unusable. So our service maps were instead done from the point of view of service developers: you pick a service first, and then you visualize all the paths going through that service, rather than the whole architecture, which is just too huge. And I was curious to see the Sankey diagram — we also experimented with Sankey diagrams, and we just couldn't make them work at any scale. They're kind of interesting if you have maybe 10 services, but anything more than that becomes unwieldy. Yeah. I didn't have anything to show, but I definitely already saw with what I was trying that it becomes very long, basically, if you show paths through it. For the tiny amount of data I had, maybe it would be possible. Yeah, but they were very useful.
We had a lot of people internally at Uber asking for them and using them to understand the dependencies. Do you have a link to that talk, by the way? I'd be very interested to see it. Yeah, I can post it in the chat. Thank you. It's from Bill Westlin, who also worked on that project. I don't have a link for the KubeCon talk, but the one I linked is even better, because he actually showed a live demo of it — I just had screenshots. And it's the same thing? Yeah, what Bill showed is the live version of that thing. Very cool. Thank you. It has live filtering, like you mentioned, with highlighting the service and so on. It's really interesting. Yeah. But I can see your approach, Johannes: you don't necessarily need the whole service map. If you're starting from something you're interested in and working backwards, seeing a map of everything in your system isn't very relevant — you're specifically interested in the things that led to that particular interesting moment. Yeah. That's probably one of the directions we're going to take with the next iteration of the dependency maps: restricting it more severely, because for larger — more popular, or more commonly used — services it becomes pretty difficult to use. So we're probably going to limit it to a single service being on all paths, a kind of focal service, with a number of hops up and down from that service, and then allow the user to progressively expand or collapse. Sweet. Yeah, it's really interesting stuff. Any other final questions? Otherwise, we're going to switch gears to Java agents. Great. That was an awesome presentation. Thank you so much, Johannes, for putting that together. Please let us know if you publish that work in any way. Will do. Thanks. Okay. So next up, we have some fun work we've been doing at LightStep.
Seva Safris lives in Thailand, and he's been working with us on Java agent work. Rather than a purely black-box agent approach, we've been trying to build one that leverages existing OpenTracing instrumentation — somewhat similar to the existing Java agent in contrib — and we think we came up with a kind of interesting way of putting it together. Seva is on the call today, and he's going to lead us through an overview of the project. It's just about at the point where it would be great for other people to get involved, so this is sort of a welcome-to-the-Java-Special-Agent talk. All right. Can you guys hear me? Yep. Okay, let me share my screen here. Okay. All right. So hi, everybody. My name is Seva Safris, and I have been working on the Special Agent project with Ted. To give you a jump start into the context: this project is essentially the Java agent, but more. It does everything the agent does, but it also does automatic instrumentation. Let's talk about what that means. First of all, motivation. What is the motivation behind the Special Agent? The essential motivation is, with a single command, to achieve end-to-end tracing — to be able to instrument all RPC libraries and runtime frameworks in every service. Manual installation and configuration of plugins is a significant cost barrier, so the idea is to make it easy and efficient to instrument an entire application with the least effort. Since it's possible to automate the process of installing plugins, we're basically trying to create a tool that does exactly that: automatically instrument the third-party libraries in an entire application. This would be helpful for large organizations with large code bases and many services, and especially helpful for teams that do not have access to the source code itself.
All right, high-level goals. The high-level goals of this project were: first, allow any plugin in the OpenTracing Contrib project to be automatically installable in an application — so essentially, with one command, you instrument an entire application, leveraging all of the instrumentation plugins in the OpenTracing Contrib project. Second, install any plugin regardless of which class loader the library is loaded in. This is a pretty big one, because as we all know, in Java you can never be certain which class loader a class is loaded in — and you can never even be sure that the class loader is a regular class loader that has a class path, for instance. Number three: must not destabilize the target application — this is a quality-control goal. Fourth, provide a single-line command to statically or dynamically attach to a target application running on JVM versions 1.7 to the latest. And also, provide a lightweight testing methodology for auto-instrumentation. This is effectively a testing tool that allows a developer to easily test their Byteman script and ensure that the instrumentation plugin is being properly loaded and is resulting in traces via MockTracer. The idea behind this test methodology is that it simulates the exact conditions — the toughest conditions — that would have to be met by any application: namely, that the classes that need to be instrumented are loaded in a class loader that is disconnected from the system class loader. It also initializes the MockTracer and provides an easy reference to it in the test methods. And lastly, the Special Agent needs to be configurable — that part is still in the early brainstorming stages of the project. By the way, if anybody has any questions, please feel free. Okay, so, very high-level architecture. The idea is that the Special Agent is built with all of the instrumentation packages inside of it.
So when you declare the -javaagent VM argument on the command line for an application — when you're statically attaching — that jar has every instrumentation plugin inside of it. Those are the instrumentation packages. The OTA rules files (otarules.btm) are also packaged inside the Special Agent, and they are correlated to the instrumentation packages, so it is known which rule is attached to which instrumentation package. When the Special Agent attaches to the application, it therefore has all the resources it needs to trigger and instrument the third-party libraries. Okay, custom class loaders. This little piece shows how the Special Agent deals with the situation where classes are loaded in class loaders that are not the system class loader — because if it's not the system class loader, it could be really anything. And what is the farthest extreme of any weird class loader? In this picture, that would be custom class loader two: a class loader that is detached from the system class loader and attached only to the boot class loader. The boot class loader is the one class loader you cannot effectively detach yourself from; any class loader that is ever created has the boot class loader as a super-parent. The contrast in this example is custom class loader one, whose parent is the system class loader, which in turn is connected to the boot class loader. Okay, so how does the Special Agent deal with this situation? The way the Special Agent operates is — okay, a quick overview here, because it's on the next slide — Byteman gets injected into the boot class loader, so that any bytecode that references Byteman, from any class loader, is able to resolve it, because it exists in the boot class loader. Okay, right.
And then the next slide — wait, that's not where I wanted to go. Okay, weird. Okay. So the way the Special Agent is able to operate with custom class loaders is that it fully leverages the otarules.btm files. What do these files represent? They represent the trigger point into the third-party library for instrumentation — that is to say, if a Byteman rule gets triggered, then that library exists in the system, and from that point we can extract some information. Number one is something done beforehand, before the trigger event: associate the instrumentation package with each otarules.btm file. That links the implementation of the instrumentation plugin to the Byteman rules file. Number two: when the rule triggers, the Special Agent uses the object on which the rule was triggered to determine the target class loader. So now we know which class loader the class for that object is loaded in. And then, using some very interesting patterns, the Special Agent loads the bytecode of the instrumentation classes into the target class loader directly. The way it does this is it actually uses Byteman itself to override the ClassLoader.findClass method, and it also exposes the defineClass method. Effectively, the code forces the class loader to be able to resolve any of the classes in the instrumentation plugin, and upon resolution, the bytecode for those classes is injected directly into the class loader. Okay, let's see here. I don't know how this got messed up. Okay. If anybody has any questions about the class loaders, feel free. I skipped a lot of complexity, but that's basically the idea. What's great about this is that the solution works for any class loader, because it operates on the java.lang.ClassLoader class itself. Okay. So, installation of the Special Agent.
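To make the trigger idea above concrete, a Byteman rule of the kind described might look roughly like the following. This is an invented example, not an actual rule shipped with the Special Agent — the class, method, and action are all hypothetical; the point is only the shape: when the rule fires, the agent learns that the library is present and gets hold of the triggering object (`$0`), whose class loader becomes the injection target.

```
RULE example okhttp trigger
CLASS okhttp3.OkHttpClient
METHOD newCall
AT ENTRY
IF TRUE
DO traceln("okhttp detected in " + $0.getClass().getClassLoader())
ENDRULE
```

In the real agent, the `DO` action would call back into the Special Agent rather than just printing, so that the corresponding instrumentation plugin's bytecode can be injected into that class loader.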
The Special Agent is a Maven project, and it builds two artifacts: the main artifact and the test artifact. These two jars are the only jars necessary to use the Special Agent, either for instrumentation or for testing instrumentation plugins. In this slide, the Special Agent main jar has all of the instrumentation plugins inside of it, and when it is passed to the -javaagent VM argument for static attach, or run as a main jar with the PID of a process for dynamic attach, it is able to attach and instrument the full scope of all of the OpenTracing Contrib instrumentation plugins. The Special Agent test jar is intended for testing, and it contains a class called AgentRunner. AgentRunner is a JUnit runner, and you use it when constructing your test class to simulate the steps I described earlier. AgentRunner is effectively all you need for testing your plugin; the test jar does not contain any of the instrumentation plugins, and it also supports the -javaagent VM argument and standalone execution. Okay. So this is the main usage of the Special Agent — just an example of how to use the -javaagent VM argument, or the dynamic attach to a running application, on the bottom here. Okay. So, test usage — and this is actually some pretty cool stuff, because what AgentRunner is able to do is simulate these very obtuse conditions, with a detached class loader, and let a developer test their instrumentation plugin and otarules.btm file under those conditions. How do you do that? You just follow regular vanilla JUnit patterns. You use the @RunWith annotation.
You create your test method with the @Test annotation, you declare the MockTracer as a parameter, and that's it. When you run this, what happens is that the test runtime is forked — it has to be forked for the Java agent to statically bind — and then it loads all of the code inside the test method into a class loader that is detached from the system class loader. And that's it. Whatever otarules.btm file you have on the class path, and whatever instrumentation classes it points to, then just do their thing; you have your reference to the tracer, and you can check that things are working properly. So AgentRunner supports full vanilla JUnit patterns, and it runs both in the Surefire plugin and in IDEs. Finally, how do you include AgentRunner in your dependencies? It's basically the same dependency artifact descriptor as you would use for the regular Special Agent, but you add the test jar as the type, and you're done. So, next steps. One of the more serious hurdles we're trying to get past is the fact that many instrumentation plugins use the inheritance pattern to instrument third-party classes, and this pattern is unfortunately very difficult to implement with Byteman triggering, because Byteman does not support triggering on new. This is something we're still trying to figure out. Number two is to add OTA rules for all current Java instrumentation plugins. And finally, to support configuring the instrumentation. Here's the link, which I will paste into the chat for all of you. Okay? Okay. Yeah, any questions? So the main interest here — well, I think there are two things I'm interested in. One is support for different versions of Java, including new versions of Java.
And the other is that the emphasis on testing is due to the trickiness of writing these Byteman rule files, basically. Yeah, I was just going to say — it looks like a big improvement on the testing from the current agent, and also on the class loading issue; if I understand it correctly, it seems like it could solve a problem the current Java agent has. But I'm just wondering, from a project perspective, I'm not sure it's a good idea to have both the existing Java agent and this Special Agent. Would it be better to collapse them into one project, or deprecate the existing one once this handles the instrumentation rules the current one handles? Yeah, I would agree with that, Gary. When we were experimenting and rewriting this, it was different enough from the current agent that we felt it would be better to get farther along and then make a proposal, rather than mutating that code base in case other people were using it. I'm not sure what the current usage of the current Java agent is. But yeah, I agree — if y'all think this is an interesting approach and an improvement on the existing one, then we should merge them. Yeah, and I think it's better at least to have some indication of what's going to happen. The only thing is, I had a look at the Special Agent readme, and it was saying one of the non-goals was supporting custom rules, which is something the current agent does support. If somebody is instrumenting their services currently with the existing instrumentations in OpenTracing Contrib, they can use those instrumentations, but they can also get hold of the tracer and add their own spans. So I don't think it's any different when you have rules, or automatically installed instrumentations.
I think users will still want to be able to, you know, add their own custom rules for their own logic. So if I can jump in really quick. The trick with SpecialAgent is to be able to dynamically inject the classes of the instrumentation plugin directly into the class loader. Right now, the way that it is designed, it requires these classes to be pre-packaged inside of the SpecialAgent, right? So this is not the way that the current Java agent operates. The current Java agent requires the plugin to just be added to the class path of the application for static attach. But if the custom rules are only instrumenting the application's own code, then it doesn't need any additional classes to be packaged in the agent. Well, except the actual instrumentation implementation, right? The actual plugin. Yeah. Gary, I think the reason why it was a non-goal for this project is not that it's not interesting, but there's a distinction here. There's being able to safely install plugins that target, you know, third-party software, which has a lot of its own trickiness around making sure you're targeting the right version. For example, some of the interesting edge cases here are, like, you know, you might have instrumentation for multiple versions of JDBC or something. They're slightly different, and you want to make sure you're installing the right one. So safety's an important issue. And then when it comes to dynamically instrumenting application code, it just seems like the techniques you might use for doing that are a bit different. And so it seemed like you could use this for installing, you know, pre-existing instrumentation for pre-existing libraries, and potentially be using different additional techniques if you were looking to target your application code or do something dynamic there, since there are a couple of different approaches people seem to want for instrumenting their application code.
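The version-targeting concern above (e.g., slightly different instrumentation for multiple versions of JDBC) essentially comes down to fingerprinting what is actually present on the class path before installing a rule. As a rough, hypothetical sketch of that idea only — this is not SpecialAgent's real mechanism, and the probe below just checks for a class and method by name:

```java
import java.lang.reflect.Method;

public class CompatibilityProbe {
    // Returns true if the named class exists in the given loader and
    // exposes a public method with the given name, without initializing it.
    static boolean isCompatible(String className, String methodName, ClassLoader loader) {
        try {
            Class<?> cls = Class.forName(className, false, loader);
            for (Method m : cls.getMethods())
                if (m.getName().equals(methodName))
                    return true;
            return false;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ClassLoader cl = CompatibilityProbe.class.getClassLoader();
        // java.sql.Connection.prepareStatement exists in every JDBC version.
        System.out.println(isCompatible("java.sql.Connection", "prepareStatement", cl));
        // A class that is not on the class path fails the probe.
        System.out.println(isCompatible("com.example.Missing", "execute", cl));
    }
}
```

A real agent would do something more robust (checking bytecode signatures, version metadata, and so on), but a probe like this is the cheapest way to avoid installing a plugin against the wrong library version.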
One is something similar to writing these Byteman rules, where you just don't want to modify the source code, but you do know ahead of time what you'd like to target. And then there are other approaches along the lines of, you know, while the application is running, just give me some insights around different things that it's doing. So it just seemed like the approach to that would be additional work you do on top of this. And potentially you might want them all à la carte as well. Well, I'm not sure if there are extra safety issues there, but it just seemed like things that could be done à la carte, potentially. So that's the only reason why we put that sort of limited scope there, but it could be wrong. It might be rather similar. So, Gary, actually, I totally get what you were saying. And actually, this is already implemented in the test version, in the AgentRunner. The way that the AgentRunner works is that it does not have any pre-packaged plugins inside of it, but it is still able to do the instrumentation with Byteman, executing the rules, but for the specific plugin that you are testing. And so effectively, this is exactly the use case that you're talking about, but not for testing. You're talking about actually using it for instrumenting an application with custom OTA rules files. So it's not far off at all. I mean, I just feel like some code would have to be moved from test to main, and that's it. Okay. Yeah, I think it'd be good to remove that non-goal, possibly, just to make sure it doesn't sound misleading, that this isn't intended as a potential future benefit. But yeah. Yeah, that's fair. Right now, the big blocker really is Byteman not supporting new, which sounds like something they're interested in supporting.
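For reference, a custom rule file of the kind being discussed here looks roughly like the following in Byteman's rule language. The class and method names are hypothetical, and traceln is one of Byteman's built-in helper methods:

```
RULE trace MyService.handle entry
CLASS com.example.MyService
METHOD handle
AT ENTRY
IF TRUE
DO traceln("entering MyService.handle")
ENDRULE
```

A user instrumenting their own application code would point a rule like this at their own classes and, in the tracing case, call out to helper methods that start and finish spans instead of just printing.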
So I think a concrete next step is to work with that group to see if we can get that over the finish line, because that will extend the reach of this to include most of the plugins we're looking at in OT Contrib right now. So yeah, check this out if you're interested in Java agents and Java instrumentation in this manner. What I think is interesting about it, though, is that there are APMs out there that do these kinds of things, but if you're using a tracing system with OpenTracing, the tracer itself doesn't necessarily need to be doing this style of stuff. So you could be taking Jaeger or something like that and then using this to install things on top of OpenTracing. And I don't know, I found that to be a little novel compared to having the tracer you were installing under OpenTracing also be doing a bunch of these things kind of behind the scenes, which is how a lot of existing APMs work. So if you're interested in playing around with Byteman and dynamic code injection, please check the project out. It's getting to that stage where it'd be fun to have users and other people playing with it. And on that note, we're over time, it's 9:35. Does anyone have any final questions on this? I suggest maybe going to Gitter, the OpenTracing public channel, and continuing the discussion there. And in the meantime, it's been lovely seeing all of your faces, and I'll see you all again next month. Yeah. And thank you again, Johannes, for your service map talk. Yeah. Thanks, Johannes. Thanks. Thank you, guys. Cheers. Cheers.