Alright, welcome to my short rant on debugging OpenTelemetry, by me, begobitetsuo, on the internet. So what's the deal? New users get started by installing OpenTelemetry in their app, right? This is where everyone starts. And we're always looking for ways to make it easier for people to install the OpenTelemetry clients and all of that stuff: improving the docs, various kinds of automation, yada yada. And when all of this works, when the installation works properly, it's great. It's super smooth.

But what happens when the installation doesn't work? Sadness. Sadness is what occurs. Suddenly, brand new users who don't know much about this are thrust into a position where they have to debug a tool they literally know nothing about. So once metrics and logs are stable, I really suggest that debugging is an area we put some of our focus on.

In this lightning talk, I just want to briefly cover the ways new users may have to debug their OpenTelemetry installation, starting with: is the exporter connecting to anything? This is usually the first place people look. It might be misconfigured, and this is where you can find connection errors. OTEL_LOG_LEVEL=debug is your friend. That's pretty simple.

Next up is buffering, confusing the heck out of everybody, right? Because the default configuration is meant for production, which means your data is going to buffer for some time before it gets flushed. That means you might not immediately see data if you're just running your app in developer mode, clicking on things a couple of times, going "is it working? I can't see what's going on." It's just because the data's been buffered. This is something that confuses new users.

And last, and definitely not least: is the data correct? Is the data coming out of your application actually correct?
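To make those first two checks concrete, here is a minimal sketch of a dev-mode setup using SDK environment variables. OTEL_TRACES_EXPORTER and OTEL_BSP_SCHEDULE_DELAY come from the OpenTelemetry SDK environment-variable spec; OTEL_LOG_LEVEL support varies by language SDK, so treat the exact knobs as assumptions to verify against your SDK's docs.

```shell
# Hedged sketch: dev-mode settings that sidestep both issues above.
export OTEL_LOG_LEVEL=debug          # verbose client logging (support varies by SDK)
export OTEL_TRACES_EXPORTER=console  # print spans locally instead of shipping them
export OTEL_BSP_SCHEDULE_DELAY=500   # flush the batch span processor every 500 ms
                                     # (the default is 5000 ms, hence the wait)
```

The console exporter means you see spans even when nothing is listening on the other end, and the short batch delay means clicking around in dev mode shows data almost immediately instead of after the production-sized buffer flushes.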
And this is the part I actually want to focus on for the rest of the talk, because debugging these data issues is terrible. Instrumentation might be missing. Context propagation might be broken. And the thing is, when you have a connection failure, you have an error, right? So there's a breadcrumb trail for people to follow and a straightforward solution to fix it. But when instrumentation is missing or context is broken, you get nothing. Just nothing. No errors, maybe some weird data, maybe not.

And this is really terrible, because new users have no idea what correct data is even supposed to look like. You know, if you turn it on, get it connected, and then see one span, you're like: is that right? Is this what I'm supposed to get? You don't know if you're a new user, especially if you're new to tracing. You don't really know what to expect.

So how can we make this situation better for people? My suggestion is that we have to find some way to set expectations. Because when users test their instrumentation by running their app and clicking on things, OpenTelemetry doesn't know what they're trying to do. It doesn't know what kind of data it should be expecting. But if there were a way for the user to describe the transaction they're triggering, then we could create expectations, which would allow us to test the data actually coming into a collector.

So what would this kind of testing look like? I don't know. But I think we should research it, and look at creating some kind of testing language that's simple enough, and free enough of OpenTelemetry technical jargon, that inexperienced new users could actually leverage it to debug their installation experience. Just to give a quick off-the-cuff example, if you were writing this like a test, you might write something like this: you should expect a transaction.
That transaction should be coming from this particular service, and it should contain the following components. And I can write this down because I know my app: I know what my app is called, and I know the pieces that should be involved in the transaction I'm trying to trigger. If you were writing this in YAML, say, so that you could use it to configure a collector component, maybe it looks like this. I don't really know, but something like that would be really helpful for new users.

And if we had something like that, we might even be able to go another step and create, say, a web UI for building these expectations, something that would walk new users through describing the transaction they're trying to trigger. A potential source of this information is their package manifest. In most languages, there's some file you can grab that will tell you which libraries are installed in the application. You could then match those libraries against the instrumentation we have available and start building expectations with the user about what kind of data should come out of their app when they click on something. And if we could do this, I think new users would be able to get their heads wrapped around the kind of telemetry they should expect from their application.

So this is just one idea. I think there are a lot of different ways to attack this, and a lot of ways in general to combine testing and telemetry. I think this is interesting, an interesting place for us to go once we've got all the table stakes nailed down. So if you also find this interesting, hit me up on Slack. Thank you very much.
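To make the off-the-cuff test concrete, an expectations file for an imagined collector component might read something like the sketch below. To be clear, this is entirely hypothetical: no such component exists today, and every field name here is invented for illustration.

```yaml
# Hypothetical "expectations" config for an imagined collector
# component -- all names here are invented for illustration.
expectations:
  - transaction: checkout
    from_service: storefront
    contains:
      - service: storefront
        span: "POST /checkout"
      - service: payments
        span: "charge card"
      - library: postgres   # client span from the database instrumentation
timeout: 30s
on_failure: report_missing_components
```

The point is less the exact schema than the register: service names and library names the user already knows, with no spans-vs-span-links jargon in sight.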
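The package-manifest idea can be sketched in a few lines of Python: read which libraries an app depends on, and match them against instrumentation we know about. The manifest text and the instrumentation table below are made up for illustration; a real tool would read the app's actual manifest and a real instrumentation registry.

```python
# Sketch: derive telemetry expectations from a package manifest.
# The table and manifest contents are illustrative, not a real registry.
KNOWN_INSTRUMENTATION = {
    "flask": "HTTP server spans for each handled request",
    "requests": "HTTP client spans for outgoing calls",
    "psycopg2": "database client spans for each query",
}

def expected_telemetry(manifest_text: str) -> dict:
    """Map each recognized dependency to the telemetry it should emit."""
    deps = set()
    for line in manifest_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            # take the bare package name, dropping any version pin
            name = line.split("==")[0].split(">=")[0].strip().lower()
            deps.add(name)
    return {d: KNOWN_INSTRUMENTATION[d] for d in deps if d in KNOWN_INSTRUMENTATION}

manifest = """\
flask==2.3.0
requests>=2.28
numpy==1.26.0
"""
for lib, expectation in sorted(expected_telemetry(manifest).items()):
    print(f"{lib}: expect {expectation}")
```

Here numpy is silently skipped (no instrumentation for it), while flask and requests produce concrete, human-readable expectations to walk the user through.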