All righty, all righty. Hello, everybody. We are from the OpenTelemetry project, and we're going to give a brief talk today about a lesser-discussed form of observability, which is observing open source software, specifically libraries. My name is Ted, tedsuo on the Internet. I'm one of the co-founders of OpenTelemetry. And this is? I'm Trask Stalnaker, software engineer at Microsoft and a maintainer of the OpenTelemetry Java instrumentation.

Yes, and instrumentation is what we want to talk about here. So what is the whole goal? The goal is observing our applications, right? We run applications, we run services in production. However, these applications are built out of libraries. It's third-party open source software libraries that do most of the heavy lifting in our applications. So when we say we want to observe our applications, what we really mean is we want to observe our libraries.

However, there's a problem, which is that most libraries don't provide any observability directly. They don't have any instrumentation built into them. Instead, what happens is people like Trask here write their own instrumentation and then inject that instrumentation into the library through something like a Java agent. This does work. This is the way we have been doing things since before time was time. But it does create some problems, which have only been getting worse with the recent acceleration of software development.

The first problem is that there is too much software. When instrumenting Rails was enough to give you observability across basically all of Ruby, it was fine. But now there's a new JavaScript framework getting minted every single week. There is too much software in the world. So the idea that you can have a central repository, maintained by a small group of people, who are going to write and keep up to date all of the software instrumentation in the universe is not realistic. It is not realistic to centralize this effort.
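To make the agent idea concrete, here's a minimal sketch of external instrumentation in plain Java. It uses a dynamic proxy instead of real bytecode rewriting, and the `HttpClient` interface and timing format are invented for illustration; an actual Java agent intercepts library classes at load time, but the shape of the problem is the same: the timing code lives outside the library it observes.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyInstrumentation {
    // Stand-in for a library interface we do not control and cannot edit.
    interface HttpClient {
        String get(String url);
    }

    // Wraps any HttpClient and records timing for every call, without
    // touching the library's source. A Java agent achieves the same kind
    // of interception via bytecode rewriting at class-load time.
    static HttpClient instrument(HttpClient delegate, List<String> spans) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                return method.invoke(delegate, args);
            } finally {
                long elapsedMicros = (System.nanoTime() - start) / 1_000;
                spans.add(method.getName() + " took " + elapsedMicros + "us");
            }
        };
        return (HttpClient) Proxy.newProxyInstance(
                HttpClient.class.getClassLoader(),
                new Class<?>[] {HttpClient.class},
                handler);
    }

    public static void main(String[] args) {
        List<String> spans = new ArrayList<>();
        HttpClient real = url -> "200 OK from " + url; // fake library impl
        HttpClient traced = instrument(real, spans);

        System.out.println(traced.get("https://example.com"));
        System.out.println(spans.get(0));
    }
}
```

The catch, as the talk points out, is that this wrapper has to be written and maintained by someone other than the library's author, and it breaks whenever the library's internals change.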
It's also not realistic that we, the OpenTelemetry maintainers, would be incredible experts at every single software library out there that we need to provide instrumentation for. If we happen to be users of a library, that's great; we might have some insight into what specifically you would want out of that library. But we're not going to be users of every piece of software that everyone else wants to use. So it's just not realistic that we would be the experts at individual libraries. We're experts at observability, but not at libraries.

Who are the experts? Well, the authors of said libraries. The library author, presumably, hopefully, would know what's important about their library. They would know where the best place in their library is to put that instrumentation. They would also know what instrumentation is important: what is it that you really want to know about the software when you're running it?

Let me give you some examples. Your library may have various tuning parameters: queue size, timeouts, retries, caching, things that help your library perform at its best. But users don't know what to put into those parameters, and they come and ask you: how should I set this tuning parameter? And of course, it depends. And it depends on the most complex thing possible, which is how they are using it in production, in ways that they may not even realize. Metrics can help guide your users to configuring these settings.

If you as a library author maintain your instrumentation inside of your library, you can write and maintain your own playbooks or recipes for your users to follow. How to tune these parameters is a great playbook. Which alerts are useful. And, my favorite, troubleshooting guidance for common issues, driven by the telemetry that your own library is now capturing. These playbooks and recipes make supporting users easier.
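As a sketch of what native instrumentation of a tuning parameter could look like, here's a hypothetical library-internal work queue that reports its own utilization and rejection count. This is illustrative plain Java, not OpenTelemetry's metrics API; the class and method names are invented. The point is that sustained utilization near 1.0, or a growing rejected count, is exactly the signal a playbook would tell users to check before raising their queue-size setting.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical library-internal work queue that exposes its own health
// signals, so users can tell whether the queue-size tuning parameter
// fits their actual production workload.
public class InstrumentedQueue<T> {
    private final BlockingQueue<T> queue;
    private final int capacity;
    private long rejectedCount = 0; // failed offers: a natural alerting signal

    public InstrumentedQueue(int capacity) {
        this.capacity = capacity;
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    public boolean offer(T item) {
        boolean accepted = queue.offer(item);
        if (!accepted) {
            rejectedCount++; // queue full: work is being dropped
        }
        return accepted;
    }

    public T poll() {
        return queue.poll();
    }

    // In a real library these two would be registered as gauges/counters
    // with whatever metrics API the library instruments against.
    public double utilization() {
        return (double) queue.size() / capacity;
    }

    public long rejected() {
        return rejectedCount;
    }
}
```

Because the author placed these measurements at the one spot they know matters, the playbook can say something concrete like "if utilization stays near 1.0 and rejected is climbing, increase the queue size or add consumers."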
Think issue backlogs: the number of open issues with common questions from users, questions that could be answered by a playbook where they go and look at the telemetry you have exposed from your library, see what is happening, and solve their own issues and questions.

Yeah, so when we founded OpenTelemetry, we saw providing support for library authors to do this, to maintain their own instrumentation, to have a direct conversation with their users about how they're using their applications in production, as a goal. There isn't time in this lightning talk to get into why it's tricky to provide this and why people haven't done it in the past, but trust me, it is tricky, and trust me, we've figured it out. We're really hoping to see more of this in the future.

Right now, you know, if you went to look at a library and you saw that library didn't have any tests, you would be skeptical about wanting to use that library in production. In the future we'd like to say the same thing for observability. We'd love it to become a best practice that when you're writing software, any software, including libraries, especially libraries, you are thinking about runtime observability, performance, error reporting, all of these things.

So that's our talk. Thank you very much. If you have questions, comments, or want to know more about this, please come by the OpenTelemetry Observatory. We aren't in the project pavilion; we actually have our own kind of special booth, but you will see it out on the showroom floor. So come meet us there. Thank you very much.