Hi everyone, good morning. It's still morning, and I think folks are still getting seated, but let's get started, because we'd like to save plenty of time for your questions. I'm Alolita Sharma, one of the governance committee members on the OpenTelemetry project. I've been involved with OpenTelemetry for more than six years now and have really seen an amazing community effort on the project. I'm also a co-chair of TAG Observability. Today, along with my fellow GC members, who will introduce themselves, we'll be talking about OpenTelemetry: giving you an update on where the project is, sharing some exciting news, and covering some feature and community updates. With that said, Ted, do you want to introduce yourself?

Sure. Hi, my name is Ted Young. I'm one of the co-founders of the OpenTelemetry project, and I currently work at the artist formerly known as Lightstep.

I'm Daniel Dyla. I work at Dynatrace, I'm also on the governance committee, and I maintain the OpenTelemetry JavaScript client.

My name is Morgan McLean. I'm also on the governance committee and one of the co-founders of the project. I work at Splunk, and I guess I'll be able to make the same joke as Ted next year. You can see a pattern happening here.

With that said, I want to ask everyone a question first. How many of you use OpenTelemetry? A lot. And how many of you contribute to OpenTelemetry? Quite a few people too. So all I would say is that OpenTelemetry is everywhere: there are a lot of users across the world. These are some of the major contributors to, and users of, OpenTelemetry, and we'll go through some really amazing numbers as we proceed.

In this slide, what we're trying to highlight is the top 20 contributors to the project. As you can see, there's a diversity of companies and organizations involved, from key vendors who are building observability and collection solutions, to key users such as Atlassian, Red Hat, and Shopify, who have come and talked to the project about their use cases or have written posts for the project blog. And I invite all of you: if you're using OpenTelemetry in any way, come and talk to us. Come and tell us your story. We're happy to publish it on the project blog, along with the issues you're facing, the features you've found useful, or the improvements you'd like to see. So we really welcome everybody to get more involved with the project.

And this is an amazing number; you've got to relish this, and we certainly do. We started in 2019, and you can see when OpenTracing and OpenCensus were merged. Today that linear growth is continuing at an amazing pace: we have almost 1,100 active contributors to the project on a monthly basis, which is pretty phenomenal, and almost 1,000 contributions per month, shown in the blue bars. That number really is phenomenal, because 1,100 active contributors a month means a really, really large project with a lot of activity happening. And here, on the second slide, is the comparison against the most popular project in the CNCF, which is Kubernetes.
You can see that Kubernetes has about 2,500 active contributors, and Kubernetes is a massively large project, right? So OpenTelemetry is about half the size of Kubernetes, which is pretty phenomenal: for a single observability project to have that level of activity, contribution, and participation from the industry and the larger community is pretty amazing. So with that said, we've got some great news to share.

All right. I am so happy to be on stage this year to finally, finally be able to announce that OpenTelemetry is officially GA. Yes. For those who have not been following along, what GA means for us is that we've completed our original charter of achieving stability in the three primary signals: tracing, metrics, and logs. Now, this is awesome, and I should point out that we've actually had many of these capabilities for quite some time. We've been GA for tracing and metrics for a while, and even for logs, people have been using them for quite some time. But unlike a lot of open source projects that like to declare things 1.0 or stable while they're still in a somewhat vaporous stage, we really strive to be a responsible standards body at OpenTelemetry and not declare things 1.0 or stable until they truly are stable for the long run. We aren't big fans of 2.0s for anything that our end users have to leverage heavily and directly throughout their code base. Which is really exciting.

Also, an important reminder: even though we're talking about traces, metrics, and logs and their stability, we are not talking about three totally siloed, separate things. That is not the OpenTelemetry vision; that is very old school. What we're talking about is traces, metrics, and logs designed to be fully integrated with each other into a coherent graph of data that can be walked and analyzed by machines as well as humans. That is the happy vision of the future that we hope OpenTelemetry will power. With all of that said, let's dive into the details of what has actually been changing, with Morgan over here.

Yes. As we mentioned, traces went GA roughly two or three years ago. Metrics went GA last year; that was the big announcement at KubeCon in Detroit. The big difference now is that logs have gone stable. We show this slide every time we're at KubeCon, whether in Europe or North America. Yes, there have been additions over the last several months to trace and metric stability in various languages and components, but those were already effectively there for most use cases. The big change here is logs. Not only are the logging spec, the logging protocol, and the critical high-level parts of logging stable, but we have actually, I think faster than most people expected, achieved stable implementations of logging, both in the collector and across various languages. Pretty amazing. This is actually more of an announcement than I think we were expecting to make even a few months ago, which is fantastic.

This is very, very substantial for our end users. Tracing was a huge deal in OpenTelemetry; there was nothing quite like it in the market when it launched, and that's why, as you saw in the earlier slides, the project saw such rapid uptake and such rapid growth.
Metrics has also accelerated OpenTelemetry's growth quite substantially, I think even more than any of us would have expected last year. I think logs will be another major step function for OpenTelemetry, because yes, there are other ways to capture logs, and there have been many for decades in this industry. But the fact that these three really critical types of telemetry can now be captured through the same agent, the same configuration, and the same set of SDKs means that you don't need to rely on OpenTelemetry plus another thing, unless you want to. This is a very, very big deal for OpenTelemetry. It also means you can do all of your processing in one place.

I'm going to dive a little bit more into logs. In the spirit of this conference, I've gone to a lot of talks over the last few days, and it seems like using an AI image generator on your slides has become a necessity to make people laugh. We asked OpenAI's image generator to show us some telescopes and logs together, and it gave us whatever this is; the more you stare at it and think about it, the more questions it raises. Credit to Microsoft: we used the Bing image generator, asked for a lumberjack with some logs and a telescope, and it gave us this, which is actually pretty good. So that's the one we're going with. There's your required AI image.

But when we talk about logging in OpenTelemetry, I think it's important to add a bit of context. OpenTelemetry, as I mentioned before, was the first truly independent trace collection system around. There were open source ones, but they were tied to Zipkin or Jaeger, and there were various closed source ones tied to particular backends, so OpenTelemetry was somewhat new and novel for tracing. Then we added metrics support last year; there are other metric collection systems around, but I think the openness and flexibility of OpenTelemetry has been very powerful. When we discussed adding logs to the project in 2020, it raised a number of questions, because there are many ways of capturing logs today. So there were certain things we wanted to focus on if we were going to add logs to OpenTelemetry: problems that hadn't been solved elsewhere in the industry, or that people still struggle with in logging today.

Two classes of problems came up. The first is that logs are challenging because they're unstructured, and the metadata you get on a log from one system might be dramatically different from the metadata from another. That's true for complex, per-interaction metadata, but even basic things like timestamps are often structured differently and need to be reprocessed, which is very annoying and very expensive to fix. The second is that logging agents can use a fair amount of resources to capture logs: they're parsing human-readable text at scale, which is computationally expensive.

To address both of these, OpenTelemetry actually has two different logging paths, and this is what makes OpenTelemetry logging somewhat unique. First, you can use the OpenTelemetry Collector agent to read log files on disk, the way you would capture logs with anything else today. That works, it enforces OpenTelemetry's standard data model, and it ensures your data comes through with the structure you expect from the rest of your OpenTelemetry data. The second way is that you can also capture logs in process, by connecting to an existing logging library, such as Log4j for Java or various other language-specific ones, and capture those logs directly from your application. This gives you the guarantees we already provide in OpenTelemetry around semantic conventions and everything else, and it is also considerably more performant. So when we talk about OpenTelemetry logging, this isn't just adding logs to OpenTelemetry the way you've captured them before. It adds substantial improvements in the ways you can capture logs, guarantees their structure and stability, and offers substantial performance improvements. This is a very big deal for the project and for anyone who wants to use it, and we're very excited about it.
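To make that second, in-process path a little more concrete, here is a minimal sketch of what a logging bridge can look like in Python: records emitted through the standard library logging module are handed to the OpenTelemetry SDK and exported over OTLP. The service name and logger names are invented for the example, and the exact module paths may differ between SDK versions (the Python logs packages were still marked experimental around the time of this talk), so treat this as a sketch rather than the definitive setup.

```python
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

# Set up the OpenTelemetry logs SDK: a provider, a batching processor,
# and an OTLP exporter pointed at a local collector (default endpoint).
# "checkout" is a made-up service name for illustration.
provider = LoggerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))
set_logger_provider(provider)

# The "bridge": a stdlib logging handler that converts LogRecords into
# OpenTelemetry log records, so existing logging calls keep working as-is.
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))

# Application code is unchanged; this line now also flows out over OTLP.
logging.getLogger("checkout.payments").warning("payment retried")
```

The point of the design is visible in the last line: the application keeps calling its existing logging library, and the bridge is the only piece that knows about OpenTelemetry.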
And there are numerous other benefits. You can pre-process your logs inside the collector using the exact same transformation logic that you use for the other kinds of signals, which is a very big deal: it means you can modify your logs just like you would traces or metrics. And of course, it's OpenTelemetry, so you can send those logs, like your metrics and traces, to any destinations you want.

So this 1.0 is very, very exciting. I will say, just as Ted mentioned before, that logs have not been officially stable, but much of the core has been usable for the last year or two, and some of the companies whose logos we showed earlier have actually been using OpenTelemetry logs for about a year and a half. What stability means is that all the features are there, and that the configs and other things are not going to break, which is very, very big. And there's already a lot of code there: the collector already has something like 23 logging receivers, 29 different logging exporters, and 12 different types of log processors. There is a lot of content, a lot of stuff already there; this is ready for you to go use right now. We have logging bridges in all of the languages I showed earlier for direct in-process log capture, and there's more coming for the SDKs as well. And with that, we'll hand off to Dan.

All right, thank you, Morgan. There's one more big announcement we want to make: as of early this week, the HTTP semantic conventions are finally stable. Those that know, know. This has been an ongoing effort for a really long time, and it is the first semantic convention area to officially achieve stability. We are very proud of that, and we're very happy for everybody who helped contribute and make it happen; thank you very much, honestly, from the bottom of our hearts. So the specification is stable, and a couple of languages are approaching stability for it: Java and .NET are particularly far along, Python has a prototype that is expected to merge soon, and other languages are also making good progress.
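To make the HTTP stabilization concrete, here is a small, hypothetical Python sketch that records a server span using the stable HTTP semantic convention attribute names (for example http.request.method and http.response.status_code, which replaced the older http.method and http.status_code). In practice an HTTP instrumentation library sets these for you; the span name and values below are invented purely for illustration.

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.http.server")

# A hand-rolled server span carrying the stable HTTP semantic conventions.
# Real HTTP instrumentations (Flask, Django, ASGI, ...) emit these automatically.
with tracer.start_as_current_span("GET /checkout", kind=trace.SpanKind.SERVER) as span:
    span.set_attribute("http.request.method", "GET")
    span.set_attribute("url.path", "/checkout")
    span.set_attribute("url.scheme", "https")
    span.set_attribute("server.address", "shop.example.com")
    span.set_attribute("http.response.status_code", 200)
```

The value of stabilizing these names is that every language, instrumentation, and backend can rely on the same keys, so dashboards and queries keep working regardless of which SDK produced the span.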
So just because we've completed the original vision, or what many people believe the original vision to be, that does not mean we are done. There's still a lot of work to do, and what I want to talk about is less an official roadmap and more a list of priorities: the things we are really thinking about and want to work on over the coming months and year, and on an ongoing basis. I'd like to split these into two categories.

The first category is maturity and stability. We achieved a lot in the last three years, but we really need to focus on getting things right, polished, and production ready, and on the various things we need to do to prepare ourselves for graduation, which is going to be a big focus of the project leadership over the next months and however long it takes. The other part is increasing our footprint. Just because we have tracing, metrics, and logs does not mean the work is done there either. We need more instrumentations and we need more signals. People are contributing in areas like profiling, auto-instrumentation for languages like Go and C++, and additional semantic conventions; hopefully by next year we will be able to announce significantly more stable semantic conventions. Right now the focus is on messaging and client instrumentation, but we really hope to ramp up the velocity on that soon.

Finally, if any of that seems like a good idea to you, we would love your help, so please reach out to us and get involved. The best entry point is either GitHub or the website, but you can also follow us on social channels like YouTube, where the meetings are recorded and posted and where we sometimes post educational content and that kind of thing. Check out the OpenTelemetry demo if you're not familiar with OpenTelemetry and how the project works; that group has done a great job making it very approachable, it's very easy to set up, and you can get started and play with it very easily.

As some people may already know, we are having a ContribFest session this afternoon. So if you have never contributed to OpenTelemetry or to open source and you're looking for a good way to get started, please come to Room W186 at 2:30 and help us track down some issues and make your first open source contributions. We would love that. Related to that, if you want to know more about logging, I realize I forgot to put it on the slide, but there's a talk about OpenTelemetry logging. Annoyingly, it's also at 2:30 today, so you've got to pick. Yes, we will treat that like you like one of us better.

If you're not interested in contributing code, maybe you're an end user and you don't have time to contribute upstream, we still want to hear from you. Share your experience with the end user working group. That is the only way we know what we should build next; if nobody tells us, we have to guess, and you're not necessarily going to like what we guess. So please tell us what you need. And finally, come visit us at the OpenTelemetry Observatory booth. It's right in the center, very easy to find, with big OpenTelemetry logos; you can't miss it. There are sessions scheduled at that booth all day, meetups and sessions on various topics, so if you go there you can see the agenda and join whatever you want to join.

And I actually forgot to add a plug here. For all end users: as you may know, the CNCF has created a new End User Technical Advisory Board, and I am part of that board. So please feel free to reach out to me; I'm happy to chat with you, help you, and help you prioritize the observability requirements you have. All right, that's it. With that, I think we'd like to open the floor to questions. Yes, we promised a lot of time for questions, so we're here. If you're a maintainer of OpenTelemetry, can you also come up and help answer questions?
Because I'm guessing we'll get some specific ones. Feel free. And if you want to ask questions, there's a microphone there. I don't see a microphone on the other side, which is strange, but you can also raise your hand and we'll run over with the microphone. And don't be shy; ask anything.

Hey, since you said I don't have to be shy: as an end user who is deploying commercial APM solutions, how do you see your roadmap evolving so that we can confidently go and tell our developers, hey, embrace OpenTelemetry, it's going to be the good thing long term, and you're not going to lose anything you're automatically getting in your current tool set?

Sorry, can you repeat that?

Yeah. As an end user of commercial APM solutions who is interested in OpenTelemetry, how do you see your roadmap evolving so that we can confidently tell our internal developers they can adopt OpenTelemetry without worrying about losing some of the automatic instrumentation they are already getting with a commercial APM solution, without gaps? Are we already at that stage, or will it evolve to that stage at some point where we can make that move?

So the question is about interoperating with existing observability vendors?

Not interoperability, but more the maturity of automatic instrumentation, so they don't lose any signals they are already getting today when we make the shift.

Yeah. So maturity of automatic instrumentation is one thing that really needs to improve; it's one of our big priorities for the upcoming year. There are a couple of languages where it's really good, and I know that Java and .NET have really good solutions for that, but we're continuously working on improving it in all languages where we can. Thank you. And if there's a particular language that you are using and not yet seeing full auto-instrumentation for, please reach out to us, because we always try to figure out whether we can pull in more engineering to build a particular area out faster. Yeah, we would love that, actually. Totally. So please reach out. Thank you.

Questions? Raise your hand if you have a question or approach the mic.

Hey, how's it going? First of all, phenomenal job with the release of the log API being fairly stable; fantastic work over the last year and a half. The question I had was around semantics, actually. Specifically, is there any interest in getting geolocation into the current semantic schema? I suspect that's going to go hand in hand with the client instrumentation.

Yeah, so yes, absolutely. As part of getting semantic conventions for clients, geolocation would be part of that. There is a client SIG that is focusing specifically on getting that in. It's a big domain, so it moves slowly, like everything in OpenTelemetry; we really want to get these right, and we've actually felt a little burnt in the past around semantic conventions specifically. Oh, and we've got someone who can answer this a little more specifically.

So I'm also hosting a session at the OpenTelemetry Observatory at 1:30. If you're interested in RUM or client-side instrumentation, please come by and let's chat. Yeah, great. Jason's the Android maintainer.

Maybe one thing to add, in the context of the ECS addition to the OpenTelemetry semantic conventions: ECS already has geolocation specifically as an area and domain, and we are looking at contributing that as well. Cool.
That's awesome, thank you for calling that out. All right, you're up next.

Yeah. Congratulations on stabilizing the semantic conventions for HTTP. I was wondering if you're also tracking when vendors are actually able to ingest those. How do I know that my current vendor is actually able to accept those new semantic conventions? Is that something you're tracking from your end?

So the short answer is no, we as a project do not track that. You have to ask your vendor, and I'm sure they would be very happy to tell you. I don't think tracking the very specific details of how vendors implement OpenTelemetry is something we're really going to get into. We're trying to focus more on providing high quality data so that they can build use cases as they see fit. But no, we don't have any centralized tracking of which vendor supports which use cases.

That said, please feel free to ask your vendor first of all, and most likely your vendor is already working on the project actively, because we see an amazing diversity of observability vendors on the project, and this is very much a collaborative project where everyone is working together. So if you see any gap, please reach out to us; they all interact on the project itself. All right, thank you.

We've got a question over here. Hi. Are you engaging with well-known projects, for example open source logging libraries, to help the integration with OpenTelemetry?

So, Dan, would you be the best person to answer that? Could you repeat the question?

Are you engaging with open source logging libraries? We talked about how Log4j can send data directly to OpenTelemetry; that was one example, but there are numerous others.

Well, I could sort of answer. In every language, we try to build as many of these logging bridges as we can. In terms of engaging directly with the logging library providers, or maintainers, I don't actually know the answer to that. As far as I'm aware, we aren't doing a lot of that right now; they mostly already have the hooks we need. Those logging libraries are often very stable and have been around for quite a while, which is part of the reason we're leaving them in place and not trying to replace them. They have well-defined interfaces for hooking in with appenders and whatnot. So for the most part, we haven't had to, but I'd be curious to hear if you have specific reasons for asking, whether there are performance considerations or anything else. Is there anything motivating that question?

Yeah, sure. I would say the nice-to-have would be that eventually, since this is going to be a new logging standard, all these widely used open source libraries support it out of the box, basically.

Yeah, I think through the bridges they effectively do, for most of the existing ones. I keep mentioning Log4j just because I think it's probably the most common example, but, just to clarify for everyone, because we didn't go over this in the talk: with OpenTelemetry, for metrics or for traces, you create those using the OpenTelemetry APIs and SDKs. For logs, for most languages, the expectation is that you just use the logging bridge together with Log4j or whatever logging library you're already using.
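As a rough sketch of that split, here is what the trace and metric side looks like through the OpenTelemetry Python API; the instrumentation scope name, counter, and attributes are made up for illustration. Logs, by contrast, would keep flowing through your existing logging library plus the bridge, as in the earlier sketch.

```python
from opentelemetry import metrics, trace

# Traces and metrics are created directly through the OpenTelemetry API.
# "shop.checkout" and "orders.processed" are hypothetical names.
tracer = trace.get_tracer("shop.checkout")
meter = metrics.get_meter("shop.checkout")
orders_counter = meter.create_counter("orders.processed")

with tracer.start_as_current_span("charge-card"):
    # ... business logic would go here ...
    orders_counter.add(1, {"payment.method": "card"})
```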
I think in one or two languages, like C++, we're adding direct log-creation capabilities, but that was where there weren't standard existing logging libraries. Generally, we expect you'll just keep using the logging libraries you use today.

Other questions? If you're over on the side, you can raise your hand. Yes.

Hey, thanks for the talk, and congrats on GA. I'm an OpenCensus Go user. What should I do? The question I have is that it seems like an insurmountable amount of work to migrate to OpenTelemetry. Do you have advice besides the bridge? Essentially, how do you migrate your users as well?

So I think I can respond to one part of that, and then please feel free to join in. For a migration from OpenCensus, typically you can use the OpenTelemetry bridge first and then go fully native. That gives you the ability to transition your instrumentation, or improve it, and to continue testing as you move forward with the migration. Does that answer your question, or do you need to be more specific? We also have someone at the back who works on Go who might have an answer for that as well.

I just want to add one thing. We're still users of OpenCensus in the collector, so I feel your pain. There's also an OpenCensus receiver in the collector that you could use if you didn't want to migrate away from OpenCensus: the collector can translate that data natively into OTLP and then send it out that way.

Yeah, I'm Tyler, I do a lot of Go stuff, so if your question was specifically around Go, we are actively working on that. Sorry, who asked that question? I did. Okay. It's a big project, obviously, to try to provide compatibility, but we're pretty close, actually; there are a few things we want to make sure are stable, but I would imagine in the next few weeks. As it is, it's pretty close to ready to go. I think the plethora of other answers you've received are probably also really great ideas; the collector is a really good resource in this situation. The idea is to try to make it as seamless as possible with the bridge, and then you could also look into some of the auto-instrumentation options that are coming out in that space, to help that migration path after the fact if you wanted to automate the switch. But yeah, we can talk more afterwards.

Yeah, as we showed on the slide, though we didn't dwell on it too long, Go auto-instrumentation via eBPF is also something the community is working on right now, so that may be an option as well. Cool.

Okay, other questions? Yes. Hello. I noticed there was a bullet point about improving documentation and the end user experience. What's the best way to get plugged in? Should I pick a language and join a SIG?

Yeah. I think a lot of maintainers will label issues in their repos: good first issue, docs issue, that type of thing. Usually the best way to get started is just to look at the open issues that are already labeled. I know that I label a lot of docs issues in the operator repo, so there should be plenty there.

I'll go check out the operator repo. Thank you.

We have a question here. Yeah, I should also mention that we have a comms SIG. And in general, this is a thing I should mention about OpenTelemetry: we are a very meeting-heavy, Zoom-heavy open source project.
And sometimes people feel like, oh, that's for the inner circle of experts, I can't possibly show up to a SIG or join a Zoom meeting. You are absolutely welcome. We love it when people show up and say, hey, I'm new here, I would like to contribute. We are also available on Slack: all of these SIGs are on the CNCF Slack, and you can find all of this info in the community repo. The comms SIG also has a Slack channel. Those are great places to just plug in and figure out how you could help improve our docs.

There's also the end user working group. If you're an end user and you have any questions, I think they meet every week, or maybe every two weeks, and you can join the call; there's a list of questions you can ask, and then people talk about it. Usually there are experts from various SIGs who also join. And if you want to get started today, just join the ContribFest session. It is about the collector and JavaScript, but we're more than happy to help you on the docs as well. Cool.

So, without just saying "it depends", I guess the question would be: what is your favorite logging backend? It depends. There are a few of us who are probably too biased to answer that.

[The next question, not fully captured, asked about routing data in the collector based on authentication metadata.] We have an issue open right now for that. The way it works today is that if the authentication data is part of your telemetry data, then it's easy to route: you can use the routing connector for that, take the authentication key from the telemetry data, and inject it as an HTTP header on the outgoing data. What we have an issue open for is that we want a way to convert connection metadata into telemetry data, or to just pass that value through from the incoming receiver to the outgoing exporter, so that you can use the routing connector to decide where to send the data based on the metadata from the incoming connection. Yes.

Any more questions? Yeah, I mean, we also have a security SIG, but that's not really... I mean, security is more... Any more questions? Going once? Going twice? All right, well, thank you very much for coming, everybody. Thank you. I'll take any in-person questions up front.