Hello, hello. No, it should work. No. Hey there. So we're still trying to start on time, but as neither my co-chair nor the tech lead is here, I am pinging people. Also, FYI — and I'll raise this again once we start for real — maybe we can consider moving this call around a little bit, because it's directly after the CNCF TOC call, which means I'm not super well-prepared for either call, and I hate that. It doesn't do either of those calls justice, because you don't have any time to do the last fix-ups before major external calls. Also known as Thanos. Free Zoom. Exactly. Also, as per usual, remember to write yourselves into the attendee list — that would be great. Wasn't the plan that we start punctually, that we don't wait the five minutes anymore? Or did I miss the plan? But people still keep appearing, on the other hand. I'm torn. On the one hand, I don't want anyone to be left out by starting too early. On the other hand, it's not great to lose those five minutes times currently almost 20 people. But we are at five minutes, or almost, anyway. So let's just get started. Two high-level things. One, FYI, there is now a Prometheus working group within OpenTelemetry, for anyone who has an interest there. And the other thing: this call is right after the CNCF TOC call, which means I'm usually jumping between the two, and I can't really do either call full justice, because — as all of you notice when you have two major external calls — something happens right before and you are out of time. So long story short, if the feeling of the room is that we can try and find another slot, I would start talking to CNCF. But we can also move this discussion to the mailing list; I just wanted to see what people think. It's fine, right? It's just finding the right time that works for folks. As long as we can find something that's common, I think. OK, yeah, I'll talk to you. Yeah, can you take an action item to come up with a few slots?
And if you send it out on the mailing list and on the Slack channel — I don't know whatever the current way to survey that is. It's always only more channels; it's not that we can actually get rid of any old channels. It's just more channels: mailing list and Slack. Good. So the other thing which we currently have on the agenda is the due diligence doc for OpenTelemetry. So I guess let's get started. I'm happy to cover that. You want me to share the screen, or does someone else want to share? I mean, just walk through — how do you want to handle it? No, go ahead and share; that makes sense to me. Wait one second. All right. So, for most of the people on the call, this is another example of doing the due diligence work. Same exact template that we have followed before. The link is in the agenda. Comments welcome, so please add feedback in here; we will be sure to incorporate all of those changes. This is specifically for the OpenTelemetry project, which is currently a sandbox project. It is the joining of two previous projects: OpenTracing, which is currently in incubation, and OpenCensus, which was not part of CNCF but was an open source project. So the best way to think about OpenTelemetry is as the next major version of both OpenTracing and OpenCensus. And because I'm sure it'll be asked: part of the plan here will be that as OpenTelemetry moves into incubation, OpenTracing will be removed. We'll work through the details of that. And of course the idea here is not to break customers or end users. So there is a shim available for both OpenTracing and OpenCensus, so you can continue to use those with OpenTelemetry, and OpenTelemetry will provide support for two years. So customers and end users of these products have a path forward without having to jump ship and re-instrument everything to make it work.
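The shim idea just described can be sketched roughly as follows — this is an illustrative adapter pattern only, with invented class and method names, not the real OpenTracing or OpenCensus shim APIs: legacy call sites keep using the old interface, and the shim forwards to the new tracer underneath.

```python
class NewTracer:
    """Stands in for an OpenTelemetry-style tracer (illustrative only)."""
    def __init__(self):
        self.spans = []

    def start_span(self, name):
        self.spans.append(name)
        return name


class LegacyTracerShim:
    """Presents the old tracing interface, delegates to the new tracer,
    so already-instrumented code keeps working unchanged."""
    def __init__(self, new_tracer):
        self._tracer = new_tracer

    def start_span(self, operation_name):
        # old-style entry point, forwarded to the new implementation
        return self._tracer.start_span(operation_name)


new_tracer = NewTracer()
shim = LegacyTracerShim(new_tracer)
shim.start_span("checkout")   # legacy call site, unchanged
print(new_tracer.spans)       # → ['checkout']
```

The point of the two-year support window is exactly this: the adapter layer buys users time to migrate call sites gradually instead of re-instrumenting everything at once.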
Now, the doc that we have here has gone through the OpenTelemetry governance board — we'll talk about governance in a little bit — and you can see the different members of OpenTelemetry that have currently reviewed it. There are still a few outstanding, but they will add their review information in here soon. And we don't have the TOC sponsor yet, but we will of course sort out those details. So let's make this as interactive as possible. I'm assuming we'll go section by section, and then RichiH, I'm assuming you'll do the call for consensus, and we can mark things off as we go. So, section number one — first, I guess, any questions before we get started? Can I interrupt? So usually there was some time for review before — for offline review or whatever. Did you share this maybe earlier, or maybe we can have the time, I don't know, to do it offline as well? Before jumping straight into consensus and stuff like that. We did — you had exactly the same point on previous incubation docs as well. Yeah, so this was shared last week in the SIG Observability Slack channel with the link. And we don't have to do consensus today. I assume this is going to take at least two meetings. So any items that folks are not comfortable with, let's definitely table, and we can take feedback and come back in two weeks and address that feedback. So no rush here. If there are any concerns or areas that people want to dig into more, we have some time. I doubt we'll get through everything today. Yep, yeah. Just next time, it would be nice to use the mailing list. Otherwise, yeah, it's good. Cool, I'll make a note of that, and I'll send it out after the call. I haven't seen anything on the Slack channel — that's why I also asked there. You said you're going to share something, but there was nothing, unless I overlooked something. Yeah, I'll put the exact link.
So maybe I just put the link to the agenda into SIG Observability, because that's where the link for this document was. So I probably just wasn't clear. I'll send that out to the mailing list and to the Slack channel to make sure it's super clear, so people have access to it. Yep, that would be a pleasure. So normally, I think somewhere in the documentation we say that the mailing list is the primary channel, because that can be consumed completely asynchronously. On the other hand, Slack is basically half email by now, for better or worse. So yeah, I guess we are just walking through stuff, and if there's anything which needs to be followed up, as you say, Steve, then we just mark it as such. And yeah, okay. So then let's get started, I guess. And also, I need to copy the template which I used — or which we used — last time, because I don't have it by heart. On the self-governance, I think there will not be too many questions. OpenTelemetry has a governance committee charter. It follows similar principles to other CNCF projects. People can review it; it's a little long. One thing worth noting is that we're going through a transition: the board has grown and is going to reduce in size again. That's all outlined in the README here. But there is a governance committee, and there are limits on how many representatives per company there can be, to ensure that it's not controlled by a single entity. And there's a whole community repository with GitHub issues outlining what the governance board is responsible for, areas of focus, and what have you. So definitely review it and let me know if there are any comments. It follows a similar model to what Kubernetes has been using. So I guess, any questions on governance, other than that people probably want to review it? Just one.
So you said there is a limit on the total number of people from one company on the governance board — at what level is that limit imposed? Correct, it's on the governance board; there's also a technical steering committee. So where is the note here — maximum representation, right here. So there's a max representation of one third from any company. Right now, given that we are in 2021, the board is going to be nine people. So that means no company could have more than three representatives on the board total. And if you have more, then someone has to concede a seat and a special election occurs. Okay, and that is at the governance committee level — and the governance committee is the board? Correct, yeah. Okay. And then there's a separate technical steering committee, which is another board that handles the technical aspects. That follows a slightly different process, but same idea: there's a max representation for that board, and they handle the technical decisions across all of the SIGs within OpenTelemetry. Okay. So I will need edit access. Oh, do you not have it? I can fix that. RichiH at grafana.com, or richih at richih.org — both work. RichiH — you said grafana.org? No, com. Com, com, com. Sorry, my bad. Editor — you have it. Perfect. On its way. Cool. Second one is around code of conduct. We just follow CNCF's. Sorry, sorry, sorry, just to make — oh, sorry. I think there will be no further questions regarding the governance, but maybe someone has some. So just to make sure, call for consensus: is there anyone who wants to look into the governance document more deeply? If yes, now's the time to speak up; else, the suggested call for consensus is already highlighted. Good. So: SIG Observability is happy with the section above. All agreed? Anyone disagreeing? Very good. Awesome. Code of conduct: we follow CNCF's. It's a direct link to CNCF, nothing special for OpenTelemetry, but yes, we definitely have one.
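The "max one third from any company" arithmetic discussed above can be illustrated with a small sketch — the actual election and seat-concession mechanics are defined in the OpenTelemetry governance charter; this only shows the counting, treating "one third" as floor division, which is an assumption on my part.

```python
def max_seats_per_company(board_size):
    """One-third cap, rounded down: a 9-person board allows 3 per company."""
    return board_size // 3


def over_represented(seats_by_company, board_size):
    """Companies that would have to concede a seat under the cap."""
    cap = max_seats_per_company(board_size)
    return [company for company, seats in seats_by_company.items()
            if seats > cap]


seats = {"CompanyA": 3, "CompanyB": 4, "CompanyC": 2}
print(max_seats_per_company(9))    # → 3
print(over_represented(seats, 9))  # → ['CompanyB']
```

With a nine-person board, three seats from one company is still allowed; a fourth triggers the special-election process mentioned in the call.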
We definitely ensure it's followed, and we're not diverging from anything else that CNCF does. Okay, so it's a direct copy, I presume? I think it links directly. It links directly. Okay, that's perfect. Oops, I dismissed the wrong one. So given that, I don't expect much discussion, but still, to make sure, call for consensus: SIG Observability is happy with the section above. All agreed? Anyone disagreeing? Very good. Okay. Next one: does the project have production deployments that are high quality and high velocity? So I've included a link to our adopters file. There are a variety of both vendors and end users listed here. Anyone listed as an adopter is stating that they are running this in production today. What we have not articulated is which specific subcomponents of OpenTelemetry are being used. We're actually going to update this page to make it more of a table that outlines that, but there's a broad range of both client library and OpenTelemetry Collector support, which of course means that the specifications are being used, because that's a prerequisite in order to use the client library and instrumentation aspects. So yes, both end users and vendors are using OpenTelemetry in production. Some are, of course, just trialing — maybe they're not in production yet — but we do believe that there are production deployments of OpenTelemetry today that are high quality and high velocity. Okay, perfect, yeah — that would have been one of my questions, regarding which components. If I remember correctly, when you talked about this last year, you considered having different tracks for different parts of OpenTelemetry. That's maybe also something which we should be talking about. So for the update here, we say that we request that table to be made, and we will look at it next time. Sounds great, go for it. Yep, I'll take a note on that so I can make sure it's added in there. Yeah, I would love some details as well, right?
Because especially in Prometheus, we have exporters, right? And they are not part of the Prometheus project, which is graduated. So I wonder if the incubation criteria — all those strict rules — actually apply. All the small projects that are part of the huge OpenTelemetry ecosystem — are they fitting the incubation criteria, essentially, if that even matters? That's a good question. I guess I will make one note on this — well, two notes. One is: observability, three pillars of observability — OpenTelemetry is focused primarily on tracing to start, metrics second, and logs are not really even in scope today. There is some amount of log support, but I'd say it's very early. So most of the production deployments that you're seeing here are primarily around the distributed tracing aspects. The other comment I'll make is that we group the client libraries, or the languages, together based on maturity. So if you think of Java or JavaScript or .NET, those are typically the top-tier languages that are most commonly used. Those are definitely production ready. Whereas something like PHP, for example, is a tier-two or tier-three language — there isn't as much adoption of it today. So maybe those are not used in production environments yet. But for the things that we consider to be top languages, or the collector implementation, there are definitely production examples today, with both vendors and end users. Cool, makes sense. Cool, so happy to provide that information, and we can update this for the next meeting. Next: is the project committing to achieving CNCF principles, and does it have a committed roadmap to address any areas of concern raised by the community? The answer is absolutely. OpenTelemetry is quite a large project, so there are many repositories. The way that we handle this is that every repository is basically its own SIG, and every SIG runs independently with its own approvers and maintainers.
So if you think about issues being raised by the community, they're usually raised against individual repositories. So maybe I have an issue with Java instrumentation — I will go add that to the Java repository. Each of the SIGs also maintains its own roadmap. So for example, if you were to go into the collector repository, there's a docs folder that lists roadmap information. Usually these are around major milestones. For example, most of the roadmaps today are talking about the GA of OpenTelemetry, because that is the primary focus, but a longer-term roadmap — a two-, three-, five-year type thing — will also be laid out going forward. But yes: CNCF principles, addressing community concerns, and roadmap information are all top priorities for the project and constantly reviewed. Okay. Can you link to the roadmap documents as well? Sure, yeah, I can link. Yeah. Okay, perfect. Just make a comment and I will happily add that link for you. The next one: document that the project has a fundamentally sound design without obvious critical compromises that will inhibit potential widespread adoption. So the project is, as I mentioned, the maturing of two other projects, OpenCensus and OpenTracing. Both of these have extensive support in the community today. OpenTelemetry is basically the next major version of these. From a community support perspective — both contributions and adoption — we have seen that all three major cloud providers are actively on board and have their own announcements around support and compatibility as well as roadmap information. A lot of major vendors — not all, but many, most I would say — that are in the observability or monitoring space have some amount of support for OpenTelemetry today. Open source projects are also adopting it. So for example, Jaeger has moved off of the Jaeger collector and now uses the OpenTelemetry Collector as the collector for Jaeger.
Fluent Bit has support in the OpenTelemetry Collector — I mentioned there is some log support; there's an example of it. And even outside communities, like Spring Sleuth, now have experimental support for OpenTelemetry. So it goes beyond just the CNCF. End users are both approvers and maintainers of the project, which means they are actually committing engineers to develop OpenTelemetry as well as deploying it in their production environments. And I've listed four examples of such companies here. These are also reflected in the adopters file. Now, from a support perspective, OpenTelemetry is really looking to address the primary data sources in observability. So traces, metrics, and logs are where it's at. As I mentioned, traces and metrics are more mature; logs are a little bit further out, but there is roadmap information for this — there is a path forward. So we don't see anything that would inhibit adoption today. There's also support for all the popular open source things: Zipkin and Jaeger in the case of traces, Prometheus and of course the collaboration with OpenMetrics on the metrics side of the house, and then for logs, Fluentd, Fluent Bit, and very recently — there was just a blog post on devops.com — Stanza from observIQ is being added to the OpenTelemetry project as well, for native Go log support. Cool, I have a question. So I feel like you are right that those projects will be supported in the future, and there is good collaboration already to do so, but it's good to mention that it's not done right now, right? I'm not sure if that's even related to incubation or not, but it would be nice to at least note it — as far as I know, it's not OpenMetrics compatible right now. There is a good effort to do that, but it's not there now.
So I would love to also know if the same is true with Fluentd, and with Jaeger and Zipkin — maybe it's just some research on our side, or maybe you can tell us what exactly is going to be supported versus what is supported already, to be more realistic here. Yep, yep. So I can comment at a high level, and I'm happy to add more comments in the doc here. Jaeger and Zipkin should be fully supported, from both a receiving and an exporting perspective. Technically, the same applies to Prometheus and OpenMetrics; the problem is the translation layer in the collector. For example, we're using the same Go client library that Prometheus uses, so the receiver should be the same. The problem today is that the translation into OTLP format does not natively support converting into and out of it. So we can definitely add a caveat that that needs to be addressed. And I'd also argue that, say, the Prometheus receiving capabilities in the collector are not ideal and need to be rewritten. So there are definitely gaps from that perspective. So yeah, if you want, I can articulate in here the areas where we know gaps need to be addressed. I can definitely add that here. Yep, that's totally what I would love to see. And for OpenMetrics, I got that, because I was part of the working group and I love the collaboration there. I'm curious about the logs and Fluentd — whether they have the same kind of landmines to be solved, essentially; just to be transparent, nothing else. Yeah, absolutely. And the answer is yes. So we're working directly with the Fluentd and Fluent Bit maintainers and the Stanza maintainers right now. And we actually have log support — we have Elastic representatives, Sumo Logic, Splunk... I'm sure I'm missing someone, but there are major vendors involved in this conversation too. But logs are very early, I would say. I would call them alpha, best-case scenario.
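The translation-layer gap described above — Prometheus data going into and out of the collector's internal OTLP representation — can be sketched abstractly. All field names below are invented for illustration; the real OTLP protobuf model is considerably richer, and the round-trip losses being discussed come from exactly the fields a simplified intermediate model cannot carry.

```python
def prom_to_otlp(name, labels, value, ts_ms):
    """Hypothetical Prometheus-sample -> OTLP-like record translation."""
    return {
        "metric_name": name,
        "attributes": dict(labels),  # label pairs become attributes
        "value": value,
        "time_unix_ms": ts_ms,
    }


def otlp_to_prom(record):
    """The reverse direction is where gaps bite: anything the intermediate
    model cannot represent is lost on the round trip."""
    return (record["metric_name"], record["attributes"],
            record["value"], record["time_unix_ms"])


sample = ("http_requests_total", {"code": "200"}, 1027.0, 1612345678000)
assert otlp_to_prom(prom_to_otlp(*sample)) == sample  # lossless for this toy model
```

In this toy model the round trip is lossless; the real-world caveat in the doc is that the actual translation is not yet lossless in both directions, which is the gap the working group is addressing.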
So there are definitely gaps and things that will need to be worked out there. So would it be fair to say that those two — metrics and logs — are in the future, and Jaeger and Zipkin are already supported as of today? Sure. I mean, there's some amount of support for all three, but the way to think about it is: we are planning to GA traces in the next month, so that is very, very close, and there is full support for Jaeger and Zipkin. Metrics would be on the next roadmap, which is probably another quarter or so out to get at least the initial support in — so let's call it August. For logs, we're talking end of year at the earliest to have something, and probably not until early next year would we have the first GA support. Okay. Okay. Yeah, okay. So just on this — not to drag this out endlessly, but for anyone who might not be as deep in this as the two of us, or the three of us who just talked: there are still some data format incompatibilities between the metrics side of OpenTelemetry and Prometheus, but this is being actively addressed. And I think the timeline which you just mentioned, Steve, is absolutely realistic. So maybe it's just a little bit a case of wordsmithing. Yep. Yeah, just add a comment and I'll clean it up. Yep. Thank you. Yep. Okay, next one: document that the project is useful for cloud native deployments and that it's architected in a cloud native style. So yes, OpenTelemetry was built from the very beginning to support cloud native frameworks for traces, metrics, and eventually logs. You can see it listed in the Assess category in the CNCF observability end user survey. Part of the reason why it's only Assess today is because OpenTelemetry has not GA'd or offered a stable API yet. We just talked at a very high level about what that looks like: stable API for traces, let's say the next month; stable APIs for metrics, let's say the next quarter or quarter and a half; stable API for logs, at least six months, probably a bit longer.
So there is a path forward for that. And as I mentioned, we're already seeing other library and framework owners preparing for this. Spring Sleuth is a great example: they're only offering experimental support because the stable API hasn't been released yet, and they've already committed that once the stable API is released, they'll move from experimental to actually supported in Sleuth. So there's a path forward for this as well. Clearly the same applies for Prometheus and OpenMetrics — we have that working group to work out the details and make sure it's fully compatible. So that will be addressed; it will clearly take a little bit of time, but we do have a path forward here. So my only concern on that section would be the wording — like, the word "only" is not correct, but the rest, the reasoning, is fine. Oh, I see. Oh, sure, yeah. All right, add a comment. Yeah, yeah, good point. Yep, you can fix that. Also, by APIs, do you mean the actual client code APIs or something else? Correct — technically APIs and SDKs, because the configuration happens through the SDKs. RichiH even had a comment on API, and I'll add the SDK as well. But you need both. The biggest problem is: if you think of instrumentation, there are two primary ways of doing it. Automatic, which works by code manipulation type stuff, and manual, where a user goes in and typically adds their own instrumentation. The manual aspect is where the rubber hits the road. If you don't have a stable API and you introduce breaking changes, well, you just broke someone's application. And if you've run that on tons of different servers in production, upgrading an entire fleet takes a really long time, with all the testing that's involved — you don't want to be in that state. So once we get to the actual stable APIs here, backwards compatibility will be offered.
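The backwards-compatibility guarantee just described follows standard SemVer rules, which can be sketched minimally — this is a generic illustration of SemVer, not OpenTelemetry-specific code: within one major version an upgrade is assumed non-breaking, while crossing a major version boundary is not.

```python
def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)


def safe_upgrade(current, candidate):
    """True if moving current -> candidate stays within the same major
    version (no breaking changes under SemVer) and does not go backwards."""
    cur, cand = parse(current), parse(candidate)
    return cand[0] == cur[0] and cand >= cur


print(safe_upgrade("1.2.0", "1.9.3"))  # → True  (same major version)
print(safe_upgrade("1.9.3", "2.0.0"))  # → False (major bump may break)
```

This is exactly why a stable 1.0 API matters for manual instrumentation: once the fleet depends on 1.x, every 1.y upgrade is supposed to be safe without re-testing every call site.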
And thus you can take a dependency, and with SemVer you can make sure you can get to the next major versions as they become available. Yeah. And also, just one point of order: it's just the three of us talking, but this is actually recorded. So anyone else who wants to comment, you're more than welcome to also speak up. That was a good time to pause — any other comments on any of the sections that we've covered? Please, please speak up. All right, we'll move on a bit more, but please don't hesitate to get involved. Next: document that the project has a familiarity with how CNCF operates and understands the expectations of being a CNCF project. Definitely — both in the GC and the technical steering committee we have folks from OpenTracing, which is already in incubation status, so we're very familiar with how CNCF operates. As I've also mentioned, there's cloud provider, vendor, and end user support, and many of those members are part of CNCF through other projects too. And then there's collaboration with Jaeger, Prometheus, OpenMetrics, et cetera. Again, all of these people have a familiarity with the CNCF. So I don't think there's any concern from this aspect. All right. Yeah, I can fix this. I'll send the links for those; I can do something in the chat. Jaeger as well — yeah, and Jaeger as well, that's correct. Awesome, let's move down to the next section: document that it's being used in production by at least three independent end users which are of adequate quality and scope. So we have the CNCF end user survey, which lists customers that are using it, and that links back to the adopters page. The three customers that I called out, that I at least know are definitely running it in production, would be AppDirect, Shopify, and Wanderer. Again, we'll clarify the table of what specific components they're using from the previous part, but I believe that will address this section. Questions? Not really.
You already anticipated the one thing — just link to the table, or refer to it; either works. Perfect. Next one: have a healthy number of committers. A committer is defined as someone with the commit bit — someone who can accept contributions to some or all of the project. Oh yeah. So according to CNCF DevStats, OpenTelemetry is the second most active project in CNCF, behind only Kubernetes. There are a large number of contributors. I have the link to the OpenTelemetry GitHub teams — you can only see those if you're a member of the organization, so that link may not work for you — where the membership information is listed. The DevStats link is actually in section three right below, and it lists the number of committers by company, by repository group, what have you. So DevStats is a great resource for this. There are definitely a healthy number of committers. I think that is actually something where we can already put the checkmark. Just a moment. So for that section: SIG Observability is happy with the section above. All agreed? Anyone disagreeing? Agreed. Sweet. Perfect. Next one: demonstrate a substantial ongoing flow of commits and merged contributions. Oh yeah. So there's OpenTelemetry, currently listed as number two among the projects here. And if you haven't used DevStats before, it's a really cool tool — hopefully everyone's seen it. OpenTelemetry is listed as a project, and there are a variety of different dashboards built in. Come on, you can do it — usually it's very fast. Here we go. So we can see commits by repository group. Here are all the different repositories, and you can see a lot of activity, with the collector, it looks like, being one of the main things today — which is not too surprising, given that it supports all the open source work as well as, say, vendor exporters and what have you. But yes, very rich contributions to this. Sounds good. Any comments? So then, call for consensus: SIG Observability is happy with the section above.
All agreed? Anyone disagreeing? Good. I agree. Only — who is reviewing all of those commits? I'm just joking. There was lots of work involved in this, so good job. Good job. Thanks. Okay, cool. So next one: name of the project. It's called OpenTelemetry. Some people call it OTel, O-T-E-L. We do not call it OT, because OpenTracing is also OT, but we try not to abbreviate either. So OpenTelemetry is pretty unique; the name seems to be pretty settled. To be fair, someone owned the name on GitHub, so we have open-telemetry instead of opentelemetry. But beyond that, it is definitely a unique name. What does it do? This is basically copied from opentelemetry.io — this is what the governance board has defined for what OpenTelemetry is. It's high level and vague, but that's part of the point. The way to think about it is: both APIs and SDKs for instrumenting your app, regardless of data source — traces, metrics, and logs — and it offers an end-to-end implementation. So you can do the generating, emitting, and collecting of that telemetry data — everything you deploy on your side of the environment for the instrumentation and data collection aspects. OpenTelemetry does not provide a backend today. It plugs into many backends; it's vendor agnostic, and it supports both open source and commercial backends, but it itself does not provide a backend today. CNCF charter mission: yes, aligned. TOC sponsor: already mentioned, that's TBD. The entire license is Apache 2.0 based; everything is marked, and all the files make that very clear. We follow the CLA — EasyCLA bot type stuff. All source control is done on GitHub in the open-telemetry organization. For what it's worth, we do all of our communications on Gitter, not on Slack. Maybe that'll change one day, but the Slack channel on CNCF for OpenTelemetry — don't use it; go to Gitter. External dependencies: technically none, but this is where things get a little bit more complicated.
So everything that's a core part of the project is Apache 2.0, but as I mentioned, we support receivers and exporters, which could be other open source projects or other vendor projects. So in a way, those are dependencies if you pull in those components. So there are technically external dependencies, and other licensing considerations could come into play depending on what you're pulling in from those projects themselves, but it's all open source based today. Bartek, remind me: did we have a list of dependencies for Cortex and Thanos, or did we also just do qualitative statements? I honestly don't remember. Whichever was done for the others, I think, is correct here. I don't have a strong opinion myself; it just should be fair for everyone. Yeah, I'm just checking — give me a moment, I'm just checking. Okay, it was explicit lists. So yeah, I mean, you can probably get some intern to do it. Good for you. I think we can add whatever's needed. Cool, next one: release methodology and mechanics. So as I mentioned, every SIG runs independently, so SIGs determine what they want their release cadence to be. For example, some of the client libraries are released weekly, some bi-weekly, some monthly. Basically, the maintainers of each SIG determine what they want to do; typically anywhere from two weeks to a month is the cadence. And of course, if there are security issues or what have you, there can be one-off releases as well. In general, we've standardized on GitHub Actions for all of the CI-type activities, but the collector still uses CircleCI. There's no real mandate to use a single thing in OpenTelemetry today, but we are starting to see a consolidation around GitHub Actions, so that might be the long-term plan. And then publishing the bits — well, that's language specific. So in the case of Java, you want to publish to Maven Central; we offer the common locations.
And then the release page also has links to relevant information if you need the actual compiled bits. And of course you can compile it from source if you want to run it locally — that's the end user's choice as well. But it should support all of the common formats. We also try to support all the different packaging that you would expect — Helm charts, for example, if you want Kubernetes, and there are other examples too. So we try to make it as easy as possible, knowing that there's a wide variety of ways you could be deploying this. Is this taking distributions and such into account as well? Because, for example, with the AWS distribution and such, you will obviously have huge user bases on specific distributions. So are those factored in — in particular, for the security releases? Yeah, so they're handled completely separately today. Anyone can make their own distribution, and distributions would be based on upstream, but they could also choose to diverge. So one of the things that the governance committee of OpenTelemetry is considering is putting a certification process around these distributions, to ensure that they are actually distributions and not, say, forks or modifications or proprietary vendor implementations. So we are looking at ways to mitigate that. But today, a distribution is run by whoever controls it. AWS has one, Splunk has them as well; those distributions are maintained independently, they have their own release schedules, and there's no oversight from the OpenTelemetry community today. So everything I mentioned about release methodology is for OpenTelemetry SIGs and projects and repositories in the open-telemetry GitHub organization. So to phrase it differently: currently the distributions are out of scope, but through certification and such, you want to pull them into scope, so you have control that they don't do anything stupid that then falls onto your name. Exactly, yep.
So for incubation, they will basically just be treated as completely external, not managed, not handled. By the time we probably reach like GA in CNCF, for example, that's when we'll probably have more guidance around what you can and cannot do for distributions and what that certification process looks like. Okay, but for the incubation phase, they are considered out of scope. That's correct, yeah. So you can add a comment and I'll make that explicit in the doc. Yeah, perfect, perfect. So as I just wrote, is OTel the official thing, or do you really type it out every time? Well, OTel is how it's abbreviated. So you can use the abbreviation. At the beginning of the doc, we did explicitly add in the "aka OTel". So you are technically okay, but don't use OT. Yeah, no, like obviously with OpenTracing, but I think this is one of the few times where I actually spelled out OpenTelemetry and didn't just write OT. But names are important, I want to get it right. All good, all good. All right, so the next one's around community size and existing sponsorship. So again, I linked to DevStats. There are over 92 companies and over 500 developers that have contributed to the project according to DevStats. So you're welcome to go take a look at the links. But it's a pretty healthy community, constantly growing. I'm sure if I pull one of these things up, you'll see it's always going up and to the right, which is what you want to see. Oops, maybe that's the wrong one. Is this the one that shows up and to the right? Maybe it's this one. Questions about that one? Architecture, design, and feature overview should be available. So there are basically three major components to OpenTelemetry. There's the specification; there's a repository for that. The specification kind of defines what is a trace, what are spans, like how are they defined? Same for metrics, same for logs. That's also where the semantic conventions go for the different data sources that exist in OpenTelemetry.
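To illustrate the kind of thing the specification pins down, here is a minimal Python sketch of a span data model: a named operation with timestamps, attributes, and a parent. This is a toy illustration only, not the actual OpenTelemetry SDK; the real client libraries expose a much richer, spec-conformant API:

```python
import time

# Toy model of a span: a named operation with start/end timestamps,
# key-value attributes, and an optional parent span.
class Span:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.attributes = {}
        self.start_time = time.time()
        self.end_time = None

    def set_attribute(self, key, value):
        self.attributes[key] = value

    def end(self):
        self.end_time = time.time()

# A trace is a tree of spans: here a root HTTP span with a child DB span.
root = Span("http-request")
root.set_attribute("http.method", "GET")
child = Span("db-query", parent=root)
child.end()
root.end()
```

The semantic conventions mentioned above are essentially standardized attribute names (like `http.method` here), so that every backend can interpret the same data the same way.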
And then you have the collector and the client libraries that kind of build on top of the specification. So client libraries are language specific; every language has its own. And the collector is a single binary that can be deployed in a variety of different form factors. So you could make it be an agent, you could make it be a standalone service or a gateway or an aggregator. It really comes down to what your use cases are. And then some guidance around how you should deploy everything is listed as part of the getting started guides for OpenTelemetry. Beyond this, you're seeing additions being added to the project over time, but these are kind of the main components. In terms of repositories, there are 50 plus. So it's a quite diverse thing. Questions? Not immediately, but I think that's one of the components which merits analysis. So everyone who can commit the time until the next call, please try and go through everything linked here, in particular through here, because I think here is a large part of the meat, and that should probably get a few eyes. The bits and pieces which I saw were fine. Like it's a long pipe, but it's a well-designed pipe. So, yeah. Do we already have volunteers who want to say yes right now? Or tech leads who want to say yes right now? I would love to read a bit before saying yes, but yeah, it was good so far. There's a ton of material. I linked to a webinar too. It's about an hour long, but it at least gives you a high-level overview if you're not familiar with OpenTelemetry. And of course we can drill into specifics of things. So if people have pointed questions, let me know, we can pull in the right people. Bogdan's on the call here. I think Morgan was here earlier, maybe he's still here, he is. So we can get you the right folks that can answer any follow-up questions you have. Okay, what are the primary target cloud use cases? So, what can be accomplished now?
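The "long pipe" mentioned above refers to the collector's receiver/processor/exporter pipeline model. As a rough sketch, a minimal collector configuration might look like this; the exact component names and defaults depend on the collector version, so treat this as illustrative rather than a working reference config:

```yaml
# Illustrative OpenTelemetry Collector config: one trace pipeline
# that receives OTLP over gRPC, batches, and logs the data.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

exporters:
  logging:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```

The same binary runs as an agent, gateway, or aggregator purely by changing which receivers and exporters appear in this config and where it is deployed.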
So traces can definitely be handled now, as well as the generating, collecting, and processing of telemetry data, say via the collector itself. Metrics are very close, but as I mentioned, still about a quarter out, given some gaps that have been identified and at least getting stable APIs. What can be accomplished with reasonable effort? Well, I mean, we could talk about like RFC-type release stuff; we haven't really planned for that yet, but with OTLP, for example, being the format, that is an essential option. Things that are in scope but beyond the current roadmap: we've already talked about logs, there's already some experimental support in the project, but that is not part of the original GA scope that we're targeting. And what is out of scope: as of today, as I mentioned, there is no backend. This plugs into backends, it's completely vendor agnostic, but OpenTelemetry is not providing that backend; go use one that's available out there. And noticing the time, we have about 10 minutes, but I think the meeting stops soon. So do we want to stop here and people can review what's left, or do we want to try to chug through a couple more? Yeah, we're actually at time. I can run maybe five minutes over myself; I don't know about the rest of the call. I desperately need a bio break and I have the next one right after. So this might be a good stopping place. Would people please review the entire doc, even sections we've already talked about, and please comment. I'll make sure this goes out on the mailing list and in the Slack channel. And hopefully either late tonight or by tomorrow, I'll incorporate the initial feedback we had from today's meeting. Perfect, sounds good. And once again, I want everyone to talk more. Of course, this is usually just presentation style and this should be more discussion style. But to be fair, maybe people need some time to read it, but yeah, by all means, people, participate, please. Good choice of Wi-Fi access points, Steve.
Thanks. At least it works. I've gone through so many. Good, good. I appreciate the communication and collaboration, folks. So yeah, definitely don't hesitate to reach out, even directly; we don't have to wait till the next meeting. I'm available on Slack. We can pull in people, we're on Gitter, like come join the conversation. Perfect. Cool. Thank you very much. And see you all soon. You too. See you.