Test, test. Hello, hello. Thanks for joining. I have a weird thing with my camera. I'm half in another call, which was supposed to happen just before this one and didn't, and now I'm sitting and seeing if it happens or if it doesn't happen. So I'm kind of sitting in two. Hey, Shelby. Hello. Hey, Simone. And Cornelia. Hello. I've got issues with my camera, so I won't be switching it on. Hello. Hello. Happy almost summer. At least that's what I'm mentally doing here in the Northeast, where it snowed last week. It was 70 degrees Fahrenheit, and then the next day it snowed, and then it was 50 degrees Fahrenheit, and today it's cold and gloomy. So I'm just going to assume that summer is coming eventually. Nobody is still ice skating, so that's good. People are trickling in, that's super nice. So let's get started. As a reminder, please, everyone, write yourselves in; I'll share the link once again, so everyone, feel free to write yourselves in. So yeah, Matt, you wanted to take the first section, so I'll hand the floor to you. All right, hello, everybody. Great. So this is a CNCF call; the CNCF code of conduct applies. I think of it as: we will be kind to our colleagues, as we always are. I wanted to expand on a couple of things that I alluded to at the very tail end of last meeting, in the last few minutes, as we look forward through 2021 at what's next for the SIG. We've spent a lot of time on due diligence reports, and now we've got the white paper that's gaining traction. But I want to put some things out there, as we have a lot of attendees headed to KubeCon EU shortly: what comes next for the SIG? What's our purpose? I just wanted to provide some ideas and some vision that I think are really low-hanging fruit. There's more, but I'm just going to cover a couple today that are not controversial. The first would be to generate a list of vendors in the observability space and look at what projects they actively contribute to. This forms the source material for an observability who's who. So if a small or medium business, or even a large business, wants to engage with the CNCF's broad set of projects, particularly in the observability space, they have a starting point. The consistent feedback from end users in particular is that the space is broad and difficult to grok; there are a lot of players and a lot of options. So just a fundamental starting point: a list of who's in the vendor space, where much of the technical contribution is paid for by these vendors, and we all come together in this, I don't want to say vendor-neutral space, although that term is used. Just generate a list of vendors, what they're working on, what the roadmaps might be, with links to the projects they contribute to. The second, and this is the first of a couple of things here where we could really use contributions from product or program managers that are technical and can work in concert with engineers: develop a serious plan for how the SIG will engage with other projects. If you go to the landscape that the CNCF puts out, obviously there's a huge number of projects. Many of them are doing fairly complicated things, and sometimes engaging with those, from personal experience, an example might be Flux.
Flux does a really cool thing with GitOps, but understanding how and why things worked or didn't work ends up requiring log diving combined with deep technical understanding of what's happening. The project, in version two, is doing work around observability, but many, many projects, I think, could benefit from the SIG reaching out and saying, hey, whether to a very small new project in the sandbox or even some of the more established projects, there are vast opportunities to help people understand what's happening in that project as it runs and does whatever it does. So here again, I think we have an opportunity to start building up relationships and to leverage our existing professional networks to reach out to those projects. We received positive feedback on this notion late last year from the TOC, but I would like to put a program in place so that we could horizontally scale out, if you will, and leverage the connections that we already have. So developing what that plan is and how we do that is an opportunity for someone to really step up, define a program for the year and beyond, and drive it. Third, develop another pragmatic, serious plan for how the SIG will foster in-person meetups in a scalable way once the world reopens from the pandemic. I'm sure documentaries and movies will be made about 2020, but KubeCon North America is still far enough out that there's tons of time to plan for it. And in the past, we've had folks come to the SIG saying some variation of, hey, in my local geography we want to make a local meetup group focused on observability. So I think defining a program to support that, to provide materials that might jumpstart how you would make a local-area meetup in your geography, could be an accelerator to really build out a grassroots set of meetups that can then bubble up interesting findings or reports or presentations or talks or what have you. So again, this is an opportunity for someone with experience, particularly in cross-organizational outreach, to define what that program is in a way that is welcoming to new contributors, so that we can horizontally scale our efforts in an asynchronous way. There are two more. I think that we should, as a SIG, as a work stream, generate personas that articulate the expectations, the opportunities, and the needs of a few sets of folks, and, if I could seed the pot, some obvious ones are end users of observability projects: companies like EverQuote or many others that are either working with vendors or are developing their own expertise and building engineering teams that are either using or contributing to the projects that are within the scope of our SIG. A second would be vendors. They, again, typically foot the bill, if you will, by paying for engineering time to let the projects this ecosystem is built around thrive. So what are their expectations for the SIG, and what opportunities or needs do they have that we can structure our activities to engage with, to make this truly welcoming for us all to come together and make sure needs are met? Third would be individual projects, as I had mentioned, and their contributors. It could be a sandbox project that has a great technical idea with an initial implementation but no observability story, no metrics, or they're struggling, as many companies and many products do, when starting to engage with other modalities of communicating what's happening with the services that comprise that project.
They might want to use AR or VR or other types of things that are not simply logs, metrics, and traces, right, but are emerging. So vendors would be a second persona; I'm sorry, projects and their contributors would be a third persona. And then, at the chair and technical lead level, there's a fourth persona: the TOC, right? What does the TOC need from the SIG? What are their expectations? Just like personas are useful for defining a project's or a product's stakeholders and customers, these personas could help us figure out, when we're doing something like a white paper, which persona is it for, or is it for multiple? Here again, some non-pure-coding folks who might have expertise in product or project or platform management or definition could really contribute, in concert with the engineering folks and others that have been joining the call. So again, I think we can broaden our tent, and as we'll have a lot of eyeballs at KubeCon, I would like to extend a welcome to those other disciplines that are really necessary to have a pragmatic, multidisciplinary, successful special interest group. And then lastly, if you throw a rock on YouTube, you'll hit a wealth of technical talks, as well as higher-level architectural talks or product talks, in the observability ecosystem: everything from Cortex and Thanos and Prometheus to OpenMetrics and OpenTelemetry and how all of these pieces are put together. I did a quick look through my YouTube playlist, and I've got 50 or 60 talks that are the ones I would recommend, of many more that I've watched. So I think we could curate a list of these, a compendium, so to speak, of additional resources. Does somebody really want to get into metrics and understand what RED is, what metrics they should define, or how to put these pieces together? So some combination of a curation of case studies, or talks that are in effect case studies. Again, for folks new to this space and new to the domain, it could provide a nice starting point to existing content that's summarized in some discernible way. That doesn't involve the SIG generating content; we can leverage a lot of existing things in the CNCF ecosystem. So I just wanted to put that in the minds of the folks listening and those that might be watching this talk from KubeCon or otherwise: there's a lot we can do in 2021, and I'll follow this up with a PR, in concert with Richi, and we'll iterate and talk about it offline, I suppose, or in Slack. I just wanted to surface that in some more detail than I did at the tail end of last meeting. I'm excited for the remainder of this year and the summer, not just for the warmth, but because I think there's just a huge opportunity for us to, again, horizontally scale our efforts as a group. And then lastly, and then I'll turn it back over to Richi and we'll get on with the rest of the agenda: this special interest group is by and for the community, which is all of us on this call and a whole bunch of people who probably don't know about this call or are not sure if they should join because they don't see how they can engage and contribute. And so, as we've done with the white paper and other things, anyone is welcome to define new work streams. I suspect there's a lot of gamers in the cloud native space, so you could almost think of it as, hey, looking for group: I'd like to do this thing.
Who else is interested in this thing too? Reach out to your network, to folks that aren't already here, and bring them in. I would like to see that be the nature of our special interest group: a place where interesting things might happen and little dopamine hits can be had as we learn about the ways people are using the projects in the CNCF umbrella and how we're welcoming new projects into it. So that's it. I'm back sort of full-time after maybe a quarter of doing some self-care, and I'm happy to get up to my eyeballs in all of this all over again. So thanks, and if there's any comments or feedback on any of that... Richi, I know that you've got the agenda for today, so I want to make sure we're still with you. I was just about to make one point of order: I have just been informed that I might need to drop into a super urgent and important call privately, so I might drop at any point during the remainder of this call. But yeah, everything Matt just talked about needs people, and that means both owners and people who just help or give feedback or whatever. So this is very much a call for people to step up and just do what they think they want to be doing, either from this list or, if you have some other ideas, go wild, within the constraints of the SIG. But this is very much a call for more hands; of course, that is how we can start parallelizing more. Historically, last year the group was relatively static; that changed towards the end of the year and has become better and better. So this is absolutely great to see, and it is also a great point in time where you can just step up and say, hey, I'm super interested in this and that thing, I want to do it. Yes, please, very much so. Another suggestion: I'm not sure if everyone here was following the white paper. When we started in late 2020, we had a really small contributor base, and we were thinking about how we could get more volunteers. And then we did a tweet, it became a thread on Twitter, we got a few retweets, and suddenly we had more than 20 volunteers for the white paper. So Twitter, I think, is a great tool to make the community aware that we are working on something and that we need help. That's my suggestion: let's make our work more visible, just like we did with the white paper back then. Yeah, I think that's a good point, and I can talk to Jesse and such to see if CNCF can also help amplify that a bit. As for others, this is your chance: if you're interested in any of those, now is the time to speak up, or if you have anything else you want to suggest, now is a good time. Really any time, right? I mean, now is great, but as just a normal operation of the SIG, this can happen at any time, and I hope it does. Yeah, bring your ideas if you're passionate about something. As Arthur mentioned, just throw a hand up, and you might be surprised at who comes out and wants to jump in and contribute. Just curious, do we know how other SIGs handle projects like these? Are there little working committees or subcommittees that meet within the larger SIG to move these types of projects forward? Yes, there are. So in the CNCF charter, I'm sorry, in the SIG Observability charter in our repo, there's a link to the specifics, but there are technical leads and working groups, and working groups in particular, I think, are how many other SIGs do this.
Some of the things that we've been doing, if you look back through the calls over the last couple of months, are due diligence for Cortex, OpenMetrics, OpenTelemetry, et cetera. So some SIGs do these in the context of a working group. A working group is meant to be a project with defined outputs and a time box, if you will, that runs as a work stream and then reports back up to the SIG on its status. And the CNCF has a sigs.md, I believe it's called, where they define what a special interest group is, and there's a lot of autonomy given to us as a group to define additional roles. We could have, you know, a curator of the case study compendium, or whatever makes sense for us we can do, and the groups that might form around those could formally be called working groups. Up until now, we have not had the happy problem of having more work than we could do in an hour together, all at once, in a single meeting, and the white paper is an example of that, where the folks actually contributing are defining how they meet, in this case in Slack. So yes, in short, we have a lot of latitude, and we can do whatever makes sense for us. Yeah, no, thanks for the context there. Just looking at the Zoom chat, it looks like a few people are interested, so maybe starting a working group for some of these initiatives would make sense. But we can always take that offline to Slack to see if that would make sense. I'll follow up this verbal onslaught, if you will, not to be too self-deprecating, with something a little more concrete we can look at. But again, we control our own destiny here, and that's one of the nice things about SIGs: it's a coalition of the willing, right? So, newcomers and additional folks, bring your friends; there's no shortage of interesting work, right? And you don't need to... you can do all of this async, but precisely this kind of initial coordination, in my experience, is great to do live if people want to. So we can easily take some time out of this call for people to synchronize on this. I see Shelby wants to talk about the personas, for example. So if you want to start this now, now is the time; else we move on to the white paper, which is totally fine, but you should have that space to actually use for live synchronization. Yeah, I would encourage folks, if you want to see a model for how this can work once we actually do scale up and we're doing more in parallel, without having a choke point at a single meeting where we walk through one thing at a time: you can look at the TOC meetings. They have a pretty nice structure that facilitates that. Yeah, I mean, I think that sounds like a plan. I don't know, Shelby, or whoever else might be interested, if we want to maybe take time now; I don't know if we have enough thoughts put together for how this might work. I would also be interested in helping out with some of these, but maybe we coordinate a little bit first on Slack, offline, and then next meeting we can come with more of a plan, I'm not sure. Yeah, I'm happy to talk on Slack about just how to approach stuff. In general, I'm really comfortable working on GitHub asynchronously, but I know that's not accessible to everybody, so we can discuss in Slack how we want to proceed. Thanks, Rami. Anyone else?
Okay, so then we can move over to the white paper. As I might need to drop at a moment's notice, Arthur, do you want to share your screen and walk through it? Yeah, sure. Maybe it hasn't gotten much attention since the last meeting. Last time we were cleaning up the comments a little bit; most of what's left in the file today is things that I think should be reworded, where I think there is a better way to write something, so there's no editorial stuff like we had last time. I was just scrolling through the doc here before, and I didn't see any major things besides suggestions, and thanks for the new metrics section; there's a whole new piece of text here. Do you want to start from the beginning? I think the ones we have read once we don't need to reread immediately; maybe the ready-for-review ones. Yeah. So if we scroll down to the ready-to-review ones, I think the first one that pops up is the logs. Logs was a contribution from Raphael, and a few other people left comments and ideas on the text. So I didn't review the text; I just saw that Yana left some comments here more recently, but they were not addressed by Raphael yet. Is Raphael on the call today? I don't think so. No. So you might have to ping him on Slack and ask him to come here and have a look at the comments. So, logs next meeting then, hopefully? Yeah. So Raphael is on the logs. Traces have been in here for a long time. Right. Juracy left; he basically built this section, and I think one comment that people had was about the figures being too small and hard to read. I agree with that. We're going to have to fix this at some point, but is anybody else writing anything about distributed tracing, or volunteering in any form, on the call? Arthur, something is, I think, wrong with the mic, because I can barely hear you. Okay, sorry. So for the continuous profiling section, we don't have anybody yet. For the crash dumps, I volunteered; I have already something, I just didn't paste it here. I'm sorry, could you repeat that? My mic cut out. For the crash dumps, I have already something; I volunteered for this section, I just didn't paste the content. For the continuous profiling, we don't have anybody, but we had someone suggesting continuous profiling as part of the section that describes the observability signals. And I think this should appear in the introduction as well, when we talk about the three pillars; we shouldn't say that the three pillars are the only things that we have. These are the signals that are, let's say, more established today, but probably we are going to find out that other signals are also interesting. And then in this section, we can talk about the crash dumps, the continuous profiling, and other suggestions that people have. I'm curious if someone from the Conprof project could speak to those. I'm just dropping, see you soon, see you. Yeah. Also, I did add a continuous profiling thing at the bottom, but I wasn't sure what the deal is here, if I'm looking at the same thing; yeah, down below service mesh. Oh, you added it, okay, you added it here. Well, I didn't know what the deal was, if it was a thing that just anybody could add.
So yeah, I've been working on a continuous profiling project, and I stumbled on this white paper randomly, and it just said TBD for continuous profiling, so I went ahead and added my thoughts, I don't know. These are slightly based on Google's continuous profiling paper and also just my general thoughts on where it fits into the whole, you know. It does, it really does. Thanks for that, Ryan. So I'll move it back here; I'll move it a little bit up, to where we are talking about observability signals. Is it okay if I put your name next to the section, if you want to add something else, add the reference that you just talked about, or something? It doesn't mean that you are locked into this section now and have to watch this document every week, but at least we know who started here, and eventually somebody else continues, or you continue yourself. Is that okay? Yeah, that works. Okay, great, thanks. I'll just delete it then from down here. One question, Ryan: when you wrote that down here, you were able to write without a suggestion? No, I just suggested; it said TBD, and I suggested my text instead of the TBD, and then I guess somebody merged it in or whatever. I don't know who, but it was kind of like where Perry is right now, except I just replaced TBD with the full text that you see. Yeah, no problem. I was just surprised; I hadn't seen that before, and then it was just merged already. But great, thank you. It wasn't me. Somebody merged it, at least. Okay, great. Yeah, so it looks like we were looking for someone to talk about continuous profiling, because somebody suggested it as a piece that's entering the observability picture, but we didn't have anybody directly working with it or who had experience. I could speak briefly. So I had pasted in a couple of diagrams, not as in "this should be the diagram", but "we could have something like this", that I had stolen from Richi's talk from KubeCon last year. But the Conprof project is actually distinct from the Stackdriver profiling stack; they're two different projects. Conprof, I believe, is a young project. I don't know if it's in the CNCF, I don't think it is, but in a nutshell, it's the guts of Prometheus hacked up a bit. I'm not doing it justice, I'm sure, but instead of the value being a metric that's tracked over time, you know, an int64, it's a Go profile, and I believe they're working on Python as well. So it's basically what you described, but as an MVP, an early project. I wasn't sure if that was the project you were referring to, but I think at the most recent GrafanaCon, I want to say earlier this year or late last year, there was a talk on that project, Conprof, which is effectively this, but it's very early, and I don't know the details precisely. But I'd love to hear what you're working on, Ryan. It sounds super cool. Yeah, I actually didn't add the references; I didn't want to promote, I mean, it's open source. So we're similar to Conprof; we're doing it, I guess, slightly differently, but yeah, it's basically a continuous profiler that works for Go, Python, Ruby, and eBPF. And we just released, or we just open sourced it at the beginning of this year, on like January 1st, and have just been working on it since then, me and one of my friends. Awesome. Yeah. But similar thing, yeah, just like what Conprof does.
I think Conprof uses Prometheus more heavily; we basically have our own sort of custom storage and compression and use that as our backend, but both of them use sampling profilers to be able to have low overhead, get the profiling data, and then send it to the server. Yeah, this to me is an exciting area of observability generally. I mean, continuous profiling is something that has been in the making for the better part of 20, 25 years, but we could never do it before because of network bandwidth, storage, compute, blah, blah, blah. And now, in this new cloud native world that has been rapidly expanding, we have all these new capabilities to do things that we couldn't do before. So I'm very enthusiastic to see what happens in the space. Yeah, I am too. Hey, it's nice to meet you, and welcome. Yeah, thanks. One thing about the white paper: what we have been discussing earlier is to try our best to not do marketing for any tool, doesn't matter if it's CNCF or not, and to just focus more on the methodologies and what we are doing. But... Yeah, I didn't add this. I know, I know. Yeah, so basically all this text I wrote, and then whoever merged it in added the references. There's nothing to it; I'm not trying to promote anyone. The process thus far, I mean, we're really in content generation mode here. We're shaping up, and it's been really cool to see all the contributions broadly. I think at some point, I don't want to speak for Arthur or Simone, but I believe at some point we'll put a pin in it and we'll probably do a rewrite. I thought to do this in the last week, but it didn't seem ready yet, and I had some questions around the high-level goals, and I didn't feel that wordsmithing before letting the rest of the content come through made sense. It's always easier to prune things and remove duplication than to... Yeah, that was also one of the things I was kind of interested in coming to this to learn more about: what is the ultimate goal of these white papers? I guess just some extra context from the people who work on it more seriously, how you think about it and what its role is in the whole CNCF landscape and everything. Okay, Simone, do you want to say something? Yeah, I mean, from the SIG, at least for the white paper, the goal is to have a more solid reference for when somebody comes to CNCF and looks into observability and tries to understand how we understand observability. And we're not trying to make any marketing pitch here for any tools, like Arthur said; this is more like an overview. If you have seen other SIGs in CNCF, for example, there was a white paper from the Security SIG or something similar last year. So they also produced a similar document, I think it's on GitHub even, that people just have as a more consolidated reference on how the topic is, let's say, framed, how the CNCF community understands the topic, and the things that we, the community, agree are important and how they work. So this is at least the output from the white paper, but it's not the only output from the SIG.
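To make the scrape-based model described above concrete, here is a minimal sketch, assuming a Go service; this is stock standard-library code, not Conprof's or Pyroscope's actual implementation. The net/http/pprof package exposes the sampling-profiler endpoints that a Prometheus-style continuous-profiling collector can pull on a schedule.

```go
// Minimal sketch: a Go service exposing standard pprof endpoints that a
// continuous-profiling collector (Conprof-style) could scrape periodically.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// A collector can now periodically GET, for example:
	//   /debug/pprof/profile?seconds=10   (CPU profile, pprof format)
	//   /debug/pprof/heap                 (heap profile)
	// and store each result keyed by timestamp, much as Prometheus stores
	// a numeric sample per scrape, except the "value" is a whole profile.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

The low overhead mentioned above comes from sampling: the Go CPU profiler, for instance, samples stacks at roughly 100 Hz rather than recording every call.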
So in the SIG, we have been doing due diligence and other things as well, so it's not the only outcome, I would say. There is another document that has been under review, or generated, that's more about tracing and how you, as an application developer, get from zero to starting to use tracing and how it works. This is more from the application developer perspective: instrumentation, and looking at the APIs and the data collection. So this is the sort of thing that the SIG does in terms of output, but we had other ideas already; some were good, others were dropped quite quickly. So I've just read both papers with fresh eyes, in their current state. My takeaway, and again, this is just an opinion, not with my chair hat on, but just as Matt: the second paper that Simone was just referring to, on distributed tracing, to me reads more like a very good start at what would become, say, a self-guided walkthrough or a lab. Here's how you do it, using a certain set of tools that the SIG is not endorsing as the set of tools. And I would love to see a bunch of these documents or guides or walkthroughs that can be generated by us, or whomever, about different ways people are using these. Because if you look at the projects, it's really like a bunch of Legos, and you can put them together in lots of different ways, depending on the needs of your business and your goals. The white paper we're looking at now, to me, feels like a great in-progress white paper that's more around noun definition and defining terms, with the goal of establishing maybe not the vocabulary, but at least a common vocabulary, so that when we're talking about... there's a lot of overlap between some of these things, and in fact the three pillars is kind of not really a great model for that, for reasons discussed in the paper. So I'm really excited that we're coming together on just helping folks that are new to this dizzying set of different projects and signals and metrics and terms, let alone how you operationalize it or implement it, but just: what is this stuff? This paper, I think, at least from a fresh reader's perspective, aims to do that. So the audience is: are you new to this space? Do you not know what people are talking about when they say distributed tracing or continuous profiling or what have you? And again, that's just my opinion. Keep me honest here, Simone and others that have actually been doing the writing. Can I ask one thing on the content, just real quick? On the traces section, are we missing a little bit more detail on trace propagation? The W3C Trace Context comes to my mind. I was too busy to do anything on this, but I'm a chair of W3C Trace Context, and I feel like I have to add this in one way or the other. Also, a section on history would make sense: about OpenTracing, OpenCensus. And I was actually a fanboy of very early OpenTracing. I wasn't able to use it, but I implemented it. But I'm not talking about OpenTracing now; I'm talking about the W3C standard on trace context. Yes, what I meant is: I was referring to OpenTracing and OpenCensus recently merging into OpenTelemetry, but that's just one example in the canon. The history of distributed tracing, which really includes the W3C Trace Context and context propagation, and just distributed tracing as a concept, is decades old now, right?
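For reference, the heart of the W3C Trace Context standard discussed here is a single traceparent header, e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01 (version, 16-byte trace ID, 8-byte parent span ID, flags). As a hedged sketch of what propagating it looks like with the OpenTelemetry Go SDK (illustrative, not content from the paper itself):

```go
// Sketch: injecting and extracting the W3C traceparent header with the
// OpenTelemetry Go propagation package.
package main

import (
	"context"
	"net/http"

	"go.opentelemetry.io/otel/propagation"
)

// TraceContext implements the W3C Trace Context propagation format.
var propagator = propagation.TraceContext{}

// Outgoing call: write the current span context into the request headers.
func inject(ctx context.Context, req *http.Request) {
	propagator.Inject(ctx, propagation.HeaderCarrier(req.Header))
}

// Incoming call: read the caller's span context so local spans join the trace.
func extract(req *http.Request) context.Context {
	return propagator.Extract(req.Context(), propagation.HeaderCarrier(req.Header))
}
```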
And perhaps a section that, as you say, elaborates on what some of these standards are and what some of the history is, which is inclusive of the OpenTracing project, OTel, the other things I mentioned, and W3C, as well as perhaps additional links to references in the W3C ecosystem of folks that have implemented things to that standard. In particular, as I've talked to various companies over the last month, as well as other stakeholders, I think distributed tracing probably is, and perhaps always will be, one of the more complicated things, hard for the consumer to develop, and sometimes even just common terms and definitions are lacking. I think also, and this is something from talking with customers, many get this wrong; distributed tracing is often misunderstood as a whole. And I will try to fill in the blanks here. I don't know if it should be a separate section or if we can tuck it into traces; I'm not completely sure about that. Yeah, and I think also there's the instrumentation side, obviously, but then, particularly in the CNCF, there are multiple trace backends, and there are a lot of vendors that are selling a SaaS variant of their own backend. So I don't know if it's in scope or out of scope, Arthur and Simone and others, but perhaps something that talks about the trade-offs, what the true costs are of running your own full tracing stack locally versus leveraging vendors that have economies of scale and can provide a lower total cost of ownership. In our case, at EverQuote, we've used both vendors as well as, for some things, our own in-house setups that we're prototyping, in our case Jaeger and Tempo and some other stuff, I think, in R&D. But the total cost of running everything yourself can be quite high, and that is oftentimes not talked about in purely technical papers that say, well, you do this and that. I don't know, though, if that crosses out of the technical and into the operational. That's not what I think; I personally would not go there. Sorry, Arthur. I think the costing is a really important factor; for beginners that don't know what to choose, cost is a very important thing to analyze. So I do agree that it's an important topic to add. Different vendors have different backends, and they can manage traces or metrics differently, with different costs. So yeah, I do agree, that's a great topic. Just to be specific, I am categorically not suggesting we build a big table of vendors and what the costs would be. I mean, just at a meta level, discussing that the total cost of ownership involves implementation, support, and so on, and then we can reference some of the other things I mentioned at the top of the hour; there's a rich ecosystem of vendors that can help, and contractors and everything else, or you could do it yourself. But I would want to steer very clear of specific costs or making any kind of kingmaking or favoritism, and just talk about, as a technology leader, here are the things one should consider when implementing distributed tracing for your business, sort of a look before you leap. Yeah, I mean, this is a journey, or also a kind of can of worms that sometimes you don't want to open. But anyway, I'm not opposed to it, but it's an extremely difficult discussion, this cost of ownership discussion, because it's pretty complex.
After all, it also depends very much on the use case and the size of the business and what you really want to achieve, and whatever, you know. Yeah, my mindset is: here are things to think about, but not here's how to do it, because, as you said, it gets very scary quickly if you try to go too far. I kind of have a question about this too. When you're talking about costs, one of the things I'm interested in in this space is almost the benefits as much as the costs, or the benefits in the context of the costs. We know that tracing allows you to track down things and whatever, but how do you balance the cost of having to instrument a bunch of spans in your code with the benefit of being able to see that kind of thing? Does that make sense? I'm also just curious what you all think in terms of how you weigh the benefits of adding tracing to your system. Yes, you can track something down, but is that worth what it ultimately costs to add something like this to your systems? Yeah, I think that ties into the point someone was making about instrumentation in general: what does that look like as an end user or as a practitioner? What's the level of effort and the ROI on instrumenting my code? It really depends on your language ecosystem, but, for example, OpenTelemetry Java does auto-instrumentation with the Java agent, and you don't even have to change a single line of code. So what's the level of effort and ROI on that for different language stacks, versus, and I think of it as, crossing that threshold from capturing data outside of the runtime code versus inside the runtime code, and what you can get from that. I can take a crack at that and see how big it gets. I don't want to derail that whole section, but I think it's really a good thing for people to think about, because a lot of people hear "instrument your code" and they think it looks like wrapping every single method in another layer of junk, and it doesn't have to be that way. Yeah. I think also the personas that I mentioned might help here, because what we've just been talking about is the cost to a developer to instrument their own code, which is a concern, or to do whatever's necessary to make traces happen for their service or their code. I was actually talking more from the business perspective: if I'm going to run my own trace stack, my own backends, operationally, and have SLAs back to engineering teams, there are costs around there as well that I don't think we can quantify generally. We could talk about how, if you're running a business, or if you're a startup, those total costs can be really hidden sometimes, and you can end up with half of your development resources going to tooling you didn't really plan for, because you didn't know going into it that there could be hidden costs. So not just from a developer persona, but from a business persona as well. Yeah, certainly, as a monitoring company, those are the personas we like to work with. I think that's pretty common.
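To put a shape on the "it doesn't have to be that way" point above: with the OpenTelemetry Go API, manual instrumentation can be a few lines around the one operation you care about rather than a wrapper on every method. A hedged sketch; the scope name, span name, and chargeCard helper are made up for illustration:

```go
// Sketch: lightweight manual instrumentation; only the business-relevant
// boundary gets a span, and child calls join via the context.
package checkout

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

func ProcessOrder(ctx context.Context, orderID string) error {
	// "example.com/checkout" is a hypothetical instrumentation scope name.
	tracer := otel.Tracer("example.com/checkout")
	ctx, span := tracer.Start(ctx, "ProcessOrder")
	defer span.End()

	span.SetAttributes(attribute.String("order.id", orderID))
	return chargeCard(ctx, orderID) // passes ctx so child spans nest here
}

func chargeCard(ctx context.Context, orderID string) error {
	// ... real work; could start its own child span from ctx ...
	return nil
}
```

The zero-code path mentioned for Java works by attaching the agent at startup (java -javaagent:opentelemetry-javaagent.jar -jar app.jar), which instruments common libraries without source changes; that contrast is exactly the level-of-effort trade-off being weighed here.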
I think, meanwhile, we have: first, developers; then cloud architects, who have the bigger picture in mind; then ops, and SRE or ops is almost one persona on its own; then the SRE, who again has different needs. So maybe, I don't know, I think it's common knowledge; if we discuss personas, maybe it helps. We're running out of time, I think. Yeah. If you're going to improve the tracing section, it could be good to ping Juracy, just so he doesn't get caught by surprise. One thing I suggested back in the day was also to talk about the fact that in tracing, even in cloud native, you have two dimensions. You have the dimension where you trace vertically within your application, let's say an application working inside its own pod or container. This was back in the day treated more like system tracing, when you were working with monolithic systems, if you worked with perf or LTTng or something like that on Linux. But as you move to cloud native or containerized applications, you have the other dimension as well: these applications use the network to interact with each other, and then you have the distributed tracing dimension. This was one suggestion I made back in the day, because, at least in the corners I come from, there's usually confusion when you talk about tracing and you are not specific; the person can think you're talking about perf or monolithic system tracing instead of distributed tracing. So if you're not specific enough, you cause confusion for the person on the other side. It was not adopted in the section, but that was a suggestion I made back then. I've long thought that we should include the system tracing part as well; if somebody knows a little bit of the work from, for example, Brendan Gregg, the guy from Netflix, this goes a little bit in that direction. Okay, let's move forward then, if you don't have any other comments. So, the crash dumps: I said that I'm going to add at least a part on the crash dumps. There were other people talking about other sorts of dumps, heap dumps or whatever other dumps, and I added the section here. They can of course suggest, as subsections, what they would like to write about, but I still have to add my text here. For the correlation of observability signals, Bartek said that he would take this section, so we can ping him on Slack as well, just to check. Is that actually related to the exemplars, how to correlate traces, logs, measurements, is that it? I don't know, I think you have to ask him. Looks like we are almost done, right? There are a few sections that are still empty. So, the data visualization and exploration: David. I don't remember when he suggested this one, but we can ping him anyway. I had another suggestion for a new section, if it's not already there, and that's the correlation of activity data and costing data. There are a couple of open source projects; if you actually go to AWS and say, hey, I'm running EKS, how do I correlate this, they point you at Kubecost, and there are other kinds of vendors, some of which have some beginnings of things. Google had a demo a while back that used some Stackdriver bits to do the same thing: take audit data from the Kubernetes activity stream and audit mechanisms, and then correlate that with other things.
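Going back to the exemplars question raised a moment ago: exemplars are precisely this kind of correlation mechanism, a trace ID attached to an individual metric observation so you can jump from a latency histogram to the trace behind it. A minimal sketch using the Prometheus Go client, assuming a recent client_golang version with exemplar support; the metric name and trace ID plumbing are illustrative:

```go
// Sketch: attaching a trace ID as an exemplar to a histogram observation,
// enabling metric-to-trace correlation in compatible backends.
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var requestDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "http_request_duration_seconds", // illustrative metric name
	Help: "HTTP request latency.",
})

func init() { prometheus.MustRegister(requestDuration) }

func observe(elapsed time.Duration, traceID string) {
	// Histograms in client_golang expose exemplar support via the
	// ExemplarObserver interface; fall back to a plain Observe otherwise.
	if eo, ok := requestDuration.(prometheus.ExemplarObserver); ok {
		eo.ObserveWithExemplar(elapsed.Seconds(), prometheus.Labels{"trace_id": traceID})
		return
	}
	requestDuration.Observe(elapsed.Seconds())
}
```

Worth noting that exemplars travel over the OpenMetrics exposition format, which ties back to the OpenMetrics work mentioned earlier in the call.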
But that general topic of using observability data and then mixing it with other things: I don't know if that again gets out of the scope of the paper, or if it makes sense to just introduce the concept that, hey, this is a thing people do, and here are some links to examples of that, but it's still nascent, right? But that would go under correlation, right? Under correlating observability signals; you can add it there, I would say. Makes sense. So there is this one here on service mesh use cases, from Michael; we can ping him as well. This one has been defined for a long time, and he had some good ideas about use cases, or at least examples from AWS. Alerting on observability data: this is a new one, and I think David suggested it just to see what the reaction would be, whether we leave it as it is or not. I think maybe we can just leave it out. This is a little bit in the correlation area, right? Because you're going to look into different observability data to generate alerts. Or, I'm not sure, we have to ask him whether this is something that would be under the umbrella of correlating observability signals. My point is that we don't have too many people here right now, or people are already leaving, and he added it like 30 minutes ago. Maybe we can discuss further on Slack; if more people agree, then yeah, let's leave it. I think we can decide. I mean, it's up to y'all to decide if it's in the context of this or not, but I do think that alerting specifically is sort of about what to alert on. So not so much "what are metrics", but "what metrics should I define?", and RED (Rate, Errors, Duration) and USE (Utilization, Saturation, Errors) are two common frameworks for that. So I wasn't sure, based on his comments, if he was talking about writing a section on best practices around where you should instrument, or, of the many metrics you could pull out of Kubernetes or anything else, what you should do, or if that was not what he was talking about with alerting. So I'm not sure either, but I do think that is something most people hit once they deploy these tools: they're like, great, now I have two million time series, what do I actually alert on so that I don't make people crazy? And here again, there is a wealth of existing tech talks that go deep into this that we could simply reference, or maybe we could introduce RED and USE as concepts, or as nouns, and then leave it there. I'll ping him on Slack and ask him what he meant. And I took notes here on the remaining sections that are kind of abandoned. Some have no owners, for example the zero-to-hero of observability, but the remaining ones have an owner but no content. So we can ping these people on Slack, and then we'll see if they're still interested or if they just forgot about it. But yeah, we are on top of the clock, and I don't have anything else to say. I'm just going to ping people, or at least go on Slack and say which sections still need some love here, and we'll see if the owners react. Okay. Just one thing more, about Yana: she just joined, like, as we were leaving. I don't know if she got the right schedule. It's Yana. Which schedule? I mean, the meeting started like one hour ago, and she joined just now. I'm not sure she knows that the meeting started one hour ago. Ah, okay. We have to drop, and people are joining just now. Yeah. Okay. Thank you, everyone. See you next week.
Bye bye. Bye bye. Hello. Hello. Okay, so the meeting started like one hour ago. Oh, thanks. We just... Yeah. Yeah, I can check. Where do you get the meeting from? The official CNCF calendar. The CNCF public events calendar has the meeting schedule there. Yeah. Let me see what the event exactly says, that's curious; for me, it's at six. Ah, that's interesting. Okay, the official calendar was adjusted, but the other meeting was not. So it's okay, the calendar seems to be fine. Yeah, I just opened the calendar and it tells me one hour ago. I think, because I have to use Outlook and Google Calendar at the same time, I usually forward the meetings so that they're actually on my calendar, and it seems to not update the meetings when you change them. Oh, that's why. But it's fine, I'm going to fix it for next time. That's good. Okay. Okay. See you. Bye.