Okay, we'll catch up with everybody else later. So let's get started. Do-do-do-do-do. AIs, nothing too exciting there. I do not see an Austin on the call, so we have no update on the SDKs — I'm sorry, I know we did not have a meeting, so there probably isn't anything to update there. Other than to say, I will mention that I bumped into Mark Peek, who's not gonna be able to make the call today because he's traveling, and they did get permission to transfer over their Go SDK; he just needs to actually make it happen. So that should happen relatively soon. Other than that, I don't think there's been any real progress on the SDKs front. Moving forward, Cathy, is there anything you'd like to update us on relative to the workflow subgroup? Oh, really? Thank you. No? Okay, thank you. There's a lot of background noise; people can go on mute if you're not talking, I'd appreciate that. Let's see, KubeCon. So Cathy Clemens and I had a discussion last week to talk about the preparations for the deep dive and intro sessions for KubeCon. If you guys are interested, there's a Google doc here under the update. I did put together some very, very rough slides, just as an outline of our various topics. If anybody's interested in contributing to that, even if you're not going, please feel free to look it over and correct things if you see that things are wrong. As of right now, the slide deck really won't have a whole lot, just sort of an outline of what we agreed to talk about. So if you want to see something close to the final version, you might be better off waiting a couple of weeks, but anyway, feel free to take a look if you guys want to. Interop work: as far as I can tell, no one's really putting any comments into the interop demo Google doc that I put out there. I really need some feedback here, in particular on the type of application we want to do.
The two leading candidates are the natural language translation, meaning go from English to French to German, back to English or something like that, or a madlibs kind of thing, where each participant in the demo just generates a random adjective, adverb, whatever, and we put it into sentences and see what kind of funny things we come up with. Personally, I'm leaning a little more towards the madlibs thing because I'm not sure how many participants actually support natural language translation, and if everybody just calls out to some popular servers on the internet, then that doesn't really do a whole lot for us, so I really need some feedback from you guys. This is probably not gonna be done in time for Shanghai, but we are looking to do something for the KubeCon session in Seattle. People probably need a lot of time to put code together, so please do comment on that if you get a chance. I really need some feedback there. All right, any questions or comments on any of the topics that we just ran through? All right, cool. In that case, Richard Hartmann is on the call. He's from the OpenMetrics side of CNCF. He and I had a conversation a while ago about the possibility of some sort of collaboration between our two groups, and we thought it might be good to just spend 10 or 15 minutes to have him talk to this group and share some of the ideas that we've bounced around. So Richard, do you wanna unmute and join? Thank you. Also sorry for the background noise, but that seems to have subsided. For current information, I will be at both KubeCon Shanghai and Seattle. We can also sit down in person if need be. I also always try to arrive a little bit early and leave a little bit late, so there's actually time to sit down in peace and quiet outside of the actual KubeCon. So, OpenMetrics.
Most of you will have heard of Prometheus, which is by now the de facto monitoring solution within the cloud native space, especially since it's really well integrated with Kubernetes, and Prometheus was the second project to graduate within the CNCF. And there's quite some velocity behind what we do. Yet there's a ton of political stuff around projects or even vendors supporting a competing product slash a competing project. So what we did is we took the standard which already existed for how Prometheus emits data and ingests data into its database, and basically updated it a little bit, mostly with input from Google and from Uber, into something more versatile which is still backwards compatible with the Prometheus exposition format. And that's what we're currently doing: basically creating a standard for transmitting metric data over the internet or whatever, with a special focus on the cloud native space. We already started to branch out into a second thing, which is not yet fully done, which we'll probably call OpenStats or something, which is more like StatsD, where you actually push data, as opposed to OpenMetrics, where you pull data. There are also some other operational differences; certain large deployments need that kind of thing. So we decided to split the two, nice and simple. But we also plan to expand beyond that to then cover, for example, events, log files, traces, all these things. So basically what we are doing is we are defining the format through which you can transmit data. And from what I can see of CloudEvents, basically what you're doing is you're defining what needs to be transmitted on the wire while being flexible about how the exact format looks. And that's basically where I see the potential for collaboration, in as much as we have an opinion about how things should look, and ideally things for metrics, for stats, for logs, for traces should look very, very similar.
So people learn it once and then they basically can deal with that data no matter what type of data it is. Whereas you only have minimal requirements on certain data types which need to be exposed, but otherwise you don't need to care about the format. And that's where I see a possible intersection: that basically you support something which looks like OpenMetrics but for events, and we on the other hand basically support it too, and then we call it whatever. That's basically the really short version. Right, right. Any comments or questions from people? I know you guys aren't that shy. You can go ahead; I can also, if need be, expand on stuff, on the wire format and such. Yeah, so I might have to. No, no, I think it might be useful actually to expand a little bit, because you were very quick on where you see the collaboration. I mean, can you put it in more precise terms? Like, for example, do you see us producing a CloudEvents mapping for OpenMetrics, or how did you see the collaboration? What would the collaboration result in, to put it that way? Ideally it would result in a format which is supported by CloudEvents and which is also supported by whatever we call it on our side for handling events. And then basically we define that you have to emit certain metadata about events within that spec, within that format. You define that it can be done through that format, and basically both look the same. So we have something that people can use, something existing which is coming from the metrics space, and stuff actually looks the same. Maybe it's best if I just copy and paste something in for people before trying to explain this; that's probably better. Is everyone aware of what Prometheus is and how Prometheus does things? No. Okay, very good. So let's start at that point. So basically we have this text-based format.
Prometheus itself actually decommissioned the proto, but Google needs it, so OpenMetrics keeps it; except for that, for our stuff we are actually quicker with just parsing text, which was surprising, but that's the case. I'm just going to look into the agenda. Okay, so I'm going to put the link into the agenda — I put it in there just now. If you click on that link, what you'll see is current live data from a demo instance of Prometheus, and this is how the Prometheus exposition format looks; basically, in this specific case, it would look exactly the same if you're doing OpenMetrics. So what you're seeing here is a certain format of how to define data which is attached to things. You have the name of the time series, then you have certain label sets, which are basically key value pairs, and then you see different keys with different values. This opens an n-dimensional matrix with which to slice and dice your data, and at the end, because this is metrics, you just have one single number, and this is the actual data being transmitted, or the actual value of the data. So if you look at that, you can easily see how you could transform something which is already now being done in one of the formats of CloudEvents to just look the same in respect to the format. And that's basically what I'd like to see come out of this possible collaboration. So wait, so in which direction? I'm not clear what you're expecting. Basically in both directions. So again, for what we do at OpenMetrics, we actually named the group in which we are doing this OpenObservability, to deliberately expand the purpose beyond just metrics, because you need more for true observability — I don't have to convince you of that. And so we say, as a CNCF member project, okay, this can or should look like that, and you as also a CNCF project say, okay, it can also look like that, and just by happenstance it's exactly the same.
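To make the format being described here concrete, the following is a minimal, hypothetical Python sketch of one line of the Prometheus/OpenMetrics text exposition format as Richard walks through it: a time-series name, a label set of key/value pairs in curly braces, and a single numeric value. The sample metric name and labels are invented for illustration, and this is a simplified toy parser, not the real exposition-format grammar (it ignores escaping, timestamps, and comment/TYPE lines).

```python
import re

# One line in the Prometheus/OpenMetrics text exposition format:
# time-series name, a label set of key="value" pairs in curly braces,
# and a single numeric value at the end.
SAMPLE = 'http_request_duration_seconds_count{handler="/api",method="get"} 1027'

def parse_line(line):
    # Split into name, raw label block, and value (simplified sketch).
    m = re.match(r'([a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(.*)\})?\s+(\S+)\s*$', line)
    name, raw_labels, value = m.group(1), m.group(2) or "", m.group(3)
    labels = dict(re.findall(r'([a-zA-Z_][a-zA-Z0-9_]*)="([^"]*)"', raw_labels))
    return name, labels, float(value)

name, labels, value = parse_line(SAMPLE)
```

The n-dimensional slicing Richard mentions falls out of the label set: each label key is one dimension of the matrix, and the trailing number is the only actual data point.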
And then we just, I don't know, say you support the open-whatever format within CloudEvents, and the open-whatever format on our side also supports CloudEvents. Because of the background noise I'm being distracted. Yeah, so, so far what we're doing with CloudEvents is we're expressing discrete events and then effectively dispatching and routing events. And these are different — so I think what we've mostly been focusing on, also in our interoperability samples and in most of the discussions, are events which are actionable, which means that you raise an event from a source, something just happened, and then you dispatch that to a handler which sits somewhere on the other side of the world, and that then goes and reacts to it. It looks like what you have here is more a data stream of observability things which are not necessarily immediately actionable, but rather need later processing, which is a bit of a different pattern and which we've also discussed — we have extensions that allow putting sequence numbers on CloudEvents, et cetera — but we're really looking more at discrete events which are individually handled and then individually dispatched through PubSub systems, and which are individually invoking functions, rather than looking at streams of events, and what you seem to have here is an event stream. I get what you mean. So, and that's where basically — or the other way around — your focus is on a different level than our focus. Our focus is to have a common format which people can basically parse and rely on certain aspects of, having a certain format for the data, to make it easier. For example, most systems today, or the legacy systems, are hierarchical data structures. You don't have them — that's great. We don't have them either. So that's, for example, something which people should be able to rely on: to have certain baseline assumptions about how data is being structured at the conceptual level.
And we both — just finish your thought. If you send something actionable, and I know that that is basically your current focus, you don't care as much about how the format looks. We care a lot about how the format looks, and that's where basically this is coming in. I don't really care what is being transmitted as long as the format looks the same for all the different types of data. That's where we are coming in from. So, we're looking at two totally different layers. Think of it this way: HTTP is the new TCP/IP — you've probably heard of that one. So, we want to be the format within HTTP to transmit data, and then people basically just have something which looks alike and can reuse parts of the parser, blah, blah, blah. So, why do we need to have another data encoding? You don't need to. If you say you're not interested, then that's totally fine. I'm not trying to convince anyone. My only thing is I want to make sure that people are aware this exists and how it works. And if it's interesting, then great, we should collaborate; if not, no hard feelings. So, quick question, Richard: did you ever see a time where the data that you're displaying here on this page would ever be transmitted as a cloud event? No, this specific type of data, no. But this is just for demonstration of how the format looks. Of course, it makes it easier to discuss things when you know what is being talked about specifically. So basically, if you have an event, you want to transmit that event. What I would expect to happen if you are open-whatever compatible is, you have a format in which you say, okay, this is the name of my event — that would be the first part; for example, where the cursor is, you see duration_seconds, though obviously you'd have a different name. Then all the metadata which must be included with an event would come within the curly braces.
Then at the end, you would have the actual value of the event, or maybe in your case you would put everything into the curlies as key value pairs. And that's basically it. And you send this over the line. And the only thing I would care about is that this looks basically like that, so that for someone who's familiar with OpenMetrics, OpenStats — again, we are also looking at transmitting traces with the same format, blah, blah, blah — this basically looks the same. And that's all I care about, at least on this level. All right. Where do you keep your specs? Mainly in our heads, and this is a large problem of where we currently are. So basically, we have a bi-weekly call, and we're currently in the phase where we are both implementing reference code — we already have reference code which emits the format — and also writing an internet draft with the purpose of this becoming an actual RFC. But it's not yet done. So there are bits and pieces of the internet draft in our repository, which I'm going to link to in a second. But again, this is not done yet, so you can't look at the full spec, which is also why a good short-term way of showing where the format ends up is to just look at demo traffic or the demo format. Yeah, without a proper wire spec. So do you have a type system? So basically what you see here is what we are doing. It's always going to be just transmitted over HTTP, always. Yes, exactly. I think you don't. And it always is either UTF-8 encoded text or it is proto. And then you have what you see there; that's the basic thing. There is a pull request open for the internet draft. I can also link the Python code for emitting. No, the question is, you are showing numbers there in that data — is there a formal type system? Yes, of course.
It's always float64 if you go by proto. If you do it via text, we basically took the shortcut that, as this is text, you can fit more than float64, but by the format you would be required to always have float64 as a minimum; you're allowed float128 at the end. Which means you can only represent fractional numbers and — okay. Yes, in the metrics space. Again, this is here exactly and only for metrics, and a lot of the design decisions made here have been made because this allows us to compress extremely well in the database — like, really extremely well. So what you see here would not work for events, for obvious reasons. Of course, this is just for metrics. So if you ignore the number on the right-hand side and basically look at the name and the key value pairs within the curlies, that's what you would be using. Well, since we're both about transfer — both projects have a notion of events, and I assume that the data that you have in your project is potentially useful for people who want to use the CloudEvents format. It'll be interesting to map one to the other and see whether there's conversion potential, but we would have to have, effectively, a full wire specification to look at. Yes, agreed. We don't have that yet. Basically, we will have it soon. This is a good place where we can have a deeper look. I can also just take some random demo data with a CloudEvents entry and transform it into what I would expect this to look like. Of course, this would basically short-circuit the need of having the full spec already written out, and you'd have already something to look at, and then we go from there. Yeah, so I think what I'm hearing is maybe revisit this topic once the specification is available for us to take a look at. Is that fair? Sounds good to me. As an intermediate offer, I can just transform some of your data into the format which I would expect to come out.
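The transformation Richard is offering to prototype might look roughly like the sketch below: render an event in the same name-plus-curly-braces shape as the metrics format, but with the payload as key/value pairs inside the curlies instead of a trailing float64. Everything here is an assumption — the event name, the attribute names, and the line shape itself — since no OpenMetrics events spec exists yet.

```python
# Hypothetical sketch: serialize a CloudEvents-like event in the same
# name{key="value"} shape as the metrics exposition format, with the
# metadata as key/value pairs inside the curlies rather than a number
# on the right-hand side. All names are invented for illustration.
def event_to_line(name, attributes):
    body = ",".join(f'{k}="{v}"' for k, v in sorted(attributes.items()))
    return f"{name}{{{body}}}"

line = event_to_line("com_example_deploy_finished", {
    "source": "/clusters/eu-1",
    "eventtype": "com.example.deploy.finished",
    "eventid": "A234-1234",
})
```

The point of the exercise is exactly what he says: someone who already parses the metrics format could reuse most of that parser for events, traces, and logs.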
Of course, this is probably 80/20 and not quite everything. Yeah. I'd prefer having a spec, or at least the concept, before implementing something, because I would like to understand how those things conceptually align; putting things in code is relatively easy, but having it make sense is a different thing. Totally fine, I don't have a preference. Okay. Is there anybody else on the call who'd like to chime in here? Yeah, hey Doug, this is Austin. I've got a quick question for Richard. Hi Richard. I know very little about OpenMetrics, but I'm just curious what level of industry adoption this effort has. So as we are backwards compatible with the Prometheus exposition format, we have over 300 different exporters written by hundreds or thousands of different entities. We are aware of either thousands or tens of thousands of companies actively using this wire format within the context of Prometheus. We have several competing projects and also competing vendors. I mean, we are a project, so we don't have a product to sell, but still there are vendors which have competing monitoring solutions which either adopted our format or even just pulled in our libraries — Apache, for example. So there's tons of adoption in there. Google is currently looking at making OpenCensus compatible with OpenMetrics — or they're not just looking at it, they are actively working on it. Uber is actively working on making their stuff speak OpenMetrics. I had Apple contact me two weeks ago. There are tons of others. So basically there is already quite some adoption, and also there is quite some interest in this. But this is only for the metrics and for the stats side. For the rest we didn't really get started; there are people interested, but we didn't really start this yet. Of course, we need to finish the metrics and the stats first. Great, thanks. Yeah, the backwards compatibility there sounds like an awesome way to approach the whole effort.
How long has OpenMetrics been around? As an idea, since I think 2015 or 2016; as something we are actually working on, since last year. Okay, cool. Those are all my questions. Thanks, Richard. All right, thank you. Anybody else have any questions? Comments? Hi, this is Dabane. You said you are looking to integrate Prometheus into OpenMetrics, or actually build OpenMetrics on top of that. Does that include the alerting side, and surfacing actual events — as we're speaking about them — from Prometheus and the related systems to users? Because I see that's where the biggest actual potential for collaboration would be: not in the actual metrics, but in what happens because of them. Yep. So basically, currently, what Prometheus is doing — there's already a branch which is able to ingest OpenMetrics data; that's the metrics side. On the events side of Prometheus, I fully expect that when we are actually done with defining something which transmits events, Prometheus and Alertmanager would migrate to that. We are also currently looking at having a completely new system for alerts or events within the Prometheus space, to not only have an Alertmanager which basically throws out alerts and then doesn't know about them anymore or doesn't persist anything. So there's more of a user story about actually following up on stuff and persisting important information, blah, blah, blah — like the central data point of "something is open, I need to fix it." And I would expect everything which happens in that space to use something which looks basically like what we're currently looking at. Also, for your information, Grafana is playing with something for logs, like a syslog replacement, which is also used. And it basically looks the same — you just have text and not numbers on the right-hand side, but otherwise it looks the same. So this would all migrate to exactly this format, or to this format family, let's say. I have one more question.
Okay, I think we're — go ahead, Clemens, but I think after this we gotta wrap it up. Yeah, yes. Yeah, go on, go ahead. I'm curious: if you look at the space of eventing, telemetry transmission, et cetera, there is MQTT, there's AMQP, there's Kafka, there's NATS, there's all kinds of streaming-specific protocols which are really good for this. And so I'm curious, why are you betting on HTTP? Basically, XKCD — I think it's 927 — applies: there needs to be one more standard. But seriously speaking, what we see, especially in the cloud native space, is that HTTP really is the new TCP/IP, and everything is an HTTP endpoint. And that is something which all of Prometheus is built upon, basically because it comes from Google, or is borrowed from there, or whatever, and there that is already the case. But the cloud native space is also really, really getting into that corner where basically everything's an HTTP endpoint, because it makes a ton of things a lot easier. You have all the tooling, and you see all this streaming, blah, blah, blah. So you have a ton of working code and working processes which you can build on top of, and that's why we chose that approach. Yeah, there's also a lot of gRPC and all kinds of other things, so that's not necessarily true. But anyways, thank you. Yeah, that's true. Again, we also have the proto, so you can also toss it in there — guess what Google and Uber are planning to do. All right, well, cool. Thank you, Richard, for talking to us. As I said, I think once you get the specification written, go ahead and ping me or us, and then we can probably have another conversation at that point. Does that sound fair? Sounds great. It will probably be one or two more months, and I'm going to be at both KubeCons. That's basically it. All right, yes, that sounds great. All right, thank you very much for coming and speaking to us today. I appreciate it. Thank you. Is it okay if I drop off the call?
There are people waiting for me. Oh, of course. Yes, yes, please. Okay, perfect. Thank you very much, whatever time zone you're in, and talk to you soon. Goodbye. Okay, thank you. Bye. All right, thank you guys. So let's move on to the PRs. Clemens, you opened up one about encoding exceptions. Let me bring that up here. I probably mislabeled that a little bit, or it's... That's fine. This is not about encoding exceptions but about exceptions to the encodings. Yes. Yes. Go ahead. Didn't we talk about this last week? I don't... Wait a minute, did we? I don't think we did, but maybe I'm wrong. Oh, maybe we didn't get to it because the other discussion was... Yeah, that's probably it. That's true, yes. Okay. So that basically says that we can go to the... Where can we make it go? The HTTP one is a little outlandish for people to look at, I think. So the HTTP spec, probably. Yes. Right here? Okay. Yes. So, effectively, CloudEvents extensions. This is effectively creating an escape hatch, and then someone raised — I can't remember, I think that was the view — that extensions may have to specify their own HTTP headers, or they may have a different mapping rule for the attributes that they define in other ways. And so, this is effectively allowing an extension to do that. Because we have pretty hard and fast rules here on how the metadata headers actually map down, and if you have an extension, then that actually may have a diverging mapping. I think that was motivated by the distributed tracing thing — OpenTracing, what did we have? One of the extensions has specific HTTP headers that must be used in a certain way. And this is basically saying, yeah, what you should do if you are implementing an extension is look into the extension and how it does that. And so that's what that first paragraph is for. And if you do this, then — and that's what the second clause says — you can't just do that for HTTP; you also need to do that for all the other transports that we have.
So, quick question for you. I don't see anybody jumping on the queue, so let me go ahead and ask my question. Would it be better to define this in one spot, like in the extensions document or in the primer, as opposed to having to do it in every single spec? We can. Because as I was looking through this, it just seemed like the same text is kind of repeated in most places, right? It's slightly different, but yeah, it's in essence the same thing. Yeah, that was my only question. Conceptually, I agree with it; I just thought there might be a better way to write it down. But any questions or comments from people? Where would these mapping rules be defined? Are they part of the extension definition, and do all the transports do the, I don't know, parsing of these definitions? Or does each extension need to modify the transports, which sounds like a bad idea? So the first is here — the distributed tracing extension, right? This would have to go and say, when used with HTTP, this spec needs to define how it wants its HTTP headers to look, right? "Must be encoded over HTTP as headers in the following way." And it basically needs to go and define clearly how it wants the HTTP headers to look. And I think for interoperability, even if just HTTP is defined in distributed tracing, if you're making an extension to CloudEvents, you must also define right here how that looks on NATS and MQTT and AMQP, because we're doing all the stuff right here. Yeah, I agree with that. But in the sense that — does every extension writer have to add to the transport to actually define and implement these rules? Or will they be parsed by some generic parsing done by the transport? So, no. So, in fact, this guides more or less our plugin model for the SDK, right? Ultimately, an extension should, if it can, avoid making any special rules. If it can't avoid them, which means it needs to go and align with an existing standard — like here, the OpenTracing distributed tracing — then it must do an override.
And then that override must be implemented effectively inside of the extension for distributed tracing. So, the way the SDK must be shaped is that the extension can go and influence how the headers work. Got it, that makes perfect sense. So, implementation will actually happen at the SDK level. Thank you. Yeah. That's how I look at that. Like, if we allow overrides — if we allow diversions from the standard mapping — then effectively, whatever we create as these extension plugins, the transport must be able to go and call into them, right? Use them to say, oh yeah, now I have some metadata and now you need to please translate that for me into headers, because I don't know how. Okay. Anybody else have any questions or comments on this? No questions on my end, but it looks super interesting. So, thanks for this, Clemens. Anything for you. So, Clemens, would you prefer to keep it the way it is now, or would you rather find a single spot to put this text? I can go either way; I just wanted to raise the question. I prefer having things in the place where you can't ignore them. Okay. That's fair. Which is right in the spec. I'm looking at this — I have another spec with all interdependent documents right now, with AMQP, and I'm also finding myself replicating a lot of text, just so that you can't escape the normative power of what I want to say by ignoring one of the documents. Right. Okay, that's fine. Okay. I believe this has actually been out there for a while, so hopefully people have had a chance to look at it. But we haven't had a chance to actually discuss it till today, so let me put the question out there: do people need more time to review this before I ask for approval? And do not hesitate to ask for time, because I don't think this is critical. So if you need more time, don't hesitate to speak up. But I also don't want to prematurely or unnecessarily block it either. Does anybody feel like they need more time to review this?
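The plugin model being described here could be sketched roughly as follows: the transport builds the default header mapping, then lets each registered extension override it. The "CE-" prefix, the extension name, and the Trace-Parent header are illustrative assumptions for the sketch, not normative CloudEvents text.

```python
# Sketch of the extension-override plugin model under discussion: the
# transport produces the default attribute-to-header mapping, then calls
# into each extension plugin, which may replace its own attributes with
# a diverging mapping. All names here are illustrative assumptions.
def default_mapping(attrs):
    return {"CE-" + k: str(v) for k, v in attrs.items()}

def tracing_mapping(attrs):
    # An extension that must align with an existing standard reuses that
    # standard's headers instead of the CE-prefixed defaults.
    return {"Trace-Parent": attrs["traceparent"]}

OVERRIDES = {"distributedTracing": tracing_mapping}

def to_http_headers(attributes, extensions):
    headers = default_mapping(attributes)
    for ext_name, ext_attrs in extensions.items():
        headers.update(OVERRIDES.get(ext_name, default_mapping)(ext_attrs))
    return headers

headers = to_http_headers(
    {"eventtype": "com.example.someevent"},
    {"distributedTracing": {"traceparent": "00-4bf9-00"}},
)
```

This mirrors the point made above: the transport cannot know how an extension's headers should look, so it has to call into the extension to translate the metadata.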
Can I ask one thing? Even if each transport spec defines how the extensions are mapped, would it not be quite useful to also explain the basic idea behind this in the extensions specification, or the extensions part of the specification, where one would actually learn about this without looking at individual transports? Because right now you wouldn't know about this: if you wanted to make an extension and you just looked at the main spec and the transport bindings, that might be problematic. That's true. I can either make that a follow-on item or I can go and extend this one to tie that together; I can do it either way. So Clemens, how much additional text do you think, beyond what we already have, which is what I'm highlighting right here? If we need to put that into the extensions section, then that's another paragraph. Well, this is already in the extensions doc. Oh, it is? Hang on. Yeah. Oh, there's already a section on that. I guess that might be enough. That's why I thought it might be a little bit of duplicate text, but I'm okay with it if that's the right thing to do. Okay. Any other questions or comments from people? Is there anybody who objects to, in essence, taking a vote on this one? Okay. Is there any objection to adopting it then? All right. Cool. Thank you, Mr. Clemens. Appreciate that. All right. Next is Clemens. Oh, look at that. Look at that. Geez. Oh, yeah. So are you picking the ugly one first? Well, I don't know. Which one do you want me to go with? This was the new one you just opened. Which one do you want me to go with first? Well, that's true. I made some changes to the other one, so let's look at the second one first. Okay. Can do. Because there are actually two commits sitting on top of each other. So now here I went all the way and renamed. So the text that we reviewed last week was kind of the core: constraining the character set down to all lowercase.
And now I actually went through the work and made it all lowercase through all the documents, including here in the HTTP spec, changing all the attributes and all the samples and all that. So when you scroll down, you'll see just uppercase-to-lowercase conversions. So that's not more exciting. The normative text is effectively the same; I didn't change that. I just did all the work to go through all the documents and make that change. Now you also see the effects of that — that this runs together a little bit. And I think that will trigger some reaction from people who are keen on having everything nice and aesthetic. But I don't think it's all that terrible. And, okay, I think that's enough scrolling. I mean, you can scroll through the whole thing, but it's mostly what it is. The normative text is the same as it was last week, and that is: we're making it all lowercase, and we only allow letters. Now you can look at the other one. Okey-dokey. Which place in particular here? Yes, we can go to the normative text, which is — I'm trying to find that — in spec.md, right? I guess, spec.md. Here we go. Here we go. Thank you. All right, so, the underscore symbol. So you see how this underscore already is messing things up, because it now makes it all italics — I didn't escape it. We can't use it then. Sorry. Like I said, that's my point, right? The underscore is a misbehaving symbol. So, the underscore from the ASCII character set, and you must begin with a lowercase letter. So I'm just adding the underscore. But effectively, now that I have the underscore as a separator — a word separator — now I'm also using it. And that's what I'm doing in this variation, where I'm now also using event_type, and I do this throughout — like, if you scroll down, I just renamed them all in an appropriate way, cloud_events_version and so on, did that for all those, and renamed them through the entire documents.
And now this looks a little nicer, especially if you're a Python person, probably. But it runs a little bit longer. You can see, yeah, it runs three characters longer in that one. And I'm okay either way. I'm leaning towards the first one, the one without the underscore, because of the risk of the underscore character overall, and because I'm a fan of saving a byte or two on the wire if I can. So I'm okay, ultimately, with either option. If people like the underscore better, then that's fine. But I'm personally leaning towards the more compact version, which means PR 321 rather than 327. And just to be clear, the only difference between these two PRs is that this one allows underscores and the other one does not. Yes, it's the underscore. And then, because I allow the underscore, I also go and do the logical thing and use it as a word separator in all the places where it needs to be. Right. But it is still optional for someone to use it as a word separator. They could still choose to not use it at all. Yeah, but then it's a little weird to go and introduce it. I mean, we could do the same thing, but I would find it a little odd if we would allow it and not use it. So does that mean that we should encourage people to use it as a word separator? Because what I'm wondering is, if someone defines an extension and they choose not to use underscores because, just like you, they don't see the point in them, is it going to hurt the adoption of our spec if we don't encourage extensions to also use underscores? Yeah, well, I don't know. Okay, so something to think about for the group. Okay, any questions for Clemens? This is Jim, not really a question, but just an observation. I would vote for the second one. It's just a stylistic thing from my point of view. I wonder, at the same time, if we could normalize some of the attribute names.
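The two candidate naming rules being compared can be sketched as validation functions. The regular expressions below are an assumed reading of the discussion (all lowercase letters and digits, beginning with a lowercase letter; the second variant additionally allowing `_` as a word separator), not text quoted from either PR:

```python
import re

# Assumed reading of the two proposals under discussion; not the PRs' literal text.
# Compact variant (PR 321): lowercase letters and digits only, starting with a letter.
COMPACT = re.compile(r"^[a-z][a-z0-9]*$")
# Separated variant (PR 327): additionally allows '_' as a word separator.
SEPARATED = re.compile(r"^[a-z][a-z0-9_]*$")

def is_valid_attribute_name(name: str, allow_underscore: bool = False) -> bool:
    """Check an attribute name against one of the two candidate rules."""
    pattern = SEPARATED if allow_underscore else COMPACT
    return pattern.fullmatch(name) is not None
```

Under this reading, `eventtype` passes both rules, `event_type` passes only the underscore variant, and `eventType` passes neither, which is exactly the case-folding exposure the all-lowercase rule is trying to close off.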
So, if we've got event type, why don't we have event source and event data, and just sort of keep it all consistent? That's the thing I didn't do. Right, to drop the event prefix. From last week's discussion, the idea is to actually go and trim the names and basically turn them into just type and ID and all those things. I thought of that, but that was a bit much surgery for my taste. Sure. In a single PR. But yeah, trimming the names is something that I'm in favor of, and it also gets us out of some of those underscore cases. Exactly, yeah. So, Jim, would it be okay with you if we dealt with that as a secondary issue after this one? It is related, but secondary. No, no, absolutely. I mean, I was just trying to avoid multiple iterations, and if you could do it in one go, then so be it. But no, that's okay. Yeah, I'd rather deal with one thing at a time, because otherwise it could make it harder for us to get anything through. So, okay. Any other questions or comments? I do have one question. Why is this even a topic? Because HTTP headers are case-insensitive across the board. Yes, but we have other transports which are actually case-sensitive. So, first, the attributes in AMQP are case-sensitive. And JSON is case-sensitive. So the problem we have is that parts of the infrastructure we're targeting are case-insensitive and parts are case-sensitive. And some people have observed libraries they've been using that try to be really clever about how they do casing with headers and did case-folding kind of all by themselves, for aesthetics. And that's causing all kinds of confusion, apparently. And that's why we said everything on the wire is always lowercase, and if you see that it's not lowercase, then you have to go and case-fold it down. I didn't make a PR for this idea, but what would you think of this: if your map goes into the JSON header, it comes along with a translation object?
If we use these in the header, and it goes from JSON or AMQP into this field, and we know we can't do a direct translation between the two, then here's a lookup guide. Actually, Scott, I think that's similar to something I had suggested a while ago, which was, in essence, to add another, I'm not sure what the proper term is, parameter to the HTTP header. So at the end of the HTTP header value, you'd see a semicolon, name equals, and then the real name in the right case. I think it's almost similar to what you're suggesting there. Yeah, yeah, exactly, like a decoder ring. Yeah. How does that work? I'm not sure how to transfer it. I have the ability to add that, though. It works a lot like how JPEG encoding works, where you can group long color values under a key, and then there's a lookup table that translates what the actual color is for that particular shortened color name. That sounds complicated. It's, yeah, it's very complicated. I'm not for complicated. So Scott, is this something that you'd like to actually seriously pursue and write up a PR for? Oh, what I'm questioning is why force every other transport to have this funny casing or underscores if they can handle casing, and the trouble is just HTTP headers. What we discussed last week was that case folding per se is a problem, right? If you need to have case folding in your system, that's causing trouble per se. But only in the case of going from HTTP headers to another type of transport, because HTTP is case-insensitive. Every piece of infrastructure is at liberty to do with the headers what it wants. So if you have an event that gets routed through multiple hops and one of the hops in the middle is HTTP, what can happen is this: the event gets routed through HTTP, gets case-folded, lowercased or uppercased however the intermediary wants, because it's okay to do that in HTTP, and then it pops out as something completely different on the other side.
If we don't constrain it, then basically the case-insensitivity of HTTP kind of bleeds into the downstream transports. But aren't all the proposals bleeding this same problem into the names? Maybe I just missed something. No. So we're looking at the underscore variation of this one, which I'm not preferring; I'm preferring the one that is all lowercase. The all-lowercase rule says: if you are putting an event on the wire, no matter what wire format, everything has to be lowercase. Yeah, Scott, I don't think it bleeds it through; this actually avoids the problem. I view it as avoiding the problem, not bleeding it. Yeah, yeah, yeah. It's avoiding the problem of case folding because we're not allowing uppercase characters. Right. And we're actually not allowing any characters where case folding would be critical. Does that help, Scott? I'll follow up on the PR. Okay. So, go ahead. Last week, Clemens, you and some other people raised some concerns about underscores also being a problem for some transports, and you said you were doing some research on that. Did you come up with anything? Yeah, I have not seen further cases. I'm just worried about it. So I haven't done deep further research into it. All right, I was just interested if you had spent some time on it. I was also going to ask, how will they be handled in HTTP headers? Do you have concerns that they're not allowed? You're literally making me look it up. I think they are allowed, but I just did some Google research and the hits suggested that some infrastructure might have trouble with them. Really? Okay. If you Google underscores in HTTP headers, you get a lot of forum hits where people have had trouble with, I don't know, nginx, Apache, and you name it. Apparently some CGI legacy issue. So it seems. Yet another reason. I have this hunch about all the separators.
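The "everything on the wire is lowercase, and if you see that it's not, case-fold it down" rule described above might look roughly like this on the receiving side. This is a minimal sketch under stated assumptions: the `ce-` header prefix and the dictionary shape are illustrative, not taken from any spec text.

```python
def normalize_received_headers(headers: dict[str, str], prefix: str = "ce-") -> dict[str, str]:
    """Fold incoming transport header names to lowercase so that a hop through
    a case-insensitive transport (like HTTP) cannot change an attribute name.
    The 'ce-' prefix is an illustrative assumption, not spec wording."""
    attributes = {}
    for name, value in headers.items():
        folded = name.lower()  # case-fold down, per the all-lowercase rule
        if folded.startswith(prefix):
            attributes[folded[len(prefix):]] = value
    return attributes
```

With this in place, an intermediary that re-cases a header to, say, `CE-EventType` still yields the same attribute key on the other side, which is the bit-for-bit stability Tim points to below.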
I've been trying to do these tricks with separators in URIs and headers for years, and they always run into trouble. So I'm wary of them all. So I've got to jump in here just for a sec. I apologize, the participant list window was half off my screen and I did not see some of you guys raising your hands, so I truly apologize for that. So Tim, your hand is up, I apologize. Oh, I think Clemens probably covered it. I just wanted to point out that if you adopt this proposal, then you're going to be 100% sure that the representation on the wire, the representation in the JSON blob, the representation anywhere, is always going to be the same, bit for bit. You're just reducing one place where somebody can get in there and screw things up, that's all. Okay, thank you. And I'm going to mispronounce it, I apologize. Tapini? Tapini? Yeah, just to follow up. Surely underscores might not be the best idea everywhere, but for example with HTTP, when you have only a single separator character such as the underscore, it's not very hard to just map it to the, what do you say, idiomatic separator in the transport, say the dash for HTTP. That's true, we could do that. It sounds like an interop problem waiting to happen, though. I can't explain why I feel that way. As I said, I give you that as an option, effectively, because people were concerned about not having word separators. If you look at 321 and scroll through, it's really not that awful. And really, in the SDKs you can go and make it idiomatic: it will be Pascal case where needed and camel case where needed, if you have strongly typed representations. So this is really just about the wire, so I'm not too worried about it. I'm putting both options up, and whatever wins, wins. Okay, so I think we're gonna have to call time on this one, because we're also at the top of the hour.
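The suggestion above, mapping a single canonical separator to each transport's idiomatic one, could be sketched like this. The per-transport separator choices are illustrative assumptions, not anything the group agreed on:

```python
# Illustrative per-transport separator choices; assumptions, not agreed spec text.
TRANSPORT_SEPARATORS = {
    "http": "-",   # dashes are idiomatic in HTTP header names
    "amqp": "_",   # keep the underscore as-is
    "json": "_",
}

def to_wire_name(attribute: str, transport: str) -> str:
    """Map the canonical underscore separator to the transport's idiomatic one."""
    return attribute.replace("_", TRANSPORT_SEPARATORS[transport])

def from_wire_name(wire_name: str, transport: str) -> str:
    """Reverse the mapping when reading an attribute name back off the wire."""
    return wire_name.replace(TRANSPORT_SEPARATORS[transport], "_")
```

Note that this only round-trips safely because the rule allows exactly one separator character; if both `-` and `_` were legal in attribute names, the reverse mapping would be ambiguous, which is one concrete form of the interop worry voiced in the discussion.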
I don't get the sense from the group that people feel like they're ready to do something as serious as take a vote on the three different proposals that are out there. I think people want a little more time to think about this and possibly continue the discussion next week. Is that fair? Is that agreed? I heard Clemens say yes. Anybody else? Okay, in the absence of anyone speaking up, I'm gonna jump in here and say let's do that. I wasn't gonna try to force a vote today, but I was going to force a vote next week and say, here are the three choices, and we're gonna have a voting scheme to decide. But I'm gonna hold off and say let's see if we want to continue the discussion next week. At next week's call we will decide on the voting mechanism to move forward, and one of the choices could obviously be to do nothing, because we can close all the PRs and say we want to find something else. But we'll come up with the voting strategy at next week's call, assuming the discussion does end, and then we'll figure out how to do a vote at that point. Does that sound fair to everybody? So we'll not be voting next week; we'll be deciding the voting mechanism next week, assuming we end the discussion next week. Yeah, sounds good. Yes. Okay. Sounds good. All right. Cool, thank you guys. And with that, I don't think we have time for anything else. However, before people start vanishing, let's do the final roll call. Rachel, were you able to come off mute yet? Yeah, can you hear me? Yes, I can, thank you. And I heard Tim, I heard Austin, even though I misspelled his name; I'll fix that later. David Lyle. David, are you still there? Yeah. Oh, is that David? Okay, I know Dan was there, pinging me offline. Doug, are you there? I'm here. There you go. Thank you. Yep, Renato. Renato, are you there? What about Stevo? I'm here. Excellent. Eric? Hello. Hello, thank you. Klaus? Yeah, I'm here. Excellent.
Christine? I'm here. Excellent, thank you. Nathan? Yes, I'm here. Erica? I'm here. Excellent, thank you. Hello. Here. Excellent. Is there anybody I missed? What about Renato, are you there? Or David Lyle? Oh, Renato. Okay, thank you. David, are you back? No. Okay, is there anybody I missed? Excellent, I believe we're done then. Thank you guys very much. We'll talk again next week. Thanks, guys. Thank you. Thank you. Thanks, bye. Bye-bye. All right, guys, thank you. Bye. Doug? Hi, it's Christoph. Hi, Clemens. Are you still there? Okay. Doug, Doug, Doug, Doug, Doug. Scott, Scott, Scott, Scott, Scott. Actually, the reason I got back in is, hey, Clemens, we were having a phone call about the prep session. That's true. Although I don't see Kathy. Actually, you know what, to be honest, I was kind of hoping we'd get on here and find that no one actually did anything other than me setting up the templates. That is the truth. And I'll tell you what, we'll cancel the call. Yeah, okay, we can happily do so, because my wife just called me to dinner and I said, well, let's start this call. But if that's what it is, then that's what it is. Well, it's okay. Kathy, welcome. I guess, Scott, you're sticking around for this too. So I did create the PowerPoint slide deck. I decided to put both slide sets into one long deck; that way we can easily just go from one to the other to make sure it all flows nicely. I didn't put anything in there other than basically the outline that we worked on in the Google doc itself. Is there anything you guys want to discuss on this call, or is it just a matter of us finding time to fill in our various sections? We need to find time to do some of the work, yes. And I haven't done the work. Right, myself included. I'm on vacation until Wednesday. Okay, Kathy, is there anything you'd like to talk about, or is it just a matter of finding time? Yeah, so I actually haven't had time to look at your Google doc yet. Would you like to quickly show the outline?
Well, to be honest, all I did is take the Google doc that we looked at last week, you know, the list that showed here's the topic for 10 minutes, and then this topic, and then that topic. I just put that into a PowerPoint version. That's all I did. Oh, so you didn't put in each slide title? No? Okay, I see. Well, here, okay, you're gonna make me do it. Here we go. So here's what we had last week, by the way. That's fine. I know you're busy. So this is the slide deck right here. Hopefully you guys can see this. Here's the agenda. Obviously a lot of this text is gonna get removed, like all this stuff from here, and so on. Yeah, okay. And then I created a slide for each of our sections with what was taken from the original Google doc. And then I figured you guys would just change the text as you see fit. Yeah. That's it. That's all I did. Okay, I see. So why don't we do this: Clemens, you said you're on vacation till Wednesday. If I don't see any updates to the PowerPoint deck by next Thursday, let's cancel next Thursday's call and talk the week after, but let's really, really push to get something in there by a week from Thursday. Absolutely. The following week I'm in Redmond, which means I have very long extended workdays because I have no home there. And I'm definitely gonna get something done in that week. Okay. Okay, so Kathy, are you okay with that? We may have a call next week if there are lots of edits to the doc, but if not, definitely try to get your edits in by a week from next Thursday. So two weeks from today. Yeah. Yeah. So for the SDK part of the doc, I think last time you said you would help with that, right? You'll do that part? I didn't quite follow that part of the work. Or do you want to move it to your section? I could do that. Okay, here, hold on. Okay, yeah, I'll change the title, because it's obviously not just a demo anymore. Yeah, right. Yeah, I'll fix that. Yeah, I can definitely do that. That's not a big deal. Okay, sure. All right. Okay.
What does the deep dive look like? Actually, hold on a minute. Let me make that change before I forget. I guess I already did it. Okay, deep dive. Here we go. There's that. There's the agenda. So you mentioned the deep dive. You know, it says there are only two people, so how come I still see three of us? That's right. Yeah, those are the rules of the conference about how people get listed. Yeah, so Kathy, they can only list two names on the session title, I'm sorry, or in the agenda doc itself. The fact that there are three people presenting, that's okay; they don't care about that. But they can only list two names in the agenda. And since you and Clemens are talking more than me, I thought it made sense for it to be your two names, not mine. Yeah. Oh, okay, but you'll still be there talking? Okay, that's good. I'll still be there. I just thought you guys deserved more credit than me, since you're talking more. That was it. Okay, I see. Thank you. Okay. Okay, thanks. I'll talk to you guys next week. Yeah. Okay. So Clemens, Kathy, anything else you guys want to talk about? No. Okay. Kathy, last chance. Yeah. Yeah, I'm fine. So after I put in my slides, we can review it, and then, yeah, at that time we'll probably have more to talk about. Yeah. All right, cool. In that case, we'll talk, hopefully, at a minimum, in two weeks. Yes. All right. I would like to put mine in by next week, by next Thursday. Yeah, I'm gonna try to as well, but, you know, things happen, so we'll see. Yeah, yeah. Bye. Bye. Okay, bye guys. Okay, thank you. Bye. Bye.