All right. Three past the hour. Let's go ahead and get started. All right. Anything from the community people would like to bring up? All right. SDK, I think you had a call last week, but I can't remember if there's anything worth mentioning. Clemens or Scott, can you guys think of anything worth mentioning? I showed most of a demo of a conformance tool. That's right. Okay. Did you want to show that off sometime? Let me work on it a little longer. Okay. That's fine. I wasn't going to do it today, but just in general, let me know when you're ready and we can schedule time for you to show the group. All right. So the incubator proposal for the project is still scheduled for September 17th. Just a reminder, if you have any questions or concerns with the slide deck, the link to the PowerPoint is here. The link to the Google doc is here. Just let me know if you want to make any changes and I can get those in there. Otherwise, I think we're pretty much ready to go. Oh, and of course, we're looking for more end users; we have three listed right now. So if you have more you can add, let me know. I'd like to get the list to be longer than three if possible. We do have an outline of an agenda for the two sessions at KubeCon. If you want to add any additional items to that list or want your name to be associated with one particular topic, feel free to go ahead and edit that as you see fit. We still have plenty of time, so be thinking about that. And now we get down to PRs. Before we jump into that, any other topics you want to bring up? All right, cool. In that case, Clemens, I believe you're up first with this data encoding one. Do you want to refresh your memory of where we left off last time? Well, I know where I was. I'm just not sure where everybody else is.
So, there was a long-winded story around having structured data inside of data that ended up with us treating this map as a structure that was independent of encodings. We gave up on that, and we now allow structured data in the data field, but we only allow it as the chosen encoding defines it. And so, specifically in JSON, there's no difference between a string and a JSON object, at least from an encoding perspective, because a string is just another JSON expression, and so is an object, and so is an array, et cetera, and that's all permitted. The only thing for which we didn't have a good way to express it was pure binary, because pure binaries are not representable in JSON itself, certainly not if we are encoding a structured event where everything is in JSON. So what we had before was this data encoding field, and then we decided that we wanted to probably rename it. And what came out at the end of it is this PR, which chooses to do away with that field altogether and simply makes a rule that says: if you look at the in-memory data structure and in memory you find a string or an object or something that is not binary, then you serialize it out as JSON and stash it into a member called data. And if you find in your in-memory structure a binary, meaning some binary byte array, then you take that, encode it in Base64, and put it into a field called data_base64. And those two are mutually exclusive. And there were then some suggestions on how to consolidate this. So first of all, there was a comment that you made, Doug, about those having to be mutually exclusive, so I added that. That's in line 135 down there, right there.
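The data/data_base64 rule being described might be sketched like this (a hedged illustration of the proposed JSON mapping, not any SDK's actual API; the helper name is made up):

```python
import base64
import json

def encode_event_data(event: dict, data) -> dict:
    """Sketch of the proposed JSON rule: binary payloads go into
    'data_base64', everything else is serialized as-is into 'data'.
    The two members are mutually exclusive."""
    if isinstance(data, (bytes, bytearray)):
        event["data_base64"] = base64.b64encode(bytes(data)).decode("ascii")
    else:
        event["data"] = data  # string, object, array, number: all valid JSON
    return event

binary_event = encode_event_data({"id": "1", "type": "example"}, b"\x00\x01")
structured_event = encode_event_data({"id": "2", "type": "example"}, {"temp": 21})

print(json.dumps(binary_event))
print(json.dumps(structured_event))
```

Note how no separate marker attribute is needed: which member is present tells the receiver whether the payload is binary.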
And then, as you see, there are a lot of other deletions, because all of these rules around, you know, detecting whether it's text/json or whether it's +json and all those things basically completely go away, with us no longer making a difference between our previous transcodable way of doing structured data and this JSON model. We're now effectively trusting that if you have a runtime with good serialization support, like a C# app or a Go app or a JavaScript app, it can produce a conforming document out of a memory structure, and then you read that back into a memory structure, and from that memory structure you can presumably also produce a reasonable Avro document without too much loss. So we're basically assuming that the serializers will do the right thing here. So that's the total of the story, I think. The core of this change really is to do away with all the analysis around JSON, because we no longer need it with the rule that we have, and basically just say data can occur two ways: there is data_base64, which always contains Base64-encoded information and is always binary, and data, which contains all the other types because they're representable in JSON. There's an analog in the AMQP encoding; there's a parallel PR that's also pending, where Evan made some changes. Hold on so I can find that one. I think it's in the fix-up-JSON-type one. This one, that one, yes. Okay, it's making changes to AMQP. If you go to the comments, I actually have one. So I agree with everything that he did here, but then I made a comment down here. So AMQP has this: the AMQP body can occur in three forms. There's an AMQP value.
There's an AMQP data, and there is an AMQP... I forget, because it's rarely used. I think there's an AMQP map also, but AMQP value and AMQP data are the two things that are used, mutually exclusively. So if we make that extra change, we're effectively symmetric here between AMQP and JSON, because there we have a place where a pure binary goes, as an AMQP data section. And if you wanted to have structured data inside of the message, you put that into a value, and the functionality is exactly the same; it's effectively exactly equivalent between the two of them. Okay, so let's focus on this one first just to get one behind us. Okay, so this is the data and then data_base64 one. So this looks good. Right. I think from last time, semantically I don't think you changed anything; I think it was more syntactical and rewording-type stuff, right? Yes, that was all. I just wanted to retell the story so that everybody's really on the same page. Yeah, I made some minor changes to consolidate. Basically Evan said you should take this and consolidate it with section three, because it's said in section 2.3, I think. Yeah. And so I consolidated that, and as I did this consolidated text, I ran into all this, like lines 140 through 151, that then no longer made any sense, and I just threw that out. Right. Okay. All right, so let's open it up. Do people have any questions or comments for Clemens on this PR? I see Tim in the chat is in agreement with the general direction; he may have a wording change for later, but generally it sounds right to him. Anybody else want to comment? Any questions? I'm starting to get a little nervous about how much we're changing right before one. Can you give us more time for all this to soak as we implement the V1 RC1? So I definitely think it's a valid concern.
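The symmetry being described could be sketched as follows. The section names follow the AMQP 1.0 body sections mentioned in the discussion; the helper and its tuple return shape are purely illustrative, not any AMQP library's API:

```python
def choose_amqp_section(data):
    """Mirror of the JSON rule, but for AMQP body sections:
    binary payloads go into a Data section, anything else into
    an AmqpValue section. (Illustrative shape only.)"""
    if isinstance(data, (bytes, bytearray)):
        return ("data", bytes(data))      # AMQP Data section, binary only
    return ("amqp-value", data)           # AMQP AmqpValue section, structured

print(choose_amqp_section(b"\xff"))
print(choose_amqp_section({"temp": 21}))
```

The branch structure is the same as in the JSON mapping, which is the "exactly equivalent" point being made.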
I think that's a separate discussion though, and I think that goes into when we want to take the vote for 1.0. So I'd rather focus on that later on in the call, if that's okay, Scott. Okay. And on this particular PR, though, unless I'm misinterpreting what you said, Scott, are you comfortable with this one, or would you want more time to review this one as well? I think these are changes that make sense. Okay, there are a lot of changes I would have to, you know, pick through. Okay. Like I said, I'd like to focus on the small little pieces, do baby steps, and if people are okay with the general direction, I'd like to get that behind us and then deal with the more abstract question of whether we need more time to review 1.0 before we actually call it 1.0 as a separate discussion. So, any other questions or comments on this one? Mark, you were up there for a sec. Did you change your mind? I was going to ask Scott if he did feel that this was 1.0 worthy. I'm just trying to nuance Scott's comment. Yeah, I don't know, because I haven't used it in a live system yet. I don't know the interop story. A fair amount of pieces have changed, and though I agree with the Base64 data change, it's going to make version upgrades very difficult, like going from 0.1 to 0.2 to 0.3 to version one. There are a lot of API-breaking changes we've made in the last few weeks that are going to be difficult to understand how to migrate in currently running systems. Since we killed map, we caused things to break, and we need to go and clean up, and that's one of the cleanup items. Yeah. We made the breaking change, we now have debt from it, and we need to go deal with that debt. Yes, I agree with killing the maps; it simplifies a lot of things for HTTP binary mode. But upgrading to 1.0 is going to be very difficult.
I'm trying to figure out, Scott, whether your comments are along the lines of maybe you don't want this PR, or along the lines of just sort of venting, like, yeah, not happy about it, but it's the right thing to do. I am happy about it. There's just a lot of changes happening. Yeah, okay, that's fine. And, you know, people are going to deal with the ramifications of these changes that we've agreed to. It's sort of a given. Yeah. Okay, ultimately, the reason why this hits in the JSON encoding, and why there's another one over in the AMQP encoding, is that those are really changes that only have to do with the wire representation; they don't affect the model per se. The way this will surface to an application is still: there is data. Right, and the in-memory representation towards the app will be that either there's a byte array or there is an object graph you can interact with. Based on that difference, the wire representation will be different, but the in-memory representation should be stable, and it should still be accessible through whatever the abstraction of the respective API is for data. That should not change. There's really only a restriction in that JSON can't represent binary well, and then there's a special convention with AMQP that if you have binaries you should stick to the data section, but these are the only two differences, and they have to do with the wire format. Yep. Okay, thank you. Another round of questions: any questions or comments? Any objection to approving the PR? Give me an LGTM in the PR. Can we get a collective couple of LGTMs? I'd like a little time to pick through it again. So you want to do an offline vote, basically. Yeah, people should go and look at the PR and LGTM at the bottom.
We could if you really want to. Love it. I'm just trying to figure out... okay, it sounds like, Scott, you're saying you'd like a little bit more time. Which aspect of this is the most concerning? Is it just the introduction of the data_base64 version? Yes. Has anyone tried to use that yet? It adds a little complexity to the receivers. Yeah, but it's also killing an entire attribute. As for the complexity of not relying on an extra field: this is literally just taking the marker that we had in data encoding with base64 and effectively moving it into the member name, if you will. From an information-content perspective, I would argue it's exactly the same. So it's less complicated than it was, and we're eliminating a bunch of extra rules as well. So I guess I'm trying to figure out: if we decide not to approve it right now and try to approve it offline, in your mind, what will change between now and when you finally type LGTM? People will read it. Well, you know how that goes. I'm just worried that we're delaying it for no real reason. I mean, if you're saying you personally need more time to think about it and, you know, actually implement it, that'd be one thing. But if you're doing it because you want to get other people to do stuff, realistically, I just don't see that happening. It's been out there for two weeks now, basically. So unless there's an objection... I don't want to force it on you, Scott, but if all it is is trying to get others to do stuff, we just haven't had much luck with that. So that's why I don't want to push it out and delay it unnecessarily.
Yeah, I think if we want to stick to our timeline of getting this all done before the beginning of October, we have to go and make some progress on these. And just to remind everybody: even if we do approve this right now, that doesn't mean it's a done deal. During this whole review period that we have lined up for ourselves, if somebody comes up with a reason why this is a bad move, we could always revert it and go a different path. You know, we're not 1.0 yet, so we still have that flexibility. Yeah, that's a good point. Okay, so just to make it more official: any objection to approving this PR? All right, cool. Thank you. Let's see. Next is Evan's clarification. That's this one. Unfortunately, I don't see Evan on the call, so let me ask a quick question here. Okay, so I don't think he has any drastic changes since last time. There are a couple of RFC wording changes here, but I don't think that actually changes anything from the intention of this one. Scott, do you know enough about this one to talk to it, or do you want me to try to mumble my way through it? Or are you going to mumble? Mumble, fumble. I think most of the beginning part was just syntactical changes, I believe. Now that we've removed other changes, I think he clarified that there's a mapping between data content type and the Content-Type header for HTTP. I think that was one change; he just made that a little clearer. And, oh, I think one of the reasons we went back there was the percent-encoding stuff. So this section down here might be a little bit new or reworded, because it talks about percent encoding, when to do it, and making sure you don't do it twice, that kind of stuff. Let me hide the comments here for a sec to make it easy to review. It's the button right next to it, I think. Yeah, I think I already got that.
So I think everything else is just syntactical, and this is really the bulk of the change right here. He made an update. I think he had more encoding rules, and after Tim brought the wonderful update to clarify what a string is, he collapsed that into something that's a little bit simpler. Right. All right. So, Jim, welcome back from your break; your hand's up. It is. Thank you. One, I haven't looked at this, to be honest, but if you could wind the screen back up again, there was something to do with changing... stop, down a bit, down a bit. The data content type. Why are we having to treat those differently? Wait, which, oh, this section? No, one further down. Down here? All CE attributes, with the exception of data content type, must be individually mapped. That was there from the beginning. Oh, okay. Yeah, because what we did is we were effectively taking the data content type and making that the proper content type of the message, because we're sticking data into the entity body. So those fields are directly corresponding, which means we're no longer replicating that into the CloudEvents bucket. So that doesn't change. Okay. All right. Yeah, I think technically everything else is just a word or a line-wrapping thing for the most part. So I think down here is the bulk of the change. So Clemens and... not Heinz, Klaus. Klaus, I think you may have looked at this one as well as Clemens. What are your opinions on this one? I think I know you two have reviewed it; I believe that is correct.
So, we assume that all strings are Unicode, but if you find a string that is not valid as an HTTP header value, you run percent encoding on it when you write it out, and then when you read it back you percent-decode it, and that gives you back a Unicode string again. And I think that's why he had to call out this notion of a single round of percent encoding. I'm not sure why anybody would think they would need to do more than one round of percent encoding, but it seemed like a minor thing. People do all kinds of weird things; maybe there is pain behind that line. All right, Klaus, am I remembering it correctly? I thought you had a comment on this PR at some point; that's why I picked on you a little to comment here. Maybe not; maybe I'm remembering it incorrectly. I remember reading it, not commenting. Okay. Well, do you or anybody else on the call have a comment on this one, or a question? I don't actually think it changes a whole lot, to be honest. I think it's more of a clarifying text. But I think irrespective of whether the string does or does not contain characters outside the ASCII range, you have to run percent encoding anyway. So I think we need to make the rule: if you store a value in an HTTP header and you take a Unicode string in, you have to percent-encode it. Because you may have percent signs in that string, which are in the permissible range, and those you have to percent-encode as well. So are you just reiterating what he says here, or are you suggesting a wording change? No, he says string values which contain Unicode characters outside the ASCII range.
And I'm saying even if you look at that string and it is all within the ASCII range, it may still contain the percent character, which needs to be escaped. And as I understand it, the rule is not correct as written. You have to go and use percent encoding in all cases. So basically you want to replace this whole paragraph with just "percent-encode the values"? I just want to remove the clause, so it reads "string values must be percent encoded." Just get rid of that text right there. Yes. What do other people think? Does that seem right? I'm trying to figure out if this is the kind of change that we can agree to now so I can ask Evan to make it, because he may appear later, or do people need more time to think about that? I think this is exactly the right direction, and the thing we need to do, because it will close the hole that we have. So I would approve pending that change. What do other people think? Is everybody on the call comfortable with approving the PR conditionally, with this highlighted section being removed? Not hearing any objection. I'm not hearing people jumping up and down excited either. Heinz, go for it. Heinz, you have to come off mute. Sorry. There we go. I believe that it should be approved, since most people will only bring up the negative. Okay. Hold on a minute. Okay. Anybody else want to comment? Okay. Let me ask the question: any objection to approving this PR with the removal of the highlighted text? All right. Not hearing objection. Thank you, guys. Okay, let's go back to this fix-up-JSON-mapping thing. So, Clemens, you briefly talked about this one, which I know implies you've actually reviewed it. You already sort of talked about it in the context of the other PR, but is there anything else you want to add about the changes? I think this is the bulk of the changes right here. These are effectively just editorial changes that are necessary because we didn't track them all.
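Klaus's point about the percent character can be sketched like this (a hedged illustration using Python's standard percent-encoding helpers, not the spec's normative algorithm; the safe-character choice shown is an assumption):

```python
from urllib.parse import quote, unquote

def encode_header_value(value: str) -> str:
    """Percent-encode a Unicode attribute value for an HTTP header.
    Run it in all cases: even a pure-ASCII value like "50% off"
    contains '%', which must be escaped so decoding is unambiguous."""
    return quote(value, safe="")  # escape everything outside unreserved chars

for v in ["plain", "50% off", "gr\u00fc\u00dfe"]:
    encoded = encode_header_value(v)
    assert unquote(encoded) == v  # exactly one round of decoding restores it
    print(v, "->", encoded)
```

This is why the ASCII-range condition is the hole: skipping encoding for an ASCII value containing a literal `%` would make the receiver's single decode round corrupt it.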
So he's effectively removing the mapping of Map. It doesn't preclude that you can use the AMQP map, because you're free to use whatever you like for data, but we only have data now; we don't have the Any type and the Map type anymore, so that wording needs to go. And the only thing that needs changing here, I believe, is the one that I raised about adding AMQP value support. And Christoph is making a comment in the PR. So this is one where there are two changes, but I don't think anybody should have an objection to fixing up the necessary things for AMQP. Okay, I'm taking some notes here. Okay, so let's go back to your comments. Christoph, I made a note about a comment on yours. Oops, darn it. What am I doing? Hold on. I apologize; my screen is responding ever so slowly today, so it's really throwing me off. We're on the fix-up one. Hold on a minute. There we go. So on this comment here, it wasn't clear to me what change you were asking for in the comments. So there's a line in the data section about the way the data is stored; it says the data is stored in an AMQP data section. That's what the PR says. Okay. Effectively what this needs to do is the same thing: the data payload shall be mapped to a single AMQP data section, and the data section is per se binary only. Which means if you now have structured data inside of the in-memory data field, you can't stash it in there, because there's no serialization model for this, but AMQP value is a serialization model for that. So effectively you would take the text that I have in the JSON PR, where we say if this is a binary, you store it into, sorry, data_base64, and anything else you store into the data member.
The same thing happens here, where if it's a binary you stash it into AMQP data, and if it's anything else you stash it into AMQP value; AMQP per se has that sort of distinction built in. So I think conceptually that makes a whole lot of sense, but because that's more than just a couple of wording changes, and it actually introduces a whole other sentence or two, possibly talking about a different attribute, we can table that. Yeah, I was going to say, I feel uncomfortable approving that right now. Yeah. Can you do me a favor, though? Can you put the exact text that you'd like to see as a comment inside the PR right here, so that it's really easy for Evan to just copy and paste? Yes, sir. Thank you very much. I think that'll speed things along. Okay. Any other questions or comments on this, or concerns with the direction we're headed here? Okay, so I'm assuming people are good with that general direction once we get those changes in. Okay, cool. Next, the Avro mapping. Let's see. Clemens? Yeah, I'm not happy with that change. You're not. So with that change in particular, I'm not happy. Okay, what about this one worries you? So, what he does now is... this schema exists not to perfectly describe a CloudEvent in Avro. It exists so that we can serialize an event in Avro at all, because Avro is a format that requires a schema to do serialization. So you had this recursion so that you can effectively serialize structured data inside of the data field. That's why the recursion existed. Now we changed the rules around data and we took the map thing away, so he made a change that the data type is always bytes.
So again, that's not symmetric with what we do with JSON, because in JSON we can have structured data inside of data. So the recursion that the old schema has, where data can contain any other fields, is something that we need to preserve; otherwise we can't take structured data into that field. So this is effectively a change in how the Avro serialization works, because we are constraining the serialization schema. This schema is a little different in that it doesn't serve the validation purpose, as you would have with JSON Schema; this is the thing that actually drives the serializer. So with that change, we would take away the serializer's ability to deal with structured data and structured information inside the data field. And that's why I don't like it. Okay, I don't think he had a chance to comment back to you on that one. No, he hasn't. So that's a branch that we need to leave pending until next week. Yep. Okay. Off mute, is that a new hand or an old one? Did you have a comment on this? Well, you know my feeling about adding these extra serializations, but I still believe that Clemens's point is very valid: we're using the constraints of the serialization to kind of bend the spec. Are there additional changes you'd like to see beyond what Clemens said? No, actually, I think the main one is, as Clemens points out, the data field and eliminating the structured data type if it's all going to be byte arrays. Okay. Any other comments on this one, then? Any disagreement with the proposed changes that Clemens was talking about? Okay, so we obviously can't approve that one. Hold on a minute. Okay. Before we move on to the next one. Yes. There is another transport event format that we're missing, which is the protobuf one. I don't know if anyone wants to open up a PR. Yeah, I can. I'm not a protobuf expert.
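The contrast Clemens draws could be sketched roughly like this. These are illustrative fragments of an Avro record field, not the actual schema file under discussion; the recursive type name is a made-up placeholder:

```python
# Proposed in the PR: data is constrained to bytes only, so the
# serializer can no longer carry structured values in that field.
data_bytes_only = {"name": "data", "type": "bytes"}

# What the recursion preserved: a union of branches, including a
# recursive branch, so structured data remains serializable too.
# "CloudEventData" here is a hypothetical name for the recursive record.
data_union = {
    "name": "data",
    "type": [
        "bytes",
        "string",
        "boolean",
        "double",
        {"type": "map", "values": "CloudEventData"},  # recursive reference
    ],
}

print(data_bytes_only["type"])
print([t for t in data_union["type"] if isinstance(t, str)])
```

Because Avro serializes only what the schema permits, narrowing the field to `bytes` removes the structured-data path entirely, which is the objection.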
I can volunteer to do one, but I'd just do what a newbie in protobuf would do. Not bad. Sorry. Yeah, that's not terrible. Well, I know Evan did mention it somewhere; I became aware that he was going to take a look at the protobuf one, but I don't think he's had a chance to do it yet. But yeah, Christoph, if you want to take a... I'm sorry, was that Christoph? Yeah, it was me. Okay. Christoph, if you want to take a first pass at it, go for it, because I know Evan is busy as well. Okay, sign me up for it. You can take it as a compliment; I like assigning stuff to Christoph. Same with Clemens. So, okay. Let's see. All right, I'm going to pick on Scott here. Since Evan is not on the call, you get to be his proxy. I think you had some comments on this one. Scott, do you want to explain the issue? Yes. So there's an issue with HTTP binary where the extensions that you might add don't flow through middleware and then turn back into what you expected on the HTTP side again, because the spec says you have to prefix all extensions with ce- in the header. This spec allows you to send other things that are not prefixed, but there's no guarantee that that extra extension that's unknown to the middleware will actually make it to the other side. So CloudEvents becomes a very lossy protocol if you switch transports. Yeah. So let's click on the distributed tracing thing so we can understand that issue. So what this does here is that, for HTTP, this extension has a special rule which says you don't call this ce-traceparent and you don't call this ce-tracestate; you really use the W3C default headers instead. So not the CE ones; you don't send the ce- prefix.
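The default mapping and the tracing exception being debated might be sketched like this (header names follow the HTTP binding's ce- convention and W3C Trace Context; the helper itself is illustrative, not any SDK's function):

```python
def extension_headers(attributes: dict) -> dict:
    """Map extension attributes to HTTP headers. By default every
    attribute gets the ce- prefix; the distributed-tracing extension
    overrides this and uses the bare W3C header names instead."""
    w3c_overrides = {"traceparent": "traceparent", "tracestate": "tracestate"}
    headers = {}
    for name, value in attributes.items():
        headers[w3c_overrides.get(name, "ce-" + name)] = value
    return headers

print(extension_headers({"myext": "1", "traceparent": "00-abc-def-01"}))
```

The override is exactly what exposes the tracing attributes to tracing-aware middleware, and also what lets unaware middleware treat them as ordinary, droppable HTTP headers.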
Of course, middleware that is not aware of that mapping, as rightly stated, will then basically pass that event through, but it will pass it through with plain HTTP headers, and an unaware intermediary might then strip the information at the CE level. That's correct. However, that is a decision that the extension makes. So this is not a general problem; it's a decision that this extension made, because it decided that it wants to have a different mapping. Right. So HTTP using JSON would not be lossy over multiple protocols that also publish JSON bodies, because of the way that things are bundled. Because we have to change the header name, HTTP in binary mode is kind of at a loss. So this rule here, right, is causing this. It's nothing that is in the core spec or in our HTTP binding; it's really that this extension chooses to do this. It chooses to do the override; we permitted the override specifically for that purpose. Now here's why that's right. These mappings go both ways. It is legitimate for you to send a cloud event to a proxy, and the proxy, let's say Envoy, decides by configuration that it now wants to go and add W3C trace context into that message, because the application doesn't care about it and doesn't have tracing. So you want to start doing tracing via an intermediary, effectively; you want to do that via the interceptor. Now, that is just standard tracing capability that the Envoy proxy adds, and it adds traceparent and tracestate.
Now, if you deserialize a cloud event and you have that extension activated, so to speak, you get that injected context back into your cloud event because we have this mapping, and that's a function of HTTP and a function of this whole trace context story. That is intentionally so. And because in this trace context world there's also a mapping to HTTP, a mapping to AMQP, and a mapping to MQTT, they actually make rules that if you get an HTTP request and then make any downstream request for which a binding exists, you have to go and pick up that traceparent and tracestate and use it in any downstream request. You are supposed to propagate it, which means specifically for trace context, the propagation is guaranteed, more or less, by the trace context specification and not by us. That's why that's legit. So it's a "problem," in quotes, that exists because we're handing off control, because this HTTP mapping here chose to do that, but it's not a problem we have in principle. If we define an extension and we don't make that extra rule, then all the extension attributes will obviously be prefixed, and then they will also be propagated. Okay. So I think the order people raised their hands is Christoph, myself, and then Jim. So Christoph, I think you're first. One thing Scott said was that this would only apply to binary, but if I'm reading this correctly, it should also apply to structured mode; you should also use the headers in structured mode for the other libraries to work. And one solution, maybe it's not a good one, could be to duplicate them: use those headers and then also have it in the structured or binary CE headers, so it's there twice. I know it's more data, but maybe then both ways are being served.
This is a particular case for tracing, because it must be in the headers. Yes. And so that also makes sense to me, to do the duplication. So I think to solve this particular problem, we can make a clarification in the spec that has a sentence or two about what I just said, and then clarifies that the data should be duplicated into the plain HTTP headers and should also be carried in the CE headers. That matches something we have in trace context: Microsoft is proposing, in the W3C, an AMQP binding. There's a draft out there, and I don't know why that hasn't settled, because the W3C is even slower than we are. But since an AMQP message is not changeable once you send it, and this is about propagating state through intermediaries, the thing we're proposing there is that the original trace context goes into the properties of the message, and any changes you make to the trace context need to go into the annotations of the AMQP message, which means there's also duplication there. The advantage of that is that you now effectively get two trace context paths. One is the trace context as it originated in the application; that is what would end up in the ce- header, because HTTP infrastructure doesn't know about it and won't touch it. And then you have the trace context fields that are presented to the HTTP infrastructure. The HTTP infrastructure may want to manipulate the context, because that's a breadcrumb thing that happens: you take your input trace context, you make changes to it, and then you stamp your outbound message with the new trace context. That gives you two paths. One is end to end, where it's literally just cloud event to cloud event to cloud event, with all the transport infrastructure in the middle.
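The duplication idea floated here can be sketched as a small helper. This is a hypothetical illustration of the proposal, not spec text: tracing attributes are written twice, once under the bare W3C name for HTTP infrastructure to manipulate, and once under the `ce-` prefix so CloudEvents-aware hops carry the original value end to end.

```python
def to_headers_with_duplication(attributes: dict) -> dict:
    # Write every context attribute under the ce- prefix; additionally
    # write tracing attributes under their bare W3C names, giving the
    # two trace paths described above (end-to-end vs. hop-by-hop).
    headers = {}
    for name, value in attributes.items():
        headers["ce-" + name] = value
        if name in ("traceparent", "tracestate"):
            headers[name] = value
    return headers
```

The cost is the extra bytes on the wire; the benefit is that intermediaries rewriting the bare `traceparent` never touch the `ce-traceparent` copy.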
And then at the HTTP level, you get all the infrastructure traces and all that context along with it. So it basically sets up two paths, if you will: one layer which is end-to-end tracing, and one layer which is hop-by-hop tracing with all the transports in the middle. If we duplicate it, that's why the... Sorry, sorry, Clemens. I think this issue is much simpler than that; I think the distributed tracing link is throwing you off. What we're actually talking about is: if you add the header foo... Yes. It gets dropped. No, it doesn't. So that's the point. Hold on a sec. I let it go just because it's enjoyable listening to you talk sometimes, Clemens, but we do have a speaker queue. So hold on; I'll come back to you in a second, Clemens. Let me hit the other people in the queue first. So I actually raised my hand to raise the exact same point that I think Christophe did, about how this applies to both binary and structured mode. But I also wanted to comment that I don't believe this is a CloudEvents-specific issue. If you have a piece of middleware today that suddenly starts to process cloud events, it doesn't necessarily even know it's doing cloud events, and it's going to either drop or pass along these HTTP headers per its current rules and processing. So I don't think introducing CloudEvents changes anything. Right. It's either going to take all unknown headers and pass them along, or it's going to drop them, and I don't think CloudEvents changes that at all. So I personally don't see it as a problem, because, as Clemens was hinting at earlier, the extension has chosen to live on the edge by not prefixing things with ce- and living by the other rules. So I don't view this as a problem, because the problem exists today without CloudEvents. That's all I wanted to say.
But I think, Jim, your hand's up next, and then we'll go to Scott and Clemens. I think my comments were along similar lines, in that really what this CloudEvents extension is saying is: we want to use W3C tracing. And so in reality, it should be that spec that's dictating how the transport encodings work. Because realistically, is the trace context really an attribute of the cloud event, or is it an attribute of the larger processing and the transport context? I think we're probably talking about the same thing. But it does sound like Scott had a bigger issue than just tracing. I think I got my point across there. Okay. Well, Scott, actually, since I interrupted Clemens, Clemens, I'll let you go, and then Scott, you're next. Yeah, Scott made the comment that generally the attributes are either dropped in HTTP binary mode or they're not. HTTP binary mode forces prefixing for all the CloudEvents context attributes, which is also true for extensions. Only if the extension chooses to override, which the tracing extension does, and that's the only one that does, do the rules change for that particular header. And then yes, it gets lost if the intermediary doesn't know about it. But that's something you effectively buy into when you follow the rules of that extension. Okay, Scott, I think your hand's up next. Yeah, so I think it does drop it if you convert to another transport and then back to HTTP. That's the issue. You can't just decorate the event with, you know, your extension headers and expect them to show up on the other side. That's true if you don't follow the trace context rules. So if you follow the trace context rules, which you effectively invoke when you use that extension... No, no, no. Forget trace context; just add your own header to it.
If you decorate anything in HTTP binary mode and you don't prefix it, a lot of intermediaries will drop it. Well, that's correct, because then you're just decorating the transport frame, and that doesn't go into the event. So I would propose that we drop the requirement that extensions have the ce- prefix. I would actually go the other direction and require all extensions to have the ce- prefix. But you're modifying the HTTP object. What do you mean? Like, look at this: you go from a client, over HTTP, to some sort of middleware. Yeah. And let's say it ends up as HTTP again. What the client sent is not what the processor is going to get. Yeah, but HTTP per se... you're not setting up an HTTP proxy route. Right. You're routing the cloud event through multiple hops, and what you're entitled to is everything that's part of the cloud event, but not everything that's part of the original HTTP message. Right, but one of the pitches that Doug's made is that CloudEvents is just adding three to four headers, and that's not really the case with the HTTP transport; that's not quite right. Because if you just add those four headers, almost everything in your message's headers gets dropped. So are you saying that if you create an HTTP message, put a cloud event in there, and then also add the header foo to the HTTP message, you expect foo on the other side? Yes. I don't think that's a reasonable expectation. Yeah, I would agree, because I don't think that has anything to do with CloudEvents at that point. Well, if you have an existing webhook endpoint that invokes you, and you want to turn that webhook into a cloud event, it should be as simple as adding a couple of attributes into the header: keep the payload the same, keep the headers the same, and then middleware shouldn't drop them no matter what transport it goes over.
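The "just add a couple of attributes" scenario being argued over can be sketched like this. It is a hypothetical illustration: the `ce-` header names follow the HTTP binary-mode convention, and the attribute values (spec version, type, source, id) are made-up examples, not values from any real webhook.

```python
def webhook_to_cloudevent(webhook_headers: dict) -> dict:
    # Turn an existing webhook request into a binary-mode CloudEvent by
    # layering the required context attributes on top of the headers the
    # webhook already sends. The payload and the existing headers are
    # left untouched; only ce-* headers are added.
    headers = dict(webhook_headers)
    headers.update({
        "ce-specversion": "1.0",  # illustrative version value
        "ce-type": "com.example.webhook.received",
        "ce-source": "/example/webhook",
        "ce-id": "abc-123",
    })
    return headers
```

The disagreement in the discussion is about what happens next: the `ce-*` headers are what the bindings promise to carry across transports, while the webhook's original custom headers are transport-frame decoration that an intermediary is free to drop.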
Yeah, but you can't expect that, because there are a bunch of headers in HTTP that only make sense inside of HTTP. That's right. So we could choose to drop the headers known not to make sense, and keep and forward everything else. If it's a known HTTP context header, like Content-Length, that doesn't make sense to send along, we drop it, and we bring along everything else. The only way this works is if you start with the cloud event; you can't start at the transport layer. You have to start with a cloud event, with the definitions of the cloud event, and then you map that to a transport layer. You can't start with an HTTP message and expect that everything in that HTTP message will end up in the cloud event. That doesn't work. Since we're running out of time here, let me go to the people in the queue. We may have to stop after Kristoff. So I think it's Heinz and then Kristoff. So Heinz, go ahead and go first. I think your hand was up. Yeah, just two quick things. One is that it doesn't magically go from HTTP to some AMQP broker. You're going to need a mediation layer, where you've got to put some kind of code to do that, since even the cloud event headers between HTTP and AMQP do not use the exact same names for the name-value pairs. So you have to have some mandatory processing in between. And I would assume, and again, maybe that's a bad assumption, that that is under the control of whoever's doing the transformation from an HTTP transport to an AMQP middleware transport. If there's got to be code, it's going to be some developer who can make that decision. I don't think you can mandate what happens in between, especially since the specifications for the two transports do not use the exact same header names for the name-value pairs. Okay, Kristoff, I think you're up next. Yeah, I wanted to comment on the "add four headers to your existing HTTP call" claim.
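Heinz's point about the name-value pairs differing between bindings can be illustrated with a minimal mediation sketch. Treat the details as assumptions: the HTTP binding carries context attributes as `ce-`-prefixed headers, and the AMQP application-property prefix is shown here as `cloudEvents:`, which is the prefix used in the draft AMQP binding; a real mediation layer would also have to map the body and content type.

```python
def http_to_amqp_properties(http_headers: dict) -> dict:
    # Rename CloudEvents context attributes from their HTTP binary-mode
    # header names (ce-*) to AMQP application-property names. Non-CE
    # headers are dropped here, which is exactly the lossiness under
    # discussion: only what is part of the cloud event survives the hop.
    props = {}
    for name, value in http_headers.items():
        if name.lower().startswith("ce-"):
            props["cloudEvents:" + name[3:].lower()] = value
    return props
```

Because this renaming step is mandatory, there is always developer-controlled code at the transport boundary, which is where the forward-or-drop decision for everything else gets made.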
And then you've got a cloud event. I think that's a catchy marketing claim, and it's good at that, but it's not the full specification. It works in some cases, where you only use the HTTP body to transport your message, but it doesn't work if you also use the headers. So I wouldn't take it too literally. I would take it as a marketing claim to get people interested and listening. But maybe it's not ideal to strive for this to be represented, in all edge cases, in our specification. I think you just criticized me for being too close to marketing. I'm not quite sure. But okay. Oh, I didn't... No, I said it's a catchy marketing claim to get people interested. That's fine for marketing. I know; I'm just joking. No, I agree with you. All right. So go ahead, last up. If you have a custom header called my-api-key and you put the API key in there, and all headers are to be forwarded, you're now leaking the API key, because the intermediary doesn't know. So you can't. There's all kinds of stuff that is literally just hop-to-hop that you can't blindly forward without having a rule for it. And the rule that we have for forwarding things is making them CloudEvents attributes. Okay. And with that, I think we're going to call it time, because there are two things I want to do. First, it sounds to me, based on some of the changes we proposed to some of the open PRs, that we are not ready to claim, even with the PRs we just approved today, that we have an RC1, because, for example, Evan's two PRs still need to get resolved. Does anybody disagree, or does anybody think we actually should try to push for RC1 with today's approved PRs? Okay, I'm not hearing any objections. We'll push this date out to the 12th, so we'll try again next week. So please work on your PRs and get your comments in there.
Try to get them out this week, because if you wait until next week, Thursday will come up really fast and people won't have time to review and think about them, especially since Tuesday is the deadline for normative changes. And finally, Doug M and Fabio, are you guys there? I want to make sure I get you on the roll. Yep, I'm here. Fabio? Okay, I got Fabio. Doug, are you there? Doug's here. All right, cool. Anybody else that I missed for roll call? All right. And just to be clear, what's going on in the chat relative to the protobuf? Who has the action item to work on the protobuf stuff? So Scott said they internally agreed to drop it, the whole thing. So he's going to make a PR to do that, and then there's no point in me making a change to the protobuf spec itself. Okay. Sorry, hold on a minute. You're going to drop the protobuf bindings altogether? Who's that? Not me; Scott will. That's the proposal. Whether it gets agreed upon or not, who knows? Do we know why? Sorry. I'm assuming they'll mention why they want to do that in their PR. The short reason is that the current protobuf binding is broken, and it's not something you'd actually want to use or promote. And if we go v1 with that protobuf, we're stuck with it forever. So it'd be safer to remove it. It was a nice fight to get that in here. But okay. Yeah, but the compromise we had to make to get it into the spec broke it. Okay. All I would say is, if we're pushing in Avro, then we should do proto at the same time, even if it means reworking it. But we can take that offline. Yeah, I think we need to take a little longer on that. So, last reminder: please add all your comments to Evan's issue. We were just discussing why it's a good or a bad idea to do what he's suggesting, or whether you have other alternatives. Like, I think someone said, duplicate the data between CE-prefixed and plain headers. So please put comments in there, so we can try to resolve that one during the next week or so. All right.
And with that, we're over time. I apologize for running late, but thank you guys very much, and we will talk again next week. Thanks, guys. All right. Thank you. Bye. Thank you very much. Bye.