Then we'll get started. All right, three after; we'll go ahead and get started. Let's see, two more groups so far. All right, I don't see Rich on the call so I can't nag her about her AI. We can do that offline. All right. Okay, community time. I don't see Ivan on the call, so I'm not quite sure what to do with his suggestion for a topic. I think he's the same person who may have pinged the group during a Slack session asking for people who may want to present at Ukrainian conferences. You may want to look for him on Slack if you're interested in doing that. So just mentioning that. Let's move forward: SDK work group. I don't believe anything's going on there, other than, if you are part of the group that's working on SDKs, we are hoping to do some sort of interoperability demo showcase thingy at KubeCon EU. But I think the biggest hurdle in doing that is we want people to try to show interoperability across all the various versions of the spec, not just the very latest version. So we may need a lot of updates from the SDK authors out there. So please, if you guys do have an SDK, please mention your interest in participating in that in the SDK Slack channel so we can start organizing something there. Scott or Mark, can you guys think of anything else SDK-related that I'm forgetting to mention? Okay, moving on then. Scott has been doing a ton of massive changes on the Go SDK and it's looking really good. Yep. I also broke a bunch of issues out into the issue area. So if you want to come help, come pick up an issue or talk to me. There you go. All right. Moving on. Scott or Doug, would you guys like to bring us up to date on where we are on the demo work? I think I can take a pass at it and then maybe Doug can fill it in.
But we had a meeting where Heathrow was in attendance, and we're still trying to figure out exactly how Arcus and the airport can be worked in so that vendors can participate in a demo that actually shows what we're trying to show at a KubeCon-like event. So we're still trying to figure out exactly what the demo should be. Doug, anything you want to add to that? Because I think the only other thing probably worth adding is you had an action item to come back on next Monday's phone call with a more detailed description of the exact scenario and the various roles that participants can play in the demo itself, right? Yeah, I had emailed you and Scott last night. I thought through the interaction part of the demo, so I'm looking for your feedback before I move forward with some of those specific conditions and actions that would be part of the workflow. But it was all about being role-based, I think, is what we agreed on, so that the attendees could select a particular role that they wanted to play in a collaborative workflow process that involved some manual tasks that the participants would perform, combined with automation through microservices. And all the interactions would be through contextual content passed in CloudEvents. Yep, okay, any questions on that? All right, cool. So thank you, Doug, and I'm looking forward to reviewing the note you sent last night, and then to Monday's phone call. Sounds like it should be fun. All right, moving forward: KubeCon EU. Last time we had a conversation or a meeting about that, most people were busy with other stuff, so there hasn't been a lot of progress there, but people aren't stressing too much yet, so we decided to take a little bit of a break and come back in a couple of weeks. The one thing I will point out, though, is there is work being planned on updating the white paper and landscape doc from the serverless working group.
I think Scott and two other people, I can't remember who, are taking the lead on that. But if you are interested in making updates to those docs, please reach out to me or Scott and we'll get you in contact with the other folks so we can coordinate the activities there. All right, let's see, nothing about KubeCon China. I've done a request for a session, so let's go ahead and jump into PRs. Unfortunately, I don't see Christoph on the call. So that, oh wait, no, I don't see him. That's gonna be a challenge. So let's skip those until, actually, Tapini's not here either. So Clemens, you may be up first. Let me just think about this for a sec. Okay, yeah, Clemens, did you wanna talk about this PR? Yes, well, it's been a while. Yeah, I think there might be some outstanding comments for you. Okay, well, let me go into the other view. Hold on a sec. Yeah, go to that one. Not that one, I think this one. I think Evan was just pointing out you may need a constraints section. Yeah, that's probably better from an editorial perspective to go and put that into a constraints section. Yeah. But I believe that's more syntactical. So I think the broader question for the group is, well, let me just summarize for the group where this comes from. This was raised by Alan Conway, because he's been trying to do some intermediary work, and what he found is that he looked at a data field and couldn't figure out what that was: whether that contains the binary content or whether that contains a string that he has to interpret. So we've effectively picked up the content-encoding field from MIME, which is also used in HTTP, and basically used it to label the data field. In addition to the content type, which says this is what is in there, the data content encoding now says, and this is how this is encoded. So if it's a string-based encoding, as it is with JSON, you say: this is base64-encoded binary.
We basically declare with that that if the data field is a string, then you know it's base64-encoded binary. That's what that's for. And it's optional, meaning you only provide it when needed. And I just missed adding these other fields, like whether it's optional, and then the constraints; those are editorial things. But that's what the function is. So as we talked about, obviously there are some syntactical things that need to be changed here, but from a semantic perspective, what do people on the call think about this? Does it sound like it's headed in the right direction? And I didn't see any complaints about it in the PR itself over the last two weeks or so. Anybody have any concerns? It'll help a lot, actually. That's a good comment, thank you, Scott. Yeah, I was gonna say the same thing. Okay. All right, not hearing any complaints then. It sounds like it's just a matter of minor editorial things and then we can hopefully maybe approve that next week. Okay, I'll clean that up. Excellent, thank you, sir. Next, primer edits. This one is yours as well, Clemens. I'm not sure if you've had a chance to talk to this one yet or not, so maybe you should do that first. Yeah, I have not read the comments, just because of being busy and sick and all this. This is effectively an explanation of something that we haven't had yet in the spec, and that's how the spec set is actually layered. So there's the base spec. We have extensions that layer on top of the base spec. We have format encodings, and then we have the transport bindings, and this effectively just explains how this stuff is all layered on top of each other. And you will also find these dependencies in the SDKs, and there's a similar section with a bit of different wording in the SDK doc that kind of gives guidance on how the object model should hang together.
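The behavior Clemens describes, an optional attribute telling an intermediary whether `data` holds a plain string or base64-encoded binary, can be sketched roughly as below. This is a Python sketch under assumptions: the attribute spelling `datacontentencoding` and the `0.3` specversion are illustrative, since the exact wording was still being edited in the PR.

```python
import base64
import json

def decode_data(event):
    """Interpret the 'data' field using the optional encoding hint."""
    data = event["data"]
    # If the event declares its string payload is base64-encoded binary,
    # decode it back to bytes; otherwise the string is the data itself.
    if event.get("datacontentencoding") == "base64":
        return base64.b64decode(data)
    return data

# A JSON-format event carrying binary data as a base64 string.
event = json.loads("""{
  "specversion": "0.3",
  "type": "com.example.binary",
  "source": "/example",
  "id": "1234",
  "datacontenttype": "application/octet-stream",
  "datacontentencoding": "base64",
  "data": "aGVsbG8="
}""")
print(decode_data(event))  # b'hello'
```

This is exactly Alan Conway's intermediary problem: without the hint, a receiver cannot tell whether `"aGVsbG8="` is the content or an encoding of it.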
And that's kind of reflective of that base layering that we have in the specs. So that makes up the basic architectural model. When I wrote the initial set of encodings and transport binding specs, that's kind of the model that I had in my head. And this is the first time that I've actually written it down. Yep. Any questions or comments on this? Okay, I'm not hearing any. I can't remember if there are any comments in the PR itself. So I think these are more editorial. Did you want to try to address these comments from Christoph? Yeah, yeah, I'll look through those too. You mean right now? No, no, no, no, no. Okay. No, offline. I was wondering whether you thought he was totally wrong and you just wanted to push forward without those edits or not. No, I'll address those. Okay, cool. So I'm not hearing any complaints; then maybe next week we can approve those going forward. Thank you, sir. Don't know if this one's actually ready. Oh, I don't think it is, but let's still check. Yeah, I think this one saw some discussion. So this one is about trying to figure out what the uniqueness aspect is of our properties. And I believe he's headed in the general direction of saying that ID and source are the unique aspects of CloudEvents. And in particular, he's talking about how, if those are the same, the receiver can then treat them as duplicates if it wants to do some de-duping logic. While the discussion is still going on, I wanted to get a sense from the group here in terms of what you guys thought about this general direction. I know some people raised some concerns about trying to do anything in this space at all, but I wanted to open it up for discussion if you guys want to talk about it. I would see it as ID, source, and event type. Yeah, okay. Okay, I think someone actually might have suggested that in the PR itself. Oh, there it is. Okay.
Scott, can I get you to make a comment to that effect in the PR itself, just to get that conversation going? Sure. Okay, cool, thank you. Anybody else have any comments? Does it seem like this is something that we definitely want to add to the spec? Yes, I think it helps to avoid further confusion, at least to have a position on it that we can point to. Okay, thank you, Jem. Anybody else want to comment? Okay. Actually, I'll make a comment: if we're talking about multiple fields, should it be talked about under the ID attribute, or do we need to put it into a different section? Oh, I see what you mean. Just from a spec perspective, should it be outside of one particular attribute? Right. That's an interesting point. Because I think that we have a best-practices section somewhere. Maybe we may. Yes, especially since it depends on whether it's normative. Yeah, because if it does have some normative text, that would mean it stays in the spec itself as opposed to going into the primer. You want to make a comment to that effect in the PR, Mark? Yeah. Okay, cool, thank you. All right, I've seen some plus-ones in the chat. Thank you, guys. Okay, anybody else want to comment on this? I'm not hearing anybody object to heading this direction, so that's good. Okay. In that case, oh, I wish Christoph was here, because I really wanted to talk about his minimum-support PR. So, okay, obviously I don't want to talk too much about it, because he's not here. However, I do want to make sure you guys are aware that he has made some changes to this PR itself. And taking my chair hat off for a sec, I think this is a fairly serious change to the spec. By serious, I don't mean good or bad, just that it's a significant change. And I would really, really like people to take a look at it to make sure that they're okay with this direction, or, if they have concerns, to raise those concerns. I don't want this one to be a silence-equals-consent kind of thing.
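As a rough sketch of the de-duplication being discussed, a receiver could key duplicates on source, ID, and possibly type. Including the event type in the key was a suggestion on the call, not settled spec text, so treat this as an assumption:

```python
seen = set()

def is_duplicate(event):
    # Assumes (source, id, type) identifies an event; whether 'type'
    # belongs in the key was still an open question in the PR.
    key = (event["source"], event["id"], event["type"])
    if key in seen:
        return True
    seen.add(key)
    return False

e = {"source": "/sensors/1", "id": "42", "type": "com.example.reading"}
print(is_duplicate(e))  # False: first delivery
print(is_duplicate(e))  # True: redelivery, safe to drop
```

The point of having a spec position here is that both producers and consumers agree on which attributes constitute event identity.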
This is a pretty big change, in my opinion, but maybe I'm wrong. So please take a look at this when you get a chance. Is there anybody who'd like to talk about it now, though, even though Christoph isn't on the call? I have something that I want to tell. Okay. Here at ETAHU, we are implementing CloudEvents, with version 0.1 as a starting point. And we have already exceeded 20 characters for the source attribute. So I don't think that it would be a very good rule. Because we have a lot of systems here; if we don't specify which service is creating that event, we don't have the capability to know where it came from. It's not 20 characters for the value, it's 20 characters for the key in this PR. Well, that's what I was just checking. So the source is a URI-reference, and URI-references now go up to 2K. Oh, okay. So I misread it, sorry. But having direct implementation experience is always good. It's like you're testing this, which is good. Okay, anybody else? Sure. So I've not seen this before. Is there any reason why the size came down? The overall size came down to 45K? I thought we had a bigger number than that. I don't recall. That's the sum of the attributes. I was hoping that that would give some grace that would keep it below the 64K. Oh, sorry, so it's giving you some headroom, yeah? Yeah, the idea is that he wants to kind of guarantee to a user that's submitting an event in their kind of native format that when it then gets serialized and sent across the wire, it will successfully proceed through a series of middleware and end up with the end consumer, predictably. And having some headroom there helps ensure that, even if the encoding increases the byte size a bit. Okay, thanks. Thank you, Eric. I still think it's an extreme amount of effort for the re-encoding case.
And also, now with a single event you can violate, I don't know how many, I haven't counted the rules, but there are a lot of rules you can now violate, all these quotas that you need to be aware of, which could be a little frustrating. So Clemens, were you proposing this as a hard requirement, or just as an interop thing if you really wanted to guarantee stuff that would work across boundaries? Yeah, so there's a "should". I'm still okay with basically saying the 64K wire-size encoding is okay. There's a comment to that effect actually in the comment section: 64K wire-size is the limit, and how you get there is ultimately up to you. And if the publisher gives you an event encoded, and you're a middleware, then unless you are re-encoding, you should go and just forward that event as-is, right? So yeah, re-encoding might go and make the event larger. But I prefer that risk to doing all that math and having to enforce 20 rules, which could be pretty frustrating if you just have that one long string field, or a monstrously long URI, which probably even includes a token or whatever we need there. And then you can't do that, because even though you are under the limit for the event size, that one particular field is constrained. The limit we set here is arbitrary, the 64K, but now we have not only one arbitrary limit, we have like 10. That's the part that worried me: all the rules involved. Someone's gonna look at this and say, oh my gosh, what am I signing up for? Anybody else wanna comment on this? Okay, so Clemens, I don't know if you've made a comment yet, but if you haven't, can you please comment with what we just said in the PR itself? Yes, sir. All right, thank you. I appreciate that. Any other discussion points on this one before we move on? All right, I think that's it for the PRs themselves. I think Tapini is still working on his.
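Clemens's preference, one limit on the encoded event rather than roughly ten per-attribute quotas, would look something like this sketch. The 64K figure is the arbitrary limit named on the call, and JSON is just one possible wire format:

```python
import json

MAX_WIRE_SIZE = 64 * 1024  # the single, arbitrary 64K limit discussed

def fits_on_wire(event):
    # One check on the serialized size, instead of separate quotas for
    # key lengths, URI lengths, and so on.
    return len(json.dumps(event).encode("utf-8")) <= MAX_WIRE_SIZE

small = {"specversion": "0.3", "id": "1", "source": "/s", "type": "t"}
too_big = dict(small, data="x" * (64 * 1024))
print(fits_on_wire(small))    # True
print(fits_on_wire(too_big))  # False
```

The trade-off debated above is visible here: this check is trivial for a publisher to apply, but it cannot promise that a later re-encoding by a middleware stays under the limit, which is what the per-attribute quotas were trying to guarantee.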
What I wanted to do now was talk about some of the security issues, because our next milestone has a requirement that we're supposed to resolve all known security concerns or issues. So this one is about encrypting the CloudEvent data attribute. I was wondering, okay, based upon the comments in this issue, I think between Evan, Jem, and at least one other person in there, I put together this very rough outline of a proposal just to get the ball rolling on this. And then Eric, I know you wanna maybe do some wordsmithing on it. But I was wondering what people thought about this general direction. I'll give you guys a second to actually read that. Let's just comment here. Yeah, I'm sorry, I was slow commenting. I wanted to wordsmith bullet point three, because I think the bit after the comma could be dropped altogether. Well, this is not a full proposal. This is just: are you okay with the general direction? Okay, yeah, no, I'm fine. Yeah, I like that too. Okay. All right. Is there anybody who wants to volunteer to turn this into a formal pull request? I'll do that if nobody else wants to. Excellent. Thank you, Jem. I appreciate that; otherwise I was gonna feel obligated to do that. Just from a point of order, which part of the spec would you see this sitting in? Out of interest. Or do you just want me to take a stab? I don't think we have a security section yet. So maybe this will be the first thing to go into a security section. Okay. Does that sound right to people? Okay, not hearing any complaints. All right, cool. Thank you, sir. I appreciate that. All right, next issue. Do you need a minute to get back into the context? Okay. So, Klaus is gonna do a PR based on what's been going on in here. But Klaus, would you like to summarize the direction that you're gonna head? Well, let's see; I hope I remember it right.
The question was whether any middleware can somehow modify event attributes, and whether there are any rules for this. And, well, over the discussion, and you can see this has been open since August, so it's an old issue already, I think the people who discussed it kind of agreed that there are a lot of cases and it's difficult to define very strict rules on this. But some notes in the primer would be good, to emphasize that there are certain restrictions on the attributes, like, for example, what we had a few minutes ago with source and ID, that should also be kept when someone in the middle is somehow modifying attributes. Or maybe another one: if the time attribute is updated, like Evan posted, then it should be assumed that it's a new event, and then the ID should also be a new one, things like this. I'll try to collect some of those and then put them in some section of the primer, I suppose, and we can discuss. Okay. Anybody have any questions or comments for Klaus? Does it seem like putting some text into the primer is the right direction for people? Is there anybody who thinks that maybe we need something normative to go into the spec itself? Or we can wait till we see the text and see what people think. Okay, not hearing anybody. Okay, I'm gonna take silence as consent, Klaus. So if you could do the PR for that, we can move forward on that one. Thank you very much. Next one, I think this one actually, Eric, is this yours? Yeah. Okay, we don't need to cover this one, right, Eric? This is already covered by the other one we talked about. Agreed. Okay, cool, thank you. Let's see. Okay, this one. All right, Adam, this one is yours. Would you like to introduce this issue? Sure. So to give some background, I work on the Knative eventing piece, which uses CloudEvents as kind of the interop between all of our different services.
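Klaus's proposed primer guidance, that an intermediary changing a semantically significant attribute like `time` is really emitting a new event and so should mint a new `id`, might be sketched like this. The helper name and use of a UUID for the new id are assumptions for illustration, not spec text:

```python
import copy
import uuid

def modify_time(event, new_time):
    """An intermediary that updates 'time' is effectively creating a new
    event, so per the proposed guidance it gets a new 'id' too."""
    out = copy.deepcopy(event)
    out["time"] = new_time
    out["id"] = str(uuid.uuid4())  # new event identity
    return out

original = {"id": "1", "source": "/a", "type": "t",
            "time": "2019-01-01T00:00:00Z"}
updated = modify_time(original, "2019-02-01T00:00:00Z")
print(updated["id"] != original["id"])  # True: identity changed with time
```

Note how this interacts with the de-duplication discussion earlier: if `id` stayed the same, downstream consumers could wrongly drop the modified event as a duplicate.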
And I looked at the spec, wrote up a curl command that followed it exactly, and then realized that none of the libraries we used actually worked with it. And I tracked down the problem to the spec specifying, in HTTP binary mode, that any string in a header value should be surrounded by double quotes. But none of the implementations I came across, which concretely were only two distinct Go implementations, actually respected that. They always interpreted the quotes into the value itself rather than removing them. And some people I've talked to basically said, this looks like a spec bug rather than an implementation bug. I just wanted clarity on which one it is. And if it is a spec bug, we should go back and fix it, because all the current examples basically state these have to be present and have to be removed, but no implementation I can find actually does so. All right, so my question is for Clemens, because I think you may have actually been the person to write this up. And I think this came around because of some language in the spec that talks about doing JSON encoding on values and stuff. Was it really your intention to include quotes in the header values themselves? No. I didn't think so. So you're open to a PR that removes these quotes, right? Yes, I am. That was not the intent, but that's a great, fabulous bug. Yes, it's also humorous that no one noticed it till now. That's true. Oh, that's why we're doing all this testing stuff, right? I love it. It's just amusing to me, because I'm sure everybody's looked at the examples in the spec, and they looked at the double quotes and just sort of ignored them. Yeah, you read right through them, effectively. Exactly, it just amused me. Okay. Maybe that means we need to have examples as curl commands versus this kind of undefined printout.
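The mismatch Adam describes can be reproduced in a few lines: the spec's examples show a quoted header value, a strict reading implies running that value through a JSON decoder, but the implementations he found just take the raw header string. A Python sketch, with `com.example.someevent` standing in for the example value:

```python
import json

wire_value = '"com.example.someevent"'  # as the spec's examples show it

# What the Go implementations actually do: treat the header value
# literally, quotes and all.
literal = wire_value

# What the quoted examples imply: decode the value as JSON, so the
# surrounding quotes are stripped.
decoded = json.loads(wire_value)

print(literal)  # "com.example.someevent"  (quotes kept in the value)
print(decoded)  # com.example.someevent    (quotes removed)
```

Two receivers applying these two readings to the same wire bytes will disagree on the attribute value, which is exactly the interop failure his curl test hit.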
You know what would be really cool is if there was some way in markdown to do an include, and that way we could have test cases that actually run on this file, basically, that can do a curl with it, but then also suck it into the markdown, so you don't have to do some weird copy-and-paste thing that gets out of sync. Yes. Now we are getting into the wonderful world of documentation infrastructure. I know. If only this was HTML. Anyway. Yes. Okay. So I apologize, Adam, I couldn't remember. Assign that bug to me. There we go. Okay, that's what I was looking for: an owner. Well, of course. I wasn't sure whether Adam wanted to sign up for that exciting piece of work, but if you want to do it, Clemens, go for it. And then, if we're fixing the spec, is this going to be retroactive to 0.1 and 0.2 as well, or only 0.3 and above? So far nobody has cared, and everybody has been able to read events anyway. Yes, everything is moving forward. I'm wondering if it would be worthwhile to make a note, not in the old versions of the spec, but somewhere in our documentation, that says: we noticed there's a typographical error in the previous versions of the spec; while we're not updating it, implementers should be aware of the error and basically remove the quotes. Yes. So maybe you can, no, there's text that explicitly says the quote thing. Yeah, I know, but I don't want to go back and create... do you guys actually want to go back and update the specs themselves? Do you want to have interop on the 0.1 and 0.2? Oh, that's an interesting question as well, maybe. I think of all of these as working drafts. And I think once we have 1.0, we can go back and make errata, but I'm not sure we need to do that on the older ones.
We can always go update the old tags, but, so let's pause here for a minute, because this is a discussion that came up in the SDK call, I think it was last week, where there was a whole question of: people actually have running code in production against the current versions of the spec in some form or another. Do we want to, as Scott was alluding to, try to make sure those guys continue to work and have interoperability by potentially updating the spec? Because if we're going to do an SDK demo interop thing, and if we do interop on, say, versions 0.1, 0.2, and 0.3, and there's something flat-out wrong in the spec, should we update it? Or, as Clemens was suggesting, do we say, no, these are all working drafts and we're only going to do interop testing on the latest one, and if you have an implementation of anything older, well, you're on your own. Lots of options here. Personally, I think it's... My preference before... I'm sorry, go ahead, Dan, say that again. Projects I've worked on in the past, Thrift is an example, it was annoying when they would switch, but we knew what we were doing. We knew that we chose a beta, pre-1.0 version. Okay, thank you, Dan. Scott, were you going to say something there? I was thinking it may not be that bad to go back and correct the 0.1 and 0.2 versions of the bindings for HTTP binary mode, because it's not like it's the core of the spec. It's one particular encoding type for one particular transport. It's definitely an option, yep. What do people think? Do we want to do this? I've heard opinions on both sides. My preference, as someone trying to use the spec, is that at the very least there's a note in the spec itself saying, this isn't accurate and you should do this instead, because looking at an older version of the spec, and a lot of our things emit 0.1, I shouldn't have to have kind of arcane knowledge of, oh, ignore this particular part.
I want to be able to look at the spec and immediately see what is correct. So I guess in terms of old versions of the spec, I think there are two options there, if we decide to do something. One is actually fix the spec proper and create a 0.1.1 or a 0.2.1 and a 0.3.1; the numbering's funky there, but you know what I meant: add a third digit to the version number. Or add a note that says we're not updating the spec, but remove the quotes, it was a typo, and we just didn't want to take on the burden of modifying the full spec. Those are both valid options. Both are fine with me, as long as it's easy, when I'm looking at the older versions of the spec, to know where those notes exist and that I have to read them. Yeah, of course, you've got to wonder: if we're going to go in there and touch the spec at all, would it be just as easy to fix it? Right. So before we think about this, now that I've read through this in parallel, I'm not convinced that the spec is wrong. Well, I think according to the spec it's probably 100% accurate, because I think the spec talks about JSON encoding. I actually make the difference between the CloudEvents fields that are encoded, which have the quotes if they're strings, and in fact they're using JSON encoding all the way, which is then correct with quotes, and other fields which are mapped effectively natively, like content type, which is mapped explicitly to an HTTP field. And that is of course unquoted, because that's what the normal format for content type is. So my question again: that curl example, where did that bug come from? But a header is already a string. So if you quote that. No, but it's a string that contains, well, all of JSON is a string, right? But values represent either strings or numbers or booleans, and that's how JSON distinguishes them. Well, not all of JSON is strings. Well, JSON is string-encoded.
Right, so if you take the ce-type example that the cursor's on, to encode that as it sits right now in the spec, it would have to be: quote, then escaped quote, com.example.someevent, escaped quote, quote. No. Unless I misunderstand how headers work. So here's what the spec says, under header values: the value for each HTTP header is constructed from the respective JSON value representation, compliant with the JSON event format specification. And then there's more, but there's no rule that says anything further about quotes. Clemens, I'm not sure I'm following what you're saying. Are you saying that this example is correct or incorrect? Let me switch back. This example that we have here is correct, per the spec. But is that really what we want to do? Because people, I believe, will pick up the quotes as part of the value itself. So there the question is, if we allow for extensions, one second, compare content type with ce-type. Sorry, my phone was getting in the way of my headphones. If we want to allow for arbitrary extensions to pass through HTTP, we need to allow for numbers, because we just added them, and for strings to be encoded, which means we need to make the difference, because otherwise we can't decode them anymore. So we made the rule to say we're basing this on JSON. And so JSON is the way those strings, the header values, are to be interpreted. You run them through a JSON decoder, and the inferred type is what you get. That's not how the internet works most of the time. You get the value of the header and then it's a string already. And you're saying to quote the string again. No, we have a type system that we define for CloudEvents, and these are attributes of CloudEvents that we carry through HTTP headers. And we infer the type of a field which is not well-known, but is an extension, on the other side. We infer that through the JSON translation.
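The reversible, type-safe mapping Clemens is defending amounts to decoding every ce- header value as a JSON literal, so the receiver recovers the original type by inference. A minimal sketch:

```python
import json

def infer_attribute(header_value):
    # Run the raw header value through a JSON decoder; the inferred
    # type (string, number, boolean) is what comes back out.
    return json.loads(header_value)

print(infer_attribute('"abc"'))  # abc   (a quoted string decodes to str)
print(infer_attribute('10'))     # 10    (a bare number decodes to int)
print(infer_attribute('true'))   # True  (a boolean)
```

This is why the quotes carry information under his reading: `'"10"'` and `'10'` are different header values, one a string and one a number, and dropping the quotes collapses that distinction.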
But if you remove the technical hat that you're wearing, Clemens, you have to admit, this is just funky. No. No, it's not, because literally, if you take a look at JSON, that's what JSON does. I understand that. But like I said, you've got to remove your technical hat for a second and look at it as somebody seeing this with a first set of eyes. They're going to look at this and say, oh, someone by mistake put quotes around stuff. Obviously these are all string-typed, you know, time, ID, those are all just strings. Look at the quotes, there must be a typo. Everybody assumed that, and we're the ones writing the spec, and we're experts in this. I mean, we can drop them. I need to, effectively, wordsmith an exclusion clause in the spec that says if... So what's the distinction now between a field that carries a number, like the sequence, our sequence extension, and a field that carries a string? Right. I guess I kind of assumed that if we were going to fix this, we would basically say: for all the well-known types, we know how to encode those in HTTP headers already, and you're going to know what the value is, so when you decode it, you know whether it's a number or a string already. For extensions in general, it seems to me you could say: if you don't know what the extension is, because it's just a random extension to you and you don't actually have formal support for it to know whether it's a number or a string, treat it as a string. Okay. Yeah. So all right, I'm okay with writing the clause that basically clarifies that and gets rid of the quotes. Okay. Well, but okay, so, I know, Scott, you're probably okay with that direction. What do other people on the call think? As much as I hate to say it, I can see where Clemens was coming from when he wrote this, because I would like to understand how you would pass a context attribute that's numeric, not a string,
and have it work all the way through a pipeline. Well, the pipeline doesn't have to care, because it's just a header and it's data all the way through. And if it does understand it, then it knows how to interpret that string of bytes. Well, I don't want to put words in Clemens's mouth, but I think if you started off with an event that had a context attribute that was a number, and then it got encoded like this, when it got to the other end and someone then tried to move it onto a different transport, or turn it back into a JSON document, without these markers you don't know how to map it back again. You don't know whether to turn it into a JSON number or a JSON string. Correct. You lose the type information. You lose the type information. So this is really funky. And I was just looking at the AMQP spec, because this would apply across the board on all of the transport specs. Yeah. So the issue really is: how do you communicate the type of an attribute? AMQP is different, because the properties there are typed, which means you literally take an integer and map this to an integer in AMQP, and you take a string and map it to a string in AMQP, and you don't lose the type information there. But I think they have a map of application properties, which are all strings. Yeah, so you have the same problem, I believe. No, it's string-keyed. Yeah, properties. Properties can be typed. Okay. All right. But Jem, does it not address your concern if we say things that are of unknown types, like for example extensions, are just treated as strings, and it's only people who understand those particular extensions that'll convert it into something different? And then wouldn't you say those extensions must be strings? They can't be anything other than strings, otherwise you've got an interop problem. Yeah. So you change the spec rather than the transport binding. I don't know.
I have to think about that one, because I was kind of assuming that if you don't know what it is, treating it as a string isn't that big of a deal. Because even if it's a JSON blob, you can still treat it as a string, and then somebody who wants to actually process that thing will know: oh, this is JSON, because that's what the definition of this extension says, and I'm going to decode it as JSON. I think what we're putting into question here is really, can we have any data type in CloudEvents that is not a string? Because removing the quotes means everything is a string now. Or we treat the spec-level fields differently than extensions. We could do that. So what I've been trying to do, my intent here, was to have a type-safe mapping of the information model of CloudEvents onto HTTP headers in a reversible way, so that the receiver would be able to go in and restore effectively the same types using JSON type inference. That was the goal of that. Yeah, and I definitely appreciate the goal. It just feels like technical purity has bumped into reality in a bad way, unfortunately. How do our SDKs work today? Do they honor this? No, at least not the Go one. And if any of the others interoperate with Go, they don't either. I guess no one even noticed the quotes there. So, Clemens, it sounds like you volunteered to take this one, but you may want to do some more thinking about it. Is it fair for you to go off and do that thinking? Yeah, I'll do that. Okay. But the other question that came up was, okay, what do we do about the old specs? I don't think we can answer that question yet, until Clemens comes back with a proposal that we all agree to, because we may come back and say, no, it's perfect the way it is, and at that point we don't need to touch the other specs. So I think we need to wait on that decision. That's all right. Okay.
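The reversible-mapping goal described above, combined with the treat-unknown-extensions-as-strings fallback, might be sketched roughly like this (Python; the header names and the "known" list are invented for illustration, and the real HTTP binding details may differ):

```python
import json

def encode_attr(value):
    # Encode the value as a JSON literal: strings keep their quotes,
    # numbers do not, so the type is recoverable on the other side.
    return json.dumps(value)

def decode_attr(name, raw, known=("ce-id", "ce-time", "ce-sequence")):
    # Known attributes: use JSON type inference to restore the type.
    if name in known:
        return json.loads(raw)
    # Unknown extension: just treat the bytes as an opaque string.
    return raw

headers = {"ce-id": encode_attr("abc-123"), "ce-sequence": encode_attr(42)}

assert decode_attr("ce-id", headers["ce-id"]) == "abc-123"
assert decode_attr("ce-sequence", headers["ce-sequence"]) == 42
assert decode_attr("ce-myext", '{"a":1}') == '{"a":1}'  # left as a string
```

Dropping the quotes, as proposed on the call, amounts to deleting the `json.dumps` step, at which point only the "known" list can tell a number from a string.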
I'm going to assume you guys agree. Okay. Anything else related to this issue we need to discuss? Adam, did we miss anything? No, I think that's it. I just want clarity on which is the correct one, and then hopefully the SDKs match whatever the decision is. Yep. And thank you very much for bringing this up. It's a good one. All right. We may have time for at least a quick talk about this one. So on an old phone call, someone asked what optional actually means for receivers of CloudEvents. And I think Clemens basically said optional means optional to send and optional to handle. Does everybody agree with that general direction? Because I think we should probably add some clarity to the specs, since this question has come up, I think, more than once. Yeah. So, I mean, I think the comments here are in line with my thinking. I sort of was questioning the term "optional to handle," which I think is what Clemens added. But I think if we can agree that, you know, middlemen have to pass those optional things on even if they don't understand them, I think that's really what the intent was. Yeah. Okay. Is there anybody who disagrees with the general direction, things like optional to send and optional to handle? And we might as well add "propagate," because I feel like that actually goes a little bit beyond the question of optionality, but it's still a good thing to probably put somewhere in the specs, since it has come up more than once. Anybody disagree with that general direction? Propagation is one way of handling. So why would one be optional to handle, but still have to propagate? How can you, if a middleman doesn't support, doesn't understand a header coming through from some extension, it will just drop it. So how can it be optional to handle, but they must propagate? That doesn't sound right to me. That's an interesting question. So those extensions would propagate? Yeah, because, and we're going to test my spec knowledge here,
those extensions have to be prefixed with CE-, don't they? So you can look at the header for that. They don't, I thought you could look at the header for that. And I think there's this trace one, it just has "traceparent," as far as I could see. Yeah, I don't know where, but there's some conversation about this where we talked about how maybe we should add text that says, if you don't prefix your extensions with, sorry, CE-, then you run the risk of them being dropped by intermediaries, because they have no idea they're CloudEvents headers. Cool. But, Steve, oh, so you think that CloudEvents extensions that don't have the CE- prefix can be dropped by intermediaries? Yeah, I didn't know there was special treatment and there's a difference, so, CE and not CE. I didn't notice that in the spec. So this certainly has some impact on the PR I'm supposed to prepare regarding the immutability of the event context. There was a consensus, I think, that we shouldn't impose too-strict rules. And, I mean, removing an attribute is maybe an extreme form of modifying it. So if we say "must" here, then this would also impact that rule. I'm surprised, Clemens, you haven't spoken up. I think you have some strong opinions on this one, right? My opinion is right there. Wait, were you talking about this line right here? Oh, yeah. It's always been my assumption that whatever an intermediary gets, it must propagate, unless it's directed at it. So, for instance, the tracing extension is probably targeted at every intermediary, if you do that sort of tracing, which means that's an optional attribute, but the intermediary may want to go and fuss with it, because it wants to go and add its own context. Clemens, I think this is a really good point.
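The two intermediary behaviors being discussed, forwarding only headers it can recognize as CloudEvents attributes by their CE- prefix, and modifying an attribute that is directed at it (like tracing), might be sketched like this (Python; the `ce-trace` attribute and its comma-separated format are invented for illustration, not the real tracing extension):

```python
def relay(headers, hop):
    # Forward only headers the intermediary recognizes as CloudEvents
    # attributes by their ce- prefix; anything else risks being dropped.
    out = {k: v for k, v in headers.items() if k.lower().startswith("ce-")}
    # A tracing attribute is directed at every intermediary, so it may be
    # modified in flight to add this hop's own context (invented format).
    if "ce-trace" in out:
        out["ce-trace"] = out["ce-trace"] + "," + hop
    return out

incoming = {"ce-id": "abc", "ce-myext": "7", "x-other": "y",
            "ce-trace": "source"}
outgoing = relay(incoming, "router-1")

assert outgoing["ce-myext"] == "7"   # unknown but prefixed: propagated
assert "x-other" not in outgoing     # unprefixed header: dropped
assert outgoing["ce-trace"] == "source,router-1"
```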
Um, I feel that the intermediaries must propagate, and really, unless it is directed to them, then they have the freedom to modify and possibly remove it. Yes. We should link this issue to that one, the one that was talking about the immutability of the context, because I think they're related. Yeah, I think that's what Klaus was saying. Yeah. Sorry. So I do feel like, unfortunately, we actually merged two issues together. At least in my mind, there are two issues. I think the optionality thing is different than the forwarding thing. Klaus, would you want to take the responsibility of including this part of the discussion in your PR that you're going to write up? Yeah. I'm just wondering how to handle it. I mean, so far the direction for my PR was to just add some guidelines into the primer, and this MUST in capital letters somehow points more toward the spec. So do we have to add something to the spec, or is it also just for the primer? Well, if we want normative language, it has to go in the spec. Yeah. If the group wants that MUST in all caps, then yes, we may need text either in both places, or all the text that you're proposing goes to the spec. It's kind of up to you. Okay. So I can try to come up with a proposal and then we can see. Okay. Cool. Thank you. So on this one, is there somebody who'd like to volunteer to write up this piece of it, what "optional" actually means? Okay, not hearing anybody, I'll take that one, since I was the one to open up this issue. So hold on, I'll forget if I don't do it right now. All right. I don't think we have time to talk about anything additional on today's call, so let's quickly do the final roll call thing. So don't vanish yet. Um, Roberto? I'm here. Excellent. Thank you. Uh, they met, are you there? They met, from Verizon? Okay. What about Vladimir? I'm here. Thank you. Oh, and that's okay, got you.
Thank you. Um, David Baldwin? David Baldwin, are you there? Excellent. Cool. Richard? Hello. Yep. Uh, Christian? Right here. Hello. Okay. And Matt, are you there? I'm here. Okay. Thank you. And Christian, to your earlier question: the recording should be available. I think it takes about a week or so for the guys to do that, but it should be on the website soon. Is there anybody on the call that I missed for the attendance? All right. In that case, I believe we are done for the day. Thank you guys very much, and we'll talk next time. Thanks. Good day.