All right, let's see. Somebody else just joined, so I can throw this out. No, okay, no. Okay, let's get started. Only 14 people, that's fine. Action items, we'll skip. Okay, community time: are there any topics from the community that people would like to bring up that are not on the agenda? All right, not hearing any.

I think technically, on our calendars, there may be an SDK call scheduled for today, but some people are traveling and I know Clemens is on vacation, so I'm going to end up canceling that call unless someone really, really wants it. I don't think there's anything big to discuss other than Clemens' PR, which I believe Scott had an action item for but hasn't had a chance to do yet, so there's nothing really to discuss there. I was going to cancel that meeting in case you guys were thinking about joining.

Incubator: I'm still waiting for three end users. I know some people on the previous call mentioned that they were going to try to get me some, so just a nagging reminder to do that when you get a chance. And of course, please review the proposal itself to see if you think the text in there says everything appropriately. Aside from getting the list of three end users, the only thing that may be a little contentious is that they require a good list of maintainers of the project, and obviously we do maintainership, or approving a PR, slightly differently than other code-type projects. If you can look over the description of why we do things differently there, and you think that sounds good, or if you think we need to change the description, please let me know.

Let's move forward and jump into PR review. First things first, hopefully this is an easy one. This was noticed by Jim; I think it's just an oversight. Since we removed the extensions bag itself, the spec.json schema still mentions it, so I just removed those sections here.
I'm not a JSON Schema expert, but I ran it through a simple online tester and it didn't complain, so it seems like it's right. Any questions or comments on this? Okay, any objections? Oh yeah, go ahead. I'm not a schema expert either, but now we can put the extensions at the top level, so does the schema allow for that? I did a quick test of it, and it seemed to, at least through the online testing tools I used. I gave it a CloudEvent in JSON with the required attributes, and it was perfectly happy with that. I then added a new top-level attribute, and it did not complain. Okay, cool. You may, I'm not sure if technically we need to make the statement that additional properties are allowed. I'd have to check that. I'll double-check, but I thought we already did say that. I mean, in the JSON Schema itself. Oh, yeah. Are you allowed to put comments in a JSON Schema? No, but there's a construct where you can allow additional properties. I'm fine with the change, although obviously this must have happened before my time on this project, but I'll check for the correct syntax, if any additional syntax is needed. Okay, yeah, because we can always open a PR to add that additional syntax if needed. Yeah. Okay, cool. In that case, any objections to approving that one? Okay, cool.

Now, Jim, I know you said you needed to go, so I wanted to bring up this issue first, because I think you had some strong opinions on this one; I think you even had a comment recently. Did you want to talk through your concerns before you have to vanish? Yeah, sure. Okay, so this is all related to the issue Evan had raised: if we have maps of maps, then you can go down a rabbit hole trying to encode or decode those. And I think his proposal was to say, let's do away with the map construct.
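Going back to the schema question for a moment, here's a minimal sketch of the point being made on the call, assuming standard JSON Schema behavior: unless a schema says `"additionalProperties": false`, unknown top-level members are allowed, which is what lets extension attributes sit at the top level. The schema fragment and the checker below are illustrative only, not the real spec.json and not a real JSON Schema validator.

```python
# Toy illustration of the JSON Schema behavior under discussion; the schema
# fragment is hypothetical and the checker only enforces two rules by hand.
schema_fragment = {
    "type": "object",
    "required": ["specversion", "type", "source", "id"],
    # Omitting "additionalProperties" (it defaults to true) is what
    # permits unknown top-level members such as extension attributes.
}

event = {
    "specversion": "0.3",
    "type": "com.example.someevent",
    "source": "/mycontext",
    "id": "A234-1234-1234",
    "myextension": "somevalue",  # a top-level extension attribute
}

def naive_check(ev, schema):
    """Check that required members are present and extras are allowed."""
    missing = [k for k in schema["required"] if k not in ev]
    extras_allowed = schema.get("additionalProperties", True)
    extras = [k for k in ev if k not in schema["required"]]
    return not missing and (extras_allowed or not extras)

print(naive_check(event, schema_fragment))  # True: the extension passes
```

This mirrors what the online testers reportedly showed: a valid event with an extra top-level attribute is accepted, though making the allowance explicit in spec.json would document the intent.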
So what I started to really have thoughts about was how that applies to extensions that actually have multiple attributes or properties associated with them. In the example we're looking at there, this is a structured event where, assuming we've taken Doug's recent PR change, the extensions are modeled at the top level. So I have a bag of extension attributes for the sequence and a bag for a proprietary one called bbs that no one has any knowledge of except the people that deal with it. So now comes the test: I want to take that structured-mode content and re-encode it into binary. I've left fill-in-the-blanks there because I would propose doing it one way, and I think Evan's proposal would change that. The interesting point is, whatever you do there, how do you then go on to step three and say, okay, given that binary representation, how do I recreate the structured one, without any advance knowledge of any of those extensions?

So my position at the moment is that if we said an extension can be a bag of primitive types, maybe even just strings, there's a way, using our existing transport encoding mechanisms, that we can encode the extensions into those headers such that we could decode them back into that structured event format. Without some sort of way to do that, I'm not sure how you can do it if you simply just flatten them.

So correct me if I'm wrong, but I believe the current spec says this sequence block right here would get serialized as HTTP headers in the form `ce-sequence-sequence: 99` and `ce-sequence-sequencetype: Integer`. Yes. And then you should be able to do the exact opposite for your step three, right? Yes, you should. But I thought the whole point of this was doing away with the construct of sequence or bbs being a collection of attributes. Right.
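The round trip being described can be sketched roughly as follows, assuming one-level extension maps with string values and the `ce-` prefix convention mentioned on the call. This is an illustration of the mechanism, not the binding spec's normative wording.

```python
def to_headers(extensions):
    """Flatten one-level extension maps into ce-<ext>-<attr> headers."""
    headers = {}
    for ext, value in extensions.items():
        if isinstance(value, dict):
            for attr, attr_val in value.items():
                headers["ce-{}-{}".format(ext, attr)] = str(attr_val)
        else:
            headers["ce-{}".format(ext)] = str(value)
    return headers

def from_headers(headers):
    """Recreate the structured grouping by splitting on the first dash
    after the ce- prefix; this only works because the dash is reserved."""
    extensions = {}
    for name, value in headers.items():
        body = name[len("ce-"):]
        if "-" in body:
            ext, attr = body.split("-", 1)
            extensions.setdefault(ext, {})[attr] = value
        else:
            extensions[body] = value
    return extensions

exts = {"sequence": {"sequence": "99", "sequencetype": "Integer"}}
wire = to_headers(exts)
print(wire)
# {'ce-sequence-sequence': '99', 'ce-sequence-sequencetype': 'Integer'}
print(from_headers(wire) == exts)  # True: the dash makes step three possible
```

This is the point Jim is making: with the dash reserved as a delimiter and extensions limited to one level, the binary form can be decoded back into the structured grouping without advance knowledge of the extension.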
So basically these two become top-level things and these two become top-level things. Right. And so now you've lost any sort of encapsulation for those extensions. And, and I think this was what I was trying to get at in my Slack messages, the SDKs as they're written today honor this notion of an extension that has a set of attributes, and that there can be multiple extensions within a CloudEvent. So if you flatten everything up to the top, you lose that construct altogether. I'm not sure how you could recreate it at the SDK level, how you could represent all the attributes associated with the sequence extension if you didn't have pre-knowledge of what that sequence extension looked like.

So I want to make sure I understand what your concern is, because it sounds like it could be one of two things. At a top level, it sounds like you agree that you can technically serialize things back and forth between binary and structured with things at the top level; you're more concerned with the semantic grouping of things. Do I have that right? Well, with the flattening, assuming we did the flattening, let's see, this is where it gets challenging. In Evan's proposal, my structured event wouldn't look like that at all. Yeah. So I'd have no natural grouping of extension attributes. Correct. If you adopted Evan's proposal, this sequence would go away and these would remain as top-level things. The bbs would go away and these would be top-level things. But I suspect the recommendation would be, well, these aren't really descriptive enough, you may want to put the word bbs in front of both of these as they become top-level things, right? That's true. And, well, that's where it gets interesting: you end up having to namespace them anyway to make them descriptive. So if you're going to do that, why not just stick with that structure? Right.
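The flattening being debated looks roughly like this, sketched under the assumption that the author chooses the prefix themselves. The `bbs` names are just the call's running example, and the no-separator concatenation reflects the lowercase-letters-only naming restriction mentioned later in the discussion.

```python
def flatten(extensions):
    """Lift grouped extension attributes to prefixed top-level attributes."""
    flat = {}
    for ext, value in extensions.items():
        if isinstance(value, dict):
            for attr, attr_val in value.items():
                # No separator: the discussion notes names are restricted
                # to lowercase letters and digits, so the user just
                # concatenates their chosen prefix onto the attribute name.
                flat[ext + attr] = attr_val
        else:
            flat[ext] = value
    return flat

grouped = {"bbs": {"context": "abc", "correlation": "123"}, "other": "x"}
print(flatten(grouped))
# {'bbscontext': 'abc', 'bbscorrelation': '123', 'other': 'x'}
```

Note that this mapping is one-way: once flattened, nothing marks `bbscontext` and `bbscorrelation` as belonging together, which is exactly the lost-encapsulation point Jim is raising.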
And then, but I think that gets to what I was trying to figure out: are you trying to make it so that SDKs can take that top-level flattening you're talking about and, in essence, magically recreate a structure like here, where it's bbs with sub-properties? Well, yeah. And this is where, when I joined and I looked at the spec.json file and got my head around what was going on, that's exactly what I thought was happening. Yeah. Now, maybe I was misled by the fact that the spec.json file had not been updated since a previous decision had been made, but I'd always naturally assumed that they were grouped. Yeah. And when I looked at the SDKs, the SDK writers appear to have gone down the same road. They were offering the ability to add an extension, and another extension. And I think even in the C# case, and I'm not a C# expert by any stretch, that seemed to imply to me that the SDK writer had assumed there was a set of extension groupings, and that each extension had a set of attributes.

Okay, so let's back up; I want to make sure we're on the same page here. Let's go back to this one. So even with the old schema, this was just a single bag at the top level, where it's an object, right? Yeah, but it's an object map. So what I read into that was that I would have a top-level entry of extensions, and then inside that another object, which would be, you know, a property named bbs, which in itself would be an object. Yeah, this is the map-of-maps problem. Well, yeah. But I believe what this actually says here is that there was a top-level property called extensions, basically, as we've called it in the past, a bag, and inside that bag could be anything: it could be a list of name-value pairs, or it could be a list of name-colon-object pairs, right? As we said, you could have bags inside bags, right?
But it also could just be a flat list of top-level things, right? It could be. But this is the disconnect for me between the way that reads and the way that some of the SDKs are presenting it. Yeah. There's something adrift somewhere; maybe it's just in my mind. But I'd always imagined it, especially given the way we even define extensions. We say, here's an extension, and in some cases I thought I'd seen extension writers say, and this is the in-memory name given to this extension. Yeah, I think that language is somewhere in one of the extensions. But you seem to be claiming that basically all extensions are themselves bags with multiple properties underneath, and I don't believe that's true. Extensions, as of today, can be bags, but they can also be just single name-value pairs. Right. But I think that would still work. Well, even if you... Yeah. Right.

Because what you're asking for, when you're asking about that magic, right, where somebody sees bbscontext and then bbscorrelation, if you're asking the SDK writer to convert that from a list of top-level things into a bag called bbs and remove the prefix from those two things, I would be blown away if any SDK actually did that, because that seems like a little too much magic and reading people's minds. But that's what the transport binding spec says. The transport binding spec tells me how to take those... I always interpreted the transport binding spec as understanding how to do that. But it only... it does that because of the dash that we put in there? Exactly. Yeah. So we're now saying, let's take the dash out, whereas all I'm saying is, let's leave it in, but just limit extensions to one level, so you don't have a map of a map of a map. Just to make sure you're... You could still have a top-level single, an extension with a single primitive; it doesn't always have to be a bag. But yeah, I don't know how else to explain it.
I think I may have misread, or read too much into, what I'd gleaned from other aspects of the specs and the way the SDKs were evolving. Yeah. And to be clear, I'm not necessarily trying to advocate a position one way or the other; I'm just trying to make sure we're on the same page, because I still think there's a bit of a misunderstanding. My understanding of Evan's proposal is that if everything gets flattened to the top level, SDK writers would not try to do that mapping you're talking about. They would not try to do that correlation. If everything is at the top level, everything is at the top level to the end user when they look at the extensions. Yeah.

I think the other thing for me, purely with a JSON hat on, is you'd end up with really ugly documents at that point, because you just end up with a CloudEvent that's a random assortment of header items. But that's more of a stylistic thing. And to be honest, that's what I always thought this came down to. You can do it as a flat list or you can do things as bags; it's just a question, from a human-readability perspective, of which one's easier to deal with, because I think machines could deal with either one. So now you've touched on an interesting one: is it a machine issue or a human-readability issue? Because really the machine issue should trump the human-readability issue, shouldn't it? Well, yeah. But I don't think there's a machine issue, to be honest, because I can put bbscontext and bbscorrelation as top-level things, and if I know what property to look for, whether I have to look in bbs.context versus bbscontext doesn't matter to me as a machine. Yeah, I get it. Maybe part of me is just railing on this a bit because what we're ending up with is a payload and a set of headers, which maybe is the intention. Yeah, the same issue you have with, well, not issue, the same way that HTTP is just a payload and a set of headers.
And I'd sort of imagined that the way you guys had come at this was just a bit more structured than that, that's all; to allow that sort of, I'm automatically namespacing my extensions by putting them in a bbs placeholder, in this instance. Yeah.

So I know we two have dominated the conversation; I'd like to hear from other people on the call. What have you guys been thinking relative to this issue? I think, from a straight coding perspective, we could make either one technically work. I think a lot of it comes down to human readability, or wrapping your head around things, because obviously having a grouping like this is much easier for people to wrap their head around; they feel like there's a security blanket of uniqueness by putting this here. But technically you can always prefix these things as top-level things. It's more of a human-readability thing in my mind. But I don't want to discount that, either, because sometimes that does matter in terms of usability and people's perception of the usability of the spec. So what have people been thinking about this issue?

Hi, this is Vladimir. Go ahead, please. Thank you. I'm concerned that if we flatten this, we could see something like sequence-sequence. If I recall correctly, the limit for the keys is 20, so that basically means you have 10 characters for one part and 10 for the second. In my experience, there are some applications whose identifiers will be longer, so I'm afraid we'll get into a situation where we start shortening and condensing, skipping vowels and stuff like that, and we'll end up with less usable code for the developers. Just to point out, though, even if we keep bags, when they get serialized as HTTP headers we concatenate them together, so I think that same limit still applies, doesn't it? Was the limit for the length, not the number of headers, though?
Well, I thought we were talking about the length of the header name being the issue. I think the guidance, it's not a strict limit, but the guidance we define is that a key should be 20 characters. It's not a hard limit, though; that's what I recall, I'm not 100% sure. But I think the total header length isn't really limited from a spec perspective. Well, I guess what that means is, if we do adopt this proposal, we would possibly need to change that recommendation of 20 and perhaps make it longer. Maybe. Yeah.

Anyway, who else had their hand up? I know someone else did. Vlad, were you going to say something? I can't remember. Yes, I was. I was confused about why the sequence-sequence thing and sequence-sequencetype is something that's not going to work, because that's easily understandable by machines. And as an SDK user, I definitely want to be able to do sequence.sequence. So I don't understand why that was dismissed, or what the issue with it is. Could you please expand on that a bit? What do you mean by it being dismissed? Not dismissed; you said there's a problem with that, that the first sequence might get dropped. Why was that? So in doing sequence-sequence: 99, you said the first sequence would be dropped under some proposal. Oh, yeah. Under this PR, Evan is suggesting that we don't allow maps at all, so if someone wanted to represent this after this PR is merged, they would either take these properties as they currently exist and make them top-level things, or, in the case down here, if these names are too generic, they would prefix them with bbs. That's all I was saying. Okay, so we're forcing namespacing into the actual property name, got it. Basically, yes. Yes, that's a good way to phrase it. Oh, I disagree with that. Hi, this is Tim. Yeah, okay, Tim. Hi, can you hear me? Yes, we can. I'm negative on the proposal.
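To make Vladimir's length concern concrete: with prefix-flattening, the namespace prefix eats into whatever name-length budget exists. The 20-character figure below is the guidance number mentioned on the call, not a hard spec limit, and the names are just illustrative.

```python
def fits(name, limit=20):
    """Check a flattened attribute name against the suggested length guidance.
    The 20-character figure is the guidance mentioned on the call, not a
    hard limit from the spec."""
    return len(name) <= limit

print(fits("sequencesequencetype"))   # exactly 20 characters, just fits
print(fits("bpscorrelationcontext"))  # 21 characters, over the guidance
```

As the call concludes, if flattening were adopted, that recommendation would likely need to be lengthened to leave room for real-world prefixes.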
I think that namespacing, and then you're going to get something that's complicated whether you put it in JSON or protobufs or whatever; if somebody wants to have a structured value for an extension, I'm fine with that. The problem it presents is not that big, and the solutions, I think, are uglier. Over. So do you disagree with the original assertion that the serialization we currently have, of maps inside maps inside maps, is a problem? You think we deal with it just fine with the serialization rules we have? Yeah, I think that's probably okay. We're asking Tim, right? Yeah, Tim, I just want to make sure. Okay, never mind. Anybody else want to speak up? This is Colin. I agree; I think it's fine as is. Okay.

Okay, because to be honest, I have wondered that myself. I've wondered whether, yes, from a pure technical perspective, someone could produce maps three levels deep, and that may cause some sort of problems. But just because the spec allows it doesn't mean it's necessarily smart for people to do it, and they'll quickly learn not to do stupid things. So maybe that's the point: we try to say something along the lines of, you shouldn't have nesting of more than n levels, whatever n is, one or two or whatever, and we guarantee that everything will work up to that level; beyond that, your mileage may vary. I'd almost rather tweak it slightly, if you were to head that direction, and rather than try to give a limit, just warn in the primer what some of the things are to watch out for if you do nested maps, because any number we come up with is arbitrary anyway, right? Sure.
But I mean, again, and I know we don't have Scott on the line, there's got to be some guidance as to what those SDK writers are going to be expected to deal with. Yeah. And I think that's where Evan was coming from: as he started to write code around these things, he was coming up against this how-far-do-I-go question. Yeah. Jim, would you be willing to take an action item to write some text for the primer, as a sort of alternative to this PR, that gives the guidance we're talking about here but basically doesn't change the spec itself? I was saying Jim, but it may have sounded like Tim. Yeah, I was talking on mute. Okay. Yeah, I can do that. And then we can have a sort of pistols-at-dawn scenario to, I guess, decide which PR wins. Yeah, I like that image. That's good.

Vlad, your hand is up. Yeah, so this whole PR started from the transports not easily being able to encode nested objects. So assuming we leave it as it is without merging this, how are the transports going to handle that? Are we fine with that trade-off? And I think that's the point. If I understand it correctly, the binding rules we have today actually will work; it's just that, because it's sort of recursive, there were concerns that you could end up with maps of maps of maps of maps, and you'd really go down a rabbit hole. Yeah, that's my understanding. I'm trying to remember everything, because it's been a while since I've actually looked through 460 and 456, but I don't think there was anything technically wrong with the spec. As Evan said, things just get problematic and ugly; I think that's what it comes down to. And so he was suggesting keeping life easy by limiting things. I think that's where he was coming from. Yeah. I'm also not sure if this, sorry, go ahead.
I remember Clemens saying something about AMQP not being able to easily encode this and having to do some really dirty hack, but my memory is fuzzy on that. You would think Clemens would have mentioned that when he was writing up the AMQP spec, around this section. I also remember Clemens saying that, but I just checked, and the AMQP transport binding says nothing about maps. Clemens was saying that when he actually started implementing it, that's when he hit the wall, because AMQP has no maps; there's no way to really encode them. Yeah. And I think that's where you would follow a similar scenario to the HTTP binding, where you sort of fake the map through the property names. And I must admit, I thought that was probably what would be going on.

The other one, Doug, and I'm not sure if this is also related to another issue, is that if you have ints, or maps of ints, then how do you transport things like ints or timestamps when your transport binding doesn't even have that construct? AMQP has very rich primitive types, but obviously HTTP only has strings. So do you also try to restrict your attributes, or context properties, to be strings and not these sorts of complex types? I'm not sure if we're conflating all of those issues into this one PR. But didn't that recent PR we merged, the one I think Clemens wrote up, where he said everything has to have some sort of string serialization, address that concern? That problem was addressed by Clemens with the everything-has-a-string-encoding rule, and it's the responsibility of the client receiving the message to reconstruct the types if the transport didn't support the actual types and they arrive as strings. The client should know what type the attributes are, because they're either CloudEvents attributes or extension attributes it knows about. If it doesn't understand them, it leaves them as strings, and when someone does understand them, they can decode the string.
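The string-fallback rule just described can be sketched like this, assuming a receiver with a registry of the extension types it knows about. The registry contents here are hypothetical, not from any spec.

```python
# Hypothetical type registry: this receiver happens to know "sequence"
# carries an integer. Everything it does not recognize stays a string.
KNOWN_TYPES = {"sequence": int}

def decode_attributes(wire_attrs):
    """Reconstruct typed values for known attributes; leave the rest as
    the strings they arrived as, per the string-encoding rule."""
    decoded = {}
    for name, raw in wire_attrs.items():
        parse = KNOWN_TYPES.get(name)
        decoded[name] = parse(raw) if parse else raw
    return decoded

print(decode_attributes({"sequence": "99", "mystery": "0.5"}))
# {'sequence': 99, 'mystery': '0.5'}
```

The point made on the call is visible in the output: `mystery` could be a float, but without advance knowledge the receiver keeps it as the string `'0.5'`, and a later consumer that does understand it can parse it.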
Okay, I do have to drop, Doug. I'm sorry about that. No, that's fine. You'll take the action item to write up that guidance, though, right? Yes, sir. Okay, cool. Thank you.

Okay, so Jim is going to write up some proposed text to basically keep the spec as is, but provide guidance that says don't do silly things. Is that where most people's heads are at on this issue, for everybody on the call? Or does someone want to speak up to say there really is an issue here that we need to address? So, did someone actually give a good reason to keep the maps, though? Because the last time I was here and we spoke about this, nobody could come up with a good reason to keep them; it's just a headache. Well, I think what we were hearing from Jim as well as Tim was that they do see value in keeping them. Okay. That's the overall question, right? Go ahead, Christoph. I do agree that for namespacing, where you have a defined extension, like the sequence type we've defined, or something that you as a company define, it's valuable to have that namespacing to group your multiple attributes together. Yeah, sure. The counter to that last time was that you can also do namespacing without maps, just in the name. But I guess it's harder now that we don't have anything but lowercase letters; there are no separators or anything. And I think somebody did mention that if we did get rid of maps, we could loosen up those restrictions if we really wanted to. But I think it still comes down to: we're asking the user to do namespacing through prefixes, right? Yes, because in many transports that's what will actually happen anyway, so it's not actually a big change. There are very few transports that actually support maps in headers. So, well, yeah, someone can argue that with Clemens when he's not here. That'd be fun. Tim, were you going to say something?
I thought I saw you come off mute for a second there. No, just listening, thanks. Okay, cool. Anybody else want to speak one way or the other on this one? Okay, because I have a feeling this may just come down to a very binary choice, maps or no maps, basically. And then once that decision is made, we just need to decide what the ramifications are. For example, if we decide to kill maps, then we can loosen the restriction about character sets and stuff like that, and if we keep maps, then we just need guidance on things for people to watch out for. So we'll see what Jim comes up with, and I guess it'll come down to a vote or something at some point, which would be unfortunate.

This is Colin. Do we have some actual use cases of maps of maps? I mean, I can understand the use case of maps, where you want to avoid a prefix, right? Yeah. But in serialization, that's probably what's going to happen anyhow. Are there any concrete use cases? Is this something that needs to be tackled today? Yeah, I think that's where Jim was actually suggesting that maybe, for right now, we start off by saying, sure, you can use a map, but only one level, because we can always extend it later if we really have a need to. Honestly, I don't know. I ought to go back and ping Evan, who unfortunately isn't on the call, to find out whether this was just experimentation with what the spec allows versus reality. Hi, this is Tim. You know, I'm starting to appreciate the other side of this one and see why you might want to restrict these things to primitive values, singular values, not plural. What I would really be against is trying to invent a namespacing syntax. I think that would be a really, really mad idea.
Let me make sure I understood what you said there, because I don't think anybody was advocating for the spec to define a namespacing syntax. I think that would be up to the client to decide, if at all. Okay, fine, sorry. Yeah, okay, I obviously misinterpreted people talking about some namespacing inventions. Please, no. Yeah, no, no. If namespacing were done through prefixes, it would be a user-defined way of doing it. But the other point, I want to understand your first point: when you said you're starting to look at things from the other side, do you mean that you're starting to think that maybe maps are a bad idea, or that you want to maybe say only one level of maps? I guess I would go with either zero, or as many as you want, with caution. But put me down as undecided. Okay, because it does seem like you went back and forth on that one. Okay, cool.

In that case, Evan just joined the call, so let me pick on Evan for a second. Evan, are you actually there? Yes. Excellent. So a couple of questions came up; we were talking about your PR. Yeah, joined just in time. A couple of questions, and I'm trying to remember what they all were. I think one of the big ones was, obviously, one of the concerns was maps of maps, or a whole bunch of nested maps. Was that something that came up because of a real end-user use case that you ran into problems with, or was it just trying to push the limits of the spec? Are you asking whether a user was actually trying to make maps of maps? Yeah, basically. I'd found instead that some of the usage of the SDK was awkward because it needed to support this. So it was affecting users who didn't want to use maps of maps, but it made the encoding and the SDK API more challenging. Interesting. And so it had an impact on people who weren't using the feature.
Oh, and by the way, I can find no evidence of anyone until now wanting to use the feature, and looking at AMQP and HTTP and Google PubSub and MQTT, none of them support these maps in a header natively. And I'm not sure about any routing tools that would be able to process them natively either, unless you do something like we do in the HTTP binding, where we use hyphens to separate them, and it doesn't work very well. Okay. Anybody want to jump in on that conversation? Why does it not work very well with hyphens as separators? Well, one of the fun cases is that you... let's see, what was it? Oh, yes: maps can contain any string key, including the empty string. So you end up with a header with a hyphen at the end, which I'm not sure is actually allowed. Okay. JSON also allows that. Well, then let's just fix the empty-string thing; that doesn't make any sense. Well, yeah, that sounds better. But there's a lot of complexity in the translation, and it makes the names of our regular attributes worse, because we can't use the hyphen as a separator in them; we're using it as the delimiter between the map name and the key within the map in the HTTP binding. Okay. So remember how we restricted the set of allowed characters in attribute names, and now it's only letters and numbers, nothing like a hyphen or an underscore? If we took maps out, I think we could put a hyphen or an underscore back in some of our names, if we wanted specversion, for example, to be hyphenated.

So, Christoph, I can't remember who was first, but on my list you're first. Yeah, I think that was in the original discussion we had. We discussed multiple things, including having a separator, but for the most interoperability it was decided that we only have the letters, no separators at all, because any transport could theoretically use any separator for something, including the hyphen. And then, because the hyphen is free, the HTTP headers can use it.
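The trailing-hyphen corner case Evan raised a moment ago is easy to reproduce with the same dash convention sketched earlier; the extension name here is made up for illustration.

```python
def encode_ext(ext_name, ext_map):
    """Encode one extension map with the dash convention from the HTTP
    binding discussion; no guard against odd keys."""
    return {"ce-{}-{}".format(ext_name, key): str(v)
            for key, v in ext_map.items()}

# JSON allows "" as a map key, so the encoding yields a header name that
# ends in a hyphen, of questionable validity as an HTTP field name.
print(encode_ext("myext", {"": "x"}))
# {'ce-myext-': 'x'}
```

This is the kind of translation complexity the flattening proposal is trying to avoid: either the binding forbids such keys, or every encoder and router has to cope with them.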
Well, the HTTP headers could also use something else if they wanted to. So if we introduced hyphens for attribute names, we could also use the underscore or whatever in the header. Well, HTTP doesn't allow underscores, or much of anything else really. But you could imagine having a single delimiter character; it would be underscores in JSON, because a hyphen in JSON means you have to quote your keys, which is kind of sucky and annoying; otherwise it looks like you're subtracting two attributes. But you could translate between underscores and hyphens. Yeah, but I think the point was to be sort of forward-compatible with anything. The idea was also about potential programming languages in which you'd have an SDK: if we have no separator at all, we also have no problem; we can map the names into any programming language SDK, and we can map them to any transport binding. That was the original idea behind not providing something like the hyphen in there. And then the hyphen became available for the HTTP header.

So, Tepini, I think your hand is up next. Yeah, I just wanted to point out a fun thing: we say we don't want to prescribe prefixing conventions or namespacing conventions, but we actually do now, because we create the HTTP headers from maps. And we would have to create the AMQP headers and Kafka headers and everything from maps as encoded strings with hyphens in the middle, and that's a namespacing convention. True, to some extent. You would, though; actually, on the wire, your messages would have a prescribed namespacing convention because of how the spec works. You mean when you're mapping something like this into the header? Yes, we're defining a prefix-mapping scheme. Yes, in a sense, I agree. Kind of. Okay, Vlad, your hand is up. Yeah, I'm doing a bit of a recap here; I'm trying to summarize this to make sure I fully understand it. So we have to choose between two things. One of them is forcing the user to do namespacing themselves.
How they do it doesn't matter; we're forcing the user to do that themselves, because it might be hard or impossible for the SDKs and transports to do it. Now, is it hard, or is it impossible, to do it in the SDKs? Because... like, do we want to make the user experience worse to simplify the encoding code? If it's impossible or very, very hard to do in the encoding and transports, okay, offload it to the user, they're going to do namespacing. But if it's just annoying to do in the transports and SDKs, do we want to make the end-user experience worse? Did I understand this correctly? Evan, you want to talk to that one? Yes. So my comment was actually that having to deal with the map case as a user made the SDK worse, because you suddenly had wildly variable types coming out of the SDK when you got an attribute. You had to treat them differently: sometimes they were structured objects, sometimes simple strings, and sometimes URLs or something like that, which we mostly treat as simple strings anyway. My experience was that there were a couple of places in the SDK where the typing became more difficult because you had to be able to return a map as well as a simple object. Just want to poke at that a little, to make sure I understand it. Today, you may get a property that's, say, an integer versus a string. The SDKs in general have to be able to support returning, in essence, non-strings, right? So in general, they have to support more than one data type being returned for an extension, right? There's an interesting distinction between known and unknown extensions. For unknown extensions, we don't have a strong enough type system to be able to tell that this thing that was sent over the wire is actually a number, in many cases. Right. The PR we just merged keeps it as a string, basically. Yeah.
So take spec version as an example: if that were an extension rather than in the core, looking at what's on the screen right now where it says 0.5, is that a floating-point number? Is it a string? Right now, in that particular case, it's quoted, but if it were an HTTP header it would be unquoted, and we don't know. Right. If it's unknown, we'd have to treat it as a string. Great. But with the curly-brace stuff... not from the curly-brace stuff, but from HTTP, you can tell that it's a map, so there's no fallback: it's "string or map" if you don't know what it is in the SDK. And string-or-map is a kind of funny variant type to have in your programming language. That's interesting, because you're right: at that point, we would treat unknown extensions that contain maps differently than unknown extensions of any other type. Yeah, that's interesting. We don't have to, if we say any extension whose map has a single attribute gets flattened. Wait, how does that help? It solves the map-or-string question by saying: if it's a map with one thing, it's always a string; otherwise, it's a map. Sorry, the return value when you're fishing out an extension still needs to be string or map. True, true, true. Yeah, sorry. And in your code that's an annoying thing to work with. Okay. So we can now summarize: the user either has to namespace their extensions themselves, or treat reading events differently for different types of extensions. But, well, hmm, I was actually wondering about something different. What would happen if we said unknown extensions, regardless of type, including maps, are strings? In that case, I think you would probably solve a lot of these cases, but it seems like we'd have to change the HTTP encoding, which is not bad. I think that would probably be a positive change in any case. But it's not clear.
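The string-or-map variant type being discussed could be sketched like this. This is a hypothetical accessor, not any SDK's actual API; Python's dynamic typing hides some of the pain a statically typed SDK would feel, but the union type makes the problem visible:

```python
from collections.abc import Mapping
from typing import Union

# Once map-valued extensions are allowed, an extension getter in a typed SDK
# must expose a union type instead of a plain string.
ExtensionValue = Union[str, Mapping[str, str]]

def get_extension(event: dict, name: str) -> ExtensionValue:
    # Unknown extensions arrive with no type information, so the caller has
    # to branch on what came back: sometimes a string, sometimes a map.
    return event["extensions"][name]

event = {"extensions": {"traceid": "abc123", "ctx": {"tenant": "a1"}}}
for name in ("traceid", "ctx"):
    value = get_extension(event, name)
    if isinstance(value, Mapping):
        print(name, "is a map:", dict(value))
    else:
        print(name, "is a string:", value)
```

Every consumer of an unknown extension ends up writing that `isinstance` branch, which is the "funny variant type" complaint in a nutshell.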
Would we say that the maps are serialized as, like, the string representation of a JSON object? Maybe. That means either every intermediary that wants to handle or filter on an extension needs to be able to crack open that map, or they're just going to treat them as plain strings. At which point we could just say it's a map from string to string. Oh, and by the way, some of these strings happen to be able to be popped open into JSON maps if the endpoints know about it. Yeah, that's what I was thinking: nobody in between needs to know that it's a map, unless they want to be routing or filtering or something. And then I'm just going to say: matching against JSON-encoded maps is really hard. I agree. Yeah, I think that's one of the reasons people wanted to drop this. But then I'm kind of wondering... it seems to me a lot of these discussions come down to the difference between the spec being very flexible versus trying to stop people from doing things that are really hard or really stupid. If they want to do filtering based upon a map, their life is going to be hell no matter what they do, especially when things get serialized as headers. So it seems to me that they'd very quickly stop doing that; they wouldn't use maps at that point, they'd make them top-level properties no matter what. And I'm kind of wondering whether this is more a question of maybe changing what we're doing a little bit here, like stop treating maps differently for extensions and maybe say they're strings. But maybe most of this is really just about providing guidance that says: look, you can do lots of stupid things with the spec because it's flexible, but you really shouldn't do this, this, and this if you want to make sure you're interoperable, or make things easy for people. What do you think, Evan? I'm in favor of making it a map of string to simple type, which is basically what the PR does.
And then, you know, have people use prefixes, and maybe introduce a non-alpha character, where, you know, every transport has to say how you map the underscore to the right thing in that transport. Well, I want to make sure I understood what you said there: I thought your PR removed maps entirely. Well, the top-level object is a map, but yes, it removes maps, except for the data part, which we're dealing with separately in 470 or 471, or jumping off a bridge or something. Okay. I think the problem we're running into is that people seem to be going back and forth. There are times when I think, yes, fine, screw it, maps are too hard, let's get rid of them. But then when you start looking at things from an end-user perspective, people start thinking: really, I'm going to have to figure out my own little prefix scheme if I want to do this context-and-correlation thing? I can't use a very simple little grouping mechanism like this BBS thing? I would love to be able to say BBS underscore CTX and BBS underscore correlation, as individual attribute names that you could filter on individually. Yeah, I think a lot... but then that puts the burden on the client to figure out their own prefixing scheme, right? But they still have that burden here, don't they? If someone else picks BBS, because it's not sufficiently unique, you could still collide. No, I'm not talking about collisions. I think what I'm trying to echo from other people is that this grouping mechanism is very natural for people to wrap their heads around. To say: oh, you want a grouping mechanism? Prefix your things with some BBS-underscore thing. Technically, it can work.
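The prefixing alternative mentioned here (BBS underscore CTX, BBS underscore correlation) amounts to the user flattening their own map into individually filterable top-level attributes. A hypothetical sketch, assuming the spec permitted an underscore separator, which is exactly what's being debated:

```python
def prefix_flatten(prefix: str, values: dict) -> dict:
    """Flatten a user's grouping into individually filterable top-level
    attributes using an underscore-prefix convention (hypothetical; the
    current spec allows only letters and digits in attribute names)."""
    return {f"{prefix}_{key}": value for key, value in values.items()}

attrs = prefix_flatten("bbs", {"ctx": "session-42", "correlation": "req-7"})
# Each entry is now a plain top-level attribute an intermediary can match on.
print(attrs)  # {'bbs_ctx': 'session-42', 'bbs_correlation': 'req-7'}
```

The trade-off voiced in the discussion: this keeps every value individually filterable, but pushes the naming convention onto each end user instead of the spec defining it once.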
What I'm hearing from people is that it's a not-very-user-friendly way of doing it, and I think that's the concern a lot of people have: we're taking something that everybody seems to wrap their heads around very nicely, which is, hey, in JSON you can do this simple little, at least one-level, grouping thing, and that's really convenient for me; but in CloudEvents, suddenly you lose that ability. I think we're hearing some people getting a little nervous about that. I'd like to let whoever had their hand up first go first. Yeah, sorry, go ahead, Spini. Yeah, I just wanted to add one point. You were talking about JSON-encoded objects: that's actually what we do now. The string encoding for objects is the JSON object, for all transport bindings that don't specifically state that they do the hyphens or something, like the HTTP one does. So, for example, for the AMQP transport binding, the correct way would be to JSON-encode it now. And is that something you could actually use in one of the intermediaries without a lot of work? No, absolutely not. Let's go to Vlad first. Vlad? Yeah, so somebody mentioned treating unknown extensions as a string which might be a map, and then somebody else pointed out that that would make routing and filtering on it very, very hard, because that's very hard to do with JSON and very computationally intensive. And that was a very, very good point. As a user, I would much rather be able to filter on the stuff I'm putting in a map than be forced to use prefixes. So whatever we choose, we need to have an easy way for transports and intermediaries and middleware to do filtering at whatever depth of a map that's in an extension. Because if I put something in an extension, I definitely want to be able to filter on it.
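To illustrate why filtering on a JSON-encoded map is harder for an intermediary than filtering on flattened headers, here is a minimal sketch. The header names and filter functions are hypothetical, not from any spec or SDK:

```python
import json

# The same map-valued extension, encoded two ways:
# 1) as one opaque JSON string in a single header (e.g. the AMQP-style case),
headers_json = {"ce-ctx": json.dumps({"tenant": "a1", "region": "eu"})}
# 2) flattened into individual headers (the HTTP-binding-style case).
headers_flat = {"ce-ctx-tenant": "a1", "ce-ctx-region": "eu"}

def match_flat(headers, name, expected):
    # A plain string comparison an intermediary can do cheaply per event.
    return headers.get(name) == expected

def match_json(headers, name, key, expected):
    # The intermediary must parse JSON on every event just to compare one key.
    try:
        return json.loads(headers.get(name, "")).get(key) == expected
    except json.JSONDecodeError:
        return False

print(match_flat(headers_flat, "ce-ctx-tenant", "a1"))   # True
print(match_json(headers_json, "ce-ctx", "tenant", "a1"))  # True
```

Both filters can express the same match, but the JSON-string form forces a parse per event and per rule, which is the computational-cost objection raised here.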
Right, and I think, as Evan said in the chat, that's probably why we did the HTTP header encoding the way we did, where we don't put the entire map in as one gigantic thing; we try to split it out into individual properties. Okay. With that dash, you know, prefixy thing. Yeah, yeah, but if we do keep maps, I am at least strongly of the opinion that every single transport that doesn't naturally support maps in headers would need to do that, because otherwise filtering is impossible. Okay, we're almost out of time. I'm not sure we're necessarily circling around to an answer here, or consensus. Evan, can I ask a favor of you? Would it be possible for you to write up sort of the pros and cons in a very, very concise list? Not rambling paragraphs, just a very, very short little list of: here are the good things and bad things about the way things are today, and here are the good and bad things about the way life will be after your PR. I will do my best. Okay, I appreciate that. I know it's hard. Although some cons of the proposal I'll miss, because I'm obviously in favor, since I bothered to write it up. Yeah, you're biased, as you said, but I understand, it is your baby. That's fine. Understood. But I think we need something simple for people to look at and compare without getting lost in pages and pages of text. So, okay, maybe that'll help people make a decision one way or the other. Okay, and just so you know, before you joined, Jem was gonna write up some text about guidance if we kept things the way they are today, and we'll see what he comes up with. Have people looked at all at the Amazon EventBridge stuff that came out recently? Yes, I did. I did too, and I really want to see CloudEvents support. So, one of the interesting things they include there is the ability to extract some of the JSON data. And that seems like something we should have a position on in CloudEvents. Do you have a pointer to that?
All right, this is from AWS. What do you mean by extract JSON data? I seem to recall that in the rules, you can select specific parts of the message to be sent to the target. Correct. We have a filtering technique that allows you to match as deeply into the nested JSON structures as you want, but I don't see how that relates to this. Well, if you look at 371, for example, we have kind of three different modes. And for pure binary data, if people are sending JPEGs as the payload, that's gonna work kind of differently than EventBridge. Yeah, since they're all-JSON all the time, if you're sending a JPEG, you'd have to base64 it. That is an approach we could take. And I just thought people should take a look at that product, because it looks very interesting. Yeah. Okay, unfortunately, I think we're out of time. I was hoping to actually get to some of the other PRs to try to resolve them, but I don't want to rush it either. Okay, any other last-minute comments on this one? We need to make some progress on this, otherwise we're gonna rattle on forever. Okay, let me just do attendance then, since we're basically out of time. Ginger, are you there? I am, Doug. Okay, Christian, you're there? Hi, Doug. Hello, William. Yep, I'm here. Excellent, cool. Mohan, Mohan, you there? What about Barron? Yep, I'm here. Okay, Mohan, Fabio. I think... oh, thank you, Fabio. I'm sorry, I completely missed you. Okay, Mohan, one last chance. Okay, is there anybody else I missed? All right, cool. Okay, thank you guys very much. And just a reminder, if you were gonna join the SDK call, we're not having it today; it got canceled. All right, any last-minute comments or questions for anybody? Okay, with that, we're out of time. Thank you guys. Good discussion. Cheers. Bye. Bye.