Let's see, who else is on? Tim, are you there? Tim Brady, there we go. Got you, Tim. All right, anybody else I miss on roll call? Do, do, do, do. Have a low count today. Okay, let's go ahead and get started. First of all, apologies. I am in China. My internet is flaky at best sometimes. So that's why Clemens is presenting. But if I suddenly drop, it's not because I'm leaving, it's because I got dropped. First off, because of the Fourth of July stuff next week for the States, I decided to cancel next week's call. So just a reminder, our next call will be in two weeks. And let's see, next up, community time. Is there anything from the community people would like to bring up? All right, moving forward. We did not have an SDK call last week, unfortunately. We didn't have enough people for quorum. So I believe the next call is scheduled for, well, it would be next week, but with the Fourth of July stuff going on, I'll send out an invite to have one the week after, if you guys are okay with that. Clemens and Scott in particular, and I guess Mark, are you guys okay with meeting the week after next, after the regular CloudEvents call? Yes, and that's the last chance before my vacation. And so we should definitely meet because there's stuff to resolve. Yep. Yeah. Okay. Because I have a PR for SDK guidelines that we need to discuss. Yes. And we can't have that on this call, but we have to talk through it because there's contentious stuff, and we need to talk through contentious stuff. Yeah. I'm actually on PTO that day, so I won't be making the calls, but please feel free to meet without me. Okay, Scott, you can make it, right? Yep. I'll be there. Okay, cool. I'm out next week, so it's better for me. Yeah. Okay, cool. I'll make sure there's an invite for that week, two weeks out. Cool. All right. Obviously this week I'm in China because of KubeCon China. I did present the CloudEvents stuff and the serverless working group stuff. Nothing eventful happened; it seemed to go really, really well. The CloudEvents one was more attended than the serverless one, though they were both fairly well attended. As I've said in the notes here, nothing really unexpected happened. The serverless one, I did try to make it into a little bit of a birds-of-a-feather session to get some feedback from the audience on why they're using serverless or why they aren't using serverless and stuff like that. The audience here isn't the most eager to speak, so it was a little bit challenging, but I did get some information out of them, and for the most part it was consistent with what we heard at Barcelona in terms of why they're not using it yet. Just not being that far along in their progression in terms of using the technology. Some people didn't think it was mature enough yet, relative to tooling and stuff like that. So like I said, very consistent with what we heard in Barcelona. Nothing new there. I did have a sort of interview with an analyst from Doublehorn Research. He wanted to find out about CloudEvents. The best thing about that was he seemed genuinely excited by it because of its simplicity and its usefulness at the same time. He loved the fact that it was such a simple idea, yet it seemed to be something that really, really fit a need in the industry. And I thought that was a really, really nice sign that maybe we were doing things right, right? Keeping it simple, but filling, or scratching, a real itch that's out there. So that made me feel really good and he seemed to really like that.
So good job, everybody. Oh, after the serverless working group meeting or presentation, I can't recall the gentleman's name, but there was a guy here from China who approached me and said he wants to participate in the calls, but the time just is not APAC friendly. And obviously, midnight is not great. I asked him to send a note to the mailing list so we can get a conversation going about looking for another possible time, whether it's on a regular basis, I mean, I'm sorry, whether it's a permanent switch or just periodically we do an APAC time or not. We'll figure that out, but just a heads up that we may be looking for an APAC-friendly time, if nothing else at least occasionally, because he did indicate that there are quite a few companies over here in Asia that he could probably pull into the discussion, and they really want to give their feedback on both CloudEvents as well as what we do next in the serverless working group. So keep an eye out for that. And does that tie into some of the questions around voting and governance, of having to be here? He didn't bring that up as an issue. So I'm not sure that was an issue. I think it was more that he just generally wanted to get involved and participate. I didn't get a sense it was due to the voting and everything like that. Okay, thanks. Yep, yep. Let's see, okay, incubator proposals. So I believe on last week's call, we agreed to go forward with trying to get incubator status in CNCF. There is an initial proposal PowerPoint. The link is in the meeting minutes. Please take a look when you get a chance. Hopefully nothing in there is controversial. Just a reminder, though, that we do need at least three end users listed. I believe I have stuff from Codit already in there. But if you have customers of your product that are using CloudEvents and they are comfortable with their name being listed, please ping me offline. While the requirements document, or criteria, does not say that they have to say why they're using our product or how, if they're comfortable sharing their use case or why they find it interesting, I think that'd be great information to add to the slide deck, but it's not a requirement. I just think it'd be useful. Additionally, while it's not a requirement for graduation to the next level, it would be great if you could list all the companies that are choosing to implement CloudEvents. I already have a whole bunch listed on there, the ones that I know about and that have publicly stated that they're using it. But if your company is not listed and you wanna be listed, or if your company is listed but we don't list the product name, which would be useful as well, please let me know. I do have some of the obvious ones in there. Anyway, take a look at that. I'd like to see whether, by the time we have the next phone call in two weeks, we could approve that or not. If we can, then I'll ping Chris Aniszczyk to get on the schedule for the TOC. So hopefully you guys will review that within two weeks and get the list of three end users, okay? Doug, do we need to have, like, a beginning slide that says here's what CloudEvents is, or at the end a look at here's next steps, futures, that kind of thing, or is it strictly about the graduation criteria? I was assuming it's strictly about the graduation criteria, but let me ask Chris. And the reason I probably think we're okay with where we are is because... Do we need to show this? No, you don't have to. Okay. I'll link it in chat if people need to look at it.
Yeah, we don't have to go through the slides right now. That's not the best use of our time here. The reason I think we're probably okay, Mark, is because I believe within the last month, or month and a half at the most, we already did a presentation on the status and what is CloudEvents to the TOC. So they should already be well versed on that, but I'll ask Chris Aniszczyk just to make sure. Okay, thanks. Another question here, Doug. Regarding the adoption list, or companies adopting CloudEvents, I see we're kind of listing Knative there, and that would include four or five different companies, right? True, but I need to get verification from them that they're okay listing their names on there. Okay, yeah, so to that point, I mean, happy to include Red Hat there as well, even outside of Knative, by the way, yeah. Okay, so in and outside of Knative, right? Yeah, as well. Yeah, we're using it first through Knative, but then also outside of Knative for our messaging efforts too. Excellent. All right, cool, thank you. Yeah. All right, so ping me offline if you wanna get any of that information added for your company. Thank you guys very much. Anything else about the incubator status? All right, next up is the PR discussion stuff. Okay, so we did agree to the Kafka transport binding. Unfortunately, I needed just one more LGTM before I actually merged it. Is there anybody, actually, I guess I don't need to ask for that. Is there any disagreement with merging given the current state? Because last time we approved it conditionally upon fixing the rebase issues. Now that's out there. Hopefully people had a chance to quickly look at it. Is there any disagreement now with merging it? I would likely say just go ahead and merge it, and if there's changes, that can be done in additional PRs. Yeah, I was assuming no one was gonna speak up, but I just wanted to run through the process quickly. Yep. All right, I will merge that after the call. Thank you guys. Next up, this issue, I just wanted to bring it up very quickly to get it out of the way. Someone was requesting, or this issue, you guys can't see my screen, can you? The next one is the request for a transport. Clemens, can you bring that one up? Yeah. This one, this person was requesting that we include a binding for RFC 8030, but Clemens basically said that there isn't much of a difference between this and HTTP already, and since we already have that covered, we really don't need this. The person never wrote back to disagree. So I'm gonna advocate, or propose, that we close this one with no action. Yep, are there any questions or comments on that? Okay, any objections to closing? All right, cool, thank you guys very much. Got that one out of the way. All right, next, modify the roadmap. This one was mine, I think. Last time there were some wording changes on the after-1.0 section that we wanted to make. So I think, Clemens, if you show that and just scroll down to the bottom, I believe the only section I changed was the post-1.0 stuff. Go to the file changes section. Oh, you wanna see the, yes. Yeah. So I'll leave you guys a second just to read the post-1.0 stuff, make sure you're okay with that. Basically I had a firm release date, or release target, out there for 1.1, but that was premature. We don't know what we're doing. We might not actually do anything. Anybody have any questions or comments on that? Okay. Is there any objection to adopting this PR then? All right, done, thank you guys. All right, next up, Doug.
I wanna make sure I was interpreting your comments. Excuse me. On the "allow for broader schemes for URLs" one, you are advocating that we close that PR, correct? Doug, you there? I see you came off mute, maybe double muted. I was double, yeah, that's fine. We can reopen it when it becomes an obvious issue. Okay. All right, cool, just wanted to double check, thank you. I will close that one. All right, next, fix some typos and grammatical errors. Actually, we can skip that. Scott already approved that one. It's just typos and stuff. I'll give you guys to the end of the call. If you see anything in there that you think is objectionable, and I think it's pretty obvious there isn't, I won't merge it, or we can bring it up, but otherwise I'm gonna merge that one after the call. It's straightforward, we don't need to waste time on that. Let's jump to one that's much more exciting: "add data payload." I don't believe James is on the call, no he's not. So Clemens, I know you've had some time to look at this one. I was wondering if maybe you could talk to what he's trying to do here and then lead the discussion on whether you think it's a good or bad idea. Yeah, so this was a fairly broad change, and now the changes are smaller. So effectively what James says is data should not have any type. And so we have already identified that data is different from context attributes. And James was initially objecting to data being called an attribute altogether, and then went on a giant edit of everything to change "attribute" to "payload" everywhere. Then apparently he reconsidered and he's leaving that as attributes, but effectively he's changing the, and this is where all the deletions are, effectively he's saying that it has no specified type, but it needs to be encodable as binary. And then basically most of the rules stay in place, but he's effectively removing the type system constraints, or the type system, from the data attribute, if you like. That's mostly what this is about. That's my understanding. Okay. So you no longer have to think about things like strings and maps and arrays and any of those things that we have in the abstract type system. The abstract type system would then only apply to the attributes, the context attributes, but data would be exempt. So when I looked at this, I don't see any hands up, so I'll jump in, but when I looked at this, the high-level direction of trying not to assign a particular type to data made some sense to me. The part that I honestly got a little lost on in the text was the entire discussion about making sure that it can be encoded as binary under certain conditions and not other conditions, stuff like that. And I think I just need to go back and reread it more carefully to fully get it in my head. But I was wondering what your opinion of all that stuff was, Clemens. Do you think that makes sense, or is there something funky there? So, well, I'm not sure this changes a lot. So we have special rules here already around JSON, specifically, because this is in the JSON encoding. So this is in the JSON format here. And in the JSON format, can you see the email notifications that pop up, or does it just share the browser window? No, you're okay, we don't see that. Okay, great. Those things weren't secret. So in the JSON format, there is a rule around how to encode data, and that is special: if it's JSON content, then it's rendered as JSON inline. So there's a special treatment here for data that's at the encoding level.
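To make that rule concrete, here is a minimal sketch of the behavior as described on the call, assuming the attribute names of the spec draft of that era (datacontenttype, datacontentencoding); the helper name is invented for illustration, and this is a paraphrase of the discussion, not the spec text:

```python
import base64
import json

def render_data_for_json_format(data, datacontenttype, datacontentencoding=None):
    """Sketch of the JSON format's special treatment of 'data' (assumed
    behavior, pieced together from the call, not quoted from the spec)."""
    if datacontenttype and datacontenttype.split(";")[0].endswith("json"):
        return data  # application/json, text/json, */*+json: render inline
    if datacontentencoding == "base64":
        return base64.b64encode(data).decode("ascii")  # bytes travel as base64
    return str(data)  # everything else travels in its canonical string form

# Inline JSON round-trips as a JSON object (a map) ...
event = {"specversion": "0.3", "type": "com.example.demo", "source": "/demo",
         "id": "42", "datacontenttype": "application/json",
         "data": render_data_for_json_format({"flag": True}, "application/json")}
print(json.dumps(event))

# ... whereas JSON *content inside a string* stays a string after decoding,
# which is exactly the distinction debated around issue 261 later in the call.
as_object = json.loads('{"data": {"flag": true}}')["data"]        # dict
as_string = json.loads('{"data": "{\\"flag\\": true}"}')["data"]  # str
print(type(as_object).__name__, type(as_string).__name__)         # dict str
```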
So this is what this entire discussion here is about, and that doesn't change. In the core spec here, this is really what changes. The type goes from "any" to unspecified and must be encodable as binary. And then here, and that's what's missing, and that's a comment that I made on the PR, it basically makes a rule around binary encoding, which we already have elsewhere, because we're already defining this entire rule in the data content encoding attribute. So there we are already saying it's binary and then it's base64 encoded, and then we have this JSON discussion in the JSON encoding. So I'm not sure what this PR changes except moving words around. I think the most substantial change in the entire PR is this, and I'm not sure what that buys us. Okay, so Tim's hand is up. Tim, you wanna go next? Yeah, I'm also baffled. What problem is this trying to solve? I just don't get it. Yeah, see, that's what I'm also baffled about. So there's a long and drawn-out argument that's being made in this PR discussion, and then there's an issue, which is arguing why the attribute type system cannot apply to the data element, because the data element is not like the others. But I'm not sure this PR solves anything real on the interop front. We could just as well say, well, it needs to be binary, but then the special handling that we do for JSON, where we can go and take JSON data that's inlined, well, that would no longer work, because we then need binary and we need to have map already as a base type, because that's how we do this. So I don't know what this solves. I've just been defending it because Doug told me, Doug told me. So thank you for blaming me, I appreciate that. So, okay, I thought there was a particular scenario in which James was saying that things might break down, where a receiver may not necessarily know how to decode the data type, or the data attribute, and this may be a way to resolve that. Yeah, but I don't think that's true, because we already added the data content encoding, which effectively indicates that this is binary and it's base64 encoded. And that's the case where, like, if you can't tell what this is, because it's not a string, or it's not something that you can go and interpret as a string or decode from a string, where we have the canonical string format for all of our data types. So you can go and take a look at this, you could even infer the type from it if it's a string, and in the case where it is inlined and it is binary, then we have the data content encoding indicator that will then tell you, hey, this is base64, so you know. So I don't think there's really any case where there is a doubt, and there's no case that I could see where there is a doubt where this PR would go and change it. So can you go back to the comments section on this PR? Yes. And click on the link to 261, the very, very first comment. I want to go back to the issue that he claims this is trying to address and see whether you guys agree or disagree with his premise. Yeah, I saw that, but I don't see how that's related to that. Well, let's start off with the first question there. Is he correct that this is not a map, or that it's not valid for data? Well, you can't look at this in isolation, right? If you have the content, if the content type indicates that it's JSON, then you read it as JSON. You don't read it as a CloudEvents anything.
If the content type says this is JSON, which means application slash JSON or text slash JSON or whatever plus JSON, then you go and take a look at the data field and then interpret everything that's in there as JSON. That's what the JSON format says. So this is legitimate, this is fine, right? If it's not indicating JSON, well, then it's a map, and then that's invalid. Then that's incorrect, because we haven't defined the boolean type. Okay, so, so what? Is it the string that is then interpreted as JSON, or is it a binary that's interpreted as JSON? So if it doesn't have quotes, and if it's inlined, then it's a map. I mean, if the data content type is JSON, or application JSON or something plus JSON, then this has to be right. Yes, and then that is right. And otherwise, so, exactly: if the content type is JSON, then this is being interpreted as JSON. If it's not JSON, then this is a map, and this can only, so this exact text can only occur if we are looking at a JSON-encoded document. And this only occurs in structured encoding. So we have a structured encoding, JSON-encoded elements, and then, if the content type is not JSON, then this is not valid, because this is not a map, because the true, well, it's a map, but the value is not supported. So you guys are now claiming that his statement after the first chunk of text there, after the first example, where it says: it's not a map, why? Because true is not a valid object according to the CloudEvents spec. A valid object must be a map, string, or binary, and the CloudEvents spec says that maps are valid mapping strings to objects. Now, what I'm wondering is whether that is an incorrect statement, because we've recently changed the spec to say maps can include any types as the data for a particular key. So is this statement that he makes in there, where the paragraph starts "it's not a map," is that an incorrect paragraph? No, it's not. So it's not a map per our encoding, but what you're looking at really depends on the content type. Yeah, the correctness of the statement depends on the value of data content type. Correct. In general, the statement is incorrect. It can only be correct, you can only think about it, once he's told you what the data content type is. Right, so this is perfectly valid JSON, and he needs to interpret it as a JSON object, basically. I would like to point out that this is an old comment, from before data content type even existed, I think. We had content type for a long time. We didn't have data content encoding for a long time. Oh, that's true. This was opened in July, oh yeah, it's really old, yeah. Yeah, but, no, content type was one of the earliest things that we had. Yeah. Okay, well, okay, let me put it this way. Is there anybody on the call who either understands or agrees that there is an issue in the spec, or understands what James is trying to get to? Because based on what I'm hearing, I think we need to go back and have a conversation with James about what the issue really is, and whether that issue actually exists, before we go make changes to the spec, because I'm not getting the sense that everybody understands what James is worried about. So, this is Klaus. Yes, Klaus. When he brought this up, I just wondered if we really need to be able to apply this type system we have for the attributes to data at all.
So because of this, I kind of agreed with his proposal to remove this type "any" from data. I mean, you could even use the type system, our type system, for designing your actual messages, the event payload. And I don't think that's our intention. Yeah, that's actually the part of the PR that did resonate with me: the notion of just saying it's not of any particular type, it's just data, per the encoding that you specify through data content type and data content encoding. Exactly, yes. Yeah, but it's the rest of the PR that kind of lost me. Well, yeah, I was only following in the beginning, when he was trying to remove this notion of data as an attribute at all. That would also make sense, but it would be a big change, so. Okay, so for you guys that seem to really understand what was going on here and why it's problematic, is there anything that you see in this PR that we should look at and say, yeah, at least this part makes sense, or do you think the whole PR isn't quite appropriate? Someone's, you know... Oh, yeah, that's me, sorry. That's okay, it just sounded cool. Okay, so, well, I definitely don't wanna close the PR without him being able to talk to it. Is there anybody who advocates for this PR in any way? I advocate for the data payload not being any type, and not constructing data payloads out of our type system. Okay, so Clemens or Tim, since you guys were speaking about this, what do you think about that aspect of it? Is there some sort of change in that space that might make sense to make? Sorry about the bells in the background. I think it's okay, I think it would be okay to say, you know, I have no strong objection, let's put it this way, I have no strong objection to saying that the type system doesn't apply to data, and that data needs to be, you know, binary or string or special, but then you're quickly back to the normal type system, because ultimately what the type system is, is strings and binary. Well, do we actually have to say what it is? Can't we just say something along the lines of: data is in whatever format you specified via the data content type and data content encoding, or, if those aren't there, then it must match the rest of the payload, which in most cases is probably JSON, and it's up to you to make sure that it aligns with it. And we don't necessarily have to say what type it is. Or is what I'm suggesting not making any sense? For me, having some constraints on it makes more sense than removing constraints, and that's what that would be. So as long as you have some level of constraint; and we have the type system right now, which puts some constraints on formatting. We talk about strings, we talk about binaries, and we talk effectively about particular formats of strings that map to data types, which is per se not terrible. And we say for data: data can be any of those, primarily binaries and strings, probably, but it can also be maps. So if you have structured data, you can go and express that, if you stay within the limits of the type system that we have, and that doesn't hurt. That's a set of constraints. And we have made some special provisions for JSON so that JSON in its entirety can be represented. And I think we have, I'm not sure we, no, we don't have the same thing even for AMQP. But we might do something similar. If someone comes up with a CBOR format, then that might have a similar provision which allows the full CBOR type system, or if someone comes around with MessagePack, it would have a similar provision.
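As a rough illustration of the constraint Clemens is arguing to keep, here is a hedged sketch of a check that a data value stays within the abstract type system. The exact type list is an assumption pieced together from the call, including the earlier point that boolean was not a defined type at this stage:

```python
def within_type_system(value):
    """Rough sketch: is this value expressible in the abstract type system
    (strings, binary, integers, and maps of string to value)? The type list
    is an assumption from the discussion, not quoted from the spec."""
    if isinstance(value, bool):          # per the call: no boolean type defined
        return False
    if isinstance(value, (str, bytes, int)):
        return True
    if isinstance(value, dict):          # maps: string keys, valid values
        return all(isinstance(k, str) and within_type_system(v)
                   for k, v in value.items())
    return False

print(within_type_system({"user": "abc", "count": 3}))  # True
print(within_type_system({"flag": True}))               # False: bare 'true'
```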
But we have some constraints around it, and I'm worried about removing constraints to say data can be anything, because then the implementation can also be anything. So that's why I'm worried about saying this can be anything. I apologize, I feel like I'm missing something. Can you elaborate a little bit on why it's bad to say it can be anything, as long as it is encoded according to the data content type and data content encoding? Because then, effectively, you have to treat it as opaque and binary at all times. But what if the content type is JSON? Why would you treat it as binary then? Right, I mean, it seems to me that data can be pretty much anything, because it's whatever business logic you have. And so as long as you can encode it per what you say it is, JSON, ETF or whatever, as long as it adheres to that and adheres to the format of the overall envelope structure, then why do we care what type it actually is? Since to us, at the spec level, it really is opaque, right? Well, the only change we would make then, though, is really to leave the data type open. Right. So of this entire change, and that's the previous one, right? Yeah. Of this entire change, the only thing we would do is to say "unspecified," or even delete that line. Yeah, I think we'd have to say something like: it's unspecified, but its serialization has to align with, you know, the content type or the encoding type, all the other things that are specified that tell us how to decode it. Or, if those aren't present, align with the rest of the envelope. Yeah, it's event format specific, or actually, it's event-format specific. Yeah. So, Jim, your hand's up. Yeah, it was just, the more I look at that original issue, and that's what I've been trying to type in the chat, both of those examples that the guy put in the original issue, I believe, are valid. Could you go back to the issue? Yes, I can, I think. So, yeah, I think I understand where this guy's coming from, because, I mean, if you look at, for example, the Protobuf spec, it defines a cloud event as just a map of either strings, bytes, ints, or maps, yeah. So what's going on here in this example is that we're right in that looking at this in isolation doesn't make any sense. But if this is a structured cloud event plus JSON content type, both of these are valid. The first one just resolves to JSON explicitly. The second one, we would expect there to be a data content type attribute present as well, which also said JSON, yeah. But I wouldn't expect an SDK to naturally demarshal it into a JSON value. That would then become an application concern. So they're both valid forms. In the back of my head, I can see what this guy's getting at, but I don't know if it's a worthwhile argument. So, Jim, I want to make sure I understand: when you say the second one, you're talking about the one that says "data" colon and then the entire thing is put in quotes, right? That's still valid. Well, I guess it's valid, but isn't the value of data in this particular case a string? It is, but it would have a data content type of application slash JSON. Yeah, you can't say it's valid without telling us what the data content type is. Yes, exactly. And I think when this issue was raised, data content type wasn't even defined. We didn't have that, yeah. Right, but I just want to make sure I understand, because if data content type on this second example here is application JSON, I don't believe this is valid. No, it's not.
It is valid, but it's a string. It's valid, but it's weird. Yeah? No, it's not valid. No, it is valid. It certainly is valid, but it's a string. Correct. If you encode it as JSON, if you decode that, it's still a string. It doesn't change into an object. It's just a JSON string. Yes, the content type is the structured cloud event JSON, you know, in the HTTP header, and the data content type attribute would say application JSON. Yes. And if it does say that, then the second one is valid, but it's a string. Yes. It's not an object. Yes, okay, great. Okay, okay. To be pedantic, it's a valid JSON text, as that term was defined in the RFC. Correct. Yes. That's... So we vehemently agree with one another. I don't think... Totally. It's not an issue. Okay, so this has been very useful, I don't know about for you guys, but at least for me. And I feel like maybe we're coming up with a possible solution, or alternative solution. Clemens, would you be willing to do two things for me, aside from sharing your screen? I know it's a huge ask, thank you. One, comment back to him on why his original premise may not be correct, given all the new attributes we have. Yes. And two, propose some changes to the definition of the data attribute to do everything we just talked about. Basically saying it could be any type as long as it's encoded properly per the other attributes, blah, blah, blah. Yes. You'd be willing to do that? Yes. Cool, you're on the recording. Can you make notes, though? I will do that. In the notes, just for me to... Thank you very much. Okay, cool. I feel like I made some progress there. Hopefully James is okay with the direction we're gonna go, but we'll see. Next up is Evan's. Yeah, Evan's. I don't believe Evan is on the call, unfortunately, so we're going to be guessing here. Actually, Scott, you might be able to talk to this one. Scott, do you understand Evan's concern? I do. Oh, okay, go for it. Yeah, go ahead. But Scott, go ahead, if you want to. Basically, like, the explosion of maps inside of binary HTTP mode doesn't make a lot of sense in most cases. Yes. So we have a concrete problem in AMQP, which I can't get over, because we have a... But the same is true for HTTP. Effectively, HTTP only allows strings as header content, and AMQP allows multiple types, but they can only be simple types, which means no complex types, no maps, no arrays, in application properties, which is the analogous thing. So we still allow maps in our context attributes, and that creates a mapping problem. For HTTP, since everything is text and we have this slight bias towards JSON, it seems relatively straightforward to go and encode that as JSON and then stick that into an HTTP header. In AMQP, that starts to get super ugly, because I can't put a map into the application properties. And the only workaround, and we've discussed this amongst the AMQP folks (we can't amend the spec, that ship has sailed), the only real workaround we can think of would be to stick JSON into the attribute, and that then becomes really weird, because we have our own type system, and all of a sudden you have to use a different type system, and that's all super ugly. So I'm actually in favor of this.
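To illustrate the problem being described, here is a small hedged sketch of the two escape hatches mentioned on the call for a map-valued extension: smuggling it through a single header as a JSON string (which drags a second type system into the attribute), or flattening it into several simple-typed, prefixed attributes, traceparent-style. The extension name and values are invented for illustration; only the ce- header prefix comes from the HTTP binding:

```python
import json

# A hypothetical map-valued extension attribute named "acme".
ext = {"region": "eu", "tier": "gold"}

# Workaround 1: HTTP headers and AMQP application properties only carry
# simple values, so the whole map travels as one JSON string.
headers_as_json = {"ce-acme": json.dumps(ext)}

# Workaround 2: flatten into separate simple attributes sharing a prefix,
# the way the W3C trace context splits one logical structure into headers.
# Broker filters and selectors can then address each value directly.
headers_flattened = {f"ce-acme{key}": value for key, value in ext.items()}

print(headers_as_json)    # {'ce-acme': '{"region": "eu", "tier": "gold"}'}
print(headers_flattened)  # {'ce-acmeregion': 'eu', 'ce-acmetier': 'gold'}
```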
The other reason why having complex types, that is, maps, in context attributes is difficult, is that most of the things you want to do with these context attributes involve driving infrastructure through them. So you want to go into filters, et cetera. And filter languages, such as what you use in JMS with SQL, or any kinds of other prefix or suffix expressions you want to go and run on entities, they don't compose well with complex types. For instance, if you wanted to go and map a cloud event to AMQP and then run that through an AMQP broker, and then use whatever JMS expression, or the new filter expression that we've defined for AMQP, you wouldn't be able to get into these complex properties. You can't navigate through them, which means you can't really do anything with them. So it is better, if you have an extension and that extension needs to define four, five, six, seven different things, that those should be four, five, six, seven different attributes, rather than one attribute with multiple things inside of it. So Clemens, can you show his changes and talk to them? I've got to be honest with you, I got lost on why he's making a change and why it's the right thing to do. So I agree with the spirit of this, but there's this "any context" distinction that I'm not bought into, so this is not something I think we can go and approve now, because I haven't really understood that yet. Okay, because to be honest, when I first read his issue and that PR and stuff, I kind of got the impression of, oh, he's gonna try to get rid of maps, but he didn't. He kept maps and created a brand new type called "any context," and I got completely lost as to how that solved the problem. Yeah, this needs work, and I haven't spent the time on it, because it's relatively new, I think. It's three days old or something like that. Three days old, yeah. I haven't looked at the details of this yet, but I agree. Since this comes from the discussion that we had about those attributes, I'm sympathetic to the change, but the details need work. Okay, Tapani, your hand's up. If we want to not use the map type for context attributes, which is starting to sound quite sane, unlike what I thought at first, can we just remove it altogether? What would we use it for afterwards? Yeah, that's right. I don't think there's much of a reason to keep it then. And that also solves problems with the data. It can still be any type afterwards, because it can't be a map. Correct. That simplifies matters. So if we just throw out map, then that would indeed help. So, I apologize, I'm confused. If we throw out map, how does someone define, say, an extension that is basically a structure? You can't, because the extension is an attribute, and the attribute can't have map as a type. Because the extension is just metadata, and the metadata needs to be evaluated by infrastructure. And if the infrastructure most commonly doesn't know how to deal with complex types, which is how things are, then that extension probably won't help you. So the extension, instead of defining one structure, it should define two, three, four fields. Like what we do with traceparent, trace context, for the tracing, right? Conceptually, that's one structure, but it's been separated out into headers.
And that's what happens throughout; like, the W3C trace context spec defines those two fields, and they always go together. Okay, so personally, I'm okay with that direction, because I do think it makes life easier. The reason I'm bringing it up, though, is because I don't wanna open up sore wounds, but when we had the entire discussion around extensions, one of the things we talked about was: oh, if someone wants to have a complex extension, basically a structure, that's okay, right? Go ahead and create a bag, in that sense. Remember that discussion? And inside that bag, you can put everything you want. And at that point in time, we did say, well, it's up to you. You could have a choice. You could either create a bag, or a structure, or you could put all the things that would be in the bag at the top level of the attributes and prefix them with the word foo, right? Everything starts with foo. They're all semantically in a bag, it just doesn't look like a bag. And everybody was okay with that. And now what we're basically saying is: nope, sorry, we're gonna get rid of the option of doing bags, and if you want to conceptually group things together, do a fancy trick like having all your extensions start with the same prefix, right? Yeah, but we started with a world where we had two bags, and we needed to have a way to express bags. Then we effectively banished the bags completely out of the core spec and relegated the bags to the extensions. And I think what we're saying now is, well, you know, we already banished them here, so let's banish them everywhere. And I'm not saying I'm not okay with that. I just wanna make sure that everybody understands the ramifications here. And as for the opposition against that: since we've now evolved some and we've started to implement, I think that there were theoretical objections against it, and now that we've written some code and seen that stuff works even though we don't have bags, I think it's okay to go and take that step now and get rid of them altogether. Okay, so, Jim, I'm gonna pick on you. Oh, good, your hand's going up. But one question, or one statement: technically, someone could still support bags. It's just that they would have to do something like encode it as a string. And from our perspective, it looks like a string, but they can still decode it as a structured map or whatever they want, correct? Sure, why not? Right, just wanna make sure. Okay, so, Jim, you're first, and then I'll hit Tapani. Yeah, I mean, I understand the driver for this. If there was any way we could keep this sort of bag concept for extensions, I think it's quite powerful, even if we limit it a bit more. Because I think that may have been Evan's original concern: that the way it was originally defined, I could create an extension which had numerous deeply nested maps. If we could sort of come up with a more limited definition of an extension, so that we could still create a map, but maybe just a map of strings or something, then we could still create some level of encapsulation. That would be my wish. Okay, Tapani, you're up next. Yeah, originally I was also going to note that the bag idea is going to be thrown out the window with this one, but I think that there are a lot of different workarounds to having maps. It's not just prefixes or just two fields; there are also, like, oneOf-type situations where you just come up with limited lists. There's a bunch of workarounds. None of them are as pretty as maps, or as powerful, and that's what bugs me.
And I think that's what Jim was talking about. But on the other hand, trying to create something that's not maps but solves the same problems feels like that's going to be an infinite rabbit hole that we never get out of. And it doesn't really serve the interoperability that much, because it's something new we'd define. Yeah. My main concern is that they cause trouble in mapping onto transports. In HTTP, you have to put them into a string. In AMQP, we have no choice but to also put them effectively into a string, or the AMQP transport would even have to go and do its own encoding trick that would then have to preserve the map in some way. And if we just say, you know what, if you want to have structured contents and you write an extension, well, define in that extension how you want to have that structured content serialized. And if that's a JSON string containing JSON, well, that's fine. Okay. So we're running a little long on time here. I don't think we're gonna come to a conclusion here. What I'd like to do is continue the discussion in the PR itself, but it sounds to me like we have a couple of different options. One is keep things as they are. Another is go the way of Evan's pull request. Another option is to keep maps but only make them one level deep. Another option is remove maps entirely. Is that correct? Are there other options I'm not thinking of? Okay. Okay. So what I'd like to ask is: everybody, please think about this. I'll add a comment to the PR talking about our discussion here, as best I can summarize it. And then please put your comments in the PR itself, and not just on Evan's exact change, but also on the direction you'd like to go, or why you might have concerns with some of the other proposals or other options available. Because I think this is kind of a big one, and it feels a little bit like we're reopening the extensions discussion slightly, which actually is not unexpected, because we did say we might revisit some things based upon actual real-world experience, and this is exactly what this is. So from that perspective it's good, but it's bad in the sense that this could rattle us if we want to wrap this thing up quickly. So please don't be shy, put your comments out there, okay? We don't have time to, say, dive into anything too deep. So are there any other comments or questions we want to make on this one? Because we do have another couple of minutes, if we really want to continue this. Can we also allow extensions to specify encodings per transport? Yes, we do. Yeah, so that gives a lot of flexibility to the extension. Yes. Okay, good point. Okay, Jim, let me pick on you just for a sec. It would be useful if you could make your comment in the PR directly as well, when you get a chance, so people understand why: not just that you want to use bags, but why you feel like that would be better than just doing a prefix kind of thing and keeping everything flat, you know, that kind of stuff. Okay. Okay, appreciate that. Thank you. All right, with that, is there any other topic people want to bring up that's not having to do with Evan's PR, since we have another three minutes? Okay, in that case, Mehmet, are you there? Mehmet? Yes, I'm here. Oh, there we go, okay, gotcha. Anybody else I missed for the attendance list? Okay, so since we're not meeting for two weeks, even more so than normal, because we are trying to wrap this thing up, please, please, please look at all the open issues, and PRs in particular, and comment on them.
Don't just assume, I don't want to just assume that no comment means you're okay with it; please give an explicit LGTM on there, because I need to know how you guys feel about this stuff, and that will help move the discussions along. Obviously, if you do have questions or concerns, please bring those out, because I don't want those two weeks to go by and not have a whole lot of work done. I'd like to see if we can resolve as many of these things on the next call as possible, because I know a lot of you are anxious for us to wrap this thing up. All right, and a reminder, we will have an SDK call in two weeks, after the normal CloudEvents call, okay? Anything else people want to bring up? By the way, this has been Eric hiding behind the 206 number. Oh, Eric, thank you. I'm sorry about that. We were going through the roll call quickly. Sounds good, all right, thank you. Anybody else that I missed? All right, cool. In that case, Clemens, thank you very much for sharing your screen. I appreciate it. Thanks, everybody. Yep, and we'll talk again in two weeks. Everybody in the States, at least, have a good holiday next week. All right, bye, everybody. Bye. Thanks for the crystal-clear Microsoft quality. Thanks for complimenting him but then indirectly picking on me. I know it. All right, bye. Okay, bye, guys. Bye.