I was wondering when you're going to get a mic. Sometimes it takes a while. Yeah, how's it going? Good. All right, three after. Why don't we go ahead and get started? Let's see, jumping right into it. Community time. I think we have one person joining the call just for this. It's a time when people who don't normally join the call are able to bring up topics that are not on the agenda but that they think might be important. So does anybody on the call have a topic they want to bring up? All right, in that case, moving forward. SDK stuff: nothing much to say here other than we do have a call right after this one, and the main topic for today's SDK call is Clemens's PR on the SDK document itself. So I expect, obviously, Clemens, you to be there, Scott, you to be there, and anybody else who's interested in SDK stuff. So just a warning and reminder for people. Incubator: the proposal doc, or slide deck, is out there for people to take a look at. I think for the most part it is ready to go. Mark was wondering whether we needed a "what is CloudEvents" and status section in it. I did check with Chris Aniszczyk, and he did suggest that we include that in case it's necessary, so I included it as backup material. I think for the most part the deck is ready to go. What we're missing, though, is the list of end users who are actually using CloudEvents. I believe we only have one right now. So if you have customers that are using CloudEvents and they're willing to have their name mentioned publicly, please let me know, because we can't go forward without at least three end users, so we need at least two more. Get that to me offline if you have some people you can mention. Yeah, Doug, I'm still working on that from Adobe. Yep, cool, thank you. And I think Oracle might be working on it as well, so hopefully we'll meet that bar soon. But anybody else, feel free to mention it.
The more we have, I think, the better it looks, so don't stop at three. Cool, thank you guys. Okay, nothing here to discuss for V1, other than the list is going down, but not as quickly as we'd like, because we did want to wrap this thing up within a matter of weeks, and that was already several weeks ago. So please be reviewing these PRs offline as best you can so we can try to resolve them. Let's jump right into it, though. Kristoff, are you on the call? I don't see Kristoff, unfortunately. Okay, so this PR right here: he just wanted to make some clarifications around batching, and I believe the bulk of the change is this sentence right here. I want to make sure you guys are okay with that. It seemed pretty safe to me, since it is just in the primer. Any questions or comments on this? I'll just note that this bit up here is also new, but it seemed relatively minor. Any questions or comments? Looks good to me. Okay, anybody else? Okay, any objection to approving this? Excellent, cool, thank you guys. All right, Evan is not on the call. So, tell you what, can we pick on Clemens? Because I think you've had some input into this one. Yes. Maybe you could talk to why this change is needed, and the change itself. So we have a problem in a few places with maps in attributes. Specifically, when we have the binary modes where we explode the message onto transport frames, we need to be able to take the attributes and map them into transport headers. In HTTP, a header value is only a string, and conceivably you could use a JSON encoding there; that's something everybody would feel comfortable with, because it's a string and everybody's favorite text encoding for such objects is JSON. In AMQP it gets a little weirder, because in the AMQP application properties, where we map these to, it's explicitly not allowed to use a map; you can only use simple types.
The reason for that, and this also brings us to the point of why it's not so useful to have complex types or maps in there, is that in AMQP land, and we see this also with the newer event brokers like what we have in Event Grid, you have filter expressions of some sort. These filter expressions pick up the metadata and allow you to compare it, et cetera, and have logical operations across it. But it's really difficult, if the content is complex, to navigate that content in those expressions, so usually that's not supported. So if we map a cloud event onto AMQP and we run this through a normal JMS broker, it's not gonna be possible to poke into the complex data that sits in the metadata. It's just gonna be unintelligible, because there's no JSON parser in JMS. So it makes sense to constrain those fields to simple types for those reasons, and also because it gets a little weird when you need to resort to an encoder like JSON for encoding that data in there. So that's why we wanted to exclude map from the permissible types. And instead, and we have precedent for this, if you need multiple fields that are related for a particular purpose, you just make multiple fields. So the tracing support that we have in the extension has traceparent and tracestate. That's two separate fields, even though they are conceptually a structure, because HTTP doesn't support anything this complex; the two always go together as separate headers, and you see that in all the other transport mappings. So that's how that should happen. A map is then retained for the data attribute, and that's a wholly different discussion that we're having in a different PR. Okay, thank you. I wanna pick on Jim, because I know you have, I think, strong opinions on this one. You had your hand up on this one, Jim. So I understand the problem, and I think I've commented on this one.
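The constraint Clemens describes can be sketched roughly like this. In binary content mode each context attribute becomes its own transport header, so attribute values have to be simple types, and related fields such as traceparent and tracestate stay separate attributes. The helper and field values below are purely illustrative, not taken from any actual SDK:

```python
# Illustrative sketch only, not from any real CloudEvents SDK: in binary
# content mode every context attribute becomes its own transport header,
# so values must be simple types that serialize to a string.

def to_binary_http_headers(event):
    """Map CloudEvents context attributes to HTTP headers (binary mode)."""
    headers = {}
    for name, value in event.items():
        if name == "data":
            continue  # data travels as the HTTP entity body, not a header
        if isinstance(value, (dict, list)):
            # The case the proposal forbids: a map-valued attribute has no
            # clean header form that an intermediary could filter on.
            raise TypeError(f"complex-valued attribute {name!r} not allowed")
        headers["ce-" + name] = str(value)
    return headers

# Related fields stay separate simple attributes, as the tracing extension does:
event = {
    "id": "A234-1234-1234",
    "type": "com.example.someevent",
    "source": "/mycontext",
    "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
    "tracestate": "congo=t61rcWkgMzE",
}
headers = to_binary_http_headers(event)
assert headers["ce-traceparent"].startswith("00-")
assert headers["ce-tracestate"] == "congo=t61rcWkgMzE"
```

A broker filtering on `ce-tracestate` can then compare a plain string, which is exactly what the filter-expression argument above is about.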
I love the idea of keeping related items together. And I guess when I look at some of this stuff and then ask, okay, what's this gonna look like in a structured JSON cloud event, where stuff isn't being jammed into headers? It all started to get really messy and somewhat unstructured to me. So that's why I was proposing, okay, can we come at this from a slightly different angle where we allow for a very thin map? So not maps of maps, maybe not maps of anything but more constrained dictionaries, so that we can use those constructs. Because it seemed to me a little bit like the issues with transport bindings were the tail wagging the cloud event specification dog. And that's not necessarily a bad thing, but it seemed a little extreme to then do away with a lot of these structural elements. Okay, anybody want to comment on that? Hi, this is Vladimir. I have one concern. We have seen in various other technologies, where the space is flat, that what people tend to do is try to create namespaces artificially by using strings, maybe using dot or colon as a separator. Eventually that leads to fairly long strings, and if I recall correctly, the maximum length for the key is 20 characters. So soon we will run out of space, and then people will start abbreviating, and the readability will decline. That's my key concern. We've seen that in various other technologies and would like to prevent it. Thanks. Just one more comment, actually; maybe I misinterpreted what Clemens was saying. The tracing extension we have, interestingly: if we have tracing in the tracing extension in the cloud event, we would also have the W3C headers in the HTTP headers as well. So we'd actually be duplicating stuff. It's not like the extension is gonna remove the need for other stuff to be in the headers. So I mean, you still need that namespacing of things that are defined in cloud events.
Let me expand on that one, since you raised it. So yes, it's true that you would have both of them. We actually made this a feature in the official W3C AMQP mapping. Since the message is immutable, you send a formulated AMQP message, and the traceparent and tracestate sit in the application properties. If you want to mutate the trace information on the server, then in AMQP you do that explicitly: you have to move it into a message annotation. Here we're already separating those things out, so effectively the cloud event tracing information is end to end. And then on the HTTP path you get the same tracing information; you might actually replicate this out. That's something where I don't know whether the binding actually specified an override for this or not. But it's effectively a feature: it allows preserving the original information from when the cloud event was published, using that as end-to-end tracing information, and then using a separate context for the HTTP flow. Right, yeah. I think as part of this thought process it suddenly struck me that maybe in that tracing extension we need more guidance as to how SDKs are meant to handle this stuff. But that's a completely different subject from flattening maps, yeah. So we've been talking around, I think earlier, whether we could, and this was when we still had the various extension bags; we had two discussions about bags, which I think Doug remembers fondly. And there we were thinking about how we could resolve that for headers, like how we can project those things into headers. Let's say we allow the simple thing where you can have a map for an attribute, but you constrain it such that you cannot have maps inside of maps; then you have a one-level construct.
And then you could conceivably say, okay, there's a mapping here where we take the name of the context attribute, and then we take the name of the element that's inside of it, and we concatenate those in some way, and that's how we map them into headers. What gets a little strange about this is that if you are routing this through infrastructure, through intermediaries which have filtering capabilities, you now need to know those construction rules. Like, you literally need to use them; you can't just take the event and key off fields. You actually have to say, oh no, that's the property-dash-this. You have to be keenly aware of exactly how the mapping works. Yeah, that's absolutely true, but you'd have to know that anyway, because you'd have to know that all the cloud event stuff gets prefixed with ce- in the first place. So you're always gonna have to have some knowledge of the way these things are mapped at the transport level. Yeah, I'm just not sure it's worth the trouble. And back to the namespacing discussion: I think in the beginning of XML, because this is a good thing to look back into, when the namespacing stuff came around, like what, '98, everybody was kind of in the belief that everybody would be making these grammars and they would all be jammed together and collide in these giant documents; you would have one document that would consist of elements from 500 different schemas. And in practice, that never happens, or rarely ever happened. I mean, there are weird outlier cases, but typically you have documents which have a main schema and then pick things from two or three other schemas, with the collision risk being very low.
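The construction rule being discussed, concatenating the attribute name and the map key into a single header name, might look like this hypothetical sketch. The `ce-` prefix matches the existing bindings, but the `-` separator and the helper itself are assumptions for illustration:

```python
# Hypothetical construction rule for flattening a one-level map attribute
# into individual headers; the "-" separator is an assumption.

def flatten_map_attribute(attr_name, mapping, sep="-"):
    headers = {}
    for key, value in mapping.items():
        if isinstance(value, dict):
            raise TypeError("maps inside maps are not allowed (one level only)")
        headers[f"ce-{attr_name}{sep}{key}"] = str(value)
    return headers

headers = flatten_map_attribute("dougs", {"id": "42", "kind": "demo"})
# The catch raised above: an intermediary filtering on this event has to
# know the construction rule to recover that "ce-dougs-id" came from
# attribute "dougs", key "id" -- it can't just key off a field name.
assert headers == {"ce-dougs-id": "42", "ce-dougs-kind": "demo"}
```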
And I think for events here, we're mostly dealing with, you know, a main event which then probably has some application-defined extensions and some of the standard extensions, but I'm not sure the collision risk is really that significant for events. All the experience from the XML, SOAP world suggests that the collision risk is not necessarily worth the trouble of having namespaces. If I was doing all of that stuff again, I would probably do away with namespaces altogether. So, Jim, I got a question for you. If we were to have some constrained version of a map in there, would you still want each individual property of the map to be serialized as an independent header in the HTTP case, or would you be okay with the entire map being serialized as one long string? I must admit, I thought our existing HTTP transport binding would map them into separate HTTP headers. It would. That's why I'm wondering whether you're asking to keep that, or whether you'd be okay without it. No, no. And I guess that was another angle to what I was proposing: it didn't actually change the transport binding space. They would still work exactly the same way as they do today. And it really becomes guidance, or, I don't know where you'd put enforcement, to say, well, if you've got an extension, then you know what? You can only have one level of attributes or properties in that extension. Right. Okay, so I'm not quite sure. Do you have a use case for that? Well, I mean, we have extensions today; like, tracing has two attributes. Tracing has two attributes, but they're mapped specially, for HTTP even, so. Right, but they would still travel as CE extensions as well, yeah. And the sequencing one. The sequencing one. Well, I thought they only traveled as the other headers. I would assume that the SDKs don't care; they don't do anything special. They just say, oh, it's an extension, so I'll marshal it.
The C# SDK actually handles them with an override. Like, I have a facility to do the overriding, and then they don't show up as ce- headers because there's an override. And they need to be mapped to specific HTTP headers for the generic tracing mechanisms to work. There are mappings for those, yes. Yeah, and I think that was my original point. I'd sort of assumed that they would end up in two places: in the W3C spec header and a cloud event header. And I think that's what Clemens was touching on earlier, that maybe they're carrying slightly different levels of information. I don't think they do. I think we only define one. Which one is it, it's tracing? Yes, tracing. Yeah, so I think it just appears there. I don't think there are any ce- headers at all. Okay. I would be concerned if there were ce- headers, because then what happens if they're different? Exactly. Yes. Well, so there's a scenario here that I just laid out. If it was the case that there were duplicates, then if you are running tracing through a proxy, so you're running this through an HTTP proxy, and the proxy is choosing to mutate this. Yeah. Then it can, but the ce- headers still give you a way to do effectively end-to-end tracing which is blind to all the HTTP transport stuff. Effectively, at the point where you publish the event over HTTP, you're splitting up your context and you're giving one path which is inclusive of all the HTTP stuff and one path which is purely end to end, which doesn't care about the HTTP stuff at all. But at the far end, you have a set of cloud events attributes. Yes. Which one is the tracing header on that cloud event if you've got both? Meaning, which are you gonna map back? Yes. I think the end-to-end one is the one that you pick up.
I think that's probably going to make your tracing sad, because you'll have these tails going off where the HTTP stuff happens. Like, nothing happens inside that trace, and you have to go back up to a different level to pick up the trace. So I don't want to go too far off track here. I know it's related, but I don't want to go too far off track; maybe that's a separate topic. If we look at sequencing, sequencing has two attributes. And I know within PayPal, we were looking at our own internal extension where we would carry potentially multiple contextual elements around as well. Were you gonna do filtering on that, or just pass it through? No, no. This is purely, like, end-to-end security and tracing, our own internal tracing. So you still have a string that serializes a JSON object, like a JWT does? Yeah, yeah. And I think all we're seeing. You need to know about it. Yeah, yeah. All we're seeing, I think, is the same problem rearing its head here that we see today with our current transports. Yeah, and I think we were hoping that we could be more transport agnostic. I'm not explaining myself very well, but to prevent all of our framework code from having to scurry around in header properties of different transports to try and reconstitute contextual elements, we were hoping we could use a sort of bounded collection of things to hold those. But I may be arguing myself into a corner now. Is there any reason those have to be cloud event extensions as opposed to some place in the data attribute? Because I think we don't see them as things that the applications produce or are responsible for. Yeah, so the generation of the event is an application concern, but decorating it with contextual security or whatever is more of a framework concern in our world. Okay. So if the frameworks have to start scurrying around inside the business payloads, then we've sort of tripped over a bit somewhere.
So I hate heading down this path, but sometimes it's the only option available. It seems to me that going forward without maps and then adding them later is doable, whereas starting out with maps and removing them later is a breaking change. Yes. And I'm wondering how bad it would be if we started off with no maps and then waited until people yelled at us. And that's perfectly fair. And I'm very aware that I'm sort of a lone voice in the wind at the moment. Unless everyone else is silently agreeing with me. Which is always possible; you never know. We do have a quiet group sometimes. And I don't want to force the decision on this call, because I do think it is a very big decision. And I suspect Kathy, were she on, would actually have a strong opinion on your side as well, Jim. Oh, okay. Just so you know, I don't think you're alone. But I'm trying to figure out a way out of this, because I don't want it to come down to just a formal vote; that's the worst way to make a decision on this stuff. But at the same time, I'm not hearing any other compromise proposals being put forward. So Jim, let me ask you this. If we were to go forward without maps as of right now, do you think that by the time we go 1.0, because we are gonna have a sort of testing and validation period, do you think by the end of that time you'd be able to come back with a definitive "not happy, but I'm okay with it" answer, or a "can't live with it this way, and here's why," with a concrete example of where your things fall apart? I'm just wondering whether that would give you enough time to make a stronger case one way or the other. Okay, let me put it this way. I don't wanna hold this up, because I think there's a lot of pressure to get stuff out. I would like somebody to sketch out how extensions do work in this model. How we manage, I won't use the word namespacing, how we protect the property names or the attribute names for extensions.
And just how that mechanism works: if we're not properly encoding them and protecting them in some way, how do we stop extensions colliding? So I think the answer to that is we don't. And I think you have the exact same problem even if we do allow maps. HTTP is a super, super complicated protocol, or sorry, well, not complicated, but it's super widely used for all kinds of different scenarios, and there is no central registry. No, no, I get that. But at the moment you've sort of created a safe space where I can, to a certain extent, extend without colliding. Although it's a bit ad hoc, granted, but there is a mechanism. But I'm not sure that's true, though, Jim, right? Because I could create a property called dougs, and inside there I can create another property called id, and then it gets serialized as ce-dougs-id, right? And you think, okay, that's relatively unique. But there's no stopping someone else from doing the exact same thing. The other Doug on the call, for example, right? He could do the exact same thing. And I would claim that the odds of us colliding when we have maps are just as great as if we just said use prefixes, because if I was to prefix this thing, I would probably end up with the exact same header. I would still call it dougs, or ce-dougs-id. Right, but, and maybe we're taking too much time with this, but if two different extensions both had a thing called id, yeah? At least in the current model, dougs is protected from PayPal's, yeah? Because they're contextualized, yeah? And that's really what I'm driving at. How do I get contextualized? I could have sworn we had text someplace, maybe it's in the primer, that says when you come up with an extension, make it somewhat unique and descriptive so that you don't use something as generic as just "id." I think we have text like that someplace, but if not, we should definitely add it. Would that kind of thing help you in some way?
You know, kind of implying you should namespace your extensions so that you try to avoid collisions? Yes, okay. Okay, I can double-check, but I could have sworn we had that text someplace, and I'll look for it while we have other discussions going forward. So, not to throw a wrench in here, or rather the opposite of one: I believe that we took hyphens and underscores out of attribute names, attribute keys, because of this mapping in HTTP that made things ambiguous. So we might be able to allow underscores and translate them to hyphens in attribute names now, which would give you a nice way to do PayPal-ID, for example, or PayPal underscore ID, in your attribute, and use PayPal as a prefix that hopefully other companies aren't using. True, I think you're right. It makes it easier from a human-readable perspective, but I think even if you didn't have any special character as a separator, you'd still end up in pretty much the same boat; it's just not as readable. You also don't have a natural delimiter character right now in attribute names, which is kind of frustrating. Exactly, right. From a human-readable perspective, or even from a parsing perspective if you want to parse them out that way, I agree. But from a pure technical perspective, I think you're still in the same boat, even though I do agree with you it would be nice to add them back in, yes. Yeah, I'm worried about the humans here. Yeah, got it. Computers, they'll sort themselves out. Yes, humans. Okay, so tell you what. I think we have a potential way forward here. Let's take that offline, and I'll take the action item to write up something to try to come up with that compromise proposal that I think we're sort of dancing around. Then we can review that during the week and try to resolve it next week. Does that sound fair? Okay, I don't hear any complaints. I do think we're circling around something possibly good here.
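The underscore-to-hyphen idea floated above could be as simple as the following sketch. This is purely illustrative; at the time of this discussion the spec allows neither character in attribute names, and these helpers are not from any SDK:

```python
# Purely illustrative: translate underscores in attribute names to hyphens
# in the HTTP header form, giving a readable company-prefix convention.

def attribute_to_header_name(attr_name):
    return "ce-" + attr_name.replace("_", "-")

def header_to_attribute_name(header_name):
    # The round trip is only unambiguous if hyphens themselves remain
    # disallowed inside attribute names -- the original reason both
    # characters were removed.
    return header_name[len("ce-"):].replace("-", "_")

assert attribute_to_header_name("paypal_id") == "ce-paypal-id"
assert header_to_attribute_name("ce-paypal-id") == "paypal_id"
```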
All right, Clemens, do you think you could summarize where you and James left things off, in about five minutes? Clemens, unmute? Yes, I was on mute because the church on the corner of the block has started ringing its bells, so I need to insulate myself a little bit from that. Okay, so, yes, so on our, what's that, 470? Yes. What was the original issue, 456? Yeah, something like that. 457. Well, that was the original PR. Yeah, that was the original PR. 261 is the original one; he was complaining about the JSON and data. Yes, okay. So we discussed this one last time. Can you get the other one, 457? Can we take a look at that? Hold on, you're asking a lot of me. Just, yes, that. I'll look at the text, or the changes. The conversation. Okay, which one? I forget where he made his points. Okay, well, I'll talk through it without looking at anything in particular. Okay. But go through the original issue. Oh, the original issue, this one. Okay, okay, this one. So we had a lengthy discussion about the relationship of maps and strings and binaries and data, et cetera. James pointed out a few inconsistencies, and they're caused mostly by the rules being too relaxed. And he said, like, if I just have a structure like this, and that's where we were last time when we were discussing this, like data foo equals true: this is valid JSON, but it doesn't fit into our type system because we don't have a boolean type defined. And then we pointed out in the last call where we discussed this, well, if you define the data content type to be JSON, then that is valid, because then we know how to decode this; there's a pointer in there. So we had some further discussions about this. They're fairly long, and I encourage people to read up, not on this one, but on the previous ones, 456 and 457, and read through the discussion there on all the points.
But where we ended, where we landed, was a brief agreement that we formulated yesterday, and it wasn't in the PR, but I'm gonna read it to you if I find it; I sent it to you, Doug. And then out came the PR, but I just wanna get the summary out of our call. Trying to find it. Yeah, I have it, okay. So one thing we found in the course of this entire discussion is that if you use the binary mode and you haven't declared a data content type, then you don't know what to do with data, because you need to stuff it into an AMQP body or into an HTTP entity body, but you don't have anything to declare it with. It is strongly recommended that you define a content type, and the binary mode effectively has no event format; the entire event gets exploded out into the transport frame without any event format. So you don't have anything to go by. So if you use binary mode at all, then you must use the data content type, because there's no other way; you must pick one. That now causes effectively two paths, if you will. You have the mode where you have structured events, which are all together in one nice and neat package, where you don't need to declare the data content type, because it gets rendered natively with whatever the event format is. And you can also, you know, come out of an in-memory, let me call it InfoSet, render that into JSON, pick it up as JSON, suck it back into this in-memory InfoSet, then render it out as Protobuf, pick it up again, and that event is gonna be in itself consistent without needing any further information.
If you use the binary mode of any of the transports, then that's not so easy, because you have a binary payload, that's what we call this thing, binary, which is presumably an entity body for which you're doing this binary rendering, which means it's presumably some format that is hard to express in the self-contained format. So that's the first constraint: in the binary mode, the content type must be declared, because otherwise there's no clean mapping possible into a transport that requires that you declare the content type. And the default assumption, if you were omitting it, for HTTP for instance, would be application/octet-stream; it would then kind of default to being binary. So the binary mode of the transport must use the declared data content type. And if you have binary in the data attribute at all, then you must also declare the data content type, because then you need to effectively tell the receiver, just like you do with HTTP and as you do with AMQP, what is in that event. So you have to declare it. And if it's binary, then you also must, and that's already in our rules, use the data content encoding that says it's base64, if it is stored as a string and therefore base64 encoded. The data type of the data attribute, that's the next thing, must also follow the rules of the content type. So you can't use a media type called image/jpeg and then simply put a string into the data element; that's illegal. And the way you learn about the required type for the data attribute is basically by looking at the media type definition, effectively.
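The base64 rule described here, that binary data carried in the data attribute of a structured JSON event declares both a data content type and a data content encoding, can be illustrated with a rough sketch. The event fields shown are examples only, not a complete or normative event:

```python
import base64
import json

# Rough sketch of the rule under discussion: binary payloads carried in the
# data attribute of a structured JSON event declare both datacontenttype
# and datacontentencoding, and the data value is the base64 string.
payload = b"\x89PNG\r\n\x1a\n"
event = {
    "id": "1",
    "type": "com.example.image",
    "source": "/example",
    "datacontenttype": "image/png",
    "datacontentencoding": "base64",
    "data": base64.b64encode(payload).decode("ascii"),
}

wire = json.dumps(event)          # structured-mode rendering
received = json.loads(wire)
assert received["datacontentencoding"] == "base64"
assert base64.b64decode(received["data"]) == payload
```

The receiver learns from `image/png` that the decoded bytes are binary, which is the "look at the media type definition" step mentioned above.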
So if it's by default application/octet-stream, that mandates that it's binary, and then you can effectively go into the catalog of the media types and determine whether it's binary or whether it's text; but by default it's effectively binary, and you encode it using your default encoding or the encoding that's given to you by the charset parameter. So these are all effectively rules about the relationship of binary and data in the data attribute, and how the data in the data attribute is intertwined with the data content type, in a very similar way as is the case for HTTP. What I wanted to preserve, and so James's original suggestion, which is what I was reacting to, was to make the data attribute completely type-less. What we would lose is the ability to do this self-contained event format, because effectively it should be possible to have a data element that contains a map, even a map of maps, that contains arbitrarily complex content. And you should be able to hold that in memory inside of the SDK, or whatever implementation you make of cloud events, and have an implementation that implements the type system that we have defined. So it has a notion of an integer and it has a notion of a date. And if you look at the SDKs that we have today, we have those types, as they are idiosyncratic for the respective platforms and runtimes. So we have these in-memory representations, and then we just render them out, using the rules that we have, into the respective encodings. In JSON, lots of it ends up as string; in AMQP or in Avro, lots of it ends up as effectively the direct mapping of the types. And if you then lift it up again into an in-memory representation, you end up again with effectively the data element being a dictionary, including dictionaries of dictionaries. So that works.
So today, with the rules that we have, it's possible to carry structured data inside the data attribute and just have that natively encoded with whatever the event format is, which also then allows transcoding. If we wouldn't allow map, and if we forced you to have a data content type at all times, we would lose that ability completely. If we would say you must use the data content type and you must set it to, let's say, application/json, that's fine if the outer event format is JSON. But then if you send it to someone, and that next party wants to route that event over a different transport and wants to use a binary encoding like Avro, with that declaration of the data content type application/json you're now forcing that renderer, effectively, to encode the content as JSON and carry it as a string inside of an otherwise more efficient binary encoding, because that's what the rule is. So by omitting this, you're effectively giving the implementation, the intermediary, the flexibility to encode it as whatever the outer event format is. And that's what I want to preserve. That works today, and I didn't want to destroy that, even though we needed to tighten up the rules around binary. Now, in terms of the concrete proposal that James made: we talked this week, and he was saying, I'll probably do this by the end of the week or beginning of next week, and Doug basically pressed him this morning, my time, to write something up. So he probably wrote it a little bit in a hurry, I'm not sure. So there's James's proposal, and I wrote somewhat different formal text, which effectively turns the definitions that he wrote on their head and makes them into MUST rules for the binary mode. And so that's mostly just a matter of negotiation.
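The transcoding property Clemens wants to preserve, that with the data content type omitted the data map is rendered natively by whatever the outer event format is, can be sketched like this, with plain Python dicts standing in for an SDK's in-memory representation:

```python
import json

# Sketch of the transcoding point: with datacontenttype omitted, the data
# map is rendered natively by the outer event format, so an event can be
# lifted back into memory and re-rendered in another format without ever
# forcing the payload into a JSON string.
event = {
    "id": "1",
    "type": "com.example.order",
    "source": "/shop",
    "data": {"item": "book", "shipping": {"carrier": "dhl"}},
}

wire = json.dumps(event)      # structured JSON rendering
in_memory = json.loads(wire)  # back into the in-memory "InfoSet"
assert in_memory["data"]["shipping"]["carrier"] == "dhl"

# A hypothetical second renderer (say, an Avro event format) could now
# encode in_memory["data"] with its own native map type instead of
# carrying an opaque JSON string.
```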
I think in terms of the rule set that we want to define we're pretty close together, but we just need to get the language sorted out. So that's in, as I said, 470; 470 is that proposal. So I would encourage you to go and take a look at 470 and see whether the rules work for you. I have a question. How does this proposal handle Booleans in the JSON payload? So you can only use the JSON Boolean type if you explicitly declare JSON. But at that point, it's binary from the point of view of the protocol? It's, as long as you're sticking to JSON, it stays in, so as long as the outer event format, the event envelope, is JSON, then the rule is that any valid JSON is allowed to set the data attribute. But what is the type of the data attribute at that point? The type of the data attribute at that point is a, it's a map. Or, well. Maps can't contain Booleans, right? Oh, sorry. So the JSON. Or arrays. Yeah, the JSON encoding, it actually has an escape hatch for this. Let's go and take a look at this. Effectively, there's no, there's a type, there's a type for. Does this mean we have three cases for data? What do you mean? We have explicitly declared as JSON, we have binary, and then we have, if you didn't do one of those two. Yeah, we have the native, can-transcode case, and we have the "you can stuff anything JSON in there" case, which is the one that's, for me, on more shaky ground than the other cases. Yeah, but we have three cases, right? Because we've got binary, we've got treat it like the cloud event type system, and we've got, no, this is actually JSON. Yeah. Might be good to spell out that there are three choices. That's true. So what I don't want to do at this time, what I don't want to do is eat up the last 13 minutes on this, because it is still fresh, but I did want to bring it up to people's attention.
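The three cases enumerated in that exchange could be spelled out, very roughly, like this; a hedged sketch of the rules being debated in 470, with invented names:

```python
# Case 1: opaque binary payload. Case 2: a value expressed in the
# CloudEvents type system (a typed map that any event format can
# transcode natively). Case 3: explicitly declared JSON, where any
# valid JSON value -- including a bare Boolean -- is allowed.
def classify_data(data, datacontenttype=None):
    if isinstance(data, (bytes, bytearray)):
        return "binary"
    if datacontenttype == "application/json":
        return "explicit-json"
    return "type-system-map"
```

Spelling the three choices out this way is exactly the clarification suggested on the call.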
So thank you, Clemens, for summarizing it, but I really want to move on. And Evan, I think you're bringing up some great points. Can you put that into the PR itself, and that way Clemens and James can go back and forth and try to address that, okay? But unless there's, I guess I'd like to move on, only because I just wanted to bring this up to people's attention, to take a look at it, and see how it's progressing, because this obviously is a very big issue we need to resolve relatively quickly, but it's not ready for voting, so I don't want to spend time on it when we do have other things that are ready to go. Is that okay with people? But thank you, Clemens, very much for summarizing. Okay. Cool, thank you. All right. Eric, has anything changed since the last time we talked about this PR? I had to rebase it, but nothing else. Okay. You want to just quickly give like a one-sentence overview to refresh people's memory about what this one's about? Sure, one of the questions I had when I started joining the calls was whether persistence would be something that was addressed. I'm particularly interested in event sourcing, and that's why. But after discussions, it seemed like it would make the spec more brittle, and that it would bring up some very hard challenges that I don't think are solvable. And so this says we're not dealing with the issues of persistence, that is, writing the entire event down and making sure that it's secure and that we know who originally wrote the event, et cetera, et cetera. And that all those might be handled by extensions; you're on your own. Do something smart. All right. Okay, and just to make a note, this is just in the primer itself, it's not normative. This is just explaining why we chose not to touch the problem. Any questions or comments on this? All right, not hearing any objection to approving? All right, cool. And I apologize, it took us a while to get back to this one, thank you. All right, that's weird. Okay, I'll fix that.
Fabio's not on the call; however, I believe the last time we looked at the Avro transport, for the most part we were okay with it, other than, Jim, you are still on the call, right? Jim, you said you wanted a little more time to look it over. Did you get a chance to look it over, and are you okay now? I think actually what Clemens referenced was what I was sort of thinking of, in that my original concern had been the difference between the way the Avro one had been put together versus the way the Protobuf one had been done. But I think the proposal as it stands now is more extensible and generic. So I think my original concern's been addressed. So I have, did this change recently? No, I don't think so. Ah, well, so yeah, my comment is based on if the changes were adopted, yeah. Yeah, so if you take a look at the comments, because I looked at this this week, and if you, this requires a bit of knowledge of Avro. Oh, there you go, you're right. Ah, okay, so great. So if we do this, then, and well, this was not very clever of me, that was very Googling-the-right-thing of me, but sometimes Stack Overflow and other things are just helpful. And so that schema now, if we modify the PR accordingly and adopt this schema, then that will work. Effectively, this turns the Avro encoding into a map that understands all of our types. And then that becomes effectively the same, apparently, now that I'm looking at it, this supports batches and this supports all the types that we need effectively. Yeah, I think this just works now. It's very much like the Protobuf model. Do we need float in there? No, we don't. Good catch. Or Boolean. I'll comment on this here. Okay, great. If we remove Boolean and float, does that address everybody else's concerns? Yeah. So let me ask this question. Do people approve of this PR modulo removing float and Boolean? I think it also needs to support binary. I think we need to have that too. It sounds like we need more talking. Yeah.
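For readers without the Avro background mentioned above, the shape being discussed looks roughly like the following. This is an illustration of the idea, not the schema in the PR, and it bakes in the call's suggestions (drop float and Boolean, keep a binary type):

```python
# Avro schemas are JSON documents; sketched here as a Python dict. The
# data attribute becomes a map whose values are a union of the permitted
# CloudEvents types, with "bytes" covering the binary case and a nested
# map (simplified to string values here) standing in for maps of maps.
ce_data_schema = {
    "type": "map",
    "values": [
        "null",
        "int",
        "long",
        "string",
        "bytes",
        {"type": "map", "values": "string"},
    ],
}
```

A real recursive map-of-maps needs an Avro named type, so treat this as a sketch of the union, not a drop-in schema.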
I think what I did is, I literally took, I found a post in some forum somewhere and literally just took the text and copied that down without thinking. So maybe he took my proposal a little bit more seriously than I thought he would. Anyone? Okay, well, rather than resolve it right now, why don't you guys comment on the PR itself and maybe we'll get it resolved next week. It sounds like it needs a little more tweaking. Okay, all right, moving forward then. Do you want to talk about error handling? This one was mine. I just added a little section here. I think I actually got some LGTMs on this one. Just want to make sure you guys are okay with it. Give you a second to read this. I think that's suitably hand-wavy enough, yeah. Thank you. Goal achieved. Okay, and it is just in the primer. Any questions, concerns about this? This leaves the SDK authors in a rough spot. Does it? How? What do I do with error handling? Well, the spec itself isn't going to say that, right? If you're looking for error handling statements, if anything, that might go in the HTTP binding spec, right? I think Scott's question might be, if I receive a cloud event and the time field is not formatted correctly, do I toss the cloud event? Do I omit that field and pass the rest along? Do I light the computer on fire so that the evidence is destroyed? We don't talk about error processing. Because I've been doing the light-the-computer-on-fire thing and it's getting expensive. Google has plenty of money. I don't understand the concern. Well, so directly, like, what I'm asking about is, I still have no idea how to implement batch. And this statement doesn't help me understand how to implement batch. So I think there are two different questions being raised there. One is, in general, you just get a cloud event and there's an error in there, what do you do? I would claim that the normal HTTP spec tells you what to do there, right?
You got some bad input from the user, meaning the client, and you're supposed to return some variant of a 400. Now, I think Scott's question is harder. I'm talking about every transport. I'm not sure I'm following, because wouldn't most transports say what happens if there's bad input? Or if they don't, is that really our problem to solve? Because they obviously don't solve it themselves. Scott? I feel like the spec is saying we are defining the transport and yet we are only defining the happy path. No, we don't define the transport. We just define how a cloud event looks on the transport. Yeah, but we also define a little bit of processing. And so we define response codes and things like that for certain things, and when things should nack and ack. But we don't really help more advanced processing. So, do you have a proposal for how to solve your concern? No, no, I just have a concern. Because I definitely understand your concern relative to batching. I think that actually might be something we may wanna talk more about. But in general, I didn't think we got into processing model stuff, to be honest. We just say this is how it looks over HTTP, and HTTP tells you or doesn't tell you how to handle errors. What do other people think? But we add extra semantics on top of HTTP. So if it's valid HTTP but not valid cloud events, should we say something about what to do there? Maybe in the transport binding, but it seems like we might wanna have some text at least, so different libraries behave the same. At one point in time, I can't remember who it was, but somebody opened up a PR to almost do that. They basically wanted to duplicate what the HTTP spec said relative to HTTP error codes or response codes. And I think it may even be Clemens who came back and said, basically, why are we repeating what's in HTTP? That wasn't me. So I'm okay deferring this one. I'm looking for something slightly different, but maybe this is not what this part is supposed to do.
What I'm wondering is, if I receive a cloud event and some part of it is malformed, should there be some guidance on whether that whole thing is not a cloud event, or whether I should keep going the best I can? What do other people think? Well, the Webhook specification, which is, so our HTTP binding only defines how to map the cloud event onto an HTTP message, either direction, because they don't differ except for either having a request line or a status line. And then some connection-header-like stuff, RFC 7230 stuff, but for cloud events they're effectively the same. That's what that spec does. And then the Webhook spec is how we bind that to the transport, and that has error codes. And it says, you can't use this and you can't use this. You must use this. So like 200 OK or 201 Created or 204 No Content and 429 Too Many Requests and 415, like, we specify that in the Webhook spec for various scenarios. And if you submit a malformed cloud request, sorry, a malformed cloud event, I would expect you to throw a 400. So it's like, but that is specific to, that is really specific to HTTP, right? HTTP has a set of rules around this. And then there's other protocols which have different rules about this. If you have an AMQP path where the broker is cloud events aware, it would go and reject the message. That's just a different, it's a different mechanism, but it would use the AMQP error codes and reject the message for you. So, Clemens, to that point, maybe what Scott, maybe I can channel Scott a bit. Should there be a separate document that describes that AMQP behavior, or would that be in the AMQP transport specification? Well, my principle is to not repeat what's in other specs. We point to those specs and you can read them. And if we, so at least not normative parts, because those specs might evolve and may have further error codes. They might change their opinion on certain things and you wanna stay up to date.
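The behavior described for the HTTP path, reusing HTTP's own error model rather than inventing one, might look like this in a receiver; the attribute list and function are illustrative only, not text from any binding spec:

```python
# A malformed cloud event arriving over HTTP gets a plain 400 Bad
# Request; a well-formed one is accepted. The core spec stays silent
# and defers to HTTP (and, per the Webhook spec, to its allowed
# status codes like 200, 201, 204, 415, and 429).
REQUIRED_ATTRIBUTES = ("specversion", "type", "source", "id")

def receive(event):
    if not all(attr in event for attr in REQUIRED_ATTRIBUTES):
        return 400  # not a valid cloud event: HTTP's error model applies
    return 200
```

An AMQP-aware broker would express the same outcome by rejecting the message with AMQP's own error codes instead.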
You don't wanna start binding yourself to a particular, you wanna stay flexible in having a binding that, when it works with HTTP/1.1, it works with HTTP/2 and it works with HTTP/3 without you necessarily having to track all those things. So that's why I'm trying not to import too many rules from other specs into this spec. But the primer and the implementation guide could certainly do that. So we'll take that offline. I think we're gonna have to stop here. Let's not approve this one yet since we may want to tweak it, that's fine. Quick question. Has anybody had a concern with this adapter document for how to convert some well-known events into cloud events? I was hoping to get this one approved today to get it in there so people could start actually implementing it. Cause I think I actually had some LGTMs in there. Is there any objection to approving this one at the last second, or do you want more time? Any objection? I guess I'm trying to figure out, I mean, this seems like a useful thing to do. Are these intended to be definitive or exemplar? Technically everything is exemplar. However... If it's exemplar, it's totally fine. Well, however, I have been working with the GitHub and GitLab guys to make sure that if they support this themselves, this is how they would do it. Oh, okay, well then that's great. That seems like it's more definitive than exemplar. A little bit, I just can't come right out and say it, yeah. That's fine. If you're working with the producers, with the people who originally produced those events, then that seems like it's definitive. Yeah, and these are, yeah, these are, I believe, non-normative specifications. So from a legalistic perspective they're exemplars, because they're non-normative. Okay, any objections to approving that one? Okay, I apologize, we're slightly over time, but I wanted to see if we can get to some of the older stuff, thank you. Okay, with that, everybody can go except for the SDK guys.
I'm looking at specifically Clemens, Scott. Who else is on the call still? Oh, James, you're actually there. Hey, James. Who else? I don't have this meeting in my calendar. I saw the SDK one, I thought that was this one, so. Okay, well, I'll do the roll call anyway, so you can actually get credit for it. Who else is on the SDK work? Anyway, if you're on the SDK stuff, please stick around, we gotta talk. Klaus, that's what I was looking for, Klaus. Okay, everybody else is free to go, thank you guys. Oh, where's Claudio? No, okay. I'll give them credit, I forgot to ask, I apologize. And James Ruffer, thanks, Klaus. Okay, thanks.