We're a little late today. No, not a problem. Just about to get started. Just as soon as my clock clicks over to 12:03, we'll do it. Oh, and there we go. All right, let's go ahead and do this thing. Hold on a minute. I'll catch Fabio Jose later. All right, so agenda. I don't think there's anything too exciting with the action items going on. Just want to remind people that per the discussion we had, I think it was two weeks ago, we are going to cancel the weekly conference call for KubeCon next week, because of being in Shanghai, and because of US Thanksgiving the week after that. So the next two calls are going to be canceled. Hopefully I have my calendar skills right. So just want to remind people of that. Actually, let me do this. Make it bold to remind people. All right, community time. Is there anybody on the call who is not a normal working-group member who has a topic related to activities from the community that they'd like to bring forward? All right, not hearing any. Let's move forward then. Okay, SDK workgroup. Now Austin, I know you weren't able to join the call that we had, I guess it was Tuesday this week, I think. So let me just quickly summarize what I think happened there, and other people can chime in. The Go and Java SDKs are well underway. There's definitely some activity going on there. Clemens said the C# one should get started relatively soon, so expect some activity in that repo. We don't have a whole lot of activity, actually, we have zero activity so far on the Python and JavaScript ones. Now, I know some people are interested in those. So if you are, please speak up and I'll give you access rights to merge and do stuff with PRs in there. But I'd like to see those get started as soon as possible. We did briefly talk about whether we wanted to have some sort of overarching design document to provide consistency across the various implementations. 
And the general consensus at this point in time was that while it would be good to see if we can provide some consistency, it may be a little soon to be thinking about that, because we wanted to get some initial code base out there so people can start playing with it. And then we'll take a step back and say, okay, where should these different SDKs or different languages line up better than they do right now? Since anything that comes out any time soon would be an alpha anyway, if we do need to make some changes to align them later, hopefully those would be more superficial, syntactical-type changes anyway. But that's kind of where we landed on that. People were more focused on trying to get something up and running first rather than looking at the bigger-picture type stuff. We are still kind of hoping to do an alpha, at least for some of these SDKs, by KubeCon North America. Whether that plays out or not, we'll have to see. Obviously each group is gonna probably decide that on their own, but that's still the goal. So if you want to try to make that happen, please join the activities in the particular repos. From anybody on the call, or Austin, are there any other comments or topics that you guys think we need to bring up that I may have missed? On my end, we started an SDK design document that is there and it does provide some suggestions. It was never meant to be firm requirements, just suggestions. So it's there for guidance, but I'm all in favor of just moving forward, getting something out there sooner rather than later. So I think that's the best course right now. And then if anyone is curious or wants to learn more about stuff we've discussed, you could go look at that document. Right, and just to let you know, you and I got an action item from this week's call. Whoops, sorry, I did that wrong. 
You and I got an action item from this week's call to basically see if we can produce some sort of status or plan or some kind of doc, because you're right, this doc is kind of what we've been using for those purposes. But the problem is this doc also seems to serve the purpose of a running notes document from each of the phone calls. So it may be a little difficult for someone new to the group to see exactly where we are relative to what they should be thinking about if they want to design, for example, a new SDK for a new language. So I thought it might be good to pull those types of design decisions into either a separate doc or a well-known location, like maybe at the top of this doc, something along those lines. So I was gonna work with you offline about that one. Does that make sense? Sure. Okay. All right, any other questions or comments about the SDK work from people? I'll read the doc, this is Neil here. I'm just curious as to how we move forward with transport bindings and SDKs and whatnot, but I'll read the doc and I'll come back to you. Yeah, one thing that did come up on this week's call was whether we were gonna have plug points or extensibility points within the SDK so people can create new transports per SDK. And if I'm remembering correctly, I think there was general consensus that that would be a good thing: that way you don't have to modify the repo or do a complete recompile of everything just to add a new transport, if you have some sort of plug-in mechanism. But I don't think we got as far as to say what that plug-in mechanism would look like. It may be very language specific anyway, but that's as far as we got, just saying, yes, it would be a good idea. Okay, cool. And I presume security concerns are kind of just out in the future for now. They have not come up in the conversation as far as I remember, so I think it's probably fair to say yeah. Okay, cool, thanks. Okay. All right, any other questions, comments? 
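Since the plug-in mechanism hasn't been designed yet, here's a rough sketch of what one such extensibility point could look like. Everything in it, the names `register_transport` and `get_transport` and the registry shape, is hypothetical, not anything the group has agreed on:

```python
# Hypothetical sketch of an SDK transport plug-in registry -- not an
# agreed-upon design, just an illustration of the plug-point idea:
# third parties add transports without modifying or recompiling the SDK.

_TRANSPORTS = {}

def register_transport(name, factory):
    """Register a transport implementation under a well-known name."""
    _TRANSPORTS[name] = factory

def get_transport(name, **config):
    """Look up a registered transport by name and construct it."""
    if name not in _TRANSPORTS:
        raise ValueError(f"no transport registered for {name!r}")
    return _TRANSPORTS[name](**config)

# A third-party transport plugging itself in:
class DummyHttpTransport:
    def __init__(self, url):
        self.url = url

register_transport("http", DummyHttpTransport)
transport = get_transport("http", url="http://example.com/events")
```

The details would likely differ per language, as noted on the call, but the shape (register under a name, construct by name) is what avoids the recompile.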
All right, moving forward then. I don't see Kathy on the call, so I don't think there's been any activity in the workflow subgroup, so I think we can probably move on to the next topic. KubeCon Shanghai: the slides are pretty much done. We've sent them in for translation. Feel free to look at them. If you notice anything really bad that we need to fix, let us know. I think we can obviously make small changes, but I think the basic flow is there. So I think that's pretty much behind us. Interop work: we have four endpoints as of right now. IBM, Oracle, a Knative one, and then an OpenFaaS one. We are definitely shooting to do something in time for KubeCon North America. However, if we get enough participation, I don't know what the magic number is, but if we get enough participation that we feel like, hey, why not show this at KubeCon Shanghai? We may do so. But please, if you're interested in joining, or if you have an endpoint you want to include in there, if you're not already, ping me and I'll add you to the CloudEvents demo Slack channel. It is a private one, because we do talk about endpoints and stuff like that, so we don't want those advertised across the internet. But please join that, and the doc that's up here behind this URL contains information about what your endpoint is expected to do. And we do make some design changes periodically as we go along, so you do want to join the Slack channel to keep up to date with those. Any questions on that? All right, cool. In that case, let's go ahead and jump into the PRs. Now, I did try to order these to put the easy ones first, because we have quite a few to get through, but I want to get the easy ones out of the way before we get to the ones that might involve some discussion. So this one was mine. 
There's an issue, I can't recall the gentleman who opened it up, but he basically was saying that as a newbie coming into our documents, it wasn't quite clear where to start in terms of which document to look at first, just to sort of introduce themselves to our work. So what I did is I added these couple lines of text to the README, just pointing people to basically start with the primer, and then head over to the core specification. Just a little bit of guidance there, nothing too exciting, definitely not normative, it's just in the README, but I want to make sure people are okay with this general text; obviously we'd be tweaking it later if needed. I thought this was better than nothing, and the original author of the issue thought this was good enough for his needs. Yeah, it looks good to me. Okay, cool, any other questions or comments on it? All right, any objection to approving? Excellent, cool. Doug, I can't remember his last name, but it does start with an M, he's not on the call unfortunately. He had some minor wording changes to some of our documents. I think he was just trying to get consistent, for example, using the words producer and consumer throughout the entire doc, or throughout all of our docs; that was probably the biggest change. Minor wording changes here, I don't think any of his changes are normative at all, or if he did make any normative changes it was by accident, but I don't think I spotted any. So these are strictly just, in my opinion, syntactical types of changes. What do people think about this? Any questions or comments on it? It seemed like they're all good changes to make from a consistency perspective. The producer/consumer one is probably the biggest. And as for the confusion we had before about occurrence versus event, yep, that's good. Okay, any other questions or comments? Any objections to adopting this PR? Excellent, thank you guys. Oops, all right. This one, I think I did because of you, Austin. 
You had mentioned something about when we had dropped the eventTypeVersion property, you were wondering whether our decision to include it as part of the event type string itself made sense, and you thought maybe some guidance around how you might include a version string or some sort of version type of information within the event type itself would help. So I just added this text here and gave an example down here, just trying to address some of your concerns to provide additional clarity and direction for people. Does this at least come close to satisfying your concern there, Austin? It provides additional clarity, so I think that's good. Okay. Definitely not normative. Any questions or comments on this? One quick question, Doug. In this example, the version is in the middle of the event type, so com.example.object.v2.delete. We'd prefer for it to be someplace else. Like after the delete? Yeah, I imagine that that whole event type is what the payload is corresponding to, and then probably just pegging the version right at the very end. To me it would make more sense. So you want that? Yeah. Okay. It throws me off a little bit when I see it tucked in the middle there. Okay, that's fine. I think the only reason I did that was because I was thinking version one versus two may have actually different verbs. But I don't honestly care. I'm okay with this change. Anybody have any objection to putting the v2 at the end? Okay, so let me make that comment. I can get that change in there. With that one change, is there any other comment or question before I ask the approval question? There was some talk about both the event type and the schema fulfilling parts of the version field. I don't actually have any idea how to clarify that, but someone who did have the idea of using the schema should, I think, comment on that PR. Comment on how that could be used as well. 
Are you suggesting that we may need additional text like this under the schemaURL? I have no idea. I just remember from last week's call someone else pointed at the schema field instead of the event type field as a replacement for the event type version. I would like to hear from whoever that was about what they thought. Oh, does anybody remember that comment? Is the person who made the comment on the call? Might have been me that mentioned this, but only because it was mentioned at the original discussion a few months back. Yeah, because I know schemaURL, at least in my experience, oftentimes does include a date string somewhere in there or some sort of version string in there. So I think it may have been mentioned in context as an example of this type of stuff being done before, which is why we felt comfortable sticking the version string inside the event type. It may have just been used as an example. Yeah, I think we just talked of XML namespaces in that context. Yeah. Okay. Well, if someone thinks we need to do additional wordsmithing around schemaURL, obviously we could do that. And if you don't feel comfortable creating a PR for it, then go ahead and open up an issue and someone else will pick it up. So if anybody on the call feels like we need that, feel free to open it. I'll say one thing and then I'll be silent, because I'm not feeling so strongly about this one, but backward compatibility is a big thing, hey, this is Microsoft here. So backward compatibility might be a big thing where you have clients that are bound to a certain event type. The blob-created event, right? The one that we've used. And now that blob-created event changes slightly because it adds further information. So that should have a new version number, but now the question is, are you willing to break all the existing clients and then all the existing subscriptions that are based on that old event if you are just making a small and effectively ignorable change in the schema? 
So that's something that I'm generally worried about. So even if that's the rule here, our approach to that might be to go and just include a special version number in the message body, or if we have a URL, to have that reflected in the URL, because I'm really worried about existing infrastructures and existing applications that are assuming that they have subscribed to the event, and that they expect a certain event type that they dispatch on, and then you make a new change and you're effectively breaking your entire deployed infrastructure. So that's why I'm reluctant to have the event type, as the core dispatch criterion, change with versions. Oh, are you suggesting that we may need just additional text in the primer to provide guidance on how event producers should be producing these strings, or do you actually think a normative change to the specification is required? Yeah, I wonder whether the addition we're adding here is sending people down a path that might have consequences they might not see if they're reading it, because if you only really make an additive change in the schema, you really usually would not care. And so that now puts people into the conflict of, when am I really gonna change the version number? Only if I have a breaking change, then that's probably justified. But if I do a point revision where I'm just adding data, which ideally, by the principle I always keep citing, it's okay to add things, then you never do a version revision and just keep things the same. So, from an implementation perspective, I know how to go and deal with that. It's just, for me, the question of what the guidance is that we're giving. So I don't have a strong opinion if other people feel that this is the way we should go and guide people, but I just wanted to point out the risk, and with that I'm done. Okay, so Matt, I thought that little symbol was you trying to raise your hand. 
Matt? No, that was just me joining the meeting. Oh, I thought you were trying to speak on this one. Okay, sorry. Okay, well, so let me ask you this, Clemens. Would you prefer if we hold off on this? The guidance makes me a little uneasy. Okay, well, like I said, I was hoping to get the easy ones out first, and this one sounds like it makes you a little nervous, so I don't want to rush it. So let's table this one for right now, and then we can revisit it to see whether we just kill the whole thing or add additional text, either here or in the primer, but let's talk about that later. Is that fair? You don't want to rush it. Can I just comment on that, Clemens? Since you're talking about additive changes, now that we're talking about it more, I think originally in the discussion, the point was that major version changes would be a part of the event type string, so breaking changes, and additive changes would be dealt with by having the schema change anyway, because if you have additive changes, you should be changing your schemaURL, probably with a date or something as Doug was talking about. So I would propose that we have a versioning discussion. You can use that event type attribute as you like, right? Obviously, you can use it for versioning, you can use it for everything. So I think instead of putting this here, we can make a versioning section that says, here are three ways you can think about versioning, and how you deal with them is really up to you. So basically making it a guidance thing, and it can even be in the normative part, but I think this standing alone is complicated, because schemaURL plays into it. And as we said, you know, you want to distinguish a major change versus a minor change, and I think that needs a little bit more explanation. I'm just worried about it from an education perspective and not setting people up on the wrong path. 
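For what it's worth, the convention being discussed (major version as the last segment of the event type, additive changes handled via the schema) could look like this in practice. The helper below is purely illustrative, not part of any spec text:

```python
import re

# Illustrative only: an event type with the major version pegged at the
# very end, as people on the call preferred (rather than in the middle).
event_type = "com.example.object.delete.v2"

def split_version(event_type):
    """Split a trailing '.vN' major-version segment off an event type.

    Returns (base_type, version) where version is None if the type
    carries no trailing version segment.
    """
    m = re.match(r"^(.*)\.(v\d+)$", event_type)
    if m:
        return m.group(1), m.group(2)
    return event_type, None
```

A consumer that dispatches on the unversioned type could use `split_version` to ignore the major version, while minor, additive changes would surface only in schemaURL, which is roughly the split Austin describes above.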
Okay, well, like I said, let's hold off on this one. I don't want to rush this, especially if you are correct, Clemens, that it may lead people down the wrong path. That is definitely not my intent. I thought it was harmless text, and if it's not, then I want to take a step back and think about it some more. So let's take this one offline and see if we can address your concerns, and then come back to the group with a different proposal. How's that? Great. Also, I just want to chime in real quick, Doug. I think Clemens has a great point, and if we're going to usher in this event-driven future, we really have to give some pretty good guidance as to how cloud events can help data evolution. And I think Clemens' suggestion of just writing some of that guidance in here as an initial step, until we get this out to the market and learn more and see how people are approaching the problem, seems like a good one. So I'd encourage you to steer things in that direction. Sounds like a good initial step. Very kind of you. Okay, cool. Thank you, Clemens. I'm glad you mentioned that. So as a reminder to anybody on the call, please, if you ever feel uncomfortable with something and want to bring up some concerns, feel free to do that, just like Clemens did. Because at times when we start sort of rushing through some of these PRs, I'm doing it mainly because we only meet once a week and I want to make sure we get through as much as possible. But even if you have just a nagging feeling like Clemens did, feel free to mention it, and that will slow us down and we can take it offline and discuss it. I definitely don't mean to rush things. So thank you, Clemens, for doing that. I also think the separate section for versioning or schema evolution as guidance sounds like a great idea. Okay. Okay, cool. Thank you guys. Next one, Kristoff. I think you actually just joined the call too. So maybe you want to talk to this one? Yes. 
So we had a PR that we discussed on last week's call from Fabio, and basically he came and said, I took your JSON example and I tried to validate it against the JSON schema, and it didn't validate. And it didn't validate because you used a relative URI, but actually you say it should only be a URI. And then there were multiple people, including me, who said, yeah, you're right, we only support URIs. But actually then Doug found out this is not true. So if you scroll down to the spec you would see that we actually said all along here that it is a string expression conforming to URI-reference. Well, then we go and name it URI, and then everyone is confused by that. And then if you look at the JSON schema you can see that the mistake was applied there too. So in the JSON schema it actually said URI, but it should have said URI-reference. So with this PR I'm trying to make the name URI-reference, so that it should be clear for everyone who reads the spec that this is not a plain URI but a URI reference. And I'm also fixing the JSON schema, and I'm also adding one more example to the spec, the last change basically, that uses a relative URI. Any questions or comments on this one? All right, any, oh, I'm sorry. Is someone gonna say something? Yeah, I just wanted to ask, has there been a discussion about whether the relative URIs are actually wanted? Many moons ago, yes. So we had a long discussion about URI references versus URIs, and we literally landed on URI references. We can open that Pandora's box again, but there are cases where you are acting wholly within the scope of a single system, and then further qualifying the URI might not be necessary. Example: Azure. The events that we're raising from the Azure platform all have relative paths, because they're relative to azure.com, because it is the Azure platform that is raising those events, and effectively the event paths all end up being unique because they're anchored on a resource structure. 
So for Azure, we don't need to have absolute URIs, but we would want to go and distinguish between these and URIs that are not part of our scope. So we would also then allow absolute URIs if they refer to other scopes and other systems. Yeah, great. I just wanted to ask if there had been a discussion that sounded like that. Yeah, we had a debate about this, and I think the debate lasted like eight weeks. Just an additional question, since you brought up a great example. When you export events from Azure to another domain, would you then still keep the relative URI? Yes, because the event type... So the event type is meant to be domain qualified, and actually is. So you can already tell from the event type what that is and where that comes from. So you have that qualification, but yeah, if we're thinking about federation, if we're thinking about federation across systems, then we might go and consider adding the URI. I don't wanna expand the scope here all that much in this discussion, but there are some thoughts that we're having in a different realm that's kind of related to this, and I'm probably gonna bring this here because we have an AMQP binding, about addressing in general, and thinking about what a logical identifier is versus actual network addresses, and that kind of plays into this, because azure.com in that sense would be really just the name. And then the question is, what is the right name for the sources? What is the right URI? What is even the right URI scheme to use for those? Because HTTP or AMQP is not the right thing, because there are a lot of addressable entities, and those prefixes imply some protocol behind them. So these probably should be URNs, but then there are no rules governing the uniqueness of the authority portion if you don't have a scheme that matches that. So in AMQP specifically, we're trying to go and resolve some of those concerns about multi-level routing, et cetera. 
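As a concrete illustration of the distinction being fixed here, RFC 3986's URI-reference includes relative references, which can later be resolved against a base. Python's standard library shows the behavior; the base `https://azure.com` and the path are made-up examples, not real endpoints:

```python
from urllib.parse import urljoin

# A relative reference: valid as a URI-reference (which is what the spec
# text requires), but not as a plain URI, which is what the JSON schema
# wrongly demanded before this PR.
source = "/mycontext/resource/1234"   # hypothetical event source path

# Within a single system the base is implied; a consumer federating
# events across systems could resolve the reference against a known base:
absolute = urljoin("https://azure.com", source)
```

This is exactly why Fabio's example failed validation: a `format: uri` check rejects the relative form, while `format: uri-reference` accepts it.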
And once we have a model for that, then we would probably want to bring that here. Or obviously you're all invited to come to OASIS. All right. It sounds fascinating. So okay, I think there are two things coming out of this. One is, it seems to me it might be good to add, even if it's just a short paragraph or a sentence or two, something to the primer to explain why we allow URI references. And I added a note to our agenda doc about that, and I'll do an action item around that, or get someone to do an action item around that. I do think that'd be useful to explain. And Clemens, I thought you gave a good example of why that might be needed. But the second thing, and the more important thing, is this PR itself. Any other questions or comments on this one? I love it. There you go, that's a good comment. Anybody else? If nothing else, it definitely provides clarity, which is good. Okay, any objections then to adopting it? All right, not hearing any. Thank you guys very much. Clemens, hopefully the next easy one, integers. Would you like to quickly talk to this one? Oh, I don't even get a break during this call. Yeah, so this was easy. That was an omission. It was noted that when we added the integer type to the abstract type system, it wasn't in the mappings, and that was correct. So I just added that. And for the JSON format, since we only allow integers, and I'm trying to keep, at least I am trying to keep, our type system compact, I'm literally making the constraint that you can map to the JSON number type there, but you can only really use the integer portion of it. And if you look at the JSON spec, we don't have to go and click through to it, you'll find that in that section they talk about an int component, and that is expected to be an integer. And then I'm just adjusting the text. 
And I make the text a little bit more robust at the bottom, like line 81, where I'm saying basically it can be any valid type, so that I get out of having to iterate and adjust it all the time. And then above, I'm making effectively the same change, mapping our integer type to the AMQP long for the AMQP type mapping, which is the other one that we have. Yep, and that's up there. So there should be no surprises. And for AMQP, we didn't need that wording that we have for JSON, because AMQP has that construct as a type by itself. All right, cool, thank you. Any questions or comments on this one? All right, any objection to adopting? Excellent, thank you, Clemens. All right, Protobuf. I don't think Spencer is on the call. Now, this one has been out there for quite some time. I think there were some, in my opinion, relatively minor updates recently, but at least three days ago or more. Let me ask the Googlers, Rachel or Scott: is there anything you guys would like to add on this one before I ask if there are any questions or comments? Any comments you guys want to make? I think Spencer tried to address the changes that people brought up. So if you see something that is still not addressed, then let me know and we can address it. Yeah, I think he did get everything, I agree. So, hey, this is Jim. So I just wanted to raise the comment on the media types. And I know Spencer had sort of commented on that, but I didn't see any updates to the documentation sort of describing what we should do, yeah. So, well, in the last comment, he was saying, oh, so you're saying that you would like to see a change in the PR itself? Yeah, so I mean, my whole thing, when I looked at this, it was more around, okay, you know, when you pair this with the other transport bindings and they say, okay, look at the media type, that's gonna tell you what the payload is, yeah. 
And given that there isn't a registered one for Protobuf, what I really was looking for was this standard to say, and this is the media type we'll use, yeah, to actually make it explicit. Okay, I'll pass that along. Is that something that would be in the Protobuf spec itself already? So, is that a question? Yeah, I'm just wondering whether the Protobuf spec itself dictates what the content type would be. It doesn't right now, but it could. Okay, I didn't want us to repeat stuff, that's why I was asking. Okay, so, Jim, I'm trying to figure out if, okay, so you're asking for that change. The reason I'm kind of hesitating here is, I know this PR has been out there for quite some time and I feel bad that we haven't merged it yet, because I don't think there's anything major that needs to be done to it. I'm not dismissing your content type thing, obviously I do think that's important. I guess what I'm kind of wondering is, would people prefer to wait until that change is made to this PR, or could we do that as a follow-on PR? Well, to be clear, I don't want to say we'll definitely include that. I want to say I can pass that along to Spencer. I don't know if we include that in the other specs. I assume that we do, but I don't know for sure, I'd have to go check. We did. In the JSON format, in the envelope section, it says such a representation uses the media type application slash cloudevents plus JSON. So this one basically would be plus protobuf. That's because we're using the plus-suffix notation format, where we basically have a base type. We still have to register that one, by the way. And then we're effectively adding the encoding format at the end of that. So it would be application slash cloudevents plus protobuf. So what do you guys want to do? Would you like to see if we're gonna merge this one as is and do a follow-on PR, or wait until this change is made before we consider adopting it? I can go either way. Is that a question to the entire group? 
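The plus-suffix convention described here (RFC 6839 structured-syntax suffixes) can be parsed mechanically. A small sketch, with the understanding that the exact Protobuf media type name is whatever eventually gets registered:

```python
def split_media_type(content_type):
    """Split a media type such as 'application/cloudevents+json' into
    its base type and its structured-syntax suffix (RFC 6839 style).

    Returns (base, suffix) where suffix is None when there is no '+'.
    """
    if "+" in content_type:
        base, suffix = content_type.rsplit("+", 1)
        return base, suffix
    return content_type, None
```

So a transport binding can recognize the base `application/cloudevents` to know it has a structured CloudEvent, then pick the decoder from the suffix (`json` today, a Protobuf suffix once this PR's follow-on lands).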
Yeah, mainly to, I think, you Googlers and Jim, because you guys are the ones that are focused on the content type question, but to the whole group in general, too. So I don't think it should necessarily hold this up, but if you merge this one, I'd like an issue or something to be opened up fairly promptly afterwards to get it clarified. How about this? How about we merge this, and then I will open a PR against this file as soon as it merges to make that change? Yep, that works for me. Like I said, I just feel guilty because I know Spencer spent a lot of time on this and it's been lingering for a while. I don't want to nitpick it to death. Okay, so the proposal out there is to try to get this one in so that we can approve it. And if it is approved, do a quick follow-on PR to address Jim's content type question. So any objection to that process? Okay, not hearing any objection then. Is there any objection then to adopting this PR as it stands? All right, cool. So let me just make some notes here. Approved. All right, cool, thank you guys very much. Jim, yours, oh, okay, this one. Would you like to quickly talk to this one? Sure, so this was one that I rather foolishly promised to do last week, just really to reduce and simplify the property and attribute naming. And I think the only one that was sort of questionable at the time was whether people wanted to rename the attribute called cloudEventsVersion to version or specVersion. So this change is actually to rename it to specVersion. There was one comment, I believe, from Roberto. I don't know if he's on the call. Yeah, I'm on the call. I think ID is too generic, because there are so many IDs in JSON. So I think that this is the one that I would prefer to preserve as eventID instead of ID. That's my only comment. Right, so I think my response to that was, I have no strong opinion. I think I'd prefer it to stay as ID because it keeps it consistent with everything else. 
But I'll go with the flow; if the group wants to change that, then I'll do that. I'm not wedded to it. What I would say, and apologize in advance to the people working on the Kafka transport spec, is that if this gets merged, then your spec will need to change as well. Yeah, I think we've got a few dependencies upon other stuff at the moment. Yeah. Okay, so I think there are two open questions then, aside from the overall question of the PR itself. So first, cloudEventsVersion changing to specVersion. Are there any concerns about that change? Okay, I think that may have actually been driven by Austin's request last week too. So hopefully he'll be okay with that. And I think he actually was, from what he said in last week's call. So I think the bigger question, more interesting and possibly harder, is eventID versus ID. What do people think about that change? Are we waving hands or? Yeah, or just speak up. You can either put a +1 in the chat or just start talking if no one's talking, yeah. I'll be quick. I think it's contextual. So I'm okay with ID because you're only gonna see the ID, ideally, in the context of a cloud event. So I think it's kind of self-explanatory. Yeah, I agree. I'd rather have all of them either be prefixed with event or none of them having event as a prefix. Anybody else wanna chime in? Hi, this is Vladimir. I would also prefer just to have it clean, without the prefix event in front of everything. Okay, thank you. And obviously I haven't biased him at all in that. Yeah. I guess I agree. I'm outnumbered. Well, it's like- I agree for sure. Okay, Roberto, since you seem like you wanna shortcut it, let me change the question. Is there anybody on the call who strongly feels that it should remain eventID? Okay, so yeah, Roberto, I think you're kind of outnumbered on that one. But it's interesting that you mentioned it, though, because this is the exact example I think I used last week on why I thought prefixing everything with event might actually be good. 
Because I said ID versus fooID would be kind of annoying. But anyway, so I think the general consensus so far is to keep it as ID without prefixing it. Is that a fair conclusion or am I misreading the group? Okay, go ahead. I think eventID is better, but in the face of everything else changing to not have event in front of it, it would look weird and break consistency and such. Even though it would be better in terms of understandability on its own, it's not actually better. Yeah, I agree with that. It would be way clearer, but if that one has event in front of it, all of them should. We can't have event just on one. We would definitely get PRs and people would get confused if just one of them has event. Like, okay, time is the time the event was generated. Is it the time of the event? Is it the time of actually sending the event? So if one has event, all of them should have event in front of it. So I'll raise my hand here, not as a moderator, but I would tend to agree. If we're gonna do it on one, we gotta do it on all for consistency. I like consistency a lot. My head is much happier that way. Anyway, so any other questions or comments on this PR then? Okay, any objection then to adopting it? I'm watching the ramifications fly. Okay, not hearing any objections. So this means that once this thing's merged, we gotta change our demos and the Kafka spec. We just broke the world, but that's okay. We did just break the world, yes. Thank you, Jim, I appreciate that. You're welcome. I mean, it's only 0.1, so I know it's safe, it's just funny. Okay, last one. This one I thought might take some time to respond to some recent comments. Neil, would you like to talk to this one, the Kafka transport? Yeah, sure. I mean, it flows in line with the other transport bindings. There's been a bit of a discussion about halfway down where we talk about the header mapping and prefixing everything with cloud events underscore. Yeah, that's it there. And we weren't sure.
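[Editor's note] The renaming the group just agreed on can be sketched as a simple mapping. Only cloudEventsVersion to specVersion and eventID to id were explicitly discussed on the call; the other entries are illustrative assumptions that follow the "drop the event prefix everywhere" consensus, and the helper function is hypothetical, not part of any SDK:

```python
# Hypothetical sketch of the attribute renames discussed on the call.
# Only cloudEventsVersion -> specVersion and eventID -> id were explicitly
# decided; eventType/eventTime follow the "drop the prefix on all" consensus
# and are illustrative assumptions.
RENAMES = {
    "cloudEventsVersion": "specVersion",
    "eventID": "id",
    "eventType": "type",
    "eventTime": "time",
}

def rename_attributes(event: dict) -> dict:
    """Return a copy of the event with old attribute names mapped to new ones."""
    return {RENAMES.get(key, key): value for key, value in event.items()}

old = {"cloudEventsVersion": "0.1", "eventID": "abc-123", "eventType": "com.example.created"}
print(rename_attributes(old))
```

As noted in the call, any spec or demo still using the old names (e.g. the Kafka transport draft) would need a corresponding one-time rename.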
So I asked Doug and I also asked Clemens. And I think the interesting thing is that at the bottom Clemens has said it looks like we might be going for a CE underscore. My concern is, if we're propagating cloud events between multiple transports, ideally we want consistency between those transports so we don't have to do any kind of namespace mapping. But I'm really looking for opinions from other people as to whether we have a standard consensus for how transports propagate events between them. So generally, effectively what this prefix does is give you a namespacing within your transports. You have two namespaces: you have your transport-specific properties and stuff that is necessary for routing and handling things at your transport level, and then we're effectively taking the metadata that cloud events brings, certainly for the binary mapping, and kind of overlaying this. So since we made our world very easy and very compact by just going with ID, we now need to go and namespace things, right? So if we now say for Kafka you make CE underscore ID and CE underscore time and all those, then if you're presenting this up in an SDK, then you would basically go and toss all those prefixes out and you would simply surface the normal names for cloud events. And if we're mapping stuff onto the transport frame, as we're doing with that binary mode, then you already need some code that sits in the middle that does the translation between transports. And the logical way of doing that is to go through the SDK. And the SDK will therefore go and present you normalized names. And then you go with those normalized names back to the other transport, the next one, HTTP, where they will again be prefixed so that they don't clash with the wire reality in HTTP.
So I think of these prefixes that we have in the wire representation really just as a namespacing trick to disambiguate from the native headers that you have there. But in terms of how you project this out to an application, these prefixes basically just go away. They're giving you a filter condition for how you can find the attributes: even if your SDK doesn't know about all the cloud events headers, because we have the extensibility, you know that they originate from cloud events because they carry the CE underscore prefix, which means you can go and collect them all up, strip the prefixes, and present them as a collection. Okay. So in that case, we can go with something like a packaging semantic like io.cloudevents. Yeah, exactly. Within the binding. Okay, that's good. Yeah, you can use what's idiosyncratic. So for AMQP, I'll probably go and change this to CE, because that's the idiosyncratic way to represent this; AMQP uses a kind of implied namespace model. And then we have CE dash for HTTP because that's idiosyncratic there. We don't need to have those things look alike, I think, because the assumption is that we'll normalize off the transport into the normal names. Okay, that's great. And the other thing was this: there's a dependent PR here with the event key. And obviously we can't put data into Kafka without it belonging to the correct partition. I have a comment on that as well. Yeah, so I think Jay Roper or John Roper opened a PR a while ago. It's been open for some time. And I know you left a comment earlier today about exposing transport binding dependencies up to the event model. The question is, if we do hide it within the data payload and that data payload is binary or it's encrypted or it's secured, then for whatever's actually trying to look at that to extract out the event key, it's a fair bit of overhead. It also might break some security concerns which we haven't kind of got to yet. That's true.
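[Editor's note] The namespacing trick described above can be shown in a few lines. This is a minimal sketch, assuming a hypothetical `ce_` prefix on Kafka record headers (the call left the exact prefix per transport open): an SDK collects every prefixed header, including unknown extensions, strips the prefix, and surfaces the normalized attribute names, leaving native transport headers untouched:

```python
# Sketch of the prefix-as-namespace idea: on the wire, cloud events attributes
# are carried as transport headers with a "ce_" prefix (hypothetical choice);
# the SDK strips the prefix to present normalized attribute names.
PREFIX = "ce_"

def extract_cloudevent_attributes(headers: dict) -> dict:
    """Collect all prefixed headers, including unknown extensions, drop the prefix."""
    return {k[len(PREFIX):]: v for k, v in headers.items() if k.startswith(PREFIX)}

kafka_headers = {
    "ce_id": "abc-123",
    "ce_time": "2018-11-01T12:00:00Z",
    "ce_myextension": "x",               # unknown extension, still recognized by prefix
    "content-type": "application/json",  # native transport header, left alone
}
print(extract_cloudevent_attributes(kafka_headers))
```

Re-binding to another transport (say HTTP with a `CE-` prefix) would then just re-apply that transport's idiosyncratic prefix to the normalized names.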
But then if you have payload-specific information, I would argue that you then have an attribute that you're promoting out of your payload, which is payload-specific, because that is then an extension. And I think what the Kafka binding ought to do is have an expression, effectively, that can define how you fish some element out of the event that you can then go and use as the event key. So basically in the subscription... so I think of these transports as effectively subscribers to the event flow. And you might have the situation, as I wrote in that comment, where you have a device and the device is publishing events over MQTT because that's the only protocol it knows. Those end up in a gateway, and that gateway then turns around, takes that event, and throws it down the Kafka pipe. At that point, the device is oblivious to the fact that you want to do that with it, right? It was shipped with MQTT support four years ago and might not know. And of course, it may have no notion of cloud events at that point. And then you need to have a mechanism, I think, a rule, to go and construct a partition key for Kafka, because Kafka always needs one, from the content of the message. And that might mean that you map from an existing attribute; that might mean that you go and do a query into the payload. But I think it'll effectively have to be a reference to an attribute that's in the message, without that attribute in the message being predestined to become a Kafka key. Like, from a concrete binding perspective, it's leaving you the ability to do whatever you want. But from a conceptual perspective, I think that the Kafka binding should basically refer to stuff in the message and then make that the key, rather than for us to have a preconceived notion of a key from the start of the event flow until it potentially, sometimes, hits the Kafka endpoint.
Yeah, I guess the aspect of this is, until we figure out 218, the issue Doug had opened about the event key field or property, we're not really sure what to do in the Kafka transport binding. I mean, there have been some proposals to append it to the source as some kind of URL parameter or something like that. But we, yeah. What do you mean? Something like, say the source is an IoT device, and then you've got an ampersand event key or device source or something like that that becomes the event key. So I think the other thing you can do... so the simple way of solving this is, of course, the Kafka binding can make an extension and say, here's the default key that we're picking up. I mean, it could default to source, but that's not gonna please everyone. There's always gonna be some kind of scenario where source is not sufficient. And see, that's what my concern is, because our Event Hubs works the same way, right? And we just released the Kafka binding for that as well. So we have the same problem. How you're choosing your partitions really also depends on the use case, right? You might want to create partitions based on event type. You might want to create partitions based on the source. I mean, there are all kinds of criteria by which you want to go and partition your event stream for processing. And having some kind of mechanism that allows you to say, in my implementation I basically select a criterion, effectively an expression, whatever that is, that then points into the cloud events message and does a transform, and that transform yields a key, is, I think, better, because you have more flexibility than if you have to go and put a key a priori into the message, even though the sender might not even know what your partitioning criterion is.
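[Editor's note] The "key as a configured transform, not a field in the event" idea argued for above can be sketched as follows. Everything here is hypothetical (`make_kafka_record` and the selector names are illustrative, not a real API): the Kafka-side binding is configured with a selector that derives the partition key from the event at send time, so the producer never has to know the partitioning criterion:

```python
# Sketch: the partition key is derived at the Kafka binding, not set by the
# producer. The binding is configured with a selector (a callable here, but it
# could be an expression, XPath, etc.) that maps an event to a key.
from typing import Callable

def make_kafka_record(event: dict, key_selector: Callable[[dict], str]) -> tuple:
    """Hypothetical binding step: compute the partition key as the event is sent."""
    return (key_selector(event), event)

# Different deployments pick different criteria without touching the producer:
by_source = lambda e: e["source"]
by_type = lambda e: e["type"]

event = {"id": "1", "source": "/sensors/42", "type": "com.example.reading"}
print(make_kafka_record(event, by_source)[0])  # partition by source
print(make_kafka_record(event, by_type)[0])   # partition by event type
```

A second-level concentrator, as in the MQTT-to-gateway-to-Kafka scenario above, would simply configure a different selector without the edge device ever knowing.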
Because there's always the case of, you know, the single device sits out on the edge, sends stuff in, and then you have, you know, a concentrator step, then a second-level concentrator step, and at that second level that's when you want to have some different partitioning that the device has no concept of. So we've got a couple people with their hands up and I want to get to them before we run out of time. So Christoph first, then Vlad. So go ahead, Christoph. Yeah, I agree with Clemens. I think it's somewhat similar to routing. So I'm mainly looking at it from a producer perspective. I don't know how my consumer will want to route the stuff. So what I want to do is give them options for what they can route on. It's the event type, it's my source, where I may be going to put a bit more data than is strictly necessary. I may introduce extra attributes. So then the consumer or the middleware can decide where and how it's going to route based on these attributes. And I think the same should be true for the partitioning. I'm just providing attributes, and then whoever sets up their middleware can decide what they want to partition on. For me as an event producer, it's really hard to pick the right partition key, and there's probably not one that will fit all use cases. So whichever I pick, I will make some people unhappy. Okay, Vlad, your hand's up next. Yeah, my comment is along the same lines. What about the producers that can't, for some reason, generate the partition key? Having this as a required field is not ideal. And I do believe this would be perfect as an extension. Like, if the partition key is there, use it. If not, we're gonna combine it in this way, or generate it by concatenating these three fields: source, event ID, and whatever. Okay, Tepini, I think you're next. Yeah, to continue from Clemens's comment, the producer cannot know the partitioning strategy for all of the consumers. It's just not possible.
So, what I actually wanna comment on is: does it make a difference whether it's an expression in the field that points to another field, or just duplicating that data from that field into the event key field? Why do you prefer a path expression or something? So can I jump in here just for a second? I wanna make sure that we're talking about this only within the context of the Kafka transport binding. I didn't wanna necessarily go into PR, or I'm sorry, issue 218 yet, unless it has to be solved for the Kafka transport. So let me ask that question, Neil. Because Kafka needs to have a key. Okay, that's what I was wondering. So Neil, is it fair then to say we cannot even think about merging your PR until we resolve 218? That's right. Okay, okay, in that case, let me continue. Sorry, I wanted to make sure of that though. Yeah, it was mainly just a clarification that the producer will not be setting that key anyway, because he can't know what the value should be. It will be the middleware or something else that actually does the partitioning and knows the strategy that it should be partitioned by. But I just wanted to ask Clemens, why do you think a path expression brings some advantage over just... if you want a reference, why are you not just copying the data into the event key extension field or whatever? No, you could, but it's a subscription operation. So basically the event exists, and now you have the Kafka transport provider, however the implementation happens. And now that thing needs to get configured to go and look at that event and pick out something from that event that then becomes the key. So I think the point is mostly: it's a matter of the middleware configuration, effectively, and really addressing the specific concerns of your Kafka deployment. How you want the downstream consumption model to look is how you want to go and select the key.
Oh, so you're saying it wouldn't be the event key field that has the path expression, but rather the configuration for your transport provider. So I don't think there's necessarily a path expression. There might be; I just said there is some kind of expression, some kind of mechanism, with which you can go pick something out of the event. The point I was trying to make is we can't assume that, a priori, there will be something key-like, or Kafka-specific-key-like, in an event that shows up at the client that goes and takes the event and then renders it onto Kafka. No, well, sure. There has been discussion just before, in the event key PR I think, about making it so the transport could require an extension such as an event key. But I do think it sounds very interesting to instead have that configured in the transport configuration rather than in the event. That is the first time, I think, we really talk about some idiosyncratic configuration needs of the transport, but I think that's the case here. Like, we really need to write it down, and this is a little weird because we don't prescribe a particular format, but we would probably prescribe a particular mechanism, or in this case describe a mechanism, that basically says: you need to have a key, and there will be cloud events messages showing up here, and how you derive that key doesn't matter, but the key really needs to be there. I would have to think about how to express it in a normative way, but that's ultimately what it is. The key needs to appear for it to be put into Kafka, but the implementation of the Kafka transport needs to construct that from the message. And it might use an extension, it might use existing data that's in the message; that's really up to the spec, and I also think it's up to the spec how specific it wants to be.
Okay, and with that I'm gonna have to call time because we're at the top of the hour. But it does sound like we're in agreement that we can't think about merging the Kafka transport until we resolve 218. So please discuss 218 in the PR itself, especially since we're not gonna be meeting for the next two weeks. So let's see if we can get that one resolved, and then Neil can finish up the Kafka transport binding. And with that, let me go back and just quickly do the last roll call for people who need to jump off quickly. So Fabio, are you still there? Yeah, I'm here. Okay, Brian, are you there? Yep, I'm here. Okay, David Lyle? Yes, I'm here. Okay, cool. Ehor, I didn't see you here. Yeah, Ehor, you left, right? Yeah, I think you left. Colin, I saw through the chat. Christoph, I heard. Klaus, are you there? Yes, I'm here. Excellent. Are you there, Renato? I think you spoke earlier, right? Renato, are you there? And what about Siraj, S-I-R-A-J? Are you there? Yep, I'm here. Okay, and what about Renato or Ehor? Okay, is there anybody I missed for attendance purposes? Vladimir Bekvansky? Vladimir, okay, thank you. Thank you. Anybody else? All right, cool. In that case, thank you guys very much and we'll talk in two or three weeks, I guess. And I'll see some of you in Shanghai. Have a good one. Bye, everybody. Thanks, guys. Clemens. Yes? Can you write down even just a short description of what we were talking about in the issue, since you actually have the idea and you clearly have a clearer idea of what it is? Sorry, which one? Which issue? We're talking about... The 218, the event key issue, about how it might not be an extension in the actual... it might not be an extension attribute, but rather a configuration of the transport. That's a very...
Yeah, to put an ugly picture in your head, think of it as an XPath expression, or some regex, or something of that sort, or literally a function that you configure into your Kafka transport, so that when it sees an incoming cloud events message it can run that transform over the message and get the key out. Oh, sure, sure. I get the idea. Callback is what I obviously jumped to first because I come from JavaScript, but just to have that idea in the issue, because that has not been discussed before. It's always been discussed as an extension attribute. Yeah. Yes, okay. Let me note that down. Because I think that actually makes much more sense, because then it eliminates the need for the middleware to modify the event to get the event key in. Yeah, correct. It's part of the transport configuration. It's an adapter or binding configuration element, if you will. And what we're describing is the transport. No, the transport is an abstraction, but we're really being prescriptive about how a piece of code ought to take this abstract notion of a cloud event, which will be backed by code eventually, and then transform it into a wire projection. That's what the transport bindings do: they basically prescribe how all that software ought to operate. And we now make an extra prescription here that says you also need to have a way to go and harvest information from the input event to effectively synthesize that key. And that might be an extension, however that might be, but it's an obligation of the Kafka transport implementation to go and create that. Oh, sure. The reason I was looking for an extension attribute is that I think it should be surfaced on the consumer side, and the most obvious place to do that would be an extension. But the Kafka consumer kind of gets that key anyway, but once you know...
Once you pop out of Kafka, it again probably doesn't matter. Oh, sure. But yeah. But if I have a consumer using the Kafka transport, I would like to know... Yeah, but you already have that; you're walking in with the key and you have the key there, something that is on the message frame itself, right? Right. But if I use an SDK that transforms the... that just gives me a cloud event. Well, then that should probably... If we support extensions, if we support transport portability, then we better have a way to kind of reach back into whatever the underlying transport is so we can go and find out details, because in MQTT... Oh, sure. So it's like... You always need to kind of tunnel back into the original context to get stuff that you can't get out of our abstraction. Oh, sure. There could be an accessor next to the actual attribute accessor or data accessor to get at that transport-specific data. Yeah, that was some great, great conversation. Thank you. Thank you. All right. So, Clemens, you'll add some more commentary to the... I'll find that right now, because, oh, I don't think I have... Excellent. All right. Thank you guys very much. Talk later. Thanks.