Clemens, how's it going? Can you hear me? Clemens? Can you hear me now? Yes, I can hear you now. Okay. I was having a chat with my microphone earlier in the week. I have failed to upload the document even though I've been editing it for a while; I just missed the deadline. There's just too much going on. Okay. So I'm already working in Markdown on that subscription document, so I'm going to go and upload this, probably not today, but tomorrow morning. Sounds good to me. Hey, Scott. From quarantine. I'm sorry. I hope you're good. Did they actually tell you to stay home, or did they just say it's your choice? No. Well, there better be a really good reason to come to the office or be within six feet of anybody. Interesting. Okay. My world hasn't changed, but it feels more like a bunker now. Fewer cookies. I went to the Sounders game. It's a big sporting event here. Well, soccer game. And then the next game is canceled because we broke the rules by congregating with 30,000 people. Oh, geez. Yeah. I went to a football game on Sunday. That still happened. And yesterday we had our big local derby against Cologne, and that was without spectators, which was weird. So we all watched, but I had my stadium friends and watched on TV with them here. And now the discussion is whether they're going to cancel the entire league. Italy already did, and Spain did. And Italy and Spain without football, I mean, that's a real state of emergency. And, you know, Italy has all the restaurants shut down, and all the stores and everything. It's quite grave. Yeah. Hey, John. Hey, Tommy. Hey. Wait a minute. Was that Tommy? Did you actually speak? Yeah. Oh my gosh. That's like a first. I must have caught you off guard there for a minute. You weren't thinking straight, were you? Yeah. There you go. Okay. I'm sorry to pick on you, but it's just funny. Hey, Eric. Hello, Doug.
You know, they're extending my son's spring break vacation from college, and they're going to start doing classes online, which means he's at home now, which isn't, you know, a bad thing unto itself, other than he keeps really, really weird hours. And he's a boy, so he eats a lot. So he's up at, like, three or four in the morning in the kitchen cooking stuff. And he's a boy, so he's loud. So I'm getting very little sleep these days, and it's very annoying. I can't wait for this whole thing to be over. Hey, Ginger. Good morning, Doug. Good morning. Hey, Kathy. Welcome. It's been a while. Oh, hi. Good morning. Sorry, I was on mute. No, no problem. Morning, Jeff. Hey, good morning. Related to the previous topic, I think Doug will like this. You know, the company's been doing off-sites, but because of budgeting reasons, off-sites have usually been on-site, but they're still called off-sites because, like, maybe you travel to a different office. And then those got canceled and they're going to be virtual. So now they are virtual off-sites. It was going to be an on-site, but now it's virtual, but it's still called an off-site: a virtual off-site. It's just amazing. And even if they had held an on-site, it still would have turned into a virtual meeting. So it's just another meeting. Yeah, it's hilarious. It's just one conference call all day. Oh, man. All right, let's see who we have on now. Klaus, are you there? Yes, I'm here. Hello, and Nacho. Whoa, good God. Hold on a minute. My screen is getting all weird. Yes. Yeah, Nacho, if I can spell it right. I apologize. Vinay, are you there? Yes, I'm here. And Mr. Grant. All right, first time on the meeting. Okay, yeah. All right, cool. All right. We usually wait until three after the hour, so give it a minute or two. Thank you, Grant. James, are you there? I am here. Hey, this isn't your first call, is it? Yeah, it's the first one of these I've made.
I actually signed up for the wrong call initially, so I only just figured that out, honestly. Okay. Do me a favor. Let me paste the link to the minutes. You can just put your company name. Here's the link to the minutes. I want to be clear. I'm not officially representing my company. I'm here more, as we talked about, around the book, but... Okay, in that case, I'll just say yourself. That's fine. That's fine. Okay, cool. All right, it is three after. Let me see if I missed anybody, then we'll get started. Okay, I got everybody. Okay, let's go ahead and jump into it. Okay, community time. Anything from the community people want to bring up that's not on the agenda? Okay, I'm not hearing any. SDK, Scott or Clemens or anybody else who made the call last week, was there anything worth bringing up? Okay, not hearing anything. That was in the middle of getting refactored to V2. We're not quite done yet. Okay. Yeah, I think one of the things we discussed was also how long we want to keep all the old versions around. Yeah, we're going to do a special provision for 0.3 for new SDKs. So basically, 0.3 is the one that we still support, and then testing the other ones. Yeah, we'll talk about that a little bit later. This PR down here, we talked about that a little, and I did update that based upon our discussions. So we can talk about that again down there. Okay, any questions for the SDK team? Okay, Kathy, since you're on, is there anything you'd like to update the group on relative to the workflow stuff? Oh, yeah, sorry. No, not really. Sorry, I haven't really had much time to work on that. Okay, that's fine. Just giving you the opportunity. Thank you. If I jump back up to the SDKs, I put a link in for the SDK subgroup next week. I have an Elixir SDK in the works that I would like folks to take a look at. Cool. Wait a minute. Actually, I'll do it later. Yeah, I'll add a link to that information in here, just so it draws people's attention to it.
I can drop it in. Oh, cool. That'd be even better. Thank you. In that case, let's go ahead and jump into the delivery stuff. Tell you what, I think yours actually might be shorter. Let's have you go first and then we'll jump over to Mike's PR. So, yeah, my story is shorter. What we did this week is we actually didn't work on this document at all. Basically, after our previous call, I took the text that's here and put it into a Word doc as an interim step to preserve the formatting. And then, meanwhile, I have this down in Markdown. I've done some cleanup on terminology, and I'm turning this more into spec language, but I have not been editing like crazy. With everything that's going on, I've just not been able to complete the homework, so I'm sorry. I aim to upload that Markdown document as a PR tomorrow. One of the key things we did in terms of clarifications is, since pull and push were contested, and I'm buying some of the arguments, I'm trying to distinguish between effectively who's initiating the communication, and really, what is the delivery model? So you'll see the terms subscriptions with consumer-solicited delivery and subscriptions with subscription-manager-initiated delivery. Those are effectively the differences between pull and push, to clarify what I mean there. And otherwise, the spec will go as far as to define, effectively, what the subscription manager API looks like in the abstract, but likely will not yet go down to the HTTP-level definitions. It will contain what we discussed in previous calls, like the simple filter syntax, etc. But again, that's as much as I have. Go and take a look at the PR sometime tomorrow and you'll find a link to the document. Sounds good. Looking forward to it. Thank you.
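For readers following along, here is a minimal sketch of the distinction Clemens describes: an abstract subscription manager with the two delivery modes and a simple exact-match attribute filter. All names here (`SubscriptionManager`, `DeliveryMode`, the filter shape) are illustrative assumptions, not the actual API being drafted in the spec:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional


class DeliveryMode(Enum):
    # "Pull": the consumer solicits delivery from the subscription manager.
    CONSUMER_SOLICITED = "consumer-solicited"
    # "Push": the subscription manager initiates delivery to the consumer's sink.
    MANAGER_INITIATED = "subscription-manager-initiated"


@dataclass
class Subscription:
    id: str
    sink: str  # where events are delivered (push) or fetched from (pull)
    mode: DeliveryMode
    # Simple exact-match filter over context attributes,
    # e.g. {"type": "com.example.object.created"}.
    filter: Dict[str, str] = field(default_factory=dict)


class SubscriptionManager:
    """Abstract subscription manager: create, list, and delete subscriptions."""

    def __init__(self) -> None:
        self._subs: Dict[str, Subscription] = {}
        self._count = 0

    def create(self, sink: str, mode: DeliveryMode,
               filter: Optional[Dict[str, str]] = None) -> Subscription:
        self._count += 1
        sub = Subscription(id="sub-%d" % self._count, sink=sink,
                           mode=mode, filter=filter or {})
        self._subs[sub.id] = sub
        return sub

    def list(self) -> List[Subscription]:
        return list(self._subs.values())

    def delete(self, sub_id: str) -> None:
        self._subs.pop(sub_id, None)

    @staticmethod
    def matches(sub: Subscription, attrs: Dict[str, str]) -> bool:
        # An event matches when every filter key equals the event's attribute.
        return all(attrs.get(k) == v for k, v in sub.filter.items())
```

Whether delivery then happens as a long poll by the consumer or an HTTP POST by the manager is exactly the consumer-solicited versus manager-initiated split; the filter check stands in for the simple filter syntax mentioned on the call, reduced to exact matching.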
Any questions or comments for Clemens? All right. Not hearing any. Let me bring up Mike's PR so you can talk to that. I apologize in advance for not having much to say. I have not had a chance to work through all of the comments yet. It has been a crazy week. Short and sweet. Okay. Any questions for Mike? Those of you who don't know, Seattle is in a panic right now. I don't know why. Yeah. Any questions or comments? Okay. So just out of curiosity, and this question isn't actually just for you, Mike, it's for anybody, but just using your PR as sort of a guinea pig here: at what point do you guys feel like we should just merge this PR? Obviously, most of the comments are addressed, but is there any minimum bar you guys have in mind in terms of when we can actually merge the PR, or is it pretty much, once all the initial comments are addressed, you can just merge it at that point? We'll just do PRs against it after that, because it's like, I need a doc in the repo. Do you guys have any opinion on that, just from a process perspective? I just want to make sure we're not missing some step here that people think we needed to go through before we actually merge it. I think of this as the input spec, and then we start refining it. So, I mean, we need to land it at some point. And to be able to comment effectively, we need to have a base doc sitting in the repo. So I don't think we need to wait; we can review on there. And it's a draft. And, I mean, this is the same process as iterating over our base spec. That's the way I look at it. Yeah, I think I tend to agree. The only reason I ask the question is sometimes it's easier to work outside the PR process if you have large-scale changes you want to make, which is why we started, you know, with a Google doc to begin with and stuff like that.
I just didn't want to preemptively merge this thing if somebody thought from a process perspective we should hold off a little, but I'm not hearing anybody really jump up and object. Yeah, it's fairly hard to collaborate if it's not merged. I think if the original group of people who were in the sub-working groups are saying this is ready, then it should go in. The consensus I'm trying to get for our group is that everybody nods to "this is our result," and then we go merge it. So the document I'm going to give you all tomorrow is not that yet. I'm still going to go and collect feedback from the group, but then I think we're ready to go at that point. Okay. Anybody else have any comments on that? All right. Yeah, I think on this one, I'll make sure to note at the top that this is a working draft of the document. Oh, good point. Actually, I think in the past we used to say... well, it already says that, under "Status of this document." Oh, yeah. I was actually wondering about up here, whether this is like version 0.1-RC or something like that, because I think that's the pattern we used to use, because it's not technically a 0.1 yet, right? Yeah, it's like a pre-0.1. Yeah, that's it. And I might put something in there like that. Other than that, it looks good. Any other questions or comments? Okay, easy enough. And thank you, Mike, for the link to the SDK. Moving forward. Okay, here we go. I can't remember for sure, other than I'm pretty sure, Clemens, you said you were going to take a look at this PR to make sure there wasn't anything funky in here. Did you get a chance to do that? Or did anybody else look at it? Any comments? I think I, yes, I think I promised, but I didn't.
Okay, well, I'll give you guys, in particular Clemens, a chance to double-check. And just to refresh people's memory, the MUSTs in here are technically breaking changes, but the assumption from last week's call was that that's what we meant to do. It's just more of a syntactical thing at this point, because that was always the intention for these requirements to be there. Yes. Yeah, I remember now. Yeah, I think this is okay. And I think that's where we were, yes, also last time. Yes, I just want to give people a chance to look it over. Okay, any other questions or comments on this one? Any objections to approving? All right. Thank you, everybody. Thank you. Okay, this one. Well, this one is not ready to merge, because if you like the general direction, I need to make similar changes to the other transport bindings. Hold on a sec. Let me hide the comments. So just to refresh people's memory, there was a lack of clarity in our specification in terms of knowing whether a binary message is a cloud event or not. For structured, it's obviously very easy: you can look at the content type. But for binary, it wasn't clear. So this text in here basically says that if all the required attributes, and I believe there are four of them, appear as HTTP headers, then you can kind of assume it's a cloud event. Now, based upon slinkydeveloper's comments, he had me change the wording slightly to make it clear what that means, to think it's a cloud event. Does that mean it is a cloud event, or can you just start parsing it, hoping it's a cloud event, or whatever? So what I did is I modified the text here. Where is it? Oh, yeah, it's up here. I talk about whether the message ought to be attempted to be parsed as a cloud event. And then down here, I talk about how, you know, if all four attributes are there, then you can start that process.
However, just because those four attributes are there does not mean it's technically a valid cloud event. It still has to adhere to all the normative language in the specification. So, for example, there are MUSTs in there relative to values and stuff like that. If the message doesn't meet those MUSTs, even if it does have the attributes, it's still not a valid cloud event. I don't go into what the implementation does if it's invalid; that's completely up to it, because we stay out of the processing model. But I think this at least clarifies whether someone should attempt to parse it, by the presence or absence of those four attributes. So I have a question here. We are referring to the attributes in the HTTP headers to increase the probability that we can determine it's a cloud event. The problem that I'm struggling with, and please help me if I'm mistaken, is it still doesn't give us confidence that it is a cloud event. So what is it actually helping? Does it clarify anything, or is it just confusing things a little bit more? I mean, does that make sense? We're saying then it's probably a cloud event. Why are we not able to unambiguously determine that if those attributes do exist, it is a cloud event? In my mind, there are two reasons why we can't say for sure. One is our specification cannot mandate that people do not use our attributes randomly. So someone could say, hey, those look like interesting attributes and I'm going to stick them in my message, but they don't actually know about our specification. So the presence of those attributes technically doesn't mean anything, because the message doesn't adhere to the full-blown spec. We can't force people not to use our attributes, so their presence in a message does not guarantee anything. It's just a hint that it may actually be a cloud event.
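As a rough sketch of the "hint, not a guarantee" heuristic being discussed, assuming the four REQUIRED context attributes map to the `ce-id`, `ce-source`, `ce-type`, and `ce-specversion` headers in HTTP binary mode (the function name is made up for illustration):

```python
# The four REQUIRED CloudEvents context attributes as binary-mode HTTP headers.
REQUIRED_CE_HEADERS = {"ce-id", "ce-source", "ce-type", "ce-specversion"}


def worth_parsing_as_cloudevent(headers):
    """Return True if a binary-mode HTTP message is worth *attempting* to
    parse as a CloudEvent.

    This is only a filtering hint: even with all four headers present, the
    message may still violate the spec's normative MUSTs (for example, an
    invalid attribute value) and therefore not be a valid CloudEvent.
    """
    names = {name.lower() for name in headers}  # HTTP headers are case-insensitive
    return REQUIRED_CE_HEADERS.issubset(names)
```

If the check fails, a receiver can skip CloudEvents parsing entirely; if it passes, full validation against the normative language still has to happen.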
Now, even if the person meant for it to be a cloud event, if they messed up and didn't adhere to the rest of the specification, then according to the spec, because they didn't adhere to all the MUSTs and so on, it is not a valid cloud event. And all we're trying to say here is we can't be definitive one way or the other, but we're trying to provide some guidance on what people should look for to see whether it actually meets the criteria. So, for example, if it doesn't have those attributes in there, then they shouldn't even bother trying to parse it as a cloud event, because it's not going to pass. Correct. So it's more of a filtering mechanism. Yeah, kind of. And that's why, if you look at it, technically there's no normative language in here, right? This is just sort of abstract guidance. And in a lot of ways, I actually thought about putting this into the primer. But I thought this might be important enough that it should go into the spec, because people have asked me this question more than once. Thanks, Tim. Any other questions or comments? Does this seem like the right direction to go? If so, I can make the changes to the other transport specs as appropriate. Okay, not hearing any objection. I'll make the changes to the other specs, and then we can maybe review it and approve it next week. Okay. Thank you, everybody. Okay, this one. Technically, it's an SDK thing, but I wanted to give everybody else a chance to review it. Now, on last week's call, we were talking about the requirements, and these aren't hard requirements, right? These are pretty much just suggestions, but we're trying to talk about what the expectations are for SDK authors. And last time we said everybody obviously should support the latest and n minus one of the major releases.
Within a major release, they only need to support the latest minor version. However, we do have sort of a bootstrap problem: what do you do when the latest version is 1.0? What is the n minus one at that point? So, according to the agreement we talked about last week, I added a note here that says 1.0 is a special case, and in that case, people should support 0.3. So, Scott and Clemens, is that consistent with what I think we agreed to last week? That's my understanding, yes. Yeah. Okay. Any questions or comments on this? Oh, yes, John. That's a should and not a must, right? Technically it's not a normative spec, so yes. Either way, yes. That's right. I'm sorry, John, I couldn't hear you. Can you say that again? I was actually going to point out the same thing. Oh, okay. For people starting a new SDK right now, it's problematic, right? They'd have to go backwards to support 0.3. So yeah, it should just be a should, not a must. Yes. Yep. And I have a vague recollection that we did talk a little bit about that last week as well. But just keep in mind this isn't a normative spec; that's why it's a lowercase should and can instead of a capital SHOULD. These are all just suggestions. So, okay. Any objection to approving that? All right. Thank you. All right. So I put this on the list, even though Clemens raised an issue with it today. Some Googlers are working on, excuse me, a Pub/Sub binding for their offering. And Clemens, you pointed out, hold on, where is it? Where did you say that, Clemens? Somewhere in there. Yeah, I don't know. I saw it. You just had it. It's painful. What Clemens said. There we go. Okay. Yes. So you want to raise the issue here? I don't think there's any objection, but I just hope you understand why things may change radically.
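Looping back to the SDK note just approved, the suggestion (support the latest release plus the previous major, with 1.x special-cased so that the "minus one" is 0.3) could be sketched as follows; the function name and the "`N.x`" placeholder notation are purely illustrative, and this is a lowercase-should suggestion, not a normative rule:

```python
def suggested_spec_versions(latest: str) -> list:
    """Suggest which spec versions an SDK supports: the latest release plus
    the previous major. 1.x is special-cased because there is no earlier
    major series to count back to, so 0.3 stands in as the "n minus one"."""
    major = int(latest.split(".", 1)[0])
    if major == 1:
        # Bootstrap special case agreed on the call.
        return [latest, "0.3"]
    # Otherwise: the latest minor of the previous major (placeholder "N.x").
    return [latest, "%d.x" % (major - 1)]
```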
"Qualifying protocols and encodings" is the section that we wrote together about what is acceptable to get official binding status and what is not. Google Cloud Pub/Sub is a proprietary product, and therefore we have a document, the proprietary-specs links document, that takes links. So this spec should be hosted in a Google repository and then pointed to, as part of the rules. Yes. Thank you, Clemens, for pointing that out. It was my bad; I didn't know about the rule. I'm trying to move this to a repo and then add the link here. But actually, I'm glad that I made the mistake, because I got some really good comments from Crystal that I will probably address, and then add the link as well. I'm totally not against having that. It's great that we have, you know, as much coverage as possible. I just want to point out that rule, and nothing bad has happened. It's all good. Yes. But yeah, you guys are right. I didn't know about the rule, and we'll move it out. Yes. So, great. Thank you. Okay. So expect a fairly radical change to the PR. But obviously, if you guys want to review it and provide feedback, as Nacho said, more feedback appears to be welcome. So get it in there quickly before the text disappears. All right. And one question there regarding timelines. When we add this, this might be approved next week. Do we need to wait for a week until the next working group meeting for this link to be approved? Technically, yes. Technically, we only approve PRs during the weekly call unless they're, you know, typographical-type things. If you can get the change in there by Tuesday night, then people should be able to review it and approve it by Thursday. Okay. Sounds good. Thank you. And Jim, your hand was up there for a second. Did you want to say something? I retracted my thought.
I mean, I was going to say, given what's going on with this proposal here, is there anything we can do or say around, you know, if you're looking to create a transport binding, but it is a proprietary thing, feel free to run it by us before you just sort of throw it out there in the world. I mean, if the intention here is that this PR would turn into essentially just a link to another spec, is there any sort of onus on us to make sure that whatever that link is pointing at actually works? That's really what I was driving at. Yeah. The link to a third party was always best intention, right? So if the spec is correct according to that vendor, then that's fine. Yeah, I mean, that was my thought. I just wanted to raise it, but that was why I lowered my hand again. Didn't we either talk about or put text someplace that said it's within our rights to remove things if we think they're pointing to old or bad information or something like that? Yeah, I think we had a provision to delete links, but not really the spec that's linked. Yeah. But I think, Jim, you raised an interesting question, which is, you know, how does somebody just ask for our review, even though it's not necessary? I'm trying to think; the most obvious thing is maybe they can raise an issue just to draw our attention to it. And then, once they get a couple of reviews, they can close the issue. But I'm wondering more about whether we should add some text someplace, maybe in this doc right here, to explain that people have the option to open an issue to draw our attention to it, or to join our weekly call to draw our attention to it, or something like that. Do you think that would be helpful, Jim?
I think there needs to be something front and center somewhere that says, you know, if you're doing a proprietary binding, this is what it should do: look at these, take them as a template, and then open a PR on the page we're looking at to cross-reference stuff. Yeah. Because I think that's where maybe your SDK people are going to get interested, as other bindings start showing up. So, yeah, I'm not quite sure where you put it. That's what I'm struggling with. In the theme of "no good deed goes unpunished," would you like to take a first pass at a PR to add some text to that effect? Yes. That'll teach you to speak up. I appreciate that. Thank you, Jim. And I knew I'd be dropping off in a minute, so you won't get any more bizarre questions. That was a good one. I like it, because the more guidance we can provide, the better. Okay. Cool. All right. Next on the list, and slinkydeveloper has joined us. So, we have this issue that he opened, and then you made a comment down here. Would you like to talk about what you're proposing we do with this issue? Yeah. So, the thing is that the paragraph about key just points to the partitioning extension, which honestly is just confusing, because key is a really well-defined concept in the Kafka message specification. Just saying the key extension must match the Kafka message key is far more clear than saying there is a partition key extractor that extracts things and puts them inside a key, but then you open the partitioning extension and it's called the partitionkey extension. So, I think it would be better to just remove the link to the partitioning extension and say, hey, when you create a cloud event that goes on the Kafka wire, then you need to put the Kafka message key as an extension named key.
To me, it sounds far more clear this way than going through the partitioning extension and the partition key extractor. Jim, did you want to say something? I must admit I haven't read the full text here, so I'm not entirely sure what the problem is. The partitioning of cloud events is agnostic to transports. But I think when that extension was added, it was there to enable a Kafka transport binding to take advantage of it if it was present. Yeah. So, I'm not quite sure why we would want to remove that linkage, because I should be able to produce a cloud event and define a potential partitioning key irrespective of the transport it goes over. And in fact, it may go over transports that don't do partitioning, but we wanted that to be retained across multiple transports. So, that's where it gets interesting. If you publish an event onto Kafka and it goes through an intermediary and gets moved to a different transport, or vice versa, you need that partitioning concept at the cloud event level. There might actually be some conceptual miswording here in the Kafka binding, and that might lead to that confusion. The partition key or partition ID doesn't actually show up in the message per se; it's the partition that you talk to in Kafka. You're talking to the partition directly, so that's a connection property, ultimately. And we're pulling that. So, this function here is effectively pulling that information out of the message and then making it something that's related to the connection. So, it's not a key per se; it's really, for Kafka, how can you tell which partition to send to? And that is by looking at the partitioning extension by default, but it could also be any other key. That's what the mechanism is meant to do. I'm a little confused by that, because I would expect that the partition key would have the Kafka partition-selection algorithm applied to it in order to select the partition.
Especially since partitions can move around and such. So, maybe you can help me get unconfused. And I think that's what my comment addresses; I mean, that's what I say in this issue. When you say partition key to a Kafka user, is it the key of the message or is it the partition ID? It's not really clear from the beginning. And also, there is the discrepancy that this spec states key, while the partitioning extension says partitionkey. Yeah. I agree that that's confusing. I don't have a solution for it; it's just that it is confusing. The way we discussed this, and we probably didn't follow through and update this correctly, is that the key extractor idea basically just stems from having to effectively assign a message to a partition. There is a function that does that job, and depending on what the transport is, Kafka being one, there might be a hash function that you use for this, or you may want to go and specify the partition directly. So effectively the partition assignment is kind of flexible, and the partition itself typically isn't manifested in the event per se. We tried to decouple those things. What we didn't want to do is write the partition ID directly into the event, also because of what happens if you forward the event into a downstream Kafka broker. Say you use MirrorMaker, for instance, and you go from a four-partition to a sixteen-partition broker. If you use a key function, the partitions will distribute differently. And if you use a stable partition number, then if you go from the four-partition to the sixteen-partition Kafka install, you would basically just send from four partitions to four partitions, and you would have twelve partitions that are idle.
So the idea was to have a way to dynamically, in the client, determine from the partition key what the partition number would be. And of course, the key attribute here kind of short-circuits that; it's probably not the proper resolution in the Kafka stack, based on the discussion that we've been having. So I think that's where some of the confusion comes from. And I would think that we've had some of those discussions in the PRs, so we probably have to go and dig back into the history and find that. So I had a comment about this. Is the problem the implementation of the function that populates that attribute, or is it the naming of the attribute? It sounds like those should be separated, right? So, if it's Kafka downstream, then the appropriate partition key function is factored in, which has knowledge of the number of partitions and what the best implementation is; the attribute itself and how it gets populated are two separate things. Is that a fair comment? Yes, it is. I think we've been a little lazy and hand-wavy on this point, where we basically left this with this function. And I think we thought we had solved that with the function: the function in this transport binding takes, by default, this partition key. That's what this key extractor function is. It goes and determines the partition; the partition ID only shows up in your usage of the Kafka API, or a little bit further down in the Kafka protocol, but doesn't materialize in the message per se. The key attribute here, I think that's a leftover that we didn't drop as we were having that discussion, or we didn't clarify it here. So, yeah, ultimately, let me back out once.
The idea is you should be able to specify some criterion in the message that will then determine what your partition number is, without that key or the partition ID having to be manifested in the message per se, because the mapping of the message to a partition may be different as you're routing that event through multiple kinds of infrastructures that are partitioned. Correct. Thanks. Yeah. Actually, can I maybe suggest that we take a concrete example and then say what that key would be? So I think we're all in agreement that the key should be, I don't know, implementation-agnostic, but from the key, we want to extract some data that indicates how it should be used by this extraction function to determine downstream choices, right? Cool. Yeah. So make that concrete. If I wanted to say that my partition criterion is actually the subject that's in the event, I should be able to do so. So let's say I have events coming in where the source is a device of sorts that emits all kinds of telemetry, and the subject is temperature, or another subject is, whatever, rotations per minute; different criteria, all in the subject. I should then be able to have a function that assigns those to partitions. That function might be knowledgeable of the number of partitions in the Kafka implementation that's immediately in front of me. Let's say that has four. So the function is some kind of hash over the subject, mod four. That's how I get the partition ID. Now I pull that stream out, take all the data that comes out of that Kafka cluster, and I want to forward that to a different Kafka cluster, but that one has 16 partitions.
So I should again run a function that is some hash over the subject, mod 16, and I get a different set of partitions. That's what I mean. Correct. No, that's perfect. So now, where should the responsibility for the knowledge of that particular Kafka cluster be? I think that's the point here, right? I think we all agree it must be in the forwarding client, in the implementation that is interacting with the cluster and implementing this transport binding. The way we thought about this, as we have evolved these things, is that we get into scenarios where we're actually forwarding all this data. The reason forwarding was a point of discussion this whole time is that industrial scenarios, for instance, have a device that emits telemetry, and then the telemetry goes to dozens of parties who might be interested in that information for all kinds of different reasons. So there are those forwarding scenarios, and in those forwarding scenarios we've talked about transcoding scenarios, et cetera. So we always assume that you're taking a cloud event off a transport, the cloud event goes back into its abstract form, then it gets mapped down to a transport and goes through that leg of its route; then you pull it off that transport, it goes back into its abstract form, and you choose a different transport. So here, effectively, the CloudEvents Kafka client that is connected to a concrete cluster will know how many partitions exist, and it will calculate the appropriate partition for that cluster. And somewhere else on the route, another client may make a very different partition decision. Yeah, sorry. One sentence.
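The hash-mod scheme described here could be sketched roughly as follows. This is a minimal illustration, not the spec's or SDK's implementation; the function name and the choice of SHA-256 are assumptions made for the example.

```python
import hashlib

def partition_for(subject: str, num_partitions: int) -> int:
    """Derive a partition ID from the event's subject: a stable
    hash over the subject, mod the cluster's partition count."""
    digest = hashlib.sha256(subject.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# The same event maps to a partition number local to each cluster
# on its route, so the number cannot live in the message itself:
p_first = partition_for("temperature", 4)    # cluster in front of me: 4 partitions
p_second = partition_for("temperature", 16)  # forwarded-to cluster: 16 partitions
```

Because the hash is stable, every event with subject "temperature" lands on the same partition within one cluster, while the concrete partition number differs per cluster.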
And that is why the partition number, the concrete partition hint, cannot be in the message: because the decisions along the route will be different. Correct. I completely agree with you. When we talk in the abstract it's hard, at least for me, to wrap my head around it, but what you just said was perfect. What that meant to me was: in the forwarding client, let's say you had four Kafka clusters, there's a config which, when the rubber hits the road, tells it that it has four brokers, or 12, or 13, and this is the extraction function. It knows how to take a generic key, and when it determines what the routing needs to be, it knows how to appropriately choose the hashing function and apply it. So coming back to our original point: in my understanding, we should not bring in a partition key attribute, but just keep it generic as a key. Is that fair? I think what we need to do is make a correction here and basically make clear, because it's not clear now, that the key attribute really means what partition we're sending to. We need to reword it; I believe we need to do a bug fix here, which says that the partition ID, the Kafka partition ID, is determined through a partition key extractor function. We might also call that function something different. And it is independent of the name of that attribute in the generic message. Yes. So we need to do a bug fix here. I can take this as a homework item and propose a correction.
So, Clemens, I've got a question for you, because I'm not 100% sure I'm following this, but I think I understood what you just said there at the end. It's not clear to me whether something actually appears in the message. So nothing appears in the message relative to keys or partition keys or anything like that? Correct. And I think that's what the bug is. We're still talking about a key attribute here, but we actually talked ourselves out of it while we were discussing this whole thing, and then we left it in. But does that mean we should propose to kill off this extension right here? No, no. What the partitioning extension gives you is a partition key. And the partition key is for when you can't derive from the message content itself what partition it ought to belong to; then you need some artificial criterion by which you can order it. So the partition key is effectively an artificial correlation key of some sort. So you're saying this is almost like a backup kind of thing? Well, it's for when you can't determine only through source and type and subject how you want to process it, because that's ultimately what partitioning does, right? Partitioning is slicing up your event stream in a way that the processor, or processors, can deal with it, which means the publisher may give hints, in the form of a partition key, for how those events shall be mapped to partitions. And the partition key might quite well be a totally artificial assignment that the application logic generates in some way to group events.
And those groups of events may be groups of, say, devices: you have a thousand devices, which are all different sources sending different events, but you're grouping them together, like a production lot, so they all emit the same partition key because you want to keep them together. That's what the partition key is generally for. And then the job of the function that we're referring to here is to look at either something in the metadata of the message, or at that partition key, and effectively compute a partition ID out of it. In the simplest case, the partition key may simply be a number from one to four, and the function may simply take it, copy it, and set that as the partition ID for the Kafka client. Okay. Francisco, did you want to say something? Yeah, so judging from what you're saying, what you describe here is more a client and SDK behavior than something that I should put inside the event when I send it and when I receive it. Yeah, that's exactly right. Because this is completely not clear. That's exactly it, because of the way those clients work. The Kafka record itself doesn't even have that information, and neither does our event. The partition information is something that is outside of the event. It's really only relevant for the senders and the receivers, how they get the data in and out. But it becomes completely irrelevant once you're forwarding that event elsewhere. So it's really just a local function for a local concern, for that particular Kafka cluster. Okay. Can I just ask one last thing?
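An extractor along these lines could look like the sketch below. The `partitionkey` attribute is the one the partitioning extension defines; the function name, the fallback to `subject`, and the hash choice are illustrative assumptions, not spec behavior.

```python
import hashlib

def extract_partition_id(event: dict, num_partitions: int) -> int:
    """Compute a partition ID local to one Kafka cluster.

    Illustrative preference order: an explicit 'partitionkey'
    extension attribute if present, else the event's 'subject'.
    The result is never written back into the event."""
    key = event.get("partitionkey") or event.get("subject", "")
    # Simplest case from the discussion: the partition key is
    # already a number, so just copy it (mod the partition count).
    if isinstance(key, str) and key.isdigit():
        return int(key) % num_partitions
    digest = hashlib.sha256(str(key).encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

event = {
    "specversion": "1.0",
    "type": "com.example.telemetry",
    "source": "/devices/lot-42/device-7",
    "subject": "temperature",
    "partitionkey": "lot-42",  # artificial grouping key shared by the whole lot
}
pid = extract_partition_id(event, 4)  # a number in 0..3 for this cluster
```

All thousand devices in the lot carry the same `partitionkey`, so they land on the same partition of this cluster, while the computed number itself stays outside the event.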
So after you work on rewording this paragraph, can you please check the implementation that we did in the SDK code, so we can double-check how the spec maps to the actual implementation? Yeah, I'll do my best; I'll look at the code and make sure that whatever I write, the code will be compliant with it. Yeah, I mean, if the code needs to be changed, that's fine with me, it's not a problem. I just want us to be aligned on this. All right. Okay. So Kathy has a question. Yeah. Okay, so I just want to make sure the key defined here maps to the key in Kafka, which the producer can put into its message, and then the Kafka broker can use that key to map the message to a specific partition. Yeah, the Kafka broker doesn't do that; the clients do. Okay. Because in Kafka in particular, since Kafka doesn't have a shared gateway but is architecturally a monolith, the way you talk to partitions is that you connect directly to the broker that holds the partition, which means you need to know ahead of time which partition you're selecting. And that's where the clients are actually doing that job. Okay. But it can also use something else to map the message to the partition, right? And that is what the function is; that's what I just meant with that function. Ultimately, you need to get a number out, which you can then hand to the Kafka SDK. Oh, so this partition key structure is that function, which does the mapping? Yes. The partition key is a generic mechanism that exists in the CloudEvents extension. There are other brokers besides Kafka which all have partitioning models. So the goal for the partition key mechanism is to be generic and usable by the other brokers which also need partitioning. What we have here is one particular implementation.
And that should map to the partition ID. So we basically made a mistake by not cleaning this up when we landed on the solution with that function. We just didn't do it, right? Yeah, but I just think this function name is a little confusing, because it says it's an extractor. That suggests it extracts the key, but actually it does the mapping too, right? And this function might also do the mapping not based on the key at all. So this name just suggests it's always about the key. But if it's a function, a mapping from the message to a specific partition based on some other criteria, we shouldn't call it the key extractor, right? Well, that's right. It should be a mapping function. I'll try to dig that up; we had several issues and PRs about this. I'll go and mine the repo for the debates we've had. We may also have had those on the call, and if we did, that's going to be harder to find. But I'll try to dig it up, and if I can find it, I'll put it into the issue here, or the PR here, as a reference. So, Clemens, just trying to understand this: when I'm reading the text here, it seems to me that, while the wording may not be exactly what we want, is what's written here actually wrong, or is it just a little misleading? Because the way I'm reading it, it says there's an extractor function that people will use to figure out what partition to put things into. But if that either isn't there, or you want some sort of default extractor function, then we offer up this partition extension thing you could use. But really, the extractor function is the main way of getting the partition key. Yes, that's right.
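The "mapping function rather than key extractor" framing could be sketched as a per-cluster configuration, along these lines. The names (`PartitionMapper`, `ClusterConfig`, `subject_hash_mapper`) are hypothetical, invented for this sketch; they are not from the spec or the SDK.

```python
import hashlib
from typing import Callable, Dict

# A mapper takes the whole event plus the cluster's partition count
# and returns a partition ID; it is free to ignore any key attribute.
PartitionMapper = Callable[[Dict, int], int]

def subject_hash_mapper(event: Dict, num_partitions: int) -> int:
    # Maps on the event's subject, not on a key, illustrating why
    # "key extractor" undersells what the function may do.
    digest = hashlib.sha256(event.get("subject", "").encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

class ClusterConfig:
    """Per-cluster config a forwarding client might hold: the
    partition count plus the mapping function for that cluster."""
    def __init__(self, num_partitions: int, mapper: PartitionMapper):
        self.num_partitions = num_partitions
        self.mapper = mapper

    def partition_for(self, event: Dict) -> int:
        return self.mapper(event, self.num_partitions)

local = ClusterConfig(4, subject_hash_mapper)
remote = ClusterConfig(16, subject_hash_mapper)
event = {"subject": "temperature"}
# Same event, independent local decisions on each leg of the route.
p_local, p_remote = local.partition_for(event), remote.partition_for(event)
```

The mapper is where the cluster-specific knowledge (how many partitions, which criterion to hash) lives, keeping both the partition ID and the mapping logic out of the event itself.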
Okay, so it's more of a wording thing than anything else, is that true? It's a wording thing. I don't think we want to change the mechanism; it's really about clarifying the mechanism. Okay, just want to make sure we're on the same page. Okay, thank you. It's not the key attribute; it's the partition ID that we want for Kafka specifically. What's also weird is that this isn't really using Kafka nomenclature here. So I'll make a proposal to clean this up. I hope the wording change will be minimal. I think it's also good to clarify that this key is not the partition ID. It could be used by the mapping function to map to a partition ID, or the mapping function can use other attributes to map the message to a partition, right? If we can clarify that, or just say this partition key maps to the key in Kafka, that maybe makes it clear. Okay, does anybody have any questions or concerns about the direction we're headed with this? I think just one last comment: the whole notion of a key and a partition key is very Kafka specific, right? And I think it was also mentioned, I think by Kathy, that everything else is about using metadata in the message to extract the partition key. So it's something that lives in the client and the client function, right? There is no key in the message. That is correct. Well, there is no partition ID in the message. There might be something that's usable as a key in the message. Exactly. Yeah, exactly. The key attribute that is described here makes no sense in the message. Correct. Exactly. And my last point is: thank you for this discussion.
It's been extremely helpful. I think we should just mention that there is no key attribute in the message. The point is, it's symbolic: it's extracted from metadata in the message to facilitate downstream actions. So I'm going to rename that section, 3.3.1, into Partition ID, and then clarify; effectively the first sentence or two will talk about the relationship between the partition ID and the message. The one thing that isn't quite clear to me, though, is, as was just said: if there's nothing in the message, what does that mean for this extension? So there might be something in the message: if you need an artificial key to derive the partition ID from, that's it. Okay, so it's not a must that something appears in the message, it's a may. Okay. And that's why we have this as an extension, because it's not applicable to all cases; but in case you have a partitioned log or queue or whatever, then you need a partition key of some sort. And because that is not too uncommon, we made an extension out of it. The partition key extractor can ignore the key if it wants to. Yes. But what doesn't make sense here is for a transport binding to add stuff into the event; I think that's already a foul. Okay. Before we rattle on about this too much, I think it was a good discussion. Any other questions or comments? All right, cool. So Clemens, I wrote you down as taking the action item to provide some clarifying text on that. So thank you. You're welcome. More homework to be laid on me. That's right. You can never have too much homework. All right.
That's technically the end of the agenda. Any other topics people want to bring up? Okay. Go ahead, Kathy. Sorry, this Pub/Sub binding: is that Google specific or a generic one? That is a Google specific binding, and they're going to change that pull request so that it's just a pointer to their binding that is hosted on their website. Okay. Got it. Thank you. Oh, here we go. Wait a minute. Christoph, are you there? Yeah, hi. Better late than never. Gotcha. We're just about to end. Yeah, I just noticed the time change. Yes, I missed that one. Sorry. I suspect it hit a lot of people. Okay. In that case, I guess that's the end of the call, unless somebody has anything else they want to bring up. Vlad, are you there? Yeah, I just joined. Yeah. Unfortunately, daylight saving time bit us, so we're technically over. I totally forgot about that. I apologize. I probably should have sent out a note to remind people, but it completely slipped my mind. I'm sorry. That's a fail on my side. Don't worry about it. Okay. Anyway, I think that's it for the day, unless I missed somebody on the agenda. We'll talk again next week, everybody. Thank you. Bye, everyone.