Okay, let's see. Okay, it's three after two. Is there anybody obvious that I'm missing? I don't think so. Okay, why don't we go and get started. Where are we? Okay, community time. Anything from the community people would like to bring up? All right, not hearing any. SDK work. I can't remember for sure if we actually had a meeting since last Thursday, but I don't think anything happened since then. Clemens or Scott, I think you guys might have been on the call. Is there anything worth mentioning this last time? So the Go SDK has been rewritten. I assume that's a plea for people to take a look at it. Yeah, there's probably bugs. Yep. All right. Any questions about the SDK work? Okay, now Clemens did open up a pull request related to the SDK doc and we'll get to that later. But I just mention it as a warning to you guys; we'll be looking at that pretty soon. All right, Scott's next: demo. We did not have a call last week because there was no update, but is there anything you'd like to mention? I don't think so. Okay, anybody have any questions or comments about that then? Just a quick opportunity. Okay. KubeCon EU. I think we may have had a planning session since last time. I don't think there's anything radically different. The notes are actually in this doc if you guys want to take a look at what we're doing in terms of how things are shaping up. Just a quick refresher. I can't remember for sure if we mentioned this last time, but we will have a CloudEvents intro and a CloudEvents deep dive. And we're looking to have one 80-minute-long serverless working group session where we'll talk a little bit about the state of the industry relative to serverless, followed by hopefully more of a birds-of-a-feather type of discussion where we try to get the community involved. We are still trying to figure out how that session relates to the bigger Serverless Days, or whatever the term they're using for it is, that they're thinking about doing at KubeCon EU. It may overlap with it.
We may cancel ours and integrate into theirs, or we may keep it. We don't know yet, but that's still something that's up for discussion, just so you guys know. Is there more information on that? I hadn't heard about that. No, unfortunately, they have a Google doc out there that I don't think has been updated in quite some time. I did reach out to Chris Andacek to find out more information, but he hasn't gotten back to me yet. So as soon as I get more information, I'll let you guys know. Thanks. Yep. All right. Any questions on that? All right. KubeCon China. Just as a reminder, I believe the CFP for that is closing tomorrow, just for you guys' reference. I will be asking for intro and deep dive 35-minute sessions and then a bigger 80-minute one, just like we're doing for KubeCon EU. If for some reason we decide not to use them, it's very easy to drop them, but it's hard to get them after the deadline. So I'm going to ask for them and then we can always cancel what we need to. However, please be thinking about whether you're going to make it to that or not, and whether you want to talk during one of these sessions. Same thing as we did for EU: we need to start gathering, you know, a list of topics and people who want to talk. I suspect in terms of topics, it may be very similar to what we're already doing for the EU, so we should be able to reuse a lot of it. So mainly the biggest unknown for me is who's going to be there and going to do some talking. So be thinking about that when you guys get a chance. Okay, any questions on that? All right, moving forward to PRs. I struggled a little trying to figure out the right order here. Unfortunately, I don't think any of these are technically ready to go, so I figured ordering by oldest first might make the most sense. So Rachel, yours is up first. And I know Clemens put a very long comment in there. How would you guys like to talk about this?
Rachel, do you want Clemens to talk about his thing first or do you want to just talk about your perspective? No, I feel like everyone knows what Clemens and I think. I think they know where we stand. And I think the bigger question I have is if this is something that we want, because if it's... Okay, I guess I have one question. If anyone hasn't read through what's there, then we should absolutely summarize it. But if everyone has read through it, then it seems like a waste of time to just rehash it. So I think what I would like to know is: is this the thing that people want? And if so, what needs to change to actually write something here that people want? If it's not something that people want, like if no one cares, if no one is interested in making it possible for proprietary specs to list that they are CloudEvents compatible, then great, we should just close this. If people think it needs changes, then we should specify what those changes are. And if people want to make this happen, then you should say that. There you go. Anybody want to speak up and voice your opinion? I will. Scott. So I've actually run into customers that are using CloudEvents over WebSockets. It's not in our spec, but they're still using it and it's still working. So that's maybe an opportunity for that. Is that a proprietary spec statement, or is that just saying, oh, we have one more transport binding we need to write? Well, do we need to write it? I think it's the latter, because WebSocket is a framing protocol. And I think if we think we should have a binding for it, we should make one. And certainly it's an open IETF standard, so let's go and make one. It's certainly not in the proprietary category. Do I see value in being able to bind proprietary protocols onto open protocols to make this ecosystem work? The question here is whether there's value in taking an open protocol and binding it on top of a proprietary protocol and what interoperability purpose that serves.
If the proprietary protocol is not interoperable per se, that's the question for me. Yeah, I think that we're thinking about interoperability at different levels. So if something goes into a system and then comes back out and interoperates, I consider that interoperability. You do not. That seems like the point. So for me, the question is, there's a spec in the repository. Who does that serve? And if that spec is only good for the three people who are owning that proprietary product, then it's pure marketing. No, it's good for anyone else who's interested in using that and also wants to support CloudEvents. Like, if something doesn't have broad support right now, that doesn't mean that it won't. I also don't really want to be in charge of saying, like, you are large enough. I just want to say: if you want to support CloudEvents and you are all proprietary, then here's how to do it. Here's a way. Okay. People are raising their hands. So people in chat can talk. That's okay. So Rachel, in sort of your opening statements, you said something and I wanted to question that a little. You sort of presented it as a boolean choice: do we want to do something or not? I think I gave three options, but point taken. Well, okay, that's what I was wondering. In my mind, I think there were three options. There was close this issue or PR and do nothing. No, no, I said close it, accept it, or change it so that it's something we want. Oh, okay. Well, okay. Let me state my three options as I see them. One is close it, do nothing. Two is have a list in our repo someplace of known bindings that we just don't keep in our repo, and that would include the proprietary ones. And then the third option is what you're proposing, which is allow proprietary ones to actually live in our repo. And I wanted to know whether those are the three options or whether there's more choices we need to consider.
I feel like there are an infinite number of possibilities; it's about what we want. Like, okay, here's what I would like. I've talked a lot, Clemens has talked a lot. We are not the only two voting members. What do other people think? Anybody want to raise their hand and speak? Eric here. I think that it's probably useful to consumers of the spec to have a list of those protocols that are supported, whether or not those are open or proprietary protocols. It seems like we would be taking on a fair bit of friction and overhead if those protocols and their mappings were placed into this repo. But anyone that was willing to produce a, you know, specification of the mapping and then link to it would then be able to, and this seems reasonable to me, would then be able to add a link to that specification in this repo. Okay. Thank you, Eric. And Mark, I think you were next with your hand. Yeah, I'm a plus one for adding these proprietary specs into our repo. We want people to come here for all definitive information about CloudEvents. And even though it is proprietary, as long as we're marking it as such, I don't have a problem with it. Okay. Thank you, Mark. Christian, I'm sorry, Christoph, I think you were next. Yeah. So I already wrote a comment, but to repeat it: I think for interoperability people should go and somehow also submit an implementation. They don't necessarily have to implement it for all our SDKs or so. But if there's at least one implementation, then it shows that it is really interoperable. And it also means that we have a starting point to actually use it. So personally, I'm actually interested in having a CloudEvents binding for some of the AWS stuff like SQS and SNS, for example; that would be something that I would find valuable, but only if it comes with some support, otherwise it is a little pointless in my opinion. Okay.
So just for my own clarity's sake, you're saying you're okay with them going into our repo as long as there's an implementation available. Yeah. Because then I would also assume, or not necessarily, but it could be that SDKs decide to support it as well. And that would then also live in our repos. Well, not in the spec repo, but in the SDK repos. Okay. Thank you. William, I think you're next. Yeah. Just to validate some things here. So when we say live in our repo, are we thinking the implementation, the spec? What exactly will be living in the CloudEvents repo? I believe it's the specification. Just the specification, right? Yeah. So, and is there any sort of TCK planned to validate or to say that a given implementation is compliant or not? So that differs. In my original proposal, no, but Christoph would like to see that. So that's up for grabs. Yeah. Because the reason I'm bringing that up is just because I feel like, again, without a TCK or anything like that, we can't prevent anyone from creating any spec or any binding, I would say, or any transport for what we're doing here. The decision then is just where we want to host it, right, and whether you're calling it, like, certified or blessed by the group here. And if it's a proprietary one, like, I think, based on what I've heard from others here, and I think I would agree: again, you can implement that, and that's not coming out of the working group. So we are not endorsing it, but we recognize that, sure, you are implementing this spec; click on this link, go there, figure out how that spec actually makes sure that it is compliant with CloudEvents. But we are not making that guarantee, right? It's yet another spec out there that someone implemented based on what we produced here. And we just can't prevent that. Right. It's open source software.
They can read the spec, just like happened with WebSockets here, for example; someone went there and implemented it and started using it, and others would do that too. And the fact that we can list it here, I think it's a good thing for awareness and adoption. But I don't think those would live inside the repo, right? It would just be a link. Okay. So you're just linking to the spec itself, right? Right. At least if, again, it's not implemented or endorsed by the group, that's kind of how I would list them. Okay. Anybody else? Excuse me. Anybody else want to raise their hand? Okay. So Rachel, I'm hearing votes for almost everything. Actually, I guess that's not true, right? We've heard votes for the spec being here and for the link. I feel like the link is a lower bar, and so I think that is probably more supported. Can people say quickly if that's true? I agree. Well, of course you agree. Anyone who didn't just say that, agree? I agree. It's Jim. So you're suggesting that maybe it's a lower bar, but I still don't see a problem with having a folder for private specs. Well, Rachel, by asking that question, are you implying that maybe we should start with that lower bar first and then consider the full proposal at a later point in time? No, that's not it. I am suggesting that we should do whatever it is that can get enough support from the group to let people who are not part of the group and who have proprietary protocols start supporting the spec. If the best that we can do is get links in the repo, then that's what I think we should do. I think that we should make it as easy as possible for people to find everything. So keeping things in a folder in the repo seems the easiest to me, but I understand that people feel like that looks like we're endorsing it, even though we can say in many places that we're not. So the thing that I want to do is do whatever it is that we can possibly do. Okay.
However, in order to make forward progress, we need to decide our next step. And I'm trying to figure out whether the next step is to say the current proposal on the table is just links, or do we want to go as far as to say we've talked about this enough, let's do a vote, links versus spec in our repo, and let the majority win. Or is there another option? I'm fine with that. We could just vote on it. Okay. What do other people think? Is the next step here to finally make a decision and vote, links versus full spec in our repo? Works for me. Okay. Anybody have another opinion or idea on how to move forward? I have a question. How is this going to affect maintenance? Like, could the spec that's hosted in another repo diverge somehow or be very out of date and still claim compliance? It seems like it would be easier to maintain a higher level of conformity, I guess, if they were all in the same repo. The counterargument for this is that because they're proprietary, you arguably can't touch them, and so it needs to be their maintainers who deal with them. Right. To me, it's not that we can't touch them. I would not feel bad about submitting a PR that updates it. I just don't want that to be my job. I don't have an interest in doing that for other people's proprietary spec. Yeah, but so someone needs to go and do it, right? Yeah, whoever cares about that spec should do it. Yeah. So just raising my hand here. The maintenance aspect of this is actually probably the part that I think about the most, which is: if their spec is in our repo, whose job is it now to maintain it going forward? Because it's very easy for us to say, oh, it's their job. In the original proposal, it says whoever proposes it will need to make sure that it stays up to date with the current version of the spec. And it also says that if it becomes too divergent, then we reserve the right to delete it. Okay, I forgot about that piece.
Okay, yeah, because I don't want us to be on the hook to update someone else's spec continually. And if they're not responsive to issues that get opened up against our repo, then the out you gave us, of saying, okay, they're not responding, we're going to kill off their spec, that's fine. Okay. Thank you. Okay. So in terms of moving forward, it sounds like what I should do is, after this call, start a vote. I guess we could do it through the PR itself, where people can vote whenever they need to, but the vote closes at the beginning of next week's call. And the choice is just links to other specs from our repo, or the other specs themselves living in a proprietary folder in our repo. Do I have that right, Rachel? Yeah. And just to be clear, the result of this vote is not to accept this PR or not. It's what direction this PR should go. Correct. Yes. Okay. Yes. And actually, just to be anal about this, the option of just closing this and doing nothing: no one's actually spoken up in favor of that, and I want to make sure that we're not railroading anybody who actually might think that. Is there anybody who thinks we should do nothing? Clemens, do you want to take that one? I'm happy with having links. Okay. And it closes at next week's call, is that right? Do we put the TCK issue into it, or is that something that's basically off the table? I mean, it probably doesn't make sense for the links-in-the-repo option; there it's just the way it is. But I think for option two, it would still be, I'd say, a valid option. Let me ask you this. If we decided later on to include the specs, but not a TCK, would that change your vote? Yes, because then it's just a link to something and it's like, just go and follow the link. Whereas if it's really in our repo, then I think it has a higher value for me. Okay. So then I think the question, and I think it's for you, Rachel, is: what would your ultimate PR look like?
Is it just the specs or is it some sort of verification like a TCK? My personal preference would be to have a lower bar: if people are already doing this, then show what you're doing. That's the level that I would want. If people feel strongly about a TCK, then I can go along with that. I'm not opposed to it. What do other people think? I need to know what to include there in the vote, because at least for one person this would change their vote. Can we just make it a third option? Can we say option two includes the other specs in our repo without a TCK, and then option three with one? Yeah. And then I guess the tricky part is, if two wins, then we might have to revote. If it's very close, if one and three together get more votes than two, then it seems like we would not be pursuing the interests of the group. What's that voting mechanism that we're supposed to use? Yeah. Like a ranked choice situation. Yeah. Just to make my life harder. I guess we have to do that kind of voting then, right? A ranked vote? Maybe. It depends: if two is by far the winner, then probably not. Do we need CIVS or Doodle? What was the first option, Mark? CIVS. Yeah. But Rachel, I don't think we can wait until we see how things are going before we decide which way we're voting. No. I mean, if two gets everyone's vote, then we don't need to do another vote. Right. But ranked voting means you vote for more than just one. Yeah. I know. Like in California. So familiar. Right. So my point is I have to phrase the vote properly. Basically, I think what I have to do is offer it up as a ranked vote and then play it out. Oh, you're saying that you want to do ranked voting for real. Okay. I have to, because I can't just say pick your one favorite; otherwise you're going to be voting more than one time. Yeah. So we could either do it multiple times or we could have ranked votes. Okay. I'll just do a ranked vote.
Okay. Any objection to heading in this direction? All right. Thank you guys. Hopefully by next week we should be good to go. Kristoff, you okay with this order? You want to take the second one first? Yeah. Let's do this one first. Okay. Yeah. So we discussed this a couple of times. The goal is that we have a minimum supported event size so that as a producer, I know I can fire off an event and basically everyone who follows after me will accept it. Most likely we change that to a SHOULD so people can still opt out if they have to. Yeah. Then we discussed it a couple of times. I think, well, Clemens made a comment last week, that is basically what happened, and I responded to it. Otherwise the text hasn't changed. So Clemens, you want to talk about your concern with the proposal? Yeah. So I'm agreeing with that limit, with the size limit generally, but I think the whole normalization via JSON is something I find a little odd, because, you know, you set up the event and the event has 64K, and the publisher puts it on the wire in a certain format with a certain payload and it either fits or it doesn't. And ultimately if there's an intermediary and the intermediary chooses a different encoding, it's up to the intermediary to make sure that it fits, and if it doesn't fit, then it needs to go and report that out in some way. But this feels like we're doing this in JSON and 64K, and it's kind of the median of all. Like, you literally need to write code now that measures in some way and weighs the event in JSON, even though you don't even care about JSON. If there was a specification now for CBOR, and the only thing you would do is CBOR and then re-encode to AMQP, you kind of would have to weigh the event in JSON to make sure you're conforming with this section.
So I find this whole reference to JSON, trying to give a relative normative size of the event in JSON, just complicates it. For me, it's like: you put the event on the wire, it has a certain size, it shouldn't be more than 64K, and go. Kristoff, do you want to say something? Yeah. So I think there's two points. I think the point that you make, that it is rather complicated to go over JSON even if you don't use it, I agree. That is a weak point of this proposal. Before we discuss the other options a little bit, let me recap. Maybe the first thing is: it's only a minimum supported event size. So there's a guarantee that you have to accept it. But if you just don't care, you can always send bigger stuff and then just hope that it is accepted by everyone who comes after you. You only really have to do it, especially as an intermediary, if you want to be sure that it will pass on. And the same as a producer. So I think the producer does this once and then basically everyone else along the line doesn't have to do it anymore. But on the point that JSON is too complicated: another option would be to say we define the limits in a different way, saying, like, maybe support at least 50 attributes, every attribute can be up to a kilobyte in size, measured in some way, and then the data can be up to some size. So that would kind of make the definition on the abstract model of the CloudEvent and not on something serialized. That is also something one has to write code to verify, but maybe it isn't so specific. I picked JSON here because basically everyone supports JSON; that's the reason why I picked JSON. But we can still go and try to do something else. That's my first point. The second point is it's not necessary that everybody has to do it.
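The "abstract model" alternative floated here could be bounded roughly like this. All names are hypothetical, and the figures (50 attributes, 1 KiB per value) are just the numbers mentioned on the call, not anything normative; how non-string values are weighed is exactly the open question raised in the discussion:

```python
# Hypothetical sketch: bound the abstract event model (attribute count and
# per-attribute size) instead of measuring one particular serialization.
MAX_ATTRIBUTES = 50
MAX_ATTRIBUTE_BYTES = 1024  # 1 KiB per attribute value, as discussed

def within_abstract_limits(attributes: dict) -> bool:
    """Check an event's attribute map against the proposed abstract limits."""
    if len(attributes) > MAX_ATTRIBUTES:
        return False
    for name, value in attributes.items():
        # Weigh values as UTF-8 bytes of their string form; other type
        # mappings would need to be pinned down by the spec text.
        if len(str(value).encode("utf-8")) > MAX_ATTRIBUTE_BYTES:
            return False
    return True

assert within_abstract_limits({"id": "abc", "type": "com.example.created"})
assert not within_abstract_limits({"data": "x" * 2048})
```

The appeal of this shape is that no serializer is involved at all, which addresses the "why JSON if I never touch JSON" objection.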
You only have to do it if you want to make sure that this message will be passed on along the way by everyone. Otherwise you don't have to. Okay. Anybody want to voice an opinion? Somebody has to speak up or I'm going to pick on somebody. Someone who writes SDK code needs to voice an opinion. Oh, well, in that case, I'll pick on Scott. Scott, you have an opinion on this one? I use JSON, so I don't care. Okay. Let's assume there is a CBOR spec. So in the Go SDK, I actually translate the CloudEvent into an intermediate piece. And it would be actually pretty impossible for me to do the weighing as JSON if I don't have a JSON encoding type. So, point taken. I also don't know what the transport is going to do over the wire exactly when I send the raw event. Can you elaborate a little? I'm not sure I understood you. You said it would be difficult for you to do what was proposed because you don't know exactly how the transport is going to serialize it. Is that what you're trying to say? Well, I don't have the JSON representation of that event potentially, like for HTTP binary. So I don't know exactly how big it's going to be over the wire, especially if the client chooses to gzip the outbound request. But isn't that true of anything, even Kristoff's proposal? It's true. It occurs to me that you could measure the raw event as an object, as we know the types for everything. Yeah, that is sort of what I'm trying. I mean, yeah. But I think that still comes down to how you represent stuff, how you have the attribute name and then the attribute value. And for both it kind of depends on how you represent it. If you say, okay, these are ASCII characters for the attribute name, then it has a different size than if you would have UTF-16 characters for some. So it's still not so easy. So I thought that JSON is simply the easiest way to measure it in a sort of standardized fashion.
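The JSON-normalization idea under debate can be sketched in a few lines. This is only an illustration of the mechanism being argued about, not the proposal's actual wording; the event shape and function names are made up, and 64 KiB is the figure from the discussion:

```python
import json

# Hypothetical sketch: "weigh" a CloudEvent by serializing it to JSON,
# regardless of which wire format will actually be used, per the
# normalization idea under discussion. Nothing here is normative.
MAX_EVENT_SIZE = 64 * 1024  # the 64K minimum-supported size discussed

def json_weight(event: dict) -> int:
    # Compact separators so whitespace doesn't inflate the measurement.
    return len(json.dumps(event, separators=(",", ":")).encode("utf-8"))

def fits_minimum_size(event: dict) -> bool:
    return json_weight(event) <= MAX_EVENT_SIZE

event = {
    "specversion": "1.0",
    "type": "com.example.object.created",
    "source": "/example/source",
    "id": "abc-123",
    "data": {"note": "small payload"},
}
assert fits_minimum_size(event)
```

Clemens' objection is visible right in the sketch: a CBOR-only or AMQP-only pipeline would still need `json.dumps` (or an equivalent weighing function) solely to check conformance.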
And the spec talks about how we think a string should weigh, like how many bytes a string should occupy per character, since it has a type. So Kathy, I think you're coming up here. Did you want to say something? Yeah, I'm not sure. So what was Clemens' proposal? What size are you proposing? I mean, with the existing JSON in this proposal, I think it's fine, you know, because from the, you know, 64KB, right, with the JSON format, as an event consumer I can derive, say, what the size of the message is. And then I know my own protocol, I mean, the protocol connecting to my event consumer system, so I can know what size I should accept. Is that right? Is my understanding right? I'm not sure what Clemens' proposal is. Are you proposing that, you know, the size is a matter of what kind of protocol? My proposal is to take the block of text that we're looking at and delete everything after the second comment. So delete all of this? All of that. Then how do you know to accept the event? So what does the 64KB represent? Does it represent the size of the raw message, or does it represent the size of the message encoded in some other format? What comes across the wire. Across the wire. So you mean what comes across the wire. So, I mean, no matter whether it's an intermediate routing system or a gateway or the event consumer, we only need to be prepared to accept a total size of 64KB. Is that what you mean? Yep, the size of the buffer. Okay. Then how could the event producer know? The event producer has to know all the different protocols and then calculate to make sure the size of the raw message has to be, you know, about what size, right? But the event producer does not know what kind of protocol along the way will be used. I'm just thinking, you know, how could the event producer know? As an event consumer, it's easier, I think, for the event consumer.
But how could the event producer know what size, what's the maximum size he, I mean, the system, can put there? So every eventing and messaging product in existence has quotas on message size, and they are always about the frame size. And HTTP is the same, and if you use buffered HTTP, there's also a limit. Most messaging systems have some limits; some protocols actually tell you upfront in the handshake what the maximum frame size is, and you have to stay under that frame size. So does that mean the event producer has to know all the different protocols along the way that will be used? No, it just sends the data to its first hop. And if the intermediary chooses to do a re-encoding of the event and not forward the event as it is, then it is on the hook to make sure that it doesn't exceed the limits of its next destination. Because ultimately, the routing is all hop to hop to hop to hop, where it gets configured. And if an intermediary chooses to do a re-encoding, it's on the hook to make sure that the re-encoding actually works. Oh, okay, so the burden is on the intermediary, on the router or whatever, to make sure. But the thing is sometimes, if they change the protocol, right, unless it does some other trick like doing compression, sometimes it just becomes larger, right? That means it cannot accept this. And if it makes it larger, then the event consumer cannot handle it. Yeah, that would be very unfortunate, but that's just the way it is. If you're choosing to change from a protocol, you start with a compact protocol like MQTT, and you're choosing a protocol that has more bits on the wire like HTTP, yes, your message is going to grow. And that's something that someone who's building the integration for this will have to go and live with and deal with: that they go from a protocol which has seven bytes overhead per packet to a protocol which has probably 2K. Okay, now I understand your point. I think I like the JSON one.
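The hop-to-hop responsibility described above can be sketched like this. Everything here is hypothetical (the function names, the toy "re-encoder"); it only shows the contract being argued for: whoever re-encodes the event is the one who checks it against the next hop's frame limit:

```python
# Hypothetical sketch of the hop-to-hop model: an intermediary that
# re-encodes an event checks the result against the *next* hop's frame
# limit before forwarding, instead of the producer knowing every
# protocol on the path.
def forward(event_bytes: bytes, reencode, next_hop_limit: int) -> bytes:
    out = reencode(event_bytes)
    if len(out) > next_hop_limit:
        # The intermediary, not the original producer, surfaces this.
        raise ValueError(
            f"re-encoded event is {len(out)} bytes, "
            f"next hop accepts at most {next_hop_limit}"
        )
    return out

# A "verbose" re-encoding that grows the message, e.g. going from a
# compact binary protocol to a textual one.
grow = lambda b: b * 2

assert forward(b"x" * 100, grow, next_hop_limit=64 * 1024) == b"x" * 200
try:
    forward(b"x" * 40000, grow, next_hop_limit=64 * 1024)
    assert False, "expected the oversized re-encoding to be rejected"
except ValueError:
    pass
```

The second call shows Kathy's worry concretely: an event that fit at the producer can stop fitting after a lossy-in-size protocol change, and in this model that failure lands on the intermediary.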
I think that's clearer. Everyone, no matter which component along the way, knows what size it is. Mark, you want to verbalize what you put into the chat? Yeah, sure. I just think that we should be encouraging people to send as small a payload as possible, because it's more efficient, and just send a pointer to the actual data if they really need it. We need to set the right tone as to what we think an event should be: something that doesn't contain a ton of data in it. I agree. So my hand's up next. Oh, I'm sorry. I want to speak last. Vladimir? Yes, I feel this is a really important point, because it really changes the perception and attitude of developers who use mechanisms like this. You know, often in organizations when you come up with something that looks like file transfer, people will start moving large files. So I recently had a situation like this where it was intended as an event and somebody was thinking of passing a five gigabyte file through this. So having the sentiment that events are small and the infrastructure for processing them should be small, lean, and fast is something that I feel we should promote. Thank you. On that particular point, I think that might be worthy of some mention within the primer, completely outside of this issue. It sounds like that might be something worthy of someone writing something up. Does anybody disagree with potentially adding something like that to the primer? I think that would be great. Thank you. Okay, I'll open up an issue to make sure we at least have that discussion later, because I've heard that come up more than one time in several discussions. I think that'd be a good thing to mention. Okay, so my hand's up. There's not a whole lot of people speaking. I wanted to say where my head's at on this. So Kristoff's original goal is being able to know that this particular event will always fit, no matter how many times it gets transformed to different protocols and bindings and stuff down the wire; it will always work.
It does feel a little bit odd to me, though, to require one particular piece of middleware to serialize to JSON when they would normally not touch JSON at all. That just feels a little bit odd to me. And because ultimately what really matters is that 64K on the wire, I'm leaning a little more towards Clemens' view on this, which is just that it's the sender's responsibility to make sure it fits under 64K. If it doesn't, then they need to figure out how they're going to deal with it. But asking them to transform it to JSON when they don't normally care about JSON just feels a little bit awkward to me, even though I do appreciate the original goal of some sort of standardization across all the protocols. Anyway, that's where my head's at on this. Anybody else want to speak up? Or should I pick on somebody else to get some more opinions? Okay, I'm going to pick on one other person before we try to wrap this thing up. Jim Curtis, do you have an opinion on this one? I do not have an opinion on this. I've listened to it. Yeah, so I don't really have a strong opinion either way on this. Okay. Okay, so in that case, Kristoff, I feel like we're in the same position we were in with the previous issue. How would you like to move forward? Would you like to have more discussions? Would you like to have a vote, first sentence versus entire PR? How would you like to go about trying to resolve this? Well, personally, if we really delete all that stuff, I don't see what this says, because it doesn't say anything. For a producer, I still don't know anything. I could send five kilobytes and someone blows this up into 500, and then it doesn't achieve the goal. As I said, I'm not 100% happy with the JSON either. I understand what the criticism is. Maybe I'll think a little and make a second proposal on how else we can measure the CloudEvent in such a way that you don't have to go through JSON. Okay.
Well, since this is your PR, you get to decide what people are voting yes or no on. It's completely up to you. If you feel strongly that what's in there is the appropriate proposal, then we'll do basically an up-or-down vote on the PR as it stands. Clemens or anybody else is welcome to open up a second PR if they prefer a different proposal, but we can deal with that separately. It's up to you in terms of how you want this PR to be voted on. But don't feel like you have to change it if you don't want to. You could say, nope, I like it just the way it is, let's vote. Yeah, I get that point. Okay. You said you wanted to think about it a little bit more. Maybe on next week's call, and we won't go into a broad discussion, you just let us know how you want us to move forward. If you have a different proposal, obviously we can talk about that. But if nothing changes, then you let us know how you want to move forward on this. Is that fair? Yeah, sounds good. Okay, cool. Thank you guys very much. Is there anybody else who wants to bring up anything related to this before we move forward? Okay, I'll take some notes in a sec. Kristoff, another one of yours. Yeah, for that one, in terms of moving forward, I wasn't 100% sure if I should open this. Then Jim said in some comments that he would definitely want this, so I opened it, but now I kind of feel that there are not a lot of people interested in having it. So in terms of keeping the discussion short, and I think we've talked about it a couple of times, it would be of interest to me whether people think this should be in the main spec, or whether it's something that could rather go into an extension, because that would also be fine for me. So basically, the issue is that you have a really small limit, and I agree with Mark and the others that we should have a small limit, and that people shouldn't put all that stuff in there and should instead provide a pointer.
So this is about providing a pointer to the data instead of sending it along with the event. So if people think we should make that part of the spec as a first-level concept, then this is a proposal for it. The other option is to just reference the data in a, how would I say this, proprietary way, not really proprietary, but in a way that you just make up and the consumer has to figure out how it works. In that case, maybe the SDK can help you and resolve the data for you. That's the idea. Any questions or comments? Anyone want to speak up? Do I need to pick on somebody? I'll speak up. I do worry about this one. I said we need some way to have a pointer to the data, but I always expected that to be inside the event itself and not in the cloud events header. So I worry, would we be able to define this attribute well enough to make it universal? Or is there more context that it needs, which would have to be in the event payload? I'm having a harder time with this one. Anybody else want to raise their hand? Okay, my hand's up. Just as background, as I was thinking about this issue, I did take a look at what we did for the very first interop demo, where we were basically doing image processing. And I noticed, I don't remember for sure, but I think it may have been the events about the AWS images, because I think they were being stored in an S3 bucket. And if I'm remembering correctly, they basically had pointers to the data, because we weren't passing the images themselves, but it wasn't a single URL. If I remember correctly, I believe there was information about the bucket, and then there was information about, in essence, the ID of the image inside that bucket. So the reference wasn't a single entity like a URL; it was actually split across multiple fields. And the thing that really came through my mind was, hey, that's exactly what we're doing here, sort of. So would we then force someone to have to construct a URL when they already have the information in there?
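A sketch of the situation just described: the demo's events carried an application-specific reference split across fields (bucket plus object ID), so a spec-mandated single-URL attribute would force producers to flatten information they already carry. The field and function names below are invented for illustration:

```python
# Application-specific reference, as in the interop demo: the pointer to
# the image is split across multiple fields rather than being one URL.
app_specific_ref = {"bucket": "demo-images", "object_id": "cat-42.jpg"}

def flatten_to_uri(ref: dict) -> str:
    # What a single-attribute requirement would force the producer to do:
    # synthesize one URI out of fields that were already self-describing.
    return f"s3://{ref['bucket']}/{ref['object_id']}"

print(flatten_to_uri(app_specific_ref))  # s3://demo-images/cat-42.jpg
```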
And of course, they're going to have to jump through hoops that they wouldn't normally have to, because they're already satisfying this requirement. And so when I started thinking about that, my mind started thinking, well, maybe, as Clemens and Mark have said, let it be sort of an application-specific mechanism to define the pointers. And again, go back to just putting guidance in the primer that says, don't send large data, just include pointers, but we're not going to tell you exactly how, because you may have more than one pointer. You could have lots of different ways to represent a pointer; we can't know in advance what that's going to be. Anyway, that's where my head was at on this. Anybody else want to speak up? We're almost out of time, but I do want to pick on at least one other person if I need to. Yeah, I agree, and actually I gave this comment before, about how we define this. Because if there's a large payload, that information might be saved in different places, and there will be different ways to represent that place. So I myself do not know whether there's a universal or consistent way, one way to fit all the different ways of describing those places. But I still like the idea that we need something in the header; I'd like it to be in the header, because it's easier. If it's embedded in the data, and there could also be a case where the data is not in the cloud event, then we need some place to know where that large payload is stored, right? I'm just wondering whether we can, how to say it, find out all the ways that information could be saved, and then define different types, and then for each type, how we represent it.
And this will be easier for the event consumer and the event producer, because if we leave it up to every event producer to define it, then how does the event consumer know what the defined way is to find the reference to that data? Maybe, Doug, did I misunderstand you? No, no, I think you're right. If we do not define it as part of our specification, and the reference is application-specific, then yes, every receiver would have to know that particular application's format for how the reference is stored in there. And whether it's in the body or an extension attribute, you don't know; you just have to have knowledge in advance of where the information is. You're right. Yeah, but then every event consumer needs to really communicate with the event producer, and sometimes these are different companies or different entities, right? So if we really want to support interoperability, and I think that's the goal of this group, then we didn't really fulfill that purpose. I think it would be helpful if we can define something; we probably just need to figure out how we should define it. Okay, thank you. Anybody else want to voice an opinion? I'm going to pick on somebody. Okay, Richard, I'm going to pick on you. Do you have an opinion on this one? Richard, you still there? What about Steve-o? Steve-o, I see you came off mute. I can't hear you. Okay, I think we lost Steve-o; he's having issues. Dan Barker, can I pick on you? Yeah, I kind of agree with Cathy on that last point. Okay. I don't really have a lot of opinions around it, though. Okay. But I don't want a lot of implementations to have to be coded into every receiver.
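The proposal being debated is essentially the claim-check pattern: inline the payload when it is small, otherwise store it externally and let the event carry only a reference. Here is a minimal sketch, assuming a hypothetical `dataref` attribute and an in-memory dict standing in for a blob store; none of this is the spec's actual definition:

```python
import json

LIMIT = 64 * 1024  # the size limit from the earlier discussion

def externalize_if_large(event: dict, payload, blob_store: dict) -> dict:
    # Claim-check: small payloads ride inside the event; large ones are
    # stored externally and the event carries only a pointer to them.
    if len(json.dumps(payload).encode("utf-8")) <= LIMIT:
        event["data"] = payload
    else:
        blob_store[event["id"]] = payload  # stand-in for a real upload
        # Hypothetical attribute name and URL format; the PR leaves both open.
        event["dataref"] = f"https://blobs.example.com/{event['id']}"
    return event

store: dict = {}
small = externalize_if_large({"id": "a1"}, {"n": 1}, store)
large = externalize_if_large({"id": "b2"}, {"blob": "x" * (100 * 1024)}, store)
```

A receiver, or an SDK acting on its behalf as suggested above, would then fetch `dataref` whenever `data` is absent, which is exactly where the interoperability question of who defines the reference format comes in.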
Okay, so Kristoff, you had mentioned, if I heard you correctly, it sounded like you weren't 100% sure whether this should be part of the core spec or an extension. Did I hear you right? I mean, it is a problem that I know I have, because in our events we include changes, and sometimes they can be small, sometimes they can be bigger. So what I actually do today is that if the changes become too big, then they are externally stored and they are not included in the event. So I have this mechanism, but I don't know how universal it is. If everyone else basically does not have this problem, and everybody or nearly everybody else has events where they say, okay, I just have five kilobytes, I don't need this, then I sort of agree that it is not a valuable addition to the spec, because it maybe complicates things for everyone else. So then it makes sense to have it as an extension, maybe. And if it is an extension, I would also make it more opinionated and try to lock down the type and the way it is to be accessed. All right. So what do other people think? Is this something that people would like to see as part of the core spec, or do you want to look at an extension initially to see how it takes off? Klaus, you're unmuted. Yeah. So for us, it's a frequent pattern to have this ref, especially for the third reason mentioned here in this list. We already have something like this in our events; it's not standardized. It might be interesting, I don't know. It could also be fine as an extension. Okay. What would be the main concerns against it? You mean against it going into the core spec? Yes, or against having this at all, I don't know. Are there any concerns? So Clemens, I think you had some concerns, right? Yeah. So first, you need to be able to resolve this, and to be able to resolve it, you need to know what you're looking at.
And so you're just having that one attribute, without it being decorated with information about size, and then you potentially have to worry about authentication against that source. And then the reference may actually be in a networking scope that is known to the producer, but then the event gets forwarded, and the data is not there, because it's either-or: either the data is there, or the data reference is there. And now you're pointing to a place to which you literally have no route as the event receiver. So there are a number of complications caused by separating the consumer from the producer in that way using a reference. And then you might also have implementations where cloud events is literally just used over MQTT as a compact protocol. And now you're effectively forcing an MQTT client, which is built for small compact size, to have access to a protocol that allows you to do a GET. MQTT doesn't have that, and AMQP doesn't have that either. So it's not even clear what that means here in terms of retrieving the object, because pointing to a place doesn't necessarily mean that you know how to download something from that place. That might be true for HTTP, but that's not true for every protocol. Okay. So I think we're going to have to call time on this, because we only have two minutes left. Kristoff, how would you like to move forward on this? Do you want to think some more about it? Do you want to just say, nope, take it as is and vote yes or no? How would you like to proceed? Because I'm not hearing consensus, so I think it's going to have to come down to a vote of some kind. Yeah, I'm not sure either. If no one has a good suggestion to make, then we can put it down to a vote; that's okay for me. I feel it's a bit strange because it depends a little on having the limit in the first place, but yeah. I was actually going to ask you whether the limit PR that we just finished talking about influences this at all.
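Clemens' objection can be sketched as a capability check: a bare reference only helps if the consumer's protocol can actually fetch it, and MQTT or AMQP clients have no GET. The shape of the decorated reference below (size, auth hint) and all field names are purely hypothetical:

```python
# A bare URI omits things a consumer may need: how big the object is,
# whether authentication is required, and whether the location is even
# reachable outside the producer's network. All field names are invented.
ref = {
    "uri": "https://blobs.example.internal/evt-1234",
    "size": 5 * 1024 * 1024,  # advertised payload size in bytes
    "auth": "token-required",  # a hint, not an actual mechanism
}

def can_resolve(ref: dict, consumer_schemes: set) -> bool:
    # Pointing at a place is not the same as knowing how to download from
    # it: an MQTT-only client has no way to issue a GET on an HTTP URI.
    scheme = ref["uri"].split("://", 1)[0]
    return scheme in consumer_schemes

print(can_resolve(ref, {"http", "https"}))  # an HTTP-capable consumer
print(can_resolve(ref, {"mqtt"}))           # a compact MQTT-only client
```

Even a passing scheme check says nothing about network reachability or credentials, which is why the either-data-or-reference split was flagged as fragile once events get forwarded.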
The need for it is sort of there anyway, but once we clearly say, hey, 64 kilobytes is something that you shouldn't go over, then people naturally ask, okay, I have more than 64 kilobytes of data, what should I do? And then this will be a good thing to have in the spec, because we can simply point to it. Okay. Oh yeah. Well, okay. So, darn, we're out of time. So what do you want to do on this? Do you want to force a vote? I can kick one off starting now, or do you want to think about it more and then potentially kick off a vote next week? It's up to you. Let's do it next week. Okay. Okay. In that case, I'll send out a note pointing people to this. Let's just do the quick roll call. Doug, I saw you in chat. Fabio, you there? Fabio? What about Klaus? Hey, Klaus, I got you. What about David Lyle? Yes, I'm here. Okay. Tappini? Yeah, I'm here. Okay. Dan Barker, I got you. I think I've already got you. Who else? I'm here. Oh, Cathy. Yep, I got you, Cathy. Yes. Thank you. Okay. Anybody else I missed from the attendance? Fabio, are you there? Okay. In that case, thank you guys very much. I apologize for running over. So look for some potential votes starting next week; I'll warn people via email. All right. Thank you, guys. Okay. Thank you. Thanks, all. Bye. Bye. Bye.