All right, no audio yet. Okay, tell you what, it's 2:03. Why don't we go ahead and get started? Let's see, short list of people today. Action items: I think the only action item I wanted to call out was Rachel's, since you're on the call. So I want to remind you of this action item to create a PR for new categories for transport bindings. Honestly, I don't fully remember that one. We'd have to check the meeting minutes, but I just want to remind you of that one. That's on your list. I remember that one. Yeah. Oh, cool. Okay. Cool. I think that's it. All right. So, community time. For those of you who are new — I think I might have one or two new people — this is just a time for people who don't normally join the call to bring up any sort of broader topic that they want to bring up for discussion that isn't already on the agenda, I should say. So, is there anybody who'd like to bring up a community-related topic? All right, cool. All right, moving forward then. I don't see us on the call. So let's talk about the SDKs very quickly. From what I can tell, SDK work is going along really well. There are lots of PRs getting merged and such into those repos. I guess the question for the SDK people who are on the call is: do you guys want to have a regular meeting? I believe the gentleman's name is Matthews or something like that; he was suggesting maybe every other week we have a phone call just to sort of keep in touch. But for those SDK folks on the call, what do you guys think about that? Do you want to have a phone call? Does every other week sound right? How do you guys feel about that? I'm not sure. Do we need one? Well, that's what I'm wondering. Do we need one at all? I just don't want to lose any momentum we have, because sometimes if people don't get together on a regular basis to discuss goals and objectives and such, things can linger. Yeah, maybe it makes sense to make it a shorter call, like half an hour adjacent to this one. Okay.
Timing-wise, since you mentioned that, would you prefer before? Actually, I'd prefer before this call rather than after. I imagine you probably would too, given your time zone. I would prefer before. Okay. Tell you what, I'm not sure how many SDK people we have on the call, but why don't we suggest a 30-minute call before this one every other week? I'll put that out there as a discussion point, or a vote if you want to call it that, in the SDK Slack channel. Is that okay with people? Sounds good. Okay. Okay. Cool. All right. Kathy — I don't see Kathy on the call, and I haven't seen any activity on the workflow stuff, so I'm assuming there's nothing to mention there. So we'll keep going forward. Okay. So let's start talking about PRs. Kristoff, you are on the call, so maybe you could quickly talk to this one. Yes. So there were a couple of issues where we discussed batching messages, and it's kind of my opinion, and no one really disputed it, but it seems to be a question that comes up fairly often. So I decided to make a pull request for it. Basically, people want to batch messages, or batch events inside a message, which generally makes sense. But if you look at a CloudEvent as such, within the specification, I don't think we should specify how batching is done — that should be done at the transport level. And the reason is that many transport layers already support batching natively. For example, with Kafka you don't even see it; it kind of happens behind the scenes — Kafka will batch messages for you. So what we should really do, in my opinion, is say that our spec defines what a single CloudEvent is, and then it's up to the transport layers to define if and how they support batching of multiple events. Okay. Any questions or comments for Kristoff? A thought. Not a pushback, but a thought. And it's basically from a product implementation perspective.
So we have the case — and that's in Azure Event Grid — where we make it possible for customers to send us a bunch of requests with a single interaction. And that happens in cases where there is one underlying event that shows up in this multi-tenant system, basically shows up in, whatever, 50 different projections, if you will. So there's in fact one event happening, but you're really handing 50 events to the system, and they are all very similar. So in that case, we use not the binary mode but the self-contained mode — the structured mode that we have — and we send one request, but that one request has an outer JSON array, and in that outer JSON array sit the individual events. And that's the batch operation. And that has wire impact. I'm not saying that we must do that; there's obviously a way for us to keep doing that in proprietary fashion. But that's a scenario that I think is valid: you have the need to send a bunch of events at the same time into the system, and you want to make this work over the defined transport channel. And I think the one change that we would need, for the structured mode only, is to say: send those events as an array, and that's the batch. But I'm not in a strong position on this. It's just that this is how we've implemented it, and customers are using it. I think maybe I didn't get my point across, but I think batching is totally useful, and I think it should be implemented. It's just that we shouldn't tell Kafka how batching is implemented, because Kafka already knows how it does batching in general. Yeah, but then the question for me is, if we think that batching is useful — I don't think we should tell Kafka how to do batching — but then should we have normative language that says: you want to send a batch, now use the following mechanism? Yeah.
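As a rough illustration of the outer-JSON-array batching described above — a sketch only, using 0.2-style attribute names; the event type and batch framing here are hypothetical, since the spec itself does not define batching:

```python
import json

# Hypothetical sketch of "batched" structured mode: one request body that is
# an outer JSON array, with the individual structured-mode events inside it.
# The batch framing is NOT defined by the CloudEvents spec -- that is exactly
# the open question on the call.

def make_event(event_id, tenant):
    """Build one structured-mode CloudEvent as a plain dict (0.2-style names)."""
    return {
        "specversion": "0.2",
        "type": "com.example.projection.updated",  # hypothetical event type
        "source": "/multitenant/" + tenant,
        "id": event_id,
        "contenttype": "application/json",
        "data": {"tenant": tenant},
    }

def make_batch_body(events):
    """Serialize a list of events as one outer JSON array; the whole array
    becomes the body of a single request."""
    return json.dumps(events)

# One underlying event fanned out to three tenant projections, sent as one batch.
body = make_batch_body([make_event(str(i), "tenant-%d" % i) for i in range(3)])
received = json.loads(body)  # the receiver splits the array back into events
```

Whether this array framing should live in a transport binding spec rather than the core spec is the direction the rest of the discussion takes.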
So I think if we look at HTTP, or also at the JSON level, we could define how that looks at the JSON level. But what I don't want is people taking a JSON that contains five events and then putting that into a single Kafka message, because then the semantics of what a single event is inside Kafka don't work anymore. Yeah, that's right. And I think that's the same argument I would make for, say, AMQP, where you probably don't want to have batches, because you want to put 500 messages in a row and then handle them more or less as one by flowing them correctly. But that's HTTP-specific. So rather than making a general rule: in HTTP I can see it being useful to put multiple messages into the message body; with more sophisticated protocols, such as Kafka or AMQP, I would be rather against it. So, go ahead, Roberto. Yeah, what I was going to say is that maybe what we need to do is put some language in, for example, the HTTP binding spec about how batching happens there, but we don't put it in the Kafka one. Yes, correct. Actually, I was going to raise a very similar point, because I interpreted this pull request as the first of potentially many — as basically saying: at the spec level, we're not going to touch batching; it's kind of a transport-level thing. And each transport specification may or may not choose to add normative language that says: in our particular transport case, here's how you do batching. So, for example, in the HTTP case, we may decide it's the way Clemens has described it — one gigantic JSON array with multiple CloudEvents in there — or we may decide to use multipart MIME or some other mechanism. But I do see value in our transport binding specifications picking the one way to do it, especially if there is more than one way to do it on that particular transport. Yeah, I was going to raise the same point. And I think that's a good idea.
Yeah, that was basically what I was trying to say, but you said it more nicely than me. Okay. Yeah, sounds like we're all on the same page. So let me ask this question: does anybody else have an opinion or comment on this particular PR, potentially the first step towards other ones? Obviously, it's now up to people to create the follow-up PRs if we actually do want to add batching to particular transports. But relative to this one PR, are there any questions or comments? Are there any objections to adding this? I mean, technically there's no normative language in here at all. So. In favor of adopting. Okay. Any objection to adopting this one? A modification to include the part about transport bindings specifying how it's done — I think it should actually point people to go look at the transport bindings. There's nothing about the actual specs or bindings in that language. Oh, I see what you mean. Kristoff, what do you think about adding that, to make it clear that you're talking about either the transport natively itself or potentially our transport specifications? I think that's what you're looking for, right? Correct? Yeah, yeah, exactly. Okay. Kristoff, what do you think about that modification? Yeah, we can do that modification. Maybe, if you want to, you can propose something, or I will write something and then I'll try to ping you, if I know your handle. Just ping me. I'm T-A-P-P-P-I — that's three P's — on GitHub. You'll find me. Okay, perfect. Okay, cool. Okay, so I didn't hear any objection, but we have some wordsmithing to do. And then hopefully on next week's call we should be able to resolve this one really quickly. But it sounds like everybody's heading in the same general direction with this thing. All right. Any other last-minute questions or comments on this one? Cool. Excellent. Thank you, guys. All right. The next two should be fairly easy. I just wanted to add some more information. Now, this is in — I guess it is in our repo. Sorry.
I just want to add pointers to the presentations and demo that we did at KubeCon North America. I don't think there's anything really controversial here, just adding it to our demos.md file. Any questions or comments on this one? Any objections to adopting? No. Thank you for adding it, Doug. Doug, hi, it's James. Just a quick one. Yeah. Regarding that demo — obviously, we're going to put the links up. How long do you want us to leave the endpoints active for? Obviously it doesn't cost much to leave them there, but is there an expectation? Yeah. Actually, thank you for bringing that up. I meant to bring it up, but I completely forgot. From my point of view, the infrastructure itself at SourceDog, I have no problem keeping that up forever. It's really up to you guys in terms of how long you want to keep your endpoints up. I have this general sense that, at least for a short period of time, people may want to use this demo in some presentation. For example, I know Oracle did in the past. So I'm inclined to ask people to keep it up at least for a while longer. But I wanted to bring it up on the call here and see what people think. Is it going to be a challenge for people if we ask you guys to keep this up? I know, Jim, you said it's not a big deal for you. But what about other people on the call — like William, for Red Hat, or anybody else? Yeah, we shut ours down just over the break. There didn't seem to be any reason to keep it going. Okay. Would you be okay with bringing it back up, just in case you do want to use it for a demo, for a short period of time, to see whether it gets used? Yeah, I mean, we could. It just didn't seem like there was going to be further use of it; that's why we shut it down. Yeah. What do other people think? I'm trying to see on the list who had things up there. I think, Klaus, you had one. Yes. We can still leave it up for a while. Okay. Like I said, I'm inclined to ask people to leave it up for a little while.
And then if we find that it isn't used — well, I'll look at the logs and try to track how often it does get used. I guess you guys can look at the logs too and see how often your endpoint gets hit. But if it's okay with you guys, I'd like to ask you to keep it up for at least a little while longer. Granted, over Christmas and the New Year holidays I can't imagine it was used very often, so it was probably safe to take it down. But going forward, I'd like to at least give people the opportunity, if that's okay. Okay. Not hearing any objection, I'll send out a note or a message to the Slack channel: put it back up. I just wanted to comment — it would be nice to know how long it will be up. There were just the first CNCF meetings or meetups in Finland; I think it would be nice to demo it there. Oh, there you go. I'm glad you mentioned that. Okay. Tell you what, why don't I at least ask people to keep it up at least through February, and then we'll see whether usage dies down near the end of February and revisit it then. How's that? Yeah, that sounds great. Okay. So thank you, Jim, for mentioning that. I completely forgot. Hold on a second. All right, cool. And we approved this one. All right. KubeCon sessions. Yeah, I guess I misspoke — the previous one was about our demo; this one is about the KubeCon sessions themselves. There are pointers to the PDF files. This is technically under the serverless working group, for one or two reasons. One is we don't really have a spot for it — I don't think the presentations fall squarely under CloudEvents. Because these presentations were about the serverless working group in general, with a subsection on CloudEvents, I thought it was more appropriate to put them under the presentations section of the CNCF serverless working group README file. So that's why I put it there. So this just points to the PDF files, and I uploaded the two PDF files here. Any questions or comments on that? Any objections to adopting?
All right, cool. Thank you, guys. Dan, you're on the call, right? Dan, you're on the call. So Dan, do you want to talk to this one, Dan, while I do some cleanup here? Dan, you may have to come off mute. Okay, probably not. Excuse me. All right, so Dan is managing our website for us. And I believe it was Sonia who actually created a blog post, basically announcing version 0.2. And so Dan added it to the website, but then he brought up a good question: do we want to actually create a blog section or not? And it sounded like a wonderful idea to me, but rather than just doing it, I wanted to bring it to the group for discussion. How do you guys want to handle blogs going forward? Is there another website you'd like to put them on? Or if we put them on our website, is creating a blog section okay with you guys? How do you guys feel about this? Do you guys care? Creating it is the easy part; maintaining it is the hard part. So who's going to author the content? How's this all going to work? Thanks, Doug. I couldn't get back to the app quickly enough. Yeah, I'm worried about that — that we won't have very much content. And there are other people who run publication systems. I write for opensource.com and I'm a moderator there. That's a good avenue. A lot of companies that are on these calls have their own blogs, so they're more likely, I would assume, to post there. We may just be posting announcements. And even just setting up the blog is, I mean, fairly easy, depending on how we do it, but then maintaining it through each version of the website as it changes will just be an extra burden. Yeah, I guess I should mention that Sonia did try to get this added to the CNCF website, but because we're just a sandbox project — I think that's why — they're not allowed to advertise our stuff. So they wouldn't accept our blog. That's why we couldn't put it there. Jim, I think your hand was up. Do you want to say something?
I think you just answered my question, because my assumption would be that CNCF would have that sort of capability, but I think you just covered that. Yeah. And when I asked about that, the response I got back was basically: well, don't be a sandbox project, go to incubation. And we're like, well, we'll get there eventually. Yeah, this is Sonia real quick. I did reach out to the CNCF marketing folks and spoke to Taylor and Caitlin about this, and the response was the same. And I think some of that sandbox-versus-incubation stuff will change once the governing board stuff is finalized and the dust settles a bit. So it just kind of is what it is right now. So, I think there are two different — sorry. Yeah. I still believe it's super worth it, even if it's just for announcements. And something else, which I noticed today: I wanted to check out the SDK documentation, and that doesn't seem to be published anywhere. So both of them would be good. More documentation is always better — unless it is out there somewhere, but still. So both the blog, even if just for announcements, and the SDK documentation, which doesn't seem to be published anywhere. Yeah. We'll deal with the SDKs a little bit later, so let's focus on this blog thing first. You actually used the word announcement there instead of blog, and I think that's interesting. I hadn't thought about that, because there were obviously people who were concerned — like, Austin, you mentioned, first of all, who's going to manage the website and who's going to create new content going forward. And calling it a blog almost implies we want a whole bunch of additional content, and so we may have to sort of poke people to create content. But if we rename it announcements, that's a little less formal and may happen less often. And it may be easier for us to write up a short little blurb about what's going on in terms of our releases.
And that may be something that's more manageable to keep on our website, but then do as Dan was suggesting — let people write blogs and put them other places, whether it's Medium or their own company websites. So I guess what I'm suggesting is maybe we should look at an announcements page as opposed to a blog page on our website. What do you guys think about that, or any other suggestions? That could work. One other suggestion: what if we created — I'm not sure what this is called on Medium, but I think it's like a syndicate or something — where you can basically just redistribute blog posts from other authors and other entities. And, I mean, I guess it depends on whether they're writing on Medium or not, but if people write stuff about CloudEvents, we could just republish it through our Medium syndicate and inherit a lot of content that way, and kind of centralize content for people. The challenge there is just how much content is actually out there being written on CloudEvents right now on Medium. Anyway, I haven't fully thought it through, but maybe there's an opportunity there. Interesting approach. Any other comments from anybody else? Thinking it through, I do like the announcements, because we do have a newsletter, I believe. And we're going to be sending out some newsletters, hopefully. And ideally, if we just have to write one announcement — one piece of content — to send out to the newsletter audience as well as to post on our website, that would simplify the maintenance a lot. Wait, did we say we're going to have a newsletter? For the CloudEvents effort? I'm not sure who owns it, but there was a newsletter at one point. Is it not on the website right now? I don't know. I never heard there was a newsletter. Yeah. We haven't done anything with it yet — okay, it's not on the website, but we used to have one. Oh, no, it is. At the very bottom, there's "get updates" and "enter email for updates". Oh.
Doesn't that — no, that signs you up to the list that CNCF set up. Oh. I don't know if we ever do anything with that. It's a list that we probably have access to, to message. Right. This adds you to our mailing list, I think. Or what is this? Now you've got me curious. Yeah, we did this months ago now. I don't remember when we switched from some other list service to this one. Yeah. Yeah, I think that just subscribes you to our mailing list, Austin. Okay. I've never seen a newsletter yet. Yeah. Well, we have a list — we have a list of people whom we can mail, I think. And if we just want to agree to draft kind of short announcements, sending them to the mailing list and publishing them on an announcements page or something, I think that sounds like it has low overhead and can still keep people informed. I would second that. Yeah. Hi, it's Ihor from CNCF. I'd just like to reiterate what was done. So there were a few Google groups, and we decided — the community decided — that they had to be migrated to the new lists service. So we at CNCF did that for the project. So now you may use it as you previously used the Google groups. It's not a newsletter, as mentioned before; it's just a regular mailing list. But again, if you feel that you may use, for example, the announcements mailing list for that style, then yeah. All right. Thank you, Ihor. So I guess what I'd like to do is sort of summarize. I think what you suggested, Austin, is to leverage our mail — well, no, create an announcements page on our website, right? That's part of it. And then potentially look at leveraging Medium in the future when there's more CloudEvents content out there, because you seemed concerned that there isn't a whole bunch of CloudEvents blogs out there today to syndicate. Is that a fair way to summarize it? Yeah. The Medium stuff is kind of a far-off suggestion; I've got to think it through further.
There's just — I've seen a lot of other companies be really successful just by building Medium orgs that source content from other people related to whatever their theme or subject matter is. And we can figure that out later. I don't think it's the immediate priority, whereas creating the announcements page sounds like the more immediate priority, so there's a linear log of what's been happening visible on the website. And if we could tie that in, or mail those announcements to the people on our mailing list, that would be interesting too. And it sounds like a good starting point to me. Okay. And what we can also do, as a work item, as part of the release process, is find a volunteer to write up an announcement. That way we don't forget it, right? Then you just need to find a person to volunteer every release to at least say something — you know, a paragraph or two — because you should at least say something every release, maybe highlight what's been going on. So I can add that to the release process. But then the biggest thing you're suggesting is, rather than having a single announcement, create an announcements page that we can link to. What do people think about that as a short-term solution? Yeah, I like that. And could I ask that we actually send that 0.2 announcement out to that mailing list as well? And so I think it's more the process — yeah, the process of, you know, authoring the announcement, approving the announcement, and then pushing it out to those different channels. Okay. So: add that to the release process, add a page to the website — that's what I'm hearing — and then let's consider, what do you call it, was it a channel or an org? What was the title for that thing, Austin? I don't know what the official term is. Okay. Maybe syndicate? I'm not sure. Okay. Okay. Okay.
So I guess the biggest thing I'd like to consider then is these three concrete action items as the next steps for this particular issue: create a release announcements page, add a create-a-release-announcement step to our release process, and then send out the 0.2 announcement that's already written to our mailing list. What do you think about those as the next steps for this issue? Looks good. Okay. Any disagreements? Any objections? All right. Cool. So, Dan, I'm assuming you'll be able to create the announcements page? I don't know. Are you saying that because of time, or because of interest? Both. I'm not really a front-end person, so this is just kind of a thing that I could do with Hugo. But I will take a look at some options. And if we ever wanted to do something that made our newsletter-type thing or announcement look a little bit nicer, I have used MJML in the past, so I may try to fit something in. If we have any front-end people, feel free to jump in. But I would imagine we don't have a lot of front-end people, so I will trudge along and try to get this done. Okay, that's fine. If you run into a problem, just raise your hand and ask for help. I'm sure we can find somebody who knows Hugo. I know a couple of people who would play with it. Cool. I'm sure we can. And I'll try to get a little automation in there around posting to a mailing list and including the release announcement and kind of updating the website, all from one commit, possibly. Oh, that'd be kind of cool. I think that's like icing on the cake; I'd be happy if we get the manual stuff working. But yeah, that'd be cool. Dan, where's this hosted right now? It's on GitHub Pages. Okay. Okay. Well, anyway, yeah, if you could just do some exploration and see what the options are — to be honest, I'd be happy with just another static webpage that we have to manually add stuff to every now and then. I'm okay with that too. Yeah.
And this is Sonia again, raising my hand. Dan, let me know — I'm happy to be your backup or help you along with this process, or what have you. Not to say that I'm a, quote, front-end person, but I'm happy to dive in and see what I can do as well. Cool. Thank you. Yep. If for whatever reason we needed to host that on a separate service, do we get any financial resources from the CNCF? Ihor, are you still on the call? Do you know the answer to that one? I'd prefer if Dan would ask that question. Okay — I'll make a note for Dan to ask about that. Not that we're missing anything yet, but it's good to know. So, all right, cool. Okay. And then for the other two action items — modifying the release process and sending the announcement — I'll take those two, unless someone else really, really wants to. All right. Anything else related to this issue that people think we need to discuss? Dan, anything else that we forgot? No, I think that covers it. All right. Cool. And thank you very much for doing all that work, both Dan and Sonia, putting that together. All right. Next on the issues list — our PR list. So, okay, this one: I'm not actually trying to resolve this today, because I'm not sure people have had a lot of time to think about it, and it is a property name change, which is kind of serious. Excuse me. But I did want to at least bring up the topic for consideration. While I was reviewing one of the binding specifications that's out there — I can't remember which one it was for sure — I think, in my opinion, they were using the content type field incorrectly. I think they were getting confused between the content type field that, for example, HTTP has, versus the content type field that we have as one of our properties. And it got me kind of worried that people will use it incorrectly going forward.
And because the data for the event itself is kept under a property called data, it seems to me that it would make a whole lot of sense, to avoid this possible confusion going forward, if we just renamed our content type field to be data type. It's not going to change the semantics at all. I'm just looking at a simple syntactical change, just to make it perfectly clear that this is the type for the data field — not a content type to be confused with the HTTP Content-Type header. So anyway, that's the purpose behind it. The PR itself is strictly a syntactical change — content type to data type, all the way through the entire set of documents we have. But the big question for you guys is whether you would be in favor of this change at a conceptual level. So I'll open the floor up for comments or questions. What does silence mean, guys? Help me out here. Hey, I think for me that means I would need to read it and see. Okay. Well, if no one has any comments yet, that's fine too. Like I said, I'm not going to push to resolve it one way or the other this week. It is a relatively big change, because it is a code change, even though it's conceptually a small change. So if you guys just want more time to think about it, that's okay; we'll revisit it next week. But if there are any questions you want to bring up right now, or concerns you have right now, now's your time. I think it's great. Okay. Anybody else? Okay. Not going to push it, so I'll bring it up next week, and hopefully we'll have some kind of resolution next week, either to accept it or reject it. But please give it some thought. I don't think it's a huge coding change, but I do think it will alleviate any potential confusion that might be out there. All right. Thank you, guys. Sorry — a late comment here. I think it's an appropriate change. I always found it a little bit unusual that we were using content type out of context, so I'm in favor of it. Okay. Thank you for the comment. All right.
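A minimal sketch of what the purely syntactic rename would look like on a JSON event. This assumes 0.2-style attribute names; "datatype" is the name floated on the call, and the final spelling was still to be voted on:

```python
# Sketch of the proposed rename: "contenttype" (easily confused with the
# HTTP Content-Type header) becomes "datatype", making it explicit that the
# attribute describes the "data" field. Semantics are unchanged.

def rename_contenttype(event):
    """Return a copy of the event with 'contenttype' renamed to 'datatype'."""
    renamed = dict(event)
    if "contenttype" in renamed:
        renamed["datatype"] = renamed.pop("contenttype")
    return renamed

before = {
    "specversion": "0.2",
    "type": "com.example.created",  # hypothetical event type
    "id": "42",
    "contenttype": "application/json",
    "data": {"hello": "world"},
}
after = rename_contenttype(before)  # same event, only the attribute name differs
```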
Before we move forward then to the next item — are there any other comments on this one? Right, moving forward then. Kristoff, I believe you opened this one yesterday, so I'm not going to push for a vote or anything like that, but I thought I'd give you an opportunity to at least discuss it, because it is something that has been brought up in the past. So do you want to talk to this one? Okay. There you are. Okay, I thought you'd dropped. Go ahead. So maybe I'll start a step back. For a lot of serverless technologies, let's put it this way, there are some limitations. One is functions-as-a-service: for example, AWS Lambda will only accept events up to a size of 128 kilobytes, and then there are other limits for other functions-as-a-service providers. Similarly for message queues — basically, transport layers — there are some limitations depending on the product. Sometimes they're configurable; sometimes, usually if it's software-as-a-service, they're hard-coded. Same for HTTP servers — they usually come with some protections, which are basically limits on whatever comes in. So I think for interoperability, we should specify somehow when it's okay to refuse something and when it's sort of guaranteed that you can send an event. If I'm sending out an event, I want to know: will it go through or will it not go through, obviously? And the main consideration for that is size. So at Commercetools, I implemented — I integrated with a couple of different message queues. And what we do is we basically have a lot of data, and people always ask for most of the data. So for the message queues that support it, we send a lot of data; for the message queues that support less, we send less, basically. The only thing I really want to know is when I need to cut it off so that it still goes through. Yeah, so this is kind of my first proposal for how to approach this problem. I don't know — we can do it one way or another.
I'm also not super picky about the limits themselves, whatever they end up being; I just want there to be a limit. So I basically made two proposals here. The first one is a hard limit that basically says an event should never exceed a certain size. And the first question is: how do you measure size at all? This is kind of difficult, because obviously, depending on which format and so on, it will be different. But because we have JSON and it's more or less the default serialization, I think that's a good way to measure the size of the event. So I picked that, but I'm open to better suggestions. So basically what I'm saying here is: if you serialize it as minified JSON, then it must not be larger than 128 kilobytes. Again, that limit itself is up for discussion. And I made a second limit, which is on the number of top-level attributes. In terms of nesting, I didn't specify anything. But, for example, for HTTP, some servers will have a limit on how many headers you can send. And basically the second part says producers should only create events within these limits, and consumers must accept all events within these limits; they may also reject messages beyond them. So basically you're not allowed to go above it. The second proposal — the second option — is a bit softer, in that it says: there are these limits, and you should probably stay within them, but if you go above them, it may work; it's just up to you then. So we recommend that producers stay within these limits, and consumers may reject messages that violate these limits. But if everything works anyway, that's also fine. So those are basically the two options that I have. And yeah, a third idea that I didn't write down is the claim-check pattern, which you may have heard of, in which case middleware could also shrink messages. But that's maybe too advanced for now. Okay, Jim, I think your hand's up.
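A sketch of how the first (hard-limit) option might look in code. The 128 KiB figure is the number from the PR and explicitly still up for discussion, and the top-level-attribute cap below is a made-up placeholder, since no number was stated on the call:

```python
import json

MAX_EVENT_BYTES = 128 * 1024   # proposed limit, explicitly up for discussion
MAX_TOP_LEVEL_ATTRS = 64       # hypothetical placeholder; no number was stated

def event_size_bytes(event):
    """Measure the event as its minified-JSON serialization, per the proposal."""
    return len(json.dumps(event, separators=(",", ":")).encode("utf-8"))

def check_event(event):
    """Option 1 (hard limit): producers must stay within the limits, and
    consumers may reject anything above them."""
    if event_size_bytes(event) > MAX_EVENT_BYTES:
        raise ValueError("event exceeds %d bytes" % MAX_EVENT_BYTES)
    if len(event) > MAX_TOP_LEVEL_ATTRS:
        raise ValueError("more than %d top-level attributes" % MAX_TOP_LEVEL_ATTRS)
    return True
```

Measuring against the minified (no-whitespace) serialization makes the check independent of pretty-printing, which is the point of picking JSON as the yardstick.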
Yeah, I think obviously we need to make statements like this, but I do see it as more of a transport concern. Especially if you look at HTTP: are you looking at the whole payload, or just the data payload, or does that include the headers? I think you get into very murky territory a little bit. And also, from an end-to-end perspective, with potentially multi-hop environments, I'm not quite sure whether we could do anything more than say people should be aware that there are limits. I'm not sure you could actually enforce limits. That would be my only comment, but I'll comment on the PR directly. Those would be my major concerns. So, I think the second option sort of tries to say that there's a guarantee, and for interoperability every transport layer has to support events up to this size. So if I'm sending one, I know that everyone will accept it. And if I'm setting up an HTTP server and it doesn't accept 100 headers, then it's not compliant; then I'm wrong. Yeah, I get that. I guess my comment would be: if we go down this efficiency road, or whatever you want to call it, it then behoves us to change our specs so that they're much more terse, or the payloads are much more terse. So maybe it changes the way we define those attributes or properties or headers so that they're much more conscious of the fact that size is a concern. Yeah, but 128k is not the kind of limit that makes you go in and shorten each field. I would hope not. I mean, if we get to that level. So first, you can't really make this a transport concern, because we're going to have plenty of scenarios, especially when middleware is involved, where an event goes into the middleware using HTTP and pops out of the middleware using a different protocol. Which means you can't make it a transport concern, because if the ingress transport supports a bigger size than the egress transport does, then you're in trouble. So either you make it end to end or you don't.
And then from a size perspective, 256k is within the ballpark for most mainstream transports and brokers; 128k certainly is. And most applications, even in messaging and eventing, can typically deal with that size. Then you have some outliers which do maybe a meg, and beyond that, you do file transfers. So I don't think 128k is a problem. If we wanted to make this work for, like, LoRaWAN or something like that, then we'd be having a really different discussion. That's a really constrained transport, because the frame sizes are, like, 16 bytes. There you would have to reframe the way we do metadata so that you could fit an event into a transport like that. But 128k is nothing you need to be worried about; that's just mainstream messaging and eventing. Right. Austin, I think you're next on the list. Yep. I think the intentions behind this are pretty good. I'm wondering if maybe the SDKs have a role to play here. Even before we approach limits, the SDKs could perhaps warn people if they have large cloud events. I don't know. Maybe they could help mitigate any problems that come from this without imposing any hard limits, and still address the problem. All right. My hand's up. It was interesting, Kristoff, that you did approach this from both angles, right? One from the producer side and one from the consumer side. As you were talking about it, I kind of like the idea, if we're going to talk about limits at all, of looking at it more from the consumer side: to say you must support incoming messages of this size, or something like that. Mainly because specifying a limit on outbound messages sounds a bit restrictive, right? Because what if, in some particular environment, they want to be spec compliant, but they want to send messages that are greater than 128K, something like that, right?
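Austin's SDK suggestion can be sketched as a soft check on the producer side: rather than enforcing a hard limit, the SDK warns when an event gets large. The threshold and function name here are made up for illustration, not part of any CloudEvents SDK:

```python
import json
import warnings

# Hypothetical soft threshold; an SDK might make this configurable.
SOFT_LIMIT_BYTES = 64 * 1024

def encode_with_warning(event: dict) -> bytes:
    # Serialize as minified JSON, then warn (but don't fail) if the
    # result is large enough that some transports might reject it.
    payload = json.dumps(event, separators=(",", ":")).encode("utf-8")
    if len(payload) > SOFT_LIMIT_BYTES:
        warnings.warn(
            f"CloudEvent is {len(payload)} bytes; "
            "some transports or brokers may reject events this large"
        )
    return payload
```

A warning keeps producers spec compliant while still surfacing the interoperability risk before the event hits a transport with a hard cap.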
Are we now going to say they're non-compliant, even though they know the receiver can accept it and the process works just fine? From that perspective, I'd rather allow people to go beyond limits, if they know the other side can accept it, and still be spec compliant. That's why I'd rather put the burden on the receiver, to say what minimums they have to accept. At least that's my initial thought on the process. Anybody else have any comments on this one? I think it's good to have that discussion in the spec. What the actual limits are is something that we still need to figure out. Let me ask a higher-order question then, for people. Because I'm not hearing anybody speak up against the idea of adding limits in some way to the specification; whether it's in the main spec or the transport specs is something to figure out. But does anybody have any concerns with continuing to head down this path of adding limits in some fashion to our documentation? Not hearing any objections. It sounds like, Kristoff, there is agreement to head down this path, so that's good. How do you guys want to move forward, then, in terms of taking the next step and modifying this pull request to get everybody on the same page? Do you want to just work through comments in the PR, or some other mechanism? Part of me wonders whether we first need to have a higher-order discussion about where the size limits should apply. For example, in this discussion we've had people talk about size limits on the entire message itself, meaning the transport concern, and then other people said, no, you really should only focus on the size of the cloud event itself, because of issues related to that. So do we need to have a higher-level discussion around what the limits should apply to first? The entire message, just the cloud event, just individual properties? Do we need to decide, or do we need to bounce around between all three? You guys are way too quiet today.
I think it's a good point you're making. I also struggled with that when I made this initial PR. I decided to take the whole cloud event as JSON, since that's a good starting point, and I think many consumers will work on that level: they receive the whole thing as JSON or whatever, and that's one size they have to keep in memory. And then I also picked the top-level attributes, because I know that this is part of the HTTP binary mode and I think that will be commonly used, and there are commonly limits on it, but I'm also fine with removing that part. We could also go and figure out limits for each attribute and for the data individually, but that seems more fragile, let's put it this way. Yeah, and Carlos, I'll get to you in a second. It's funny, because I was going to mention the 100 top-level attribute restriction you put in there. When you serialize as JSON, the 100 limit may not necessarily be necessary, right? But if you serialize in the binary HTTP format, as you said, the number of HTTP headers may matter at that point. So, I can't remember who it was. Oh, Clemens. When Clemens said it can't be a transport-level issue, well, I'm not sure I 100% agree. Depending on your serialization, it may be a transport-level issue, right? Because 100 may be okay in structured mode, but not okay in binary. So we need to have reasonable upper bounds, and I'm not convinced 128k is that upper bound, but we need to have an upper bound. So, Carlos, your hand is up, and then I think that might be it for today's call. Yeah, my two cents are that I don't see the value of adding this type of limitation using the word "must". As a provider, I really don't care about the size. If I want to send a picture and it's one meg, why not? I don't think the spec should be specifying size limits.
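The structured-versus-binary point can be illustrated with a rough sketch: in the HTTP binary content mode, each context attribute becomes its own ce-prefixed header, so a server's header-count cap bites per attribute, while in structured mode the same event is one JSON body where only total size matters. This is a simplification of the real binding (which special-cases things like datacontenttype), not an SDK implementation:

```python
import json

def to_structured(event: dict) -> bytes:
    # Structured mode: the whole event travels as one JSON body,
    # so only total size matters, not attribute count.
    return json.dumps(event, separators=(",", ":")).encode("utf-8")

def to_binary_headers(event: dict) -> dict:
    # Binary mode (roughly): each context attribute maps to a "ce-"
    # HTTP header, while "data" travels as the HTTP body. An HTTP
    # server's header-count limit therefore applies per attribute.
    return {"ce-" + k: str(v) for k, v in event.items() if k != "data"}

event = {"specversion": "1.0", "type": "com.example.x",
         "source": "/x", "id": "1", "data": {"n": 1}}
print(len(to_binary_headers(event)))  # 4 context attributes -> 4 headers
```

This is why an attribute-count limit can be redundant in structured mode yet load-bearing in binary mode: the same logical event stresses different transport limits depending on serialization.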
Maybe, and I think somebody mentioned it, you can go to the specific attribute and say this attribute is of type string and it should not be more than X, or if it's an int, it should not be more than X, just to avoid abuse. But other than that, I think it's doing a disservice to say you cannot send a picture that is big, or a piece of data that is big, or if it's binary, do it this way, and if it's not binary, do it that way. It's going to be convoluted and nobody will care. We just send a message, and if the consumer can accept it, it goes through, and we don't worry about this. But that's my two cents; I don't know if it's valid or not. Okay. Thank you, Carlos. I guess we're about out of time. Maybe one more comment or question. Anybody else want to bring anything up right now? Otherwise, I think we're either here for next week or we talk in the PR itself. Yeah, my closing comment, maybe. I think whoever proposed this was talking about claim check patterns. Maybe this is an example of where you refer to sort of best practice and make people aware that transports or implementations do have limitations, and these are patterns for how you get around them. Yeah. I understand we should have statements around this stuff; I'm just not quite sure how you draw a line and where you draw it. Okay. Thank you, guys. All right. With that, I don't think we have time to dive deep into anything else anyway. So on this particular issue, or PR I should say, please comment on the PR itself, on any particular line if you want to. But I think it's more important at this point in time to have sort of a higher-level discussion. So thank you, Kristoff, for forcing the discussion; that's really good. Let's have a discussion in the PR itself, go back and forth, and see if we can maybe land on a general direction for where we want to head on this stuff going forward.
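For reference, the claim check pattern mentioned in the discussion replaces an oversized payload with a pointer into external storage, which the consumer later redeems to fetch the full data. A toy sketch, with an in-memory dict standing in for real blob storage and all names invented for illustration:

```python
import json
import uuid

BLOB_STORE = {}          # stand-in for external storage (a blob service, etc.)
MAX_DATA_BYTES = 1024    # deliberately tiny so the example triggers

def check_in(event: dict) -> dict:
    # If the data is too large, park it in the store and replace it
    # with a reference (the "claim check") the consumer can redeem.
    data = json.dumps(event.get("data"), separators=(",", ":"))
    if len(data.encode("utf-8")) <= MAX_DATA_BYTES:
        return event
    key = str(uuid.uuid4())
    BLOB_STORE[key] = event["data"]
    slim = dict(event)
    slim["data"] = {"claimcheck": key}
    return slim

def redeem(event: dict) -> dict:
    # Consumer side: swap the claim check back for the stored payload.
    data = event.get("data")
    if isinstance(data, dict) and "claimcheck" in data:
        full = dict(event)
        full["data"] = BLOB_STORE[data["claimcheck"]]
        return full
    return event

big = {"id": "1", "data": {"blob": "x" * 5000}}
slim = check_in(big)        # data replaced by a reference
assert redeem(slim) == big  # consumer recovers the original event
```

As noted in the call, middleware could apply this transparently to shrink messages below a transport's cap, which is one way to keep limits out of the core spec.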
And then we can modify this PR or create additional ones if necessary going forward. All right. And with that, let me switch back over to the attendance. I don't know who AW is, but you don't have a microphone. AW, if you're there, either come off mute if you can or put a message into the Zoom chat just so I know you're there, and actually spell out your name and your company if you can. I'd appreciate that. I heard them. Keith Crown or Keith Crow, are you there? I guess he dropped. Okay. Who is 744-673-2305? Okay, they're not there anymore. Sonya, I heard you. Is there anybody else I missed for the attendee list? All right. Is there any other topic people would like to bring up, then? Because we have a whole five minutes. Austin, you came off mute. Are you going to say something? No. Maybe next week. Okay. Actually, one thing, since you weren't on earlier: we decided we're going to have SDK calls every other week, 30 minutes before this call. Okay. Just letting you know, since you're part of the SDK stuff. All right. And I guess with that, we are done. So you get back four minutes of your day. All right. Thanks, guys. We'll talk next week. Thanks, everyone. Bye, everybody.