Let's see, three action items. So why don't we go ahead and get started? Let's see, the action item on the logo is the only one I really wanna talk about, so Brian will look into that one. So let's jump right into the extension stuff. So, hold on a second, let me open this up. So Sarah, sorry, not Sarah, Rachel. They made a comment late last evening. Let me see if I can find it. Where is it? Right here. And hopefully everybody had a chance to look at this comment. Actually, just as a quick refresher: at the end of last week's call, we were gonna be taking a vote, but then Kathy had some last minute changes she wanted to make to the PR, and we got those in, so that's behind us. So the plan was to actually have a vote first thing on the call today, right now. However, with Rachel's comment, I'm pretty much interpreting her comment as a request to delay the vote so that they can prepare some material to explain some of their concerns and do that presentation to us on next week's call. Am I interpreting that correctly, Rachel? Yeah, that would be great. Okay, so my question for the working group is: is there any objection to deferring the vote until we have the further discussions next week, where Rachel, or whoever will come, can do the presentation? Well, Clemens rebutted Rachel's comment, so maybe Clemens can comment on that. So I'm happy to have that discussion, but then really finalize next week. I would prefer to really focus on Doug's PR, so this one, and what the extensibility model is. It would be great then. I have a POC that I wrote this week that actually shows that you can go and seamlessly flow a cloud event through three formats while retaining top-level extensibility for JSON and XML, as schema-less formats, and then use an extension bag for Thrift and protobuf without loss of fidelity.
So, and I strongly believe that what Rachel is raising is really a tooling issue, and what I would like to see, if Google wants to present next week, is a proper proposal for a protobuf event format. And if that's not there, then I'm just considering that a private implementation issue of Google, with a proprietary format that they're choosing to use cloud events with, and then that's a question of why that ought to even be a concern for the group. But even with that, there's the prototype that I have, and I wrote that in C# just because that was kind of the closest thing for me, the least time, but that should be something that most people can go and follow. It basically proves that you can not only do that flow, but you can actually do extensibility in the sense of, you know, promoting properties out of an assumed 1.0 standard to a 1.1 standard, and then also have kind of flexibility and interoperability in that model. So, I would ask people to go and review that, and I'm happy to go and hear Google's arguments next week, and then end that call next week with a vote. Right, okay. So, there are a couple of things that are mentioned in there. One is, I don't think I'm hearing any objections yet to deferring the vote until next week and having the discussion. Obviously, I'll ask again because only two people got to speak there. But two, there were some requests made there for Google's sort of presentation to the group to address some of what Clemens mentioned. The other thing is, Clemens, you have mentioned your POC. I prefer not to necessarily take up time on this call to talk about your POC. Would it be okay if we talked about your POC on next week's call as well? Yeah, if people take a look at it, I'm happy to do a brief walkthrough of it and constrain that to five minutes next week. Okay, that'd be great.
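Clemens' C# POC isn't in front of us, but the round-trip he describes can be sketched. This is a toy illustration, not his actual code, and the attribute names below are assumptions for the example, not the spec's:

```python
# Toy sketch of the flow Clemens describes, not his actual C# POC.
# Schema-less formats (JSON, XML) carry extension attributes at the top level;
# fixed-schema formats (think protobuf or Thrift) park unknown attributes in an
# "extensions" bag. Attribute names below are illustrative assumptions.
KNOWN_CORE = {"specversion", "type", "source", "id"}  # assumed core attributes

def to_fixed_schema(event: dict) -> dict:
    """Map a flat event into known fields plus an 'extensions' bag."""
    known = {k: v for k, v in event.items() if k in KNOWN_CORE}
    bag = {k: v for k, v in event.items() if k not in KNOWN_CORE}
    return {**known, "extensions": bag}

def to_flat_format(fixed: dict) -> dict:
    """Promote the extension bag back to top-level attributes."""
    flat = {k: v for k, v in fixed.items() if k != "extensions"}
    flat.update(fixed.get("extensions", {}))
    return flat

event = {"specversion": "1.0", "type": "demo.event", "source": "/demo",
         "id": "1", "myextension": "x"}  # extension unknown to the core schema

round_tripped = to_flat_format(to_fixed_schema(event))
assert round_tripped == event  # no loss of fidelity across the two shapes
```

Promoting an attribute from extension to core, the 1.0-to-1.1 scenario Clemens mentions, then amounts to moving its name into `KNOWN_CORE`: older consumers still find it in the bag, newer ones see it as a first-class field.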
I assume, Rachel, giving up five minutes of the time from next week's call is okay with you? Yeah, of course. Okay, just wanna make sure. Okay, so let me then circle back around. Are there any objections then to deferring the vote until next week's call, after we've had a couple more discussions about this, mainly for Google to present their concerns? I'd just like to establish that we don't delay it again. I just feel like there's a lot of orthogonal or duplicitous arguments that we continue to argue through. And I don't want to come again next week and say, okay, we're gonna delay until the week after. Okay, sounds fair. Any other comments on that? Just to be clear, who's being duplicitous? Who's what? Who's being duplicitous? I just think we've had the argument of particular implementations several times now. We had it a couple of weeks ago. We had some last week. Either we need to settle that, or we need to decide we're not gonna settle it and we're gonna vote anyway. Okay, so anybody else have a comment on that? I'm not hearing any objection. Yeah, I mean, I just thought we were just working through the open questions. Yeah, we are. That's why I think it's fair to ask the group if they're okay with continuing the discussion. So, okay, I'm not hearing any, I'm sorry, is someone saying something there, Eric? Just a comment on the comment, excuse me. After presentations with content where who knows what the results would be, I think leaving it open for one week after that seems within reason. That's fair. That seems absolutely reasonable. Okay, so I think what I'm hearing is: potentially use up as much time as necessary on next week's call to discuss final concerns about this, then give people one more week to ponder the situation and then vote first thing on the following call. Or this could be a good opportunity to try an offline vote that wouldn't take up our time.
That's fine too. I don't expect a vote to be more than three minutes, because I'm not gonna allow people to elaborate beyond yes, no, or abstain, to be honest. At that point, we've already talked it through. But I like the idea that offline voting also records yes, no, abstain, comment; even if the comment can't cause a conversation, it's good to let people stand by their decisions and explain themselves. Okay, we can figure out the exact details of how the vote happens next week. But anyway, the point here is one more week for discussion, I mean, sorry, next week's call will be discussions and then a vote will happen. Sound fair? Yes. Okay, no objection to that plan? All right, so that is the plan moving forward. Thank you guys very much. All right, next on the agenda. It was brought up, I apologize, I can't remember who mentioned it, but someone mentioned in Slack the possibility of a face-to-face meeting at the OSS summit in Vancouver at the end of the month. So I said I'd bring it up. I'm just curious, do people want to have a face-to-face? Are enough people gonna be there to warrant a face-to-face? Who's gonna be there? Yeah, it's Jesse. I think I'm the one who brought it up, or one of the people who brought it up. I will be there. I was just curious if other people are going to be there. Yeah, I know I will be there. So I'm okay with a face-to-face. And I believe Chris Aniszczyk did say we can get a room at the conference center if we wanted one. I won't be there, and it's too short notice for me. Okay. I see in the chat two other people, Chris and Austin, will be there. How many other people are gonna be there? Cause if it's less than, say, five or eight or something like that, I'm not sure it's worth it. At least to make it a formal meeting, cause I don't think that's going to be quorum. We could just meet for a meal or something. Food is good too, yes.
Okay, so tell you what, I'm not hearing a whole bunch of people speaking up, but let me do this. What if I start a doodle poll and make it last not too long, cause people would need to make plans if we do set it up. So maybe a doodle poll that ends end of day tomorrow. And if we can get a significant number of people saying yes, then we'll see if we can set something up. But if we don't get, you know, somewhere like eight or more, then I'm not sure it's worth it. Does that sound fair? Sounds good. Yeah, maybe people who are going can chime in in the notes so we know they're going to be there. Yep, we can start that. We should, just as a general note, I think having at least four weeks lead time for a face-to-face is required, because people are just busy. Yeah, maybe. I mean, I think we can have a face-to-face. It's just a gathering. It's a community gathering, not an official meeting. That is definitely true. So we don't have quorum. We can still meet if we choose to, but it's not a formal working group session. And you're right, Clemens, I'll double check. I think our charter actually may require a certain number of weeks. And I'll go back and double check. And if it's within, I'm sorry, if it's too soon per the charter, then I'll drop the entire idea. But I'll double check on that. Okay, great. Yep. All right. Community time. Are there any community questions? Do people want to bring up topics you want to mention? This is usually the time for people who don't usually join the call but are more from the community itself to bring up particular issues they'd like to discuss. All right. Not hearing any, we'll move forward then. Austin, I'm assuming you don't have anything to update on the SDK. I do see you online now though. Since you are here though, let me ask you a question. Do you have any update on your action item relative to the logo? It's done. I just need to submit the artwork. I could do that today.
Should I upload this to the cloud events spec repo, or should we make a separate repo for artwork? I don't see any reason why it would need to be a separate repo. Can you give a reason? When you have image files and artwork it's going to increase the file size of the repo, so I'm not sure if that's a concern. Are we talking gigabytes? No, it shouldn't be too bad. I think a few megabytes. Can we use LFS? Yeah, these are just going to be Sketch files. I just realized we could use Git LFS for all the image files. The reason that's fairly easy is that there's an option for it. Ah, okay. We shouldn't need that if we only have one image though. Yeah, these will be fairly small. As long as there are not a lot of changes it should be fine. So I will submit, or where should I put these in the repo? Honestly, I don't remember. Do we have an images directory or something like that? I don't think we do, actually. So just create a directory for it and call it whatever you think is appropriate. I don't really think it matters much. Okay. All right, cool. Thank you, sir. And then, back to the SDK work group. Anything to mention there, since I don't think you had any meetings? We have not had any meetings since the last one we had a few weeks ago. All right, okay. In that case, moving forward. Kathy, I believe you had one meeting since you last gave an update. Is there anything you'd like to update the group on? So, for the function workflow document, is that right? Yes, correct. Okay, so we have had multiple meetings working on this document. The subgroup has been working on this document and updating it, addressing the comments. And then we wrapped up in last week's meeting. And so now we would like to bring this to this work group to discuss where we should put this document. Should we put it into a separate repo, or should we put it in the cloud events repo?
Right, so my personal view was to create a separate repo for this, because it's not part of the cloud events work, obviously. But it does, in my opinion, warrant its own little repo, because it's also not part of the serverless stuff. It's sort of a new side project. What does the rest of the working group think? Is a brand new repo to host this okay with everybody? So this is part of serverless, but it's not, I agree with you, it's not really a section of cloud events. I think it's parallel to cloud events, but it's a serverless function workflow. So I think I probably agree, a separate repo is better. Yeah, I'm trying to think. The serverless working group has its own repo under CNCF. So we can't create a repo under our repo, under the working group serverless repo. I think we'd have to create a new repo under CNCF. I think that's the only org we have access to at this point, right? Well, if this is a proposal of the serverless working group, then maybe it should be under the proposals for serverless, right? We definitely could do that. I guess it depends on how independent of a work stream we view this, right? Because at one point we decided cloud events was independent enough that we sort of branched it off from serverless. Do we feel like it's too premature to take that step here? Well, I think the process is like, cloud events is a project of the CNCF, right? It's not. Well, it is now, yes. And the serverless working group now, right? Yeah, yeah. And before that, it was just part of the serverless working group and all the artifacts were in the serverless working group repo.
So maybe the workflow, we're calling it a working group, but like the workflow working group is a sub-working group of the serverless working group, and artifacts should go there until such a time that the serverless working group thinks it should be a project and potentially promotes it, or documents it, or whatever the outcome is gonna be. Okay. Like, you know, I haven't been involved in it, so I don't know what exactly the proposal is, but it does seem like, you know, it's premature to, and like maybe tell the TOC that there's something going on, or we set a time or something. I think there should be, like Kathy's saying, like we need to figure out what this thing is. So it's, you know, maybe a serverless working group work stream. Okay. Well, basically what I'm hearing is most people seem to be thinking to just upload it into the serverless working group for right now, not create a separate repo for it. Does that sound like what people were thinking? Okay, not hearing any objection. Kathy, is that okay with you? Yeah, I agree, because this is where the serverless working group decided to do another work stream, right? So I think it should be, yeah, part of this. Okay. So Kathy, can you create the PR against the serverless repo to upload the file and just create a new directory for it someplace? Okay, yeah, I can do that. Okay, cool. Thank you. All right, cool. All right, moving forward then. We don't have any issues in terms of maintenance that I was hoping we'd get to close. Let's jump right into PRs. All right, hopefully this one is an easy one. I did give you guys a warning that I considered it to be such. So Thomas, would you like to talk to this sampling extension PR that you opened? Yeah, once again, this was an adapted change from what TinyKoma proposed long ago.
They want to have an extension where a system may send a subset of events in order to do observability without overloading a system with too much data. And so this is the data layer: the event itself would include an extension for the sampling rate, which basically tells you how many events this supposedly represents, so to speak. So maybe in a registration, you'd have a similar feature where you'd say, okay, the subscription to events only wants one in 30. And the event itself would say, by the way, this was sampled at a rate of 30. Yep. Now you have a couple other things in here. Did you want to talk to those? Sure, the spec had never supported the idea of an integer before. So I had to add it into the info set. It seems like a 32-bit integer solves our needs for now, so I'm not inventing a whole bunch of types of integers. If we needed more in the future, we might invent Uinteger or Uint64. Okay, and then up here, I think this was meaning these editorial changes. This is in extensions.md, right? Oh, also, this is the first extension that was itself a scalar, and extensions.md has some basically stylistic notes on how to write an extension in the extensions folder, so they all look the same and are easier to review. And so I added some more notes about how a scalar could be documented. All right, thank you. Any questions or comments on this one? I think it's great as our first example of an extension, and it's a good use. I think for the formal language of the integer type, which is a little bit like "a 32-bit whole number", we could go and formulate this a little bit more; for instance, it needs to be clear whether it's signed or unsigned, and that's not there. Rather than saying 32-bit, I would probably go and define what the value space is, but that's something we can go and fix later. Like, I wouldn't hold the PR on that. Okay, sounds good. All right, any other comments? Why don't we just go for 64-bit off the bat?
I mean, it seems to be the standard these days. Yeah, that's also true. 64-bit is not JSON safe. You cannot represent a 64-bit number in JSON, well, not in all languages. For example, JavaScript can't handle 64-bit integers, so there's actually a lot of controversy around those numbers. Some systems will say that 64-bit numbers are actually 53-bit numbers, the size of the mantissa. Some will say that they should always be strings, so they can't be lossy in any language. There are just a lot of sharp edges when you put 64-bit integers in JSON. Okay, sounds like a good starting point then. All right, any other comments or questions? All right, let me ask the question then. Is there any objection to adopting this PR? Excellent, not hearing any. Thank you guys very much. Cool, it's been actually a couple of weeks since we've approved a PR. All right, Clemens, you are up with qualifying protocols and encodings. Yeah, so obviously there has been a lot of discussion while I was away. I took the change that was suggested for that section. I made an amendment this morning, which, if that's not palatable, I'm happy to back out that particular sentence. Where was that? I added- There, that. Yeah, exactly. I added, effectively, the sub-sentence. So the original text that you all arrived at was practically "would like to see at least one open source implementation, and at least a dozen independent vendors using it in their products and services." And then I injected, effectively, that reference to "under the umbrella of a vendor-neutral open source organization" to basically match the protocol standardization body.
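The 64-bit JSON concern raised during the integer discussion is easy to demonstrate. JavaScript, and many other JSON consumers, parse every JSON number into an IEEE-754 double, whose 53-bit significand cannot hold every 64-bit integer:

```python
# JavaScript (and many JSON consumers) parse every JSON number as an IEEE-754
# double, whose 53-bit significand cannot represent all 64-bit integers.
big = 2**53 + 1                       # 9007199254740993 as a true integer
as_double = float(big)                # what a double-based JSON parser sees
assert as_double == float(2**53)      # rounded: the +1 is silently lost
assert float(2**53 - 1) == 2**53 - 1  # everything up to 2**53 is exact
assert float(2**31 - 1) == 2**31 - 1  # so int32 values are always JSON-safe
```

This silent rounding is why some systems mandate strings for 64-bit values, as mentioned on the call, and why sticking to a 32-bit integer type sidesteps the problem entirely.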
So we'll want to make sure that, just because a vendor puts their own product out under their own copyright in open source and they have a bunch of customers, that doesn't necessarily entitle them to be part of, or have, an official spec. You should really, in the spirit of this, have an open source product that has been produced or is developed on neutral ground, and then we'll go and consider that here. All right, any questions or comments on this? A non-blocking comment that could be done as a follow-up if people agree: the language in line 27 just always hit me as a little bit strong. I might have phrased it along the lines of "the cloud events spec is not an advertising space for proprietary proposals." I don't want to let great be the enemy of good. If a company, like, if Amazon wanted to support cloud events, it's totally okay that cloud events can be compatible with SNS or SQS. It's also totally okay that the working group doesn't want to be the place that hosts that documentation. But I read this as "shame on Amazon for using SQS as a storage buffer", which is not really, I think, the intention here. SQS is using HTTP, and you can go and just map that onto their protocol. I don't necessarily see that being an issue; like, you can go and take a cloud event and map it into an HTTP message. And I would think that the way we're doing that mapping into the body of the message ought to be compatible with SQS, both on the receive and the send side. I haven't validated that, but I think it would be. It will not be compatible with the webhook spec that we have, but that's okay.
Right, like I actually like, like if there was some kind of follow-up where, like I think we have this for just general open source stuff, and maybe I'm confusing two places, but if this pointed to "we encourage vendors to support cloud events and list yourself here with a reference to your doc" if you have a different transport for your proprietary thing. I'm sure somebody could come up with a more concise way to say that. The point that I'm trying to make here, and a follow-up PR might be the best way to wordsmith that particular part, is that what I want to express is that this should not be the place where you make your, you know, your proprietary protocol known to a broader audience. That's what I want to say. And really, we don't want to contribute to the proliferation of more proprietary protocols, with products which could go and snap to a standard protocol without any problem but just choose to use a proprietary protocol because it happens to be more convenient for them, or because they don't want to standardize because they want to go and lock customers into their new protocol, because we've seen all of that. So the goal that we have here overall is to promote interoperability, and people showing up with new proprietary protocols doesn't help interoperability. So I hope we don't want to be the place where we are trying to promote interoperability and at the same time also promote, you know, proprietary protocols. So I understand that that's a little bit, so I've been trying to condense that into that sentence, and I'm happy to take alternative wordings of it, but I hope that people are agreeing to the spirit of what I just said. Yeah, I think that, like, I think I agree with you.
Like, I agree we want to promote interoperability, and I think that that is the spirit of this paragraph. So I think I'd move forward with this paragraph, and then, like, you know, people can wordsmith it as follow-ups. Okay, and I think even Thomas started off by saying it was a non-blocking comment and he'd be happy to do a follow-up PR if he wanted. So we can see to that. Any other comments on this one? I just have a question. What are the proprietary protocols that are being referred to? For instance, what's it called, Pulsar? One of the brokers that we had a proposal for adding a protocol to has a completely proprietary protocol. Pulsar, that's what it is, has a completely proprietary protocol that only it uses. And so here's the question of whether we should go and give a blessing to that nascent product while the rest of the pub-sub space is converging on one or two protocols. So that's an example of a project which comes up with their own protocol, obviously very early in the cycle, that then is only used by that project, and also has, you know, obvious things that a more mature protocol would have missing, like versioning, et cetera. And then we had another proposal out of open messaging, which also doesn't meet the bar in my view in terms of participation or usage. And so there's just a bunch of these; the question is, if we take 20 of these things instead of the protocols that the industry has mostly been focusing on over the last 10 years, are we helping interoperability? And I don't think we are. So those are examples, exactly, from here that I find problematic for us to endorse. That's helpful, thanks. All right, any other comments or questions on this one? Okay, not hearing any comments. I have a question for you. So you actually created a brand new doc for this.
So, two things. One is, if nothing else, if we keep this brand new doc, we need to reference it from the readme at some point. But also, would it be appropriate in your mind to move this into the primer, or do you think this needs to be a standalone doc? I think in the PR comment I actually wrote that I don't think of this as, no, I didn't say that explicitly. So the way I think about this, this is a section that ought to go into the primer. Okay. And I just didn't have a better place for it. And I think when we talked about this doc, and I think we talked about this on the call, I said I'm gonna go and write this up in a PR, and it really is meant to go into the primer. So I think we should take it and then move it. Okay. So what we can do, assuming the working group adopts the PR, is you and I can work offline to actually modify the PR to add it to the primer, because the text of the PR won't change, it's just its location that will. Yeah, correct. Right. So let me ask the question. Is there any objection then to adopting the PR? I have a question. I think in our previous meeting, we said we are going to have a bullet list of the criteria. I do not see it; maybe a previous version had one. Yeah, look at line 34. I think that's what, I can't remember who it was, but somebody had suggested that. Yeah. Matthew, maybe Ryan, I can't remember for sure. So do you see it now, Kathy? Yeah, okay. Right here. Okay. Thanks. Mm-hmm. Any other questions? Okay. Any objection then to adopting the PR? Okay. Is there any objection then to, in the process of merging the PR in, we move it into the primer? Just want to get that out of the way. I would object if it wasn't done. Thank you. Okay. So hold on a minute. Approved, and move into the primer. So I'll work with you offline, Clemens, to make that happen. Thank you. All right.
Now, as Clemens alluded to, there are two different PRs out there for adding some transports or bindings. My assumption was that people may not have actually had a whole lot of time to review those, especially in light of Clemens' PR that we just adopted. Do people wish to discuss these two PRs today, or do you want to defer that until potentially after next week's call, since next week will be about extensions? It's up to you guys how you want to proceed here. Does everybody feel like they've reviewed these enough, because they have been out there a while? So we can technically talk about them if you want to. No comments? So my opinion of the specific, of the open messaging one, is, like the PR that I've seen, and if it hasn't changed significantly in the, when was the last change made? It's been a while, I believe. Yeah, that's right. So that gives us some idea about it. So six days ago was a comment. I think that PR as it stands doesn't even define a transport binding; it doesn't create an implementation guideline for how you realize cloud events on either open messaging's native transport, if there is one, or open messaging's native encoding, if there is one. I don't even quite really understand what it does, but it kind of alludes to the fact that you can use cloud events with open messaging. It really doesn't rise to the level of being a spec. So that clearly needs extra work, and it really needs to refer to, effectively, what I believe the wire format here is, whatever RocketMQ uses, because the effort here is apparently based on RocketMQ, so it would effectively have to be a binding to that protocol. And then we'll have to go and check that against the rules that we just adopted. But as it stands, after I've reviewed this, that's not even a spec.
Yeah, my initial take on this was that, given the bar that we just adopted in your other PR, Clemens, I wasn't sure if this met all the criteria of being widely adopted. Yeah, if you look at the open messaging effort, I only see three people contributing, and all from a certain company. Even if it's running under the CNCF, there are some external contributions that are being made by folks into the benchmarking effort that's running in the open messaging group. But in terms of really the specifications, that's effectively all marching straight from whatever RocketMQ does into open messaging. And then RocketMQ itself is an effort that's also kind of fairly solitary by the looks of it, also driven effectively by a single firm. So even though it's been politically smartly done, I would say, to run these things under the umbrella of Apache and CNCF, for me personally it doesn't meet the bar of being open source efforts and standardization efforts that really are community-driven. Okay. This looks like political rubber stamping to me. Okay. What do other people think? I'm sorry, have enough people on the call looked at this enough to form an opinion? I'm not sure how to interpret the silence here. I'm inclined to interpret it as no and give you guys another week, but if someone wants to speak up, please do. I've looked at a couple of these transport bindings and I struggled to see the value in them apart from, I think like Clemens said, political rubber stamping. I mean, I'm from Confluent, so obviously I have a Kafka interest, but I didn't want to come forward and say anything till I see how these transport bindings kind of progress. At the moment, I'm kind of at a loss as to value until cloud events and the extensions and everything gets mapped out a bit further.
So I'm kind of just sitting back trying to observe how this is going to make sense at the moment with the transport bindings, and especially with the bar that Clemens just added in the previous PR. So Neil, does your comment apply to both of these PRs, the Pulsar one too? Okay. Okay. Okay, anybody else on the call want to make a comment? So this is Ryan. Pretty much my impression was, given that we now have a more strict or clear definition of what meets the bar, we can look at the two bullet points in the PR we just passed. And we can just take a look and see if they meet that, either standard or de facto standard. In my opinion, they don't seem to meet that bar. Okay. Yeah. Yep, okay. So I'm not hearing anybody jumping up and saying that they believe that both of these PRs meet the new bar that we just adopted. So what I'd like to do is give people at least another week to look at these two PRs, just to make sure that we're not missing something. But I'm getting the general sense that we're probably gonna choose to close both these PRs with no action. Let's give people a week to review them and come back and see if they've identified some reason that they actually do meet the bar that we just adopted. Does that sound fair to people? Okay. Let's go ahead and do that, so, one sec. All right. Thank you guys very much. Any other comments on that topic before we move on? All right. This PR, I thought, was relatively easy for us to tackle. I don't believe David is on the call though. So David added a JSON schema for our specification. Let's see, I guess to the JSON format doc he added a reference to the JSON schema doc, and he actually included the JSON schema here itself. So with that, do people have any comments or concerns about this? I did actually run this JSON schema through a checker.
It did pass okay. I gave it some sample JSON to make sure it did seem to catch all the required fields and stuff like that and make sure they were there. It seemed like it did work from my point of view. Anybody have any comments or questions on this one? Any concerns with adopting it or merging it? I think it's great. Might be something we could use in the SDK as well. Yep, definitely. All right. Any objection to adopting it? I just have a question. So I assume that this thing, I just want to clarify, I think this thing, the schema, will change as we change our spec. Correct. My assumption is that as people propose PRs to add or remove attributes to our specification, the PR should also include the updates to the JSON schema here or any other documentation that might reference that attribute, yes. Okay, yeah, that's my question. Yep. So one of the questions I have, and that's really a JSON schema question, because I have to admit that I haven't studied it in as much detail as I should, is whether, yeah, so that's exactly the comment that you're just pointing to, whether, if we then adopt 277 next week, JSON schema actually supports an open schema model where you can go and add stuff to the JSON. And then, like, can you tell the JSON schema validator to be okay with that if it finds extra content that it doesn't know, or will it always fail? Like, how does that work? Is there anybody on the call who knows the answer to that question? Okay, I think that might be a really good thing to try to resolve before next week. If anybody- Can somebody put a comment on the PR? Or no, you already have comments. Yeah. Yes. He beat you to it. Yeah, so that might be a good thing for us to find out. I'm sure every one of us knows a JSON schema person someplace in our company. So maybe people can go and ask their schema expert what they think. Okay.
But that's definitely something we should talk about at some point. I would hope that there is a mechanism, because if there was one in XML then there will be one in JSON, I guess, because I'm sure that the schema people come from the same tribe, but that's something we should go and verify. Yeah, plus I know people use JSON Schema all the time and I know people add extra stuff to JSON messages all the time. I can't imagine it would bust it. So, but you need to- So I was just going to say that. I did it a few months ago; I just can't come up with all the details. But I definitely had arbitrary data, and I just validated some of it. Okay. So, but I guess, Clemens, your real question is, how do you actually represent that in the schema as opposed to just- Yeah, I just want to know enough that we're not taking this and then finding ourselves in a box, because JSON Schema, as things go, is not the only schema language for JSON. Such is that wild world of JavaScript. Right. And then to me, the question is, okay, do we need to actually add something here that says you may put extra stuff in, or does the lack of anything at all imply that you can add extra stuff? Yeah. So, I'd like to just understand that from someone who's been doing this, a practitioner, and ideally from whoever owns the PR. Yep. Okay. So, we'll see if we can get that answer out there. I'll jump in real quick. This is Chris. Can you repeat the exact question again? Because I've actually been working pretty closely with the maintainers of JSON Schema. Oh. So, the question was, if we accept 277, which means we're then moving to effectively having an open schema, how do you represent an open schema in JSON Schema? Meaning you are making it explicitly okay to add further elements. Okay. Yeah, I'll reach out to them and point them to this issue too and see if they'll chime in. Okay, thank you.
I mean, in XML, I would know the answer and Doug would know the answer, and that's just adding the any element, but, and that shows our age. Thank you. But in JSON, I don't know. That's newfangled stuff. These are whippersnappers, yes. Okay. There's a specific attribute that specifies whether there can be things other than what is in the declared schema, or whether only what is in the declared schema is possible, and that affects the compatibility of changes to that schema and whether they could be breaking or not. I'm sorry, but I can't recall it off the top of my head right now. Okay. So, a comment on this PR would be very appreciated. However, I believe that this PR by itself, as of right now, is accurate according to the current version of our spec, correct? Actually, it's not, right? I just see that there's still that extensions bag, which I think we are going to remove, right? Well, we haven't removed it yet. But that's the current version of the spec. Right. So, until we accept another PR that removes the extensions bag, I believe this is accurate according to the current version of the spec. So, my question now is, is there any objection to adopting this PR? Because I think the outstanding question that people are going to investigate is relative not to this PR per se, but rather to the other PR. 277, I think it was, Clemens was saying? Yes. So, I think it affects that PR and not this one. Yeah, the question was just basically whether we're boxing ourselves in, but what I just heard is that there is a mechanism like this, we just don't know what the name of the attribute is. Right. And so, if there is a mechanism like this, then I don't see any reason not to take this. Right. All right. Is there any objection then to adopting this PR? All right, cool. Thank you very much. All right, Kristoff. I don't believe Kristoff is on the call. However, he wanted to add some guidance on extensions.
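For reference, the keyword the speakers are reaching for is JSON Schema's `additionalProperties`. Below is a minimal sketch of its semantics, modeled with a small hand-rolled check so the example is self-contained (a real validator such as the Python `jsonschema` library implements the full behavior); the attribute names in the sample event are illustrative, not taken from the spec.

```python
# Sketch of how JSON Schema's "additionalProperties" keyword behaves.
# When it is true (or absent, the default), a validator tolerates fields
# it does not recognize; when false, unknown fields fail validation.
def conforms(instance: dict, schema: dict) -> bool:
    declared = schema.get("properties", {})
    if any(req not in instance for req in schema.get("required", [])):
        return False  # a required field is missing
    if schema.get("additionalProperties", True):
        return True   # open model: unknown fields are tolerated
    # closed model: every field present must be declared in the schema
    return all(key in declared for key in instance)

schema = {
    "type": "object",
    "properties": {"eventType": {"type": "string"},
                   "eventID": {"type": "string"}},
    "required": ["eventType", "eventID"],
    "additionalProperties": True,  # flip to False to reject extras
}

event = {"eventType": "com.example.file.created",
         "eventID": "1234",
         "myextension": "extra"}  # field unknown to the schema

print(conforms(event, schema))                                     # True
print(conforms(event, {**schema, "additionalProperties": False}))  # False
```

So an open schema for the spec is expressible: declaring the known attributes while leaving `additionalProperties` set to true lets validators accept extension attributes without failing.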
And this, I believe, applies regardless of whatever happens with our current extension discussion. That's why I thought it was safe to bring it up on today's call. So he wanted to add this little bit of text here to the primer. I'll leave you guys a chance to read it in case you haven't read it up till now. Very short. I agree with that. Okay. Are there any comments on this one? Any concerns? Any objection to adopting it? And keep in mind it is just in the primer, so it's not normative, it's just providing some guidance. So last chance, any objections? Oh yeah, go ahead, Sarah. Okay. I don't really understand what it means. Like, does this imply that if I wanna start using this, we are excluding experimental? No, I think all it was trying to say is don't start adding gigabytes of data to CloudEvents metadata, because in certain modes, like the HTTP binary mode, mapping that to HTTP headers will be problematic. Okay, so maybe it's the value of extensions, actually. Yeah, I agree. Yeah, I think that was the main purpose behind it, but I think you also may need to worry about the number of headers as part of the issue. Unfortunately, Kristoff is not on the call to explain this, but he actually did a whole bunch of investigation here. So I think that needs to be referenced, because, and I'm sort of coming in having missed some of the calls, but it seems like this should be like "if you're thinking of adding the extensions attribute, maybe you shouldn't", rather than "the number of extension attributes and the volume of data required in a single CloudEvent should be small". And I can write that down, but the intent wasn't clear to me. Yeah, no, it sounds like those are good additions. So can you make a comment to offer up some alternative wording? Because I think what you're suggesting sounds like a good change.
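To make the header-size concern concrete, here is a hedged sketch of the HTTP binary content mode, where context attributes and extensions travel as headers while only the data travels in the body. The `ce-` prefix and the flat mapping follow later versions of the HTTP binding and are assumptions for illustration, not the ratified spec of the time; all event values are made up.

```python
# Hedged sketch of binary-mode HTTP mapping: every attribute except
# "data" becomes an HTTP header, so a large extension value becomes
# a large header. Prefix and mapping are illustrative assumptions.
def to_binary_mode(event: dict) -> tuple[dict, bytes]:
    headers = {}
    for name, value in event.items():
        if name == "data":
            continue  # the data attribute becomes the HTTP body
        headers["ce-" + name.lower()] = str(value)
    body = str(event.get("data", "")).encode("utf-8")
    return headers, body

event = {
    "id": "1234",
    "type": "com.example.file.created",
    "myextension": "x" * 10_000,  # a 10 KB extension value...
    "data": {"name": "report.txt"},
}
headers, body = to_binary_mode(event)
# ...becomes a 10 KB HTTP header; many servers and proxies cap total
# header size (8 KB is a common default) and will reject the request.
```

This is why the primer guidance targets both the number of extension attributes and the volume of data they carry: each one costs a header in this mode.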
Yeah, there's also, way, way back when this was the OpenEvents group, we had tried to work through what extensions might make sense, and just kind of usage guidance for the full stack. And we came up with this idea: it seems like, in practice, for the data in a CloudEvent, it's easier for developers if we reuse existing types, literally, like in client libraries and API specs, so that the event for service foo would include data of type T of service foo. But what happens is, sometimes there's additional data that's only useful in the context of an event. So for example, an FTP service might include metadata about whether or not it just overwrote one file by uploading a new one. And it wasn't clear whether the definition of what a file is should change and add additional properties that are only ever used in CloudEvents, and muddy the normal request-response APIs, or whether that type of data should be put in extensions. We also have the data section, right? So anything that's specific to the event, that's not metadata that's kind of standardized, can go into data; anything that's specific to an event ought to go into data now. So what I'm suggesting is, there are a couple of these cases where it's a fuzzy concept, where that is in some sense metadata about the event. The data could be just the class system.file, but system.file doesn't include a property that lets you describe what this file overwrote. It's not going to be a persisted feature; you can't get that property when you just list files. And so from the eventing system's perspective, it is a little bit muddy, and I'm not giving guidance, I'm just raising a thought experiment that never concluded, from long ago.
It is a little bit muddy whether they should edit all of their existing classes to include this field that will only ever be used in an event format, whether they should have a different version where it's a file event as opposed to just an event of a file, or whether they should just put event metadata about that file in the extensions header. But why do you put that in the metadata? Why is that not something that is absolutely specific to one event, and just goes into data? Because the question is, what is the type of that data, like the actual C# class the library is going to use? I don't understand why you'd put that hint even on the wire. Like the C# class that some library is going to unmarshal into, what, a made-up class? No, but I don't, because I don't put runtime type information onto the wire when I send that to another system, which may be using a different language. My worry is that the first approach has implementation coupling that I would never do, so therefore I can't imagine that case. I'm trying to step into your court and expose that, yes, we have this thing that we call just bytes, or any, or something like that, but at some layer in the stack, by the time it gets into one of, in Azure customers' hands, they're probably going to program using a programming language. I think that's a safe assumption. So let's work through their perspective and see what is best for them. Okay, so hold on a second. I think, Vlad, you were trying to raise your hand? Yeah, so a couple of weeks ago, I tried to put what I should have put in data into extensions, because it wasn't clear to me that they were HTTP headers and actually part of the envelope, not the message. And I did that as part of a startup I was working with who wanted to use CloudEvents for internal functions too, like using CloudEvents to pass events from one function to another, say, put a CloudEvent in SQS and have another Lambda pick it up.
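The fuzzy case being debated can be sketched as two event shapes. All names below are hypothetical, chosen only to mirror the FTP "overwrote a file" example; neither shape is a proposal from the group.

```python
# Option A: extend the domain type with an event-only field, so the
# event-only fact lives inside data. This muddies the type that normal
# request/response APIs (e.g. "list files") also reuse.
event_a = {
    "type": "com.example.ftp.file.uploaded",
    "data": {"name": "report.txt", "size": 2048,
             "overwrote": "report-old.txt"},  # never returned by list-files
}

# Option B: keep the domain type untouched and carry the event-only
# fact as an extension attribute alongside the standardized metadata.
event_b = {
    "type": "com.example.ftp.file.uploaded",
    "overwrote": "report-old.txt",  # extension: metadata about the event
    "data": {"name": "report.txt", "size": 2048},  # unmodified domain type
}
```

Option A keeps everything event-specific in data, as Clemens argues; Option B keeps the vertical stack's type system clean at the cost of domain facts leaking into the envelope, which is the trade-off Thomas is raising.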
And the issue for me was not the FTP example, but details regarding authentication. I wanted to put that into the extensions, and that was a bad idea. That work was put on pause due to some other architecture concerns, but the idea was to add, inside the data payload, another metadata field and put the relevant stuff there. So if we have an event of type file-created, you would have that as a proper property inside the payload, and then a couple of extensions like authentication or other events that happened. But no matter how they came to it, there is a PR where I was discussing this with Doug, I think. Yeah, okay, so we're running a little short on time here, but I think there are a couple of next steps on this one. First is, I don't think we're ready to accept this one yet. If nothing else, I think, Sarah, you wanted to make some editorial changes, so hopefully those will come soon. Thomas, relative to your concerns, it wasn't clear to me whether you'd like to see those types of changes made as part of this PR, or is that a piece of follow-on work? I have enough battles I'm fighting, so I'm trying not to put my foot down too much on anything, out of caution. I'm just expressing that I have run into issues in the past working on the full-stack problem, where it was reasonable to say that maybe, like I said, an FTP server would have an extension about the FTP objects, as opposed to saying that just the routing framework is gonna have things like sampling. There are times when it is sensible for the domain to put metadata about the domain in extensions; these would be things that would never, ever be ratified as core common properties, but in order to keep the type system of the whole vertical stack simple for customers, that might be a good place for them. And so it just seems so weird that we're fighting so hard about how extensions should be done, and then also looking at a PR that says, by the way, extensions are bad.
Okay, tell you what, since we're basically out of time, think about it. And if you'd like to see modifications to this PR, or want another PR to address this, add something someplace. Okay. Yeah, it's not necessarily that extensions are bad, it's just don't put everything into extensions, which is the point it makes. Yeah, that was my interpretation of it too, not that extensions are bad, but to each their own. So with that, unfortunately, I think we need to call time because we are out of time. So let me just do a quick final roll call. Anita Wu, are you still there? Anita? Yeah, hi. Hello. Okay, thank you very much. Michael? Michael Payne, yes. Okay, thank you. Chris Borchers? Yep. All right, Doug, the other Doug. I'm not even gonna try to pronounce your last name. What about Stanley? Jinjin? Yep, sorry, I'm here. Which one is that? Stanley? Okay, Jinjin, are you there? Yes, I'm here. Thank you. And Scott Andrews? Scott? And what about Mark Fisher? Yes, I'm here. Okay, Scott, last chance. And what about Doug? All right, cool. Anybody else I missed from the agenda or for the roll call? All right, cool. Thank you guys very much. And when you get a chance, please do review the other two PRs for transports so we can resolve whether those meet the bar or not next week. And thank you guys very much. We'll talk next time. Thanks guys. Thank you guys.