All right, three after. Let's make sure we're not missing anybody, then we'll get started. All right, I think I've got everybody, so let's go ahead and get started.

Community time. Okay, anything from the community that people want to bring up that's not on the agenda? All right, not hearing any.

SDK call: no updates from the SDKs, other than we do have a call planned for right after this one. I know obviously Clemens can't make it, but Scott, hopefully you can. I think Tim Bray may have some issues he wants to bring up, so Scott, even if you didn't complete your action items, if you or anybody else who normally joins the SDK call can try to join, that way we can answer Tim's questions, because I think you had some. So that would be good. But again, a reminder: it is right after this call.

Incubator: we are up to two end users. We're still looking for some more, so please send me your information when you have it. Also, if you want to be listed as an adopter, please let me know and I'll add your name to the list.

KubeCon San Diego: I actually haven't done anything at all about this. I have had a couple people ping me saying that if we need additional people to speak, they're willing to volunteer. So that's great, thank you. But first I feel like we need a proposal in terms of what we want to talk about. So I'm going to try to write something up, hopefully before next week's call, so we can start talking about it and get a clearer picture of what we're going to do. Chances are it's going to look very similar to what we did in the past, but I just wanted to have some talks offline about it first. We do have some time, since it's not until November or so.

All right, moving forward. Before we jump into the PRs and issues and stuff, are there any other topics people want to bring up that I might have been forgetting?
Okay, cool. In that case, what I'd like to do first is just a little bit of cleanup. Peany opened up this PR a long time ago, and there have been some comments on there that he hasn't addressed. I pinged him many, many times asking what's up with that, and he hasn't responded. So I'm inclined to say we close this PR for right now. It is just an extension, so it's easy to add later if we need to, and we can also reopen it if he reappears. But unless someone on the call here wants to volunteer to champion this and take it forward, I'm inclined to say we close it for right now. Any comments or concerns about that? Okay, any objection to closing it? All right, none for it. Yep. Okay. Cool. Thank you, guys.

All right, moving forward. Next on the list is maps, maps, maps. All right, so, let's see. Evan did modify his PR, I believe on Tuesday evening, and I think the only significant change was that he removed the section which defines the mapping of how to serialize a map into, say, an HTTP header with the dashes and such. I think that's pretty much the only change he made since then. So basically, the general gist of this, to refresh people's memory, is: get rid of maps as a valid attribute type. That does not touch the data attribute, just all the other attributes, including extensions.

On last week's call there seemed to be general consensus to head in that direction of removing maps. However, I want to pick on Vladimir for a second here. Vladimir and Jem did ping me offline; he still feels quite strongly that we should be able to keep maps if we just simplify them down to, say, only one level of depth. Do you want to speak to that? Vladimir, are you there? Okay, evidently you can't come off mute. Is there anybody on the call who has an opinion one way or the other on this? I do. Go for it. I think removing maps entirely would be just fine with me. Okay. Do you have any opinion on
Jem's comment about whether we could just simplify them down to one level deep? I think if you want to do that, you still run into problems where things collide. Can you elaborate on what you mean by collide? Okay, so let's say I want to make a three-level-deep map. Now I have to write some sort of custom convention, and I have to pick what character splits my map keys when they get flattened into CloudEvents attributes. So I think that if you're going to write something custom anyway, just use CloudEvents as-is and define your own keys, and maybe it's not so bad to use a different separator string in your custom extension to split the keys in the map. Okay, thank you. Anybody else want to chime in here in particular? Is there anybody on the call, including Vladimir, hopefully off mute by now, who would like to advocate keeping maps? Okay, not hearing any.

Do people feel like the PR has been sitting out there long enough, and people have reviewed it well enough, that they're comfortable voting? Or would you rather have one more week, since this is kind of a drastic change? This is up to you guys. Yeah, what we could do is, instead of voting at the meeting, people could vote on the PR itself over the next week, and then we can confirm the vote on next week's call. We could do that too. Yep, that's a good idea. Is there any objection to doing an asynchronous vote? That way we don't feel like people were rushed into this decision, especially because so many people are out; it gives them an opportunity to participate in the vote on such an important change. That is a good point. We only have 16 people, which is definitely lower than our normal average. That's a very good point. Okay, I'd say follow up by email as usual and state that we're doing the voting over the next week and that the final tally will be on next week's call. Yep, that's what I was planning on doing. Yep, exactly. Okay. Any objection to kicking off that vote and heading in that direction?
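To illustrate the collision problem being discussed, here is a hypothetical sketch (not anything in the spec): flattening a nested map into single-level attribute names requires picking a separator, and nothing stops that separator from appearing in a user's own keys.

```python
# Hypothetical sketch of flattening a nested map into flat, single-level
# CloudEvents extension attributes. The separator is the problem: if a
# user's key already contains the separator character, two different
# maps can flatten to the same attribute name and can no longer be
# split back apart unambiguously.

def flatten(prefix, value, sep="."):
    """Recursively turn nested dicts into {'prefix<sep>key...': leaf} pairs."""
    if not isinstance(value, dict):
        return {prefix: value}
    out = {}
    for key, val in value.items():
        out.update(flatten(f"{prefix}{sep}{key}", val, sep))
    return out

# A three-level-deep map becomes one flat attribute per leaf:
attrs = flatten("myext", {"a": {"b": {"c": 1}}})
print(attrs)  # {'myext.a.b.c': 1}

# Collision: two different maps flatten to the same attribute name.
clash1 = flatten("myext", {"a.b": 1})
clash2 = flatten("myext", {"a": {"b": 1}})
print(clash1 == clash2)  # True
```

This is exactly why a custom extension that needs nesting may as well pick its own separator convention, as suggested above.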
Okay, anybody want to make any last comments before we move on? A little too easy. All right, moving forward then.

All right, so I gave you guys fair warning through email last night: James's and Clemens's PRs about how to handle data. Hopefully you've actually taken a look at these and reviewed them. Does anybody want to volunteer to speak at all about either PR, either in favor of one, against one, or just about the problems they're trying to solve? Don't be shy. Well, I will pick on people; I did warn you. Okay, that forces me into it. I'm going to pick on Christoph for two reasons: one, to make sure he gets on the roll call, but two, because Christoph, you tend to have lots of really cool ideas and deep thoughts on things. What's your opinion on these two PRs? Have you had a chance to review them? I reviewed the initial one of James's, and I had a quick look at those two, but to be honest, I don't feel like I have a strong opinion there. Okay, fair enough.

All right, let me pick on somebody else then. Mark, I'd like to get your take on this, because I know you've at least read them. Yeah, I think that James's, 470, is a more simplified version, in terms of text, than 471, the alternative one that Clemens came up with. I think the problem that James is trying to solve is being able to have binary data transported in such a way that it's well known how to transform it as it goes through multiple hops. And, based on some other texts that I've read from him, he sees this as a big issue.
I think he's also wanting to have some simplification, possibly even removing the type system and having everything be a string. In terms of what Clemens wrote in 471, let's just say that there's a lot more text. And when I look through it, the implementer in me starts asking, what's the flowchart that I need to follow in order to correctly decode and re-encode a CloudEvent? And I'm worried that it's not simplified enough; if it's too complex, then people will get it wrong. So I'm up in the air on this one. Interesting. So let me ask you this: aside from the text that's in the two, do you think they both go about solving the problem the same way, and the difference is just the wording? Yeah, I think they are mostly aligned, but again, there can be more nuances derived from 471, and 470 is likely more simplified and straightforward in terms of its normative discussion. So really what I'd like is for other people to take a look and comment on this as well, because without either James or Clemens to help with some of the discussion, it's more difficult for me to know the exact intent.

Yeah, so let me ask this: is there anybody on the call willing to admit they've actually read both PRs and can comment on them? Speaking for myself, I definitely haven't; I need more time to really look at them. They're very extensive; there's a lot of text to read and understand, so I really cannot comment yet. Okay, I appreciate you speaking up.
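For context on the kind of round trip these PRs are wrestling with, here is an illustrative sketch, assuming the 0.3-era JSON format conventions where binary data is base64-encoded and flagged with a `datacontentencoding` attribute. Every intermediary hop has to know to decode and re-encode losslessly; the attribute values here are made up.

```python
import base64
import json

# Illustrative only: binary event data crossing a JSON (structured-mode)
# hop. The payload must be base64-encoded for JSON, flagged here via a
# datacontentencoding attribute per the 0.3-era format, and each hop
# must undo and redo that encoding without losing bytes.

payload = b"\x00\x01\x02 raw bytes"
event = {
    "specversion": "0.3",
    "id": "abc-123",
    "type": "com.example.binary",
    "source": "/example/source",
    "datacontentencoding": "base64",
    "data": base64.b64encode(payload).decode("ascii"),
}

wire = json.dumps(event)                      # the JSON hop
received = json.loads(wire)
decoded = base64.b64decode(received["data"])  # the next hop must undo it
print(decoded == payload)  # True
```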
Thank you, Roberto. I spent quite a bit of time this morning going back and rereading both PRs and the text from the original issue, to see what was really the genesis of all this, and I feel like I have a better understanding now that I've gone back and refreshed my memory. But I also get the sense from reading both PRs that there might be a simpler solution. Even though both of them may be a hundred percent accurate, from an outsider's point of view reading the text, I find myself asking why there is so much text to explain something that should be easy to do, and the fact that there's so much text leads me to believe I'm missing something, and that makes me scared.

So I'm wondering, Mark, you said something interesting there: you said that you thought James's was easier from, perhaps, an implementation point of view. I'm wondering if what that means is that when Clemens gets back from vacation (I believe he's back next week), we should ask him to perhaps look at making minor editorial tweaks to James's. That way we get the simplicity from the implementation point of view, but maybe keep the deeper, in-depth discussion points from Clemens, and sort of merge the two. Do you think that's possible, or do you think they're too far apart to really do that? I think that would be a good discussion to have with all the parties. Okay.

Okay, well, it sounds like we can't come to a vote on this if only a few people have actually read it, and we do want to try to get the other guys involved. I know James is going to be really difficult to get on the call here, mainly because of the time difference.
I think he's in Australia. Obviously, when Clemens gets back, we're going to have his point of view, which is obviously going to be biased toward his PR. So it's going to be kind of a challenge not having the opposite point of view on the call. But I guess the best we can do is just hold off and see whether Clemens can do some magic in terms of merging the two. Because at this point in time, I don't feel comfortable trying to push a vote on this; I just don't feel like we've had enough review. So unless someone has a more brilliant idea of a way to go forward, we may just have to defer to next week and ask you guys to please look at these before then, because we've got to have a deep discussion on this stuff. This and the map one are, I think, the two biggest issues outstanding for 1.0. So please review it for next week. Yeah, I will. Okay, thank you, Roberto. Okay, anybody else want to chime in on any other points?

Okay, in that case, let's see what we can move forward with. All right, Scott, your batching one. So I think there are actually two different discussions here in the same issue. One is we've got a slight tangent with the webhook specification, and then of course there's the batching issue itself. So let's focus first on the batching, since that's what the issue is about. This morning I tried to summarize four different options here as I see them. If I missed one, please let me know, but I think the four options are: one, go full bore and completely define batching, and by that I mean not just from a syntax perspective, but including the processing-model definition.
So, for example, one of the things Scott thinks is missing is some sort of response back to the sender to indicate whether each individual event was processed in some way, even if it's just returning a list of 202s; at least then you know it got there and wasn't lost in transit. The next level down from that, option two, is basically what we have in the spec today: define batching from a straight syntax perspective, but don't say anything about the transport. You could kind of interpret that as an all-or-nothing kind of thing, since most transports only have the notion of all-or-nothingness, but we don't actually even say that. So basically, define it just from a syntax perspective. Option three is to remove batching from the spec, but talk about how you could do batching if you really wanted to; it becomes an application-level definition, meaning the batching gets shoved inside of the data attribute, and then it's up to the application to figure out how to extract and process each one. From a transport-level perspective, it's still just a single CloudEvent that gets sent over the wire, so it's kind of like doing nested CloudEvents in some fashion. And then option four is to just remove batching entirely and say nothing at all about it. I think those are the four options that I could see people have mentioned. Are there other options people can think of?
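For reference, option two, what the spec has today, really is just syntax: a JSON array of otherwise-ordinary event objects, sent with a `-batch` media type. A minimal sketch, with illustrative attribute values:

```python
import json

# Minimal sketch of option two: batching as pure syntax. The body is a
# JSON array of ordinary event objects, and the content type gets a
# "-batch" suffix (application/cloudevents-batch+json). Nothing is said
# about per-event responses. Attribute values here are illustrative.

content_type = "application/cloudevents-batch+json"
batch = [
    {"specversion": "0.3", "id": "1", "type": "com.example.created",
     "source": "/my/source"},
    {"specversion": "0.3", "id": "2", "type": "com.example.deleted",
     "source": "/my/source"},
]
body = json.dumps(batch)

# A receiver just parses the array back; all-or-nothing is implied,
# since only one response covers the whole request.
events = json.loads(body)
print(len(events))  # 2
```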
Could we keep batching unspecified, in the sense of option four, and then do something for CloudEvents 1.1? I mean, batching is something that isn't addressed in any other transports that I know of at the level we want to address it, and implementing responses about "hey, which one failed" and all that would make CloudEvents way more complex, in the sense that, as you said last week, we're not going to be able to just describe it as "hey, just add these headers and now you have a CloudEvent." It's going to be "hey, add these headers and now you have a CloudEvent, but you also need to respond in this very specific way." So could we maybe add it in a follow-up version, or do we want it to be part of 1.0?

Right. So, Mark and I actually had a little bit of a talk about this yesterday, and correct me if I'm wrong here, Mark, but I think when we talked about this, we couldn't come up with a way to add it to the spec without it being a major version bump, meaning we'd have to go to version 2.0 in order to add it. And I believe the biggest reason is because if I'm a receiver of a CloudEvent that has batching... oh, no, this was the map discussion. Never mind; we were talking about this in the context of maps, not batching. Yeah, I was going to correct you on that. Yeah, sorry. Yeah, I proposed the same thing on the map stuff, because I do believe there is worth in getting the first release candidate out and then seeing if we want to do this for 1.0 or for 1.1. But the breaking change is something that is intense, so it might be worth doing it now.

Yeah, my initial take on it is, for both maps and batching, I don't see how to add them without it being a breaking change, because in most cases I think people are expecting CloudEvents to be sent as one-way messages, which means you have no guarantee that the other side actually got it, aside from maybe a 202, right?
At the worst-case scenario. And at that point, you don't know whether the other side understands maps or batching, so you're kind of in the dark, and there's no reliable way for the sender to know what's going on, whether the other side supports 1.1 or 1.0. So my initial take is that we probably cannot add either one without it being a breaking change, meaning jumping up to 2.0, but I'd like to hear what other people think. It's additive because it changes the content type. So you're saying you think we could add batching as a 1.1? Yeah. But people have their hands up; sorry, I forgot. Christoph, I think you were first.

Okay, so the thing that I think is difficult to understand is that there are basically two types of batching. One is batching at the transport level, where you have no semantic grouping of the components, and the other is a more semantic batch, where the events belong together for some reason. Let me try to explain it a little better. If you have a bus, you just put people in it; they don't know each other, they just commute together, but they have no relation to each other. That would be transport-level batching. It's just random chance that they're together. And I think that part we cannot really remove, because there are transports that do this today. For example, Kafka just does it for you, or you can configure it, but by default Kafka just does it: the client waits for a defined time, batches the messages together, and sends them to the server. And that's about it. I don't think we should remove that, or forbid a transport from doing it. The other thing is a more semantic grouping of things. So if you say, okay, this group of persons is actually a family, or they're part of the serverless broker group or whatever, then there is a semantic meaning for why they belong together.
And that's a really different concept. For events, that would be, maybe, these events have been collected by this IoT device over the last minute or so, and then you want to group them together, and the fact that you grouped them together should also remain if you move them across several transports. So I think when we discuss this, we should really distinguish between those two. What I did when I made the original comment in the spec was adding this transport-level batching, where it says, yes, a single transport can batch events, but as a sender and as a receiver you just take them as a random group and process them one by one, and then, if you hand them over to the next one, if you're just an intermediary, you're free to break up the batch or create new batches and so on. So this is the kind of thing I'd like to keep, and then the next question is, how do we deal with this at the HTTP level? Do we want to have this in HTTP? Do we want to define this in JSON, or do we just want to keep it for those transports that have it natively, like Kafka? Interesting. Okay, thank you. Roberto, your hand's up.

Yes, I think Christoph explained it much better than I would have, and I agree with him 100%. We can either leave it as is, where we have defined it at the transport level, which is what I would like to do, and not change the syntax of a CloudEvent to include the concept of batch in the specification itself. So my vote, my strong vote actually, is to leave things as is: leave it at the transport level.

So I want to make sure I understand what you guys are saying when you say "leave it at the transport level," because my interpretation of what's in the spec right now is not necessarily leaving it at the transport level. All we're really doing is saying, if you want to send a batch of them, here's the JSON for what it looks like, right? It's an array. Exactly, and that's good enough.
Okay, because I wasn't considering that transport-level so much, because it's not like we're actually interacting with the transport; we just defined sort of the wrapping for it. Okay, I want to make sure we're on the same page. So you're advocating for basically number two: leaving it as is. Exactly, yes. Okay, and just for clarity's sake, Christoph, you're basically saying keep it as is, number two? Yeah. Okay, cool. Thank you. Okay, Scott, your hand's up.

Yeah, so I'm saying that if you try to implement this feature inside of HTTP, there's not enough information to actually understand how to ack or nack each individual message for the HTTP transport. Christoph or Roberto, do you want to respond to that? Something like Pub/Sub has a response back that says this ID got this response. Hey, I can respond. Um, I think, if we just talk pure HTTP, not the webhook spec, then, and maybe I'm saying something wrong, but maybe here we have the same problem that we don't even know what is going on. From a pure HTTP transport layer, all we define is that there are some headers we add, and then the response you get back is up to you. There is not really a definition of an error code. At the HTTP level it's implied that if you get a 400 error back, you did something wrong, but we don't have a definition saying, I don't know, your JSON is broken, or I do not accept that format, or something; we did not define this either. So there's not really a way to acknowledge or not acknowledge a message fully, either, at the level we have it in Pub/Sub. The other thing I'd like to say is that what you still can do is acknowledge or not acknowledge the whole batch of events, which is obviously not as good as what most other transports do. But yeah, I think once we go into that, we should focus on the webhook spec, and that's where it should go, in my opinion. Anybody else want to comment on that?
I feel like it's not valuable until the webhook spec is implemented, because I have no idea how to implement this. So I just want to say how we implemented it at Adobe with batch: we just send a whole bunch of events, and if the response is a 2xx, we say it's done. If it's not a 2xx, we say something failed, and we deliver them all again. So we don't need an individual acknowledgement for each one of the events in the batch; we just treat it as all or nothing. That was actually going to be my question back to you, Scott. Excuse me. If you assume that it is an all-or-nothing thing, and the response code was for the entire batch, is that not a viable alternative for you? It's not super desirable, though. Okay, can you elaborate a little? Because I'm trying my best to channel Clemens here, right? These are supposed to be one-way messages; this is an eventing thing, not messaging, blah blah blah. So the reality is that all of these systems are built on top of messaging systems. Yeah, but my point in saying that is these are supposed to be one-way, right? So in the worst-case scenario, assuming you don't get a 500, you may at least get a 202 back saying, yes, I got it. But that doesn't tell you anything beyond "yes, I got it," all the way up to maybe a 200, which probably means it successfully processed the whole thing. Why isn't that good enough? Because, as Christoph was saying, that's basically all you have anyway, even for the single case, right?
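The all-or-nothing handling described above can be sketched in a few lines; `deliver()` here is a hypothetical stand-in for the actual HTTP POST, not a real API.

```python
# Sketch of all-or-nothing batch handling as described above: one
# response code covers the whole batch, and anything outside 2xx means
# the sender redelivers every event. deliver() is a hypothetical
# transport call standing in for the actual HTTP POST of the batch.

def send_batch(deliver, batch, max_attempts=3):
    for _ in range(max_attempts):
        status = deliver(batch)
        if 200 <= status < 300:
            return True   # whole batch accepted; nothing is known per event
    return False          # whole batch considered failed; redeliver later

# Example: a receiver that fails once, then accepts on retry.
responses = iter([500, 202])
ok = send_batch(lambda batch: next(responses), ["e1", "e2"])
print(ok)  # True
```

Note that already-processed events get redelivered on retry, which is exactly the "worse than the single case" concern raised next.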
Yeah, and for the single case, that's fine. But if you're trying to do once-delivery for these things, it doesn't work. So let me poke at that a little, just to make sure I completely understand it. Let's say you get back a 202 in the single case. All that means is "I got it." It could have been dropped on the floor by mistake, immediately or afterwards, but from your point of view as a sender, all you have is a 202. If I now turn on batching, why does that 202 mean something less significant to you? It's already pretty insignificant, other than "it got there." The happy path is easy. The hard case is when a middle event fails. You're talking about the case where there are multiple hops? No, no. You have a batch of stuff, right? And you have to deliver that to something, because the spec says it's going to explode it out into individual events. Okay, so you're worried about, say, an example case where there are 10 events batched up; the first five process successfully, and the sixth one dies, so the server decides to return, say, a 400. That's right. Okay.

So, Scott, it sounds like you're assuming the middleware portion will do the delivery of each event and give you a response inline, while you're waiting for a response back from that server. And I don't know that that's necessarily the case. Likely it would give you a response code saying "I accept it" or "I don't accept it," but then would enqueue each of those events, possibly for later processing and later delivery. So I don't know that there should be an assumption that you would get an immediate response on the disposition of each event, inline with your request. I think that's what most systems would do. There's a persistence receipt where you can say, okay, I got the event.
I've successfully unbatched this batch and I put it into new persistence Or I've processed it in some way and the original request is going to be held open For most processing models right, but you explicitly said that uh You wanted to you wanted an error if it couldn't be delivered for further delivered, but You know, I may be able to in queue it Onto a delivery queue Just fine. That's fine. That's great And it may it may fail further on down the That's absolutely not what I'm talking about. I'm talking about okay. I'm sure that it gets to Wherever it's supposed to be going Right like that new queue has taken ownership of that event and its delivery And so now you can hack it So I wanted to actually poke on something slightly different here In that use case that example that I thought that enumerated where the first five get processed. Okay, and the six one dies Let's go back to the single event flow And it gets sent it gets delivered to their to their receiver And then the receiver returns a 400 now There's nothing in our spec or even the htp spec That says That 400 does not have side effects Right, so it's possible that it started processing that one event Did some changes to the backend system and then things died and he did not roll anything back And the reason I'm mentioning this is because I'm trying to equate that with The batching case where you process the first five and then the six one dies and you get back a 400 And I'm in my mind. I'm trying to see if those two line up to say well, it's pretty much the same thing You did half the processing And you don't know whether it rolled it back or not as a result of that error And I guess what I'm trying to say is are we any worse off in batching than we are with a single case And I'm trying to say we aren't I want to get your opinion on that scott I think you're in a way worse case because what if you have 10 events that they all fail halfway You you have a 10 times more problem. 
I'm not sure about that, but okay. So, Scott, your position is, I feel like, that we either do option one or some variant of three or four, right? Meaning fully define it or basically remove it. That's right. I mean, really, all I want is a response format, or remove it. Right, like the current square-bracket array response is not quite good enough. Right. Okay. So, to try to narrow things down and move things along, let me ask this, and I'm going to say it in a very biased way, but forgive me: is there anybody on the call who would like to advocate for position one, which is to fully define the processing model, rather than leaving the boundaries of CloudEvents as just a syntactical thing of how an event looks on the wire, and actually get into the processing-model semantics? Is there anybody who wants to advocate for number one? Well, that's kind of what I did with saying let's take the webhook spec and move it into its own repository, so we can do exactly this there. Because I agree with Scott that it would be really valuable to have a defined processing model that is the default and that people can agree to work on, but it is out of scope for CloudEvents itself. So this is kind of my compromise, or whatever you want to call it: do it, but do it outside of CloudEvents, and make sure that for the HTTP processing model we have one well-defined way that people can standardize on. Or they can also do their own thing, which is also fine; we don't force people to use our webhook spec, because there are a hundred different ways to do HTTP calls anyway. Okay, I have a question for you, but Mark's hand went up first. Go ahead, Mark. All right.
I was going to comment that if we truly want this to be number one, then we likely should expand the HTTP transport spec to include the error codes, or the status codes, being returned. For example, I just pulled up the standard list: 202 Accepted, so a receiver can just say, okay, I accepted it, and you don't get any other information. But then, if it's a 200 OK, we would think about what the payload is in the batch case that would return the individual status codes Scott is asking for. But then we'd be more prescriptive in terms of what we expect as a response there. Right.

So my question back to you, Christoph, was: I interpreted your comments about moving the webhook spec out as just dealing with the singleton CloudEvent case. Do you think the webhook spec should handle batching as well? That is, do you think the definition of batching should be in the webhook spec? Because I view the webhook spec as a very generic HTTP spec, basically, with pretty much nothing to do with CloudEvents. But if we push the batching stuff into that spec, then it becomes a little bit of both: it's generic for singleton events, but very CloudEvents-specific for batching. So how did you see that playing out? That's a good question, but I think the webhook spec is also missing kind of the response right now, as I said before. It has some error codes, but it doesn't go into the details. What does 400 mean?
Once you go into those details, you can also do a special subsection for when something is batched, and I think that's true whether you transport CloudEvents as a batch or anything else as a batch. Okay, but isn't your position then, and I could be wrong, to remove batching for now and look to do it outside of CloudEvents? No. My position is to say batching is a thing, but we, at the spec level, do not define it. Each transport can do whatever it wants and support batching, but they need to make sure it's transport-level batching, not a semantic meaning of a batch. As long as transports do that, everything is fine and we don't have a problem. And we go in and say we have JSON as a format, and because we define it, we also say here is how you can do a batch in JSON. And then we have HTTP, where we define how we send it over, and basically the only thing we say is: whatever your format is, if it happens to be JSON, then it looks like this; if it's something else, like, I don't know, XML, you could also define batches there. Just add "-batch" at the end of your content type, and that's it. And still, neither of them defines a processing model so far. So I think you will need that processing model anyway, and once you have batching, that processing model, or at least the responses, need to look a bit different. And then, in the webhook spec, we actually go in and define a concrete processing model for what delivering a single event, or a batch of events, looks like. Does that make sense? Yeah, I think it does. Thank you. Okay, Scott, your hand's up.

I just want to make one more point. I think the issue that I'm having is that, given the current specification,
I cannot implement something that would do things like delegate a batch of pub/sub events, send them off, and then nack upstream, or nack or ack upstream. The current definition doesn't allow me to do what I would like to do. So I guess I'm asking for a way to have optional, individual ack/nack-like response codes tied to the IDs of the batched events, like other transports support.

Any comments from the crowd?

Like I commented, you would have to change the HTTP spec to have status codes and define what you expect to be returned. Right, so this would change things: instead of a batch being an array, it would be an object, and the response would be a batch response. But then you start defining a processing model at that point, right? Which is something we don't have in the HTTP transport yet, if I'm not mistaken. But HTTP already defines what you should respond with, so we are telling users what the processing model should be, without really giving them the hooks to actually build a reliable system. I'm not sure we do; to some extent HTTP automatically does it, because it has the status codes, but maybe not fully. For example, take the case where the event itself is broken, the formatting is broken: there should be a particular, more specific code within the 400 range so you can react to it, not just "here's a 400, whatever it is." So if you want a more detailed system, you have to go in and define a few more things, I think.

So I want to circle back around to the question I asked earlier, which is: does anybody advocate for number one? Christoph, you raised your hand, but based on what you said, I don't think you're actually advocating for number one as much as you're advocating for number two, with a follow-on piece of work of moving the webhook spec someplace else and expanding it to cover batching. Is that accurate?
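The individual ack/nack idea described here can be sketched concretely. This is purely hypothetical: no CloudEvents or HTTP transport spec defines such a response shape, and the `results` field name, the handler, and the status-code choices below are invented for illustration. The receiver returns a JSON object mapping each batched event's `id` to an HTTP-style status code, so the sender can retry only the nacked events:

```python
import json

# Hypothetical per-event ack/nack response for a batch delivery.
# None of these field names or code assignments come from a spec.
def process_batch(events, handler):
    results = {}
    for event in events:
        try:
            handler(event)
            results[event["id"]] = 202   # acked: accepted for processing
        except ValueError:
            results[event["id"]] = 400   # nacked: malformed, do not retry
        except RuntimeError:
            results[event["id"]] = 503   # nacked: transient, retry later
    return json.dumps({"results": results})

def flaky_handler(event):
    # Pretend one downstream delegation fails transiently.
    if event["id"] == "b-2":
        raise RuntimeError("downstream unavailable")

batch = [{"id": "b-1"}, {"id": "b-2"}, {"id": "b-3"}]
response = json.loads(process_batch(batch, flaky_handler))
# Only the events whose codes mark a transient failure need resending.
retry_ids = [i for i, code in response["results"].items() if code == 503]
```

A sender would then resend only the events whose IDs came back with a retryable code, which is exactly the per-event granularity that an array-in, single-status-out definition cannot express.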
Yeah, but I'm advocating for defining a processing model, just not within the spec.

Right, right. And I think if we choose to do something like number two, or technically anything but number one, that doesn't mean we couldn't do something else later, either as a follow-on spec or even in our spec. I just want to make it clear that you're not actually advocating that we do number one within our spec itself. Yeah, exactly, right.

Okay, so let me go back to my original question: is there anybody advocating that within the CloudEvents spec, or one of our transport specs, within our scope right now, we actually define a full-fledged processing model, either for batching or for single events? Okay, not hearing that. It would then seem to me that, if you want to boil it down to a boolean choice, we can either keep what we have in the spec today or remove batching. There are going to be different flavors of "remove batching," but it basically comes down to: remove batching, or at least keep defining the syntax for batching. Is that what the choice comes down to, or am I oversimplifying it?

Just to get a feel for the group, because this is kind of a big decision and would probably require an offline vote, I'd like to get a sense of the people on the call. I know, Scott, that given those two choices your preference would be to remove batching. Other people on the call, what's your current take? And I will pick on people who have been quiet, so step forward on your own.

Hey Doug, this is Collin. I'd go with removing it, or rather suggest we remove it; we aren't voting yet. This is a slippery slope: it's batching today, with an ack/nack-based processing model around it, and these things typically lead to something more complex. In the end you'll be looking at distributed transactions, which are a nightmare.
So I vote to keep the spec pure and clean, and keep batching out of it.

Okay, thank you. The other Doug voted in the chat for keeping it as is; thank you, Doug. Anybody else want to speak up? Vladimir?

Hi. I would propose that we remove it for now. As Collin commented, it can be a slippery slope that would just extend into a lot of complications.

Okay, thank you. What about Eric? Do you have an opinion on this one?

I am currently leaning slightly towards removing it. I definitely think it's been a strength that we don't have a processing model definition in the spec. I do think that over time it will be very helpful, and create a lot of value, to actually standardize processing models, so that you could interact with any provider using the same code, things like that. But I don't think that's the point this spec is at.

Okay, thank you. And I should point out that Roberto voted for keeping it as is, in the PR itself, or the issue itself.

Yep. One other person, just because I don't think I've picked on them before: Ginger, do you have an opinion on this one? Ginger, are you still there?

Oh my goodness, trying to find the window and unmute is problematic. Unfortunately, I don't know enough about this to give my opinion. Obviously my colleague Collin gave his, so I'll just plus-one him.

Oh, that's wimping out. Okay, in that case I'll pick on one other person. Barum, are you there? Yeah, I am. Do you have an opinion on this one?

I would vote to, or rather recommend we, keep it. I understand the concerns about overcomplicating things and making the spec a little less clean, but as a practical matter, when you start doing pub/sub at scale, people are going to have to start implementing batching. So if we have a way to guide people with the spec, I think it's going to be something we have to deal with eventually, so we might as well address it as we're defining the rest of this right now.

Okay, cool. Thank you.
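For reference, the batch syntax under debate is small. A minimal sketch in Python, assuming the JSON batch is simply a JSON array of event objects and that the batch media type is the single-event type with a "-batch" marker; the exact media-type strings here are assumptions, so check the JSON format spec rather than treating them as authoritative:

```python
import json

# Assumed media types: the single-event JSON type plus a "-batch"
# variant, following the "add batch to your content type" idea.
SINGLE_TYPE = "application/cloudevents+json"
BATCH_TYPE = "application/cloudevents-batch+json"

def serialize_batch(events):
    """Serialize a list of event dicts as one JSON array (the batch)."""
    return BATCH_TYPE, json.dumps(events)

def parse(content_type, body):
    """Dispatch on the content type: a batch is just an array of events."""
    if content_type == BATCH_TYPE:
        return list(json.loads(body))      # many events
    if content_type == SINGLE_TYPE:
        return [json.loads(body)]          # one event, wrapped for uniformity
    raise ValueError(f"unsupported content type: {content_type}")

events = [
    {"specversion": "0.3", "id": "a-1", "type": "example.ping", "source": "/demo"},
    {"specversion": "0.3", "id": "a-2", "type": "example.ping", "source": "/demo"},
]
ctype, body = serialize_batch(events)
```

Note that nothing in this sketch says how a receiver must process or respond to the array; that is the processing-model gap the group is debating.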
All right. Before I talk about next steps, does anybody else want to voice an opinion one way or the other? Okay. It sounds to me, based on that informal questioning, like we might actually be pretty evenly split, and that's unfortunate; it would be nice if there were an overwhelming opinion one way or the other. Because we're evenly split, I'm wondering whether that basically means we do a vote and see where things lie, because I don't know how else to move forward here. It's a very easy choice in the sense that it's very clear what the options are; we just need to decide one way or the other. Does anybody have any other ideas for moving forward, aside from just putting it up for a vote?

Could we get some feedback from Clemens, representing Microsoft, and, I forget, Tim from AWS? If they are to implement CloudEvents, they're going to implement it at scale and might be affected by this, so I would like to make sure they are not impacted by us removing batching.

Okay, we could do that. As I said, I believe Clemens is back from vacation next week, and unfortunately, as you can see, Tim isn't on the call. But I can take the action item to reach out to both of them, to make sure they are at least on the call next week or, if not, at least voice their opinion through email in advance.

Not only them; any big vendors who are going to be doing high events-per-second. Yeah, I can try to push for that. What we can then do, I guess, is try to avoid repeating what we talked about this week, but give new people on the call, like Clemens, who obviously hasn't had a chance to voice his opinion, a chance to voice it, and then, if we don't sway people to one side or the other, start the vote next week.

That sounds fair. Quickly, from the Microsoft perspective:
I work with Clemens on implementing this stuff; I'm going to be the one actually taking care of implementing CloudEvents for Microsoft. At least from our perspective, we already do batching in Azure, so if it's not defined in the spec, we'll just have to come up with our own, because it is a critical path for us. So from the vendor perspective, that's kind of where Microsoft stands, but I'm sure Clemens can give more detail on that.

Okay, thank you. Not speaking as chair, just as Doug from IBM: that's actually been the entire reason I was okay with it going into the spec to begin with. I felt like enough people were going to do batching that it would be great if we had a single way of doing it, as opposed to everybody rolling their own and having zero interop. Even though I do kind of agree with Scott, and I guess Collin, when they said, one, it's not fully defined, true, and two, it's a very slippery slope: if people are going to do it anyway, let's at least get some level of interop as best we can, at least from a syntax perspective.

Yeah, I agree with Barum from Microsoft. At AWS we also implemented batching; we do batching on delivery as HTTP batching of an array of CloudEvents. And that's what I'm saying.
That's why I voted to keep it as is: we already have that, specifically at the HTTP transport layer, and I think that's good enough, right?

Okay. Anyway, I'll take an action item to send a note to poke, in particular, Microsoft and AWS, or I guess Clemens, as well as the other big vendors, to get their point of view and to warn people that we may be doing a vote next week, or starting a vote next week, I should say. Hold on, let me take some notes here.

So if you say "remove it from the spec," do you mean removing the sentences in the primer that basically say it's defined at the transport level, or do you want to keep those, since they basically say we don't define it in the spec? Or do you mean removing the JSON format's definition of batching and the thing we do at the HTTP transport level? So basically removing it from the format and the transport, but keeping it in the primer. Anybody want to answer that one?

I would like to keep it in the primer and in the HTTP transport. But that's because you're advocating for number two. I think what Christoph is asking is: what does "remove" actually mean? My interpretation is to pretty much remove it from the spec in terms of even talking about it, but that doesn't mean a transport couldn't batch things up if it wants to; we just don't talk about how to do it. That was my interpretation, anyway. Scott, what was your interpretation of "remove"?

It would be to remove the formal definition; you could still roll your own, it would just be a CloudEvent. Does that answer your question, Christoph?

Okay, so we would keep it in the primer but drop the JSON and HTTP definitions. That's also okay for me. At least it means that for Kafka and some other things we allow batching to happen; basically, we strictly make it a transport-level concern. That's okay for me.

What are we looking at here?
Okay, so you're talking about this paragraph right here, right? Exactly.

Okay, was this added before or after we added the JSON batching? This was before. I was also kind of in the camp of "don't add it to the spec itself." Let's not do that; it's a slippery slope, and we'd just be adding a world of pain, because we'd have to map it back to all the transports that already have their own processing model for batches. So I added this, with input from others. And then we said, okay, now we say it's a transport-level concern. HTTP is our transport, we use JSON, so how do we do it? We should define it. That's where we ended up.

Why did we end up doing it that way? This almost sounds contradictory to what we put in the spec. "Contradictory" is maybe not the right word for it, but it's weird that this thing basically says we're punting, and then we turn around and define a syntax for it. It feels a little bit awkward. Interesting, okay.

So anyway, I think we have a path forward; we'll see how it goes next week. I did want to draw people's attention to something, and even though we only have three minutes left, I'm obviously not going to push for a vote on this since I just opened it yesterday. We have this attribute called schemaurl, which to me is inconsistent, because we have datacontenttype and datacontentencoding, or something like that. We have these fields prefixed with "data," and then we have schemaurl, which relates to the data but isn't named with "data." So I'm advocating changing the name to add the word "data" in front, so they're consistent; that way, every attribute that points directly at the data itself has the word "data" in front of it. Obviously this is a breaking change, but I wanted to get a general sense of what people think. Do we want to be consistent, or do we not care, so it's not worth breaking things?
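The consistency argument can be illustrated with a tiny migration sketch. The renamed attribute `dataschemaurl` below is a made-up stand-in for whatever name the group would actually choose; it simply puts "data" in front of the existing schemaurl attribute, and the sample event values are invented:

```python
# Illustrative only: "dataschemaurl" is a hypothetical renamed form of
# schemaurl, not a spec-defined attribute name.
RENAMES = {"schemaurl": "dataschemaurl"}

def migrate(event):
    """Rewrite old attribute names so every attribute describing the
    data payload shares the data* prefix (datacontenttype, ...)."""
    return {RENAMES.get(key, key): value for key, value in event.items()}

old = {
    "specversion": "0.3",
    "id": "c-1",
    "type": "example.ping",
    "source": "/demo",
    "datacontenttype": "application/json",
    "schemaurl": "https://example.com/schemas/ping.json",
    "data": {"msg": "hi"},
}
new = migrate(old)
# After the rename, every attribute that points at the data payload
# starts with "data", which is the consistency being proposed.
data_attrs = [k for k in new if k.startswith("data")]
```

After the rename, a reader can tell at a glance that the attribute describes the data payload rather than the enclosing event envelope, which is also the point raised in the discussion that follows.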
Any comments from people?

Given your high-level description, that seems right. Say that one more time; you're cutting out a little there, Eric. Given your high-level description, and I haven't read the text, that seems correct. I would prefer consistency.

Okay, thank you. Anybody else want to comment?

Where would you define the schema of the entire payload, like the CloudEvent itself in the structured HTTP case? Yeah, right, that's a completely separate issue. This URL was never defined for that; this URL only ever defines the data. And the fact that you're asking that question could mean it's a good thing that we do the rename.

Yeah, I was thinking that. I don't think it's been used very much, and the implementations I've seen also include the entire envelope definition, which I've always thought was a little funny. That's not just funky, that's wrong. But you remove the discoverability of extensions. Maybe, but that was never the intent of this URL. Unless I'm completely mixed up, I don't believe this URL was ever meant to describe the CloudEvent; it was meant to describe the data. I've always understood it to describe the entire envelope. Okay, I'll double-check the spec, but I'm pretty sure that's the way it is. And if I'm right, then I think it would be a really good thing to clarify this by putting the word "data" in front. But I'll double-check the spec; maybe I'm wrong.

Okay, with that, I think we're technically out of time. I just pasted the description in the chat. Oh, there you go. Okay, so it is just about the data. Thank you, Mark. So this rename may be good not just for consistency, for my OCD, but for understanding as well.

Okay, with that, we're technically at one o'clock, the top of the hour. Did I miss anybody on roll call?
I think I got everybody. Okay, thank you guys. Please do review the two data PRs, one from James and one from Clemens, and we'll talk about those again next week. And if you are involved in the SDK work, please stay on the call; that means maybe Mark and Scott, if nobody else.