Hey Tommy, where are you actually located, Singapore? Oh, wow. So what is it, midnight there, or 11 p.m.? Yeah, midnight. Okay, wow, that's dedication. Thank you.

All right, morning Eric. Good morning. And Heinz? Yes, good morning. Yeah, go ahead, Hans. No, just a quick heads up, I may have to bug out a little early today, unfortunately. Okay, thanks for the heads up.

And Dan Jones? Hey. Is this your first time on the call? It is my first time, yes. Do me a favor in the Zoom, sorry, Zoom chat: could you just write down the company you're with, unless you'd rather not, just for the attendance tracker? Thank you.

Are you there, Francesco? Yeah, hello. You're there, okay. Let's see who else. Christian? Hey, Doug. Oh, I feel like I haven't been here for a while. Yeah, welcome back then. Thank you.

Tihomir? Yeah, okay. You've been here before, right? Yeah. Yeah, so I thought, okay. Well, we're getting a whole bunch of new people today. All right, "NGIRALDO", I know that person's been here before but I can't read the real name. That's me, it's Nick. Nick, okay. About that, I need to change my screen name. Not a problem.

All right, Serj, are you there? Yep. All right, cool. Thank you. Let's see who else. Mona, are you there? Hi, hello. I think this is your first time on the call, right? Yes, that's correct. I attended a couple of meetings of Serverless Workflow, and right now it's my first time in serverless. Yeah, okay, great. Well, welcome. Can you do me a favor? If you want to be associated with your company, just for the attendance tracker, can you write the name of your company in the Zoom chat? Yeah, great. Thank you. I don't want to be associated with my company. That works too. Okay, thank you.

All right, let's see. I feel like I'm missing somebody here. Okay, Mike, are you there? Yep, just joined. Hey Lou. Mark, morning Mark, you there? Mark, good. Mr. Klaus, you there?
Yes, I'm here. Excellent. Scott, you there? Thomas, are you there? Yes, I'm here. Good. Okay, let's keep jumping around. Oh no, am I misspelling your name? Oh man. W-e-n-g-a-r-t-n-e-r. There we go. And which company are you with? Roche Diagnostics. Okay. Do me a favor, is it R-o-s... R-o-c-h? Yes, with an e after the c-h. I was so close.

All right, John, welcome. Good morning. Let's see, Vinay, are you there? Yes, I'm here. I was there last week as well, but I was on mobile, and for some reason I couldn't unmute myself. Okay, I'll mark you down; sorry about that. No, not a problem.

Okay, let's see. Ginger, are you there? Hi, I'm Doug. Good morning. Hey, you've got a funky little icon today, I like it. Yeah, we got a corporate Zoom account, and that's our main company, Synadia, and I just forgot to log out of that account. Okay, cool. Just noticed it.

All right, Hamid, are you there? Hamid? Yes. Hello. Do me a favor, this might be your first time on the call, right? Yes. Yeah, can you do me a favor? If you want to be associated with a company, just for the attendance tracker, if you can paste your company name into the chat, I'd appreciate it. Sure. Okay, cool. Thanks.

All right, we'll get started in a sec. Let me see... there's a "DI" on the call, who is that? That's me. Me as in Dustin. Say that again? Oh, Dustin. Oh, okay. Hey, while you're in there, you want to add your company name? There you go, perfect. Thank you. All right, let's see if I got everybody. Did I miss anybody? Vlad, you there? Okay, we'll get Vlad later. All right, let's go ahead and get started. There you go, thank you.

All right, we'll get started; we'll catch up with that stuff later. So just to let you guys know, we are not working on a SIG App Delivery charter.
We are doing a SIG Serverless charter. We decided to bite the bullet and go ahead and do it. That will give a proper home to, hopefully, CloudEvents, the workflow spec, and the two new specs that we're working on. It hasn't gone to the TOC for approval yet; I'm still waiting for a couple of internal reviews from Mark and Ken, but that's the path we're currently on. I will probably make that document available to you guys in a couple of days, once I get those reviews done. I just want to make sure it's not horrible before I show you guys. Anyway, just wanted to let you know that's the path we're on.

Community time. Is there anything from the community people want to talk about that isn't on the agenda?

Well, for that SIG Serverless charter, can you give us an outline of the scope you expect there?

Yeah. Basically, it was actually really difficult, because they have a SIG Runtime, and then there's another SIG that kind of overlaps with it, but there's no SIG that seems really, really close to us. What I did... it's App Delivery. Yeah, that's it, App Delivery, thank you. What I tried to do was to write it up as more of a developer-experience kind of SIG, because I was thinking about what serverless means to a lot of people, and when you look at it from a technology perspective, it overlaps with a ton of other stuff, right? The other SIGs in particular. But what those other SIGs, I don't think, are necessarily focused on is how to make the developer's life easier, because to a lot of people that's what serverless is really about, in some ways, right? Yeah, there are some features like scale-to-zero and stuff like that, but a lot of it is letting the developer go back to being a developer, as opposed to an infrastructure IT expert, right?
So I tried to write it up from that perspective, to differentiate us a little. However, having said that, I did add a paragraph in there that says, realistically, there's probably going to be some overlap with other SIGs, and so we're going to have to work with the other SIGs on a case-by-case basis to see which project really belongs under which SIG. So it's going to be a little bit of give-and-take. That's the current wording, anyway.

I have a quick question, sorry. You mentioned overlap. Can you give an example of what projects you see that might overlap?

Well, let's say in the future some project like, say, Knative, and I know it's not heading here now, let's just use that as an example so everybody knows it. Let's say Knative decides to try to join the CNCF, right? Is that something that should go into Runtime, App Delivery, or Serverless? I think you can make an argument for any of those three. Yep, you're right. And in my mind, I would put that towards our SIG, mainly because, while Knative obviously is a bit of an infrastructure kind of platform, its main focus is to make life easier for the developer, by reusing underlying technology as best they can and simplifying it for users. Even though it does introduce some features of its own, that's not its main point, in my mind. So that's my current thinking, anyway. No, that's fair. Thank you. Yep, okay.

All right, any other questions about the SIG charter, or for community time? All right, moving forward. I don't see Kathy on the call, but Tihomir, I see you're on. Did you want to bring everybody up to date on anything that might have happened within the SIG workflow, sorry, the workflow subgroup? Oh wait, did we lose him? I guess if we lost him, okay, never mind. Yep, there you go. Any update? I keep getting dropped from my phone, so I just had to switch over. Okay. Are you guys asking about the workflow subgroup?
Yeah, yeah, sorry. First of all, you can probably change this from Kathy to my name if you want; I'll be doing the updates, and I'll probably start joining on a weekly basis. Whenever you feel like it.

Yeah, we're doing a couple of things. Currently there's a TOC proposal, as you guys mentioned, so we're kind of waiting on the decision about having a serverless SIG. That's one thing we're hoping will happen. I don't know if anybody might have a time frame for us as far as when or how; I don't know if anybody on the call might know. The second thing we're working on, we're meeting weekly, discussing the workflow specification primer. That's a big thing going on. And the third thing is we're working on our first version, 0.1. So we're updating all the documents and setting up the Git branches and everything for the first version. So those are the three big things going on with Serverless Workflow right now. And I also just wanted to thank the community; just like in this meeting, we've been having a lot more people starting to join and getting interested in our meetings. So that's a big thing.

Cool, thank you. Any questions? All right, cool. Moving forward then into the PRs. Okay... why is this one here? I thought we resolved this one, didn't we? Hold on, I apologize. Yeah, we already resolved this one; I'm not sure why it's on there. Okay, Mike, anything you want to mention relative to your pull request?
Yeah, so I resolved all of the comments, and went through merge hell and rebase hell last night, but I think it's readable. There are a couple of things that came out in particular. One is the idea of expanding the available CloudEvents `source` attributes. I got a lot of feedback through different channels that that wasn't necessarily doable for some implementers, so I think we should probably debate that a little further before we decide to put it in. As well, I took out the source structure, because we didn't have a template for it to be machine-interpretable. So that's something that needs a little more thought.

I think this is fine for, like, an RC1. What I would plan to do is immediately send another pull request for an RC2 that puts this more into a REST structure, so thinking about a hierarchy of resources. I've got it looking at me right now on my desk; I drew it out on paper last night and just need to type it up. I think it makes it a little more palatable.

The other thing is, and I don't want to spill it all on Slack, but I think Jem suggested GraphQL last week, so I went and learned GraphQL and implemented this as a GraphQL API. I think it's actually probably slightly more elegant, but that's more of a question for the group: how do we feel about pushing a GraphQL API? So I'll stop talking and let other people say some things.

Okay, any questions for Mike?
I don't actually see any outstanding comments on the doc, which is cool. Okay, so just a quick discussion point about process, since you mentioned RC2. The way we've been working in the past for CloudEvents is that the next version is typically labeled as just RCn, or whatever, even long before we think about officially releasing or publishing it. So if we accept this pull request, any additional pull request made to this document would still be under the banner of RC1, because it's not actually released for anything yet. It's not until we get to an official 0.1 release type of vote that we need to worry about changing the version number.

Right. So, any other questions or comments for Mike? This is Thomas speaking. Yeah, go ahead. I would actually vote for the possibility of a GraphQL interface for CloudEvents. We discussed it in our group as well, in our company, and from a consumer point of view, it would be great if you could support GraphQL too. Okay, cool, thank you.

And I do think that if we do that, it needs to be for both discovery and for subscriptions. You would expect to be able to query your subscriptions in the same GraphQL document, and to be able to add subscriptions via a mutation. Yes, consistency is always good. But I know Clemens is out this week, right? So he's not here to weigh in on that. Yeah.

So again, from a bit of a process perspective, obviously I'm assuming we're going to accept this pull request, and once we do, I would hope that people would now open up either issues or pull requests against this document. So, for example, for the GraphQL discussion, I would actually think we'd probably want to start that as an issue.
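To make the GraphQL idea being discussed a bit more concrete, here is a purely hypothetical sketch of what a combined discovery-plus-subscriptions document might look like. The specs define no GraphQL API today; every type, field, and argument name below is invented for illustration only.

```python
# Hypothetical GraphQL operations for a combined discovery + subscriptions
# API, as floated on the call. Nothing here comes from any CloudEvents spec;
# all field and type names are invented.

DISCOVERY_QUERY = """
query {
  services {            # discover available event producers
    id
    events { type }     # CloudEvents 'type' values each service emits
  }
  subscriptions {       # existing subscriptions, queried in the same document
    id
    sink
  }
}
"""

CREATE_SUBSCRIPTION = """
mutation {
  createSubscription(service: "orders", sink: "https://example.com/hook") {
    id
  }
}
"""

def operation_kind(doc: str) -> str:
    """Return the GraphQL operation keyword ('query' or 'mutation')."""
    return doc.strip().split(None, 1)[0]
```

The point of the sketch is the shape Thomas and Mike describe: reads (discovery, listing subscriptions) live in one query document, while creating a subscription is a mutation.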
So maybe Mike, or Thomas, either one of you guys, could open up an issue to force that discussion? Yeah, okay. Yeah, I'll open one up, since I have the prototype already. Cool.

Okay, and in case people didn't see it, the demo is at discovery.in-the-cloud.dev, with dashes between "in", "the", and "cloud". Yeah, you put that in here someplace, didn't you? I can put it in the notes doc. Yeah, okay.

Okay, so from a process perspective: last week we talked about giving people another week, until today, to look it over and see if there's any earth-shattering reason why we should not merge this as the first rough draft of the spec. Are there any objections or concerns with doing so? Okay, cool. So we will now make that the official first draft. I'll accept it, so: approved.

Okay, so now, as Mike mentioned, and thank you Mike for all the work on that, I appreciate it. As Mike mentioned, Clemens is actually out sick this week. I don't believe it's the virus or anything like that, so that's good, but he did get a recommendation to take it easy. So he didn't get a chance to address any of the concerns that were raised as issues here. So, ignoring the outstanding comments in there, let me ask a broader question: has anybody looked at this and have any concerns with accepting it, assuming Clemens does address what I consider to be mainly typographical-type changes?

Okay, what I'd like to do is propose that we conditionally accept this as the first rough draft, contingent upon Clemens, when he feels up to it, addressing all the comments in here, and then merging it. The reason I don't want to merge it now is that I don't want to run the risk of losing people's comments in here, because once you close the PR, sometimes people tend to forget about it and move on, whereas if you leave it open, it's a nagging reminder for him.
So, are people okay with leaving it open until Clemens can address the comments? And if there's a comment that he doesn't address the way the person suggested, then we can open up an issue and track it that way. I just didn't want to lose the existing ones; I thought they were relatively minor. Any objections to a conditional approval? Okay, cool. Thank you, everybody.

Next. So we talked briefly about this last week, but the changes were made too soon for us to approve them. Just to refresh your memory, there was a little bit of a question about how a receiver should know whether a binary message is actually a CloudEvent or not. So what I did is I went through and modified all of the transport bindings to basically add text that says: if the four required attributes actually appear in the message, then you can make a good educated guess that it's probably a CloudEvent, so you can go ahead and try to parse it. But there's no guarantee that it actually is a CloudEvent, because people could choose to add our headers randomly anyway. There's nothing normative in this text; it's just additional guidance to help people guess whether they should even try to parse a message as a CloudEvent when it's in the binary format. I believe this applied to all transports except for NATS. Basically, the language is pretty much what you see here for every single transport binding, except I made it specific to each; for example, some are called properties, some are called headers, minor differences like that. Basically it's the same thing. Any questions on that? No, that's good. Okay, cool. Thank you. Any objections to approving? All right, thank you.

All right, now this one is something that Jem put in there. Actually, how long ago did he put it in? Okay, technically it's too soon.
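As a minimal sketch of the non-normative detection guidance just approved: in the HTTP binding, the four required CloudEvents attributes appear as `ce-` prefixed headers, so their presence is a reasonable hint (and only a hint) that a binary-mode message is a CloudEvent.

```python
# Heuristic from the guidance discussed above: if all four required
# CloudEvents attributes appear as ce- headers, the message is probably a
# binary-mode CloudEvent and worth attempting to parse. Nothing guarantees
# the sender actually meant a CloudEvent; this is non-normative guidance.

REQUIRED = {"ce-id", "ce-source", "ce-specversion", "ce-type"}

def looks_like_binary_cloudevent(headers: dict) -> bool:
    names = {k.lower() for k in headers}  # HTTP header names are case-insensitive
    return REQUIRED.issubset(names)
```

For other transports the same idea applies to whatever the binding calls its metadata, e.g. message properties instead of headers.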
However, let me put it this way: since this is completely non-normative, and it's just sort of editorial-type text... What he did is he added this section right here. I'll give you guys a chance to read it. Does anybody have any questions or concerns with this text, or think that it needs to be tweaked in any way?

Perhaps we should also mention Kafka by name, because the HTTP one and the Kafka one differ in a few ways, and when I was trying to implement a custom protocol binding for CloudEvents, I started with HTTP but had to switch to Kafka because of some differences. Okay. Can you do me a favor, then, and make that comment on his pull request, so he can make the edit and we can approve it next week? Sure, I will. Excellent, thank you. Anybody else, any comments? Okay, so I won't push to conditionally approve it, since we have an outstanding request, which is fine. Right, thank you for the review.

Hold on a minute. All right, Clemens still has an action on that one, so nothing to talk about there. So, before we start talking about these three work-in-progress proposals, are there any other topics people would like to bring up? Just before we get to stuff that can't be approved anyway, I want to make sure we don't lose any possible topics.

Okay, I had maybe a comment here. Sorry, this is only my fourth meeting. I was wondering, does it make sense to have a master document that pulls together and shows the relationships between a lot of these efforts, to give a little bit more context, so that we can appropriately dig deeper? Does that make sense?
So you mean an overall document that ties together the three specs that we're working on? Yeah, all the efforts, and the motivation, providing a little bit more context. It would help people who are new come in, get context, and start contributing a lot faster, unless there's something to that effect already. No, I don't think there is, because pretty much up until recently everything has been focused on just the CloudEvents spec. So having something that spans all three is, from my point of view, a great idea. We just haven't done it yet. Okay, you know, I'm happy to start putting something together, if that makes sense. Yeah, I was going to ask if you wanted to volunteer. Yeah, absolutely. Given that I'm a relative newcomer, I've only been here for the last four meetings, I'm happy to go through and start pulling something together. That'd be great, thank you.

Yeah, actually, I should probably write this down, but I have an action item to, at some point, look at restructuring our directories, because everything is obviously very much focused on CloudEvents right now, and if we're going to have three different specs, each spec might have its own primer or ancillary documentation, and a single flat directory structure isn't going to be as nice for people. So moving stuff around into subfolders would be good. And then having this document that you're proposing at the top level makes perfect sense, to help explain how it all goes together. So yeah, it makes sense to me. Okay, any questions or comments on that? I assume no one's going to object to more documentation for people, right? Yeah, okay, cool.

All right, Francesco, I believe these three are yours. Is there any particular order you'd like to talk about them in?
Not really. I mean, those are three different alternatives for starting to look into the problem of creating a more efficient way, in the binding, to send multiple events in the same HTTP envelope. Okay, would you like to talk about this first one, which is the structured one?

So, the first two, multipart structured and multipart binary, leverage the multipart content type from RFC 2046. In the first one, we just send events serialized in one or more event formats, so that could be JSON, could be Avro, could be whatever format, and every event is put inside a single part of the multipart envelope. So we can send multiple events, and we can also optionally give a name to them. That's the first idea. The second idea is to create a custom multipart content type, which is also based on RFC 2046. Is that this one, is that the binary one? Yeah, and it's technically valid. Okay. And the difference is that this one basically sends events like the binary mode that we have now, with the difference that it sends one event per part. So it works the same way; it just leverages the headers of each individual part.

Did anybody have any questions? I had one, yeah. Go ahead. Have you thought about how authorization will work? So, I think authorization and authentication can just live in the headers on top of the request. I mean, the assumption is that when you start receiving multipart CloudEvents, you can receive all of them. So the global headers on top of the request should contain the authorization and authentication headers. And that's meant to cover the entire stream? Yeah, yeah.
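As a rough illustration of the multipart structured idea just described, one event per RFC 2046 part, each part carrying an event serialized in an event format (JSON here). The boundary string, the part naming scheme, and the part headers are made up for this sketch, since the proposal is still a draft.

```python
# Sketch of the "multipart structured" proposal: one HTTP body (RFC 2046
# multipart) carrying several events, each part holding one event serialized
# as structured JSON. All names here are illustrative, not from the draft.
import json

def multipart_structured(events, boundary="ce-batch"):
    parts = []
    for i, event in enumerate(events):
        parts.append(
            f"--{boundary}\r\n"
            f"Content-Type: application/cloudevents+json\r\n"
            f'Content-Disposition: attachment; name="event-{i}"\r\n'
            "\r\n"
            f"{json.dumps(event)}\r\n"
        )
    # closing delimiter ends the multipart body
    return "".join(parts) + f"--{boundary}--\r\n"

body = multipart_structured([
    {"specversion": "1.0", "id": "1", "source": "/demo", "type": "demo.a"},
    {"specversion": "1.0", "id": "2", "source": "/demo", "type": "demo.b"},
])
```

The multipart binary variant would instead put `ce-` headers on each part, which is exactly where the custom-header concern raised later comes in.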
Okay. Well, I'm just taking some notes here. So, I've started looking at the implementation, and if you want, next week I can provide an implementation for all three of these proposals. In my opinion, the multipart binary is the one where we could gain the most. The problem is that most HTTP implementations out there assume that the only multipart you send over HTTP is multipart/form-data, and multipart/form-data doesn't allow custom headers. So, for example, headers like ce-specversion and ce-type, some HTTP implementations could decide to strip them out. That means we need to investigate how easy it is to actually implement this. For example, in Go it's quite simple to do, but I still need to do more research.

Well, the first one, the multipart structured one, follows the RFC, I don't remember the number, but if you look at the top you'll see it. This one? Yeah, RFC 7578, and it's the RFC for multipart/form-data. The problem with the multipart structured proposal is that, because we send using the multipart/form-data content type, it's hard for an implementation to understand whether each part contains an enveloped CloudEvent or not.

So I think that touches on the question I was going to ask: it wasn't clear to me why you have to separate these two out. I mean, for example, on this binary one, why couldn't each individual part be binary or structured, based upon how it gets formatted in there? It's not a bad idea.
Yeah, we can do it. That's something that at some point, when I was writing this, I thought: oh yeah, maybe we could do that. So yeah, that could definitely work. The problem is we need to do some investigation into how simple multipart is to implement.

Yeah, I think one of the reasons we avoided it in the past, and correct me, anybody else who's been here a long time, it's going to sound funny, but at one point when this came up as part of the discussion, I have this vague recollection of Clemens saying "multipart is hard", and for some reason that's all I remember about it. Well, multipart/form-data is not hard, because every HTTP implementation has it. Multipart itself is different, because not every HTTP implementation can implement full multipart; some of them just implement multipart/form-data. So again, this requires some investigation. But yeah, if you're interested, I can go forward and try to create some prototype implementations, check how it works, and we can go from there. I mean, for now these proposals are really just drafts; they don't mean more or less anything. They need to be worked out, but I want to understand if there's any interest in going forward.

Yeah. So, Christoph, your hand's up. Yeah, this is also what I wrote on the original ticket. The concern here is efficiency, but from my point of view the main goal should be interoperability. And so, what follows for me is that simplicity, and being easy to implement, are the primary goals. And sure, if we can also get efficiency, that's good. But if we do some things that are clearly more efficient, and I'm not questioning that, but that make the HTTP spec harder to implement,
I'm wondering whether that is really a good idea and worth it. So for me, I'm leaning more towards the side that we should keep it simple, so that people can easily implement the HTTP spec, and then we have the other formats that are clearly more efficient, and you can use those if you want something more efficient.

That leads me to another point. Does it make sense to have these, together with the batch mode that we now have in the HTTP binding, in another sub-spec, something like an HTTP binding for multiple events? Because now we have the HTTP binding that supports binary, structured, and batch, but a good part of the SDKs that we have now don't support batch; sdk-go, the main one, doesn't support batch. So maybe it makes sense to have those in a different HTTP binding, which you could call something like the HTTP binding for multiple events, and we can put the batch in there, and the multipart, and even the JSON streaming, which is the other proposal.

Serj, your hand's up. Yeah, a quick question. Since HTTP/2 is already a mature spec, and you're soon going to have HTTP/3, what if we focus on an HTTP/2 transport for CloudEvents, where, for example, headers are packed into binary form and you don't have to re-send ce-specversion, ce-type, and so on and so forth? And also, we can reuse the streams in HTTP/2, unlike in HTTP 1.1.

So, for the thing you wrote in the comment, I agree with you.
I think it's a good idea. And for streams, yes, I still need to dig into that. But for the part about the headers, I think it's a good idea, and honestly, I think it's even better to just not send them at all, kind of like you wrote in the comment, and just define them from the spec.

So I have a question for Christoph. Based on your comment, you were suggesting that we try to keep it simple. But given that we already have batch in the HTTP spec, I'm wondering whether you think adding this type of batch is really more complicated, or whether you're actually suggesting that we made a mistake in even adding batch to begin with. I'm not suggesting we remove it; I'm just trying to understand, because it seems odd to doubt batch at all, and yet now try to add a form of batch that some people are probably already familiar with, which is, you know, using multipart. Is that less confusing for people?

If I'm not mistaken, we only have batch in JSON, but not in HTTP itself. Check me if I'm mistaken. I see we have... yeah, I see it in HTTP. Let me bring it up, hold on a second here. Look at that one, let's do this. You see, this is the HTTP protocol binding spec, so it is in there. We have it right here. All right, okay. Well, it basically says the event format defines the details; otherwise, you just know it's a batch. Yeah. Yeah, okay, I get what you're saying, you're right. But is this only defining it for JSON? So JSON is the only format that has it, but if, say, Avro or whatever were to implement it, then there would also... Oh, I see. Yeah, but this is just a JSON array. Exactly.
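The batched content mode being looked at here is just that: the JSON event format defines a batch as a JSON array of events, carried over HTTP with the `application/cloudevents-batch+json` content type. A minimal sketch, with made-up event data:

```python
# Batched content mode as defined by the JSON event format: the whole batch
# is a single JSON array of event objects. The events themselves are
# invented demo data.
import json

events = [
    {"specversion": "1.0", "id": "1", "source": "/demo", "type": "demo.created"},
    {"specversion": "1.0", "id": "2", "source": "/demo", "type": "demo.updated"},
]

headers = {"Content-Type": "application/cloudevents-batch+json"}
body = json.dumps(events)      # sender serializes the array

decoded = json.loads(body)     # receiver must parse the entire array at once
```

That last line is the inefficiency Francesco is pointing at: the receiver has to parse the full array before it can process the first event, which is what the streaming and multipart proposals try to avoid.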
So every other format could also just send an array, or whatever they have. Okay, I get it. So my next question is actually... yeah, no, go for it. I mean, your question was: was it a mistake to add it? So here, the batch mode itself is optional. So yeah, and I think that's also one of the points Francesco just made: maybe we should have multiple specs, where one is the one that everyone has to support, and then here's all the fun stuff you can do. But, you know, we don't know how widely it's implemented and so on. So that is one way we could go. I think we consciously didn't make the batch content mode mandatory, so it's optional. Yes, I heard that. The question is: adding so much optional stuff, when we don't know how much of it people implement, I also don't know what the value of that is.

So, my personal use case for sending multiple events in the same envelope is to invoke functions that take CloudEvents. That's my use case; that's what I want to cover. So what do the functions do? I didn't get it, sorry. A function that takes multiple events as input, and can potentially produce multiple events as output. Those kinds of functions, to be invoked through HTTP, need to have multiple events in the same envelope, basically.

What is your function doing that depends on getting multiple events in the same request? Yeah. Oh yeah, because I think that is conceptually something that we can maybe implement with HTTP that we won't have with any of the other transports. So you're building a function that can only work with HTTP, because if you look at all the other transports, the grouping or the batching of events is
So you just You know, it's more an optimization at transport level or I need to send you I don't know 50 events let me batch them in groups of 10 and then I'm sending you 10 and then there is No, semantical meaning to one batch But for your your use case it sounds like there is a semantic meaning and several events belong together Yeah, it's almost like that is something Sorry, I was gonna say it's almost like a nested cloud event like I think class suggested at one point Exactly. So and I think for that use case I would For to keep the interoperability and to make sure all transports can support the same thing I would really like to keep this use case out of the spec Because there's no way we can implement that with the other Specs on this batching level. We really need to have a different terminology for this kind of thing Like grouping or nesting or whatever. It's like a valid use case. I'm not questioning that But it's different from batching which is purely a performance optimization if that makes sense Ryan your hands up Yeah, I'll I'll just echo that I'll I'll give you a concrete use case and apologies. I just joined so If this has been already discussed, just let me know. Um, uh at pillio We allow customers to set up a web hook for every state transition of you know A phone call or a message or whatever happens within their account And a lot of customers Want us to batch those because they basically don't want us to denial service their web server And so batching is definitely a a use case that we need and we're going to Implement regardless of whether it's in the spec. 
I'm not arguing that it should be in the spec, and maybe there's an argument for it to be in, say, the HTTP-binding-specific part of the spec, but it is definitely a use case for us. Yep.

So let me ask a question of Scott in particular, since the Go SDK was mentioned. Have you guys not implemented this on the Go SDK side of things just because you haven't gotten to it, or was there a technical challenge that you saw? Well, if you look at the current Go SDK, it's written assuming that there's one event in and one event out. And so the thinking was always: if we would like to keep that simple FaaS scenario, we would have to explode out the batch on delivery if we get a batch of events. And where I got stuck was, I wasn't sure what to do mid-processing of that batch if I got an error or a response.

I suppose that for the specific case of sdk-go, we will need a completely separate API to handle this, but I think it's the same for every SDK, because all the SDKs now assume one in, one out. Yeah, I think that's probably the right answer, and it's unfortunate, because it really complicates the API at the client level. But I don't think there's any reasonable way for the SDK to choose what the processing semantics should be for batch errors midstream. I suppose it's something that should be done at the binding level. I mean, you as a user know that you are going to receive a batch, so you prepare for that. Right, but the trouble is, you could just be a function that wants to consume a single event. No, no, everything that is in sdk-go now should not be aware of batching. It should be, like, two or three different APIs that just work with batching. Well, the trouble is, if you receive an error on event 10 of 100, what do you do? Do you save that error and return it at the end of delivering all hundred events, or do you wait until...
I just don't know, and it's not defined in the spec, so I didn't know how to implement it. That's the case in the Go SDK, at least.

And I guess at this point: does it make sense to give them a semantic meaning, or to not give them a semantic meaning and treat them as a stream? Because I think that's the point, and the answer from Christoph is no.

Scott, does the Go SDK have the notion of just returning a 202 immediately, so that you don't have to worry about what to send back on errors?

Um, well, we have implemented a programming model where incoming requests are basically blocking until something, somewhere, acknowledges it. So it's synchronous in delivery: something has to say, I'm good, I got this, I'm done.

I guess what that thing does, whether it's stored in a queue for processing later or processed fully right now, is an implementation choice, then, I guess.

That's right, you could choose to do whatever you want. But the trouble is I can't really force someone to buffer up the batch and say, cool, I got it, now ack it on the upstream. So it was just a tricky problem, and I still don't know how to solve it correctly. I think the only solution, if this really is something we need to support, is that the API has to also allow you to say: I would like to receive batches of things, and I would like to send batches of things, independently of sending single events. And then that becomes very tricky, because integrating with other protocols that allow batching, like AMQP or Pub/Sub or Kafka, means you have to treat each protocol differently, too. So it might make sense.

Sorry, is somebody talking? Hey, yeah, it's Dan. This is my first call, so apologies if I make any mistakes or assumptions, but do all of the protocols support batching, or only a subset?
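Since the spec leaves mid-batch error handling undefined, one way an SDK could sidestep choosing a policy is to process each event independently and surface per-event outcomes to the application, which then decides fail-fast versus continue. A hypothetical sketch (`process_batch` and `flaky` are illustrative names, not sdk-go API):

```python
def process_batch(events, handler):
    """Run handler over each event, collecting (index, error) outcomes
    instead of deciding fail-fast vs. continue on the caller's behalf."""
    outcomes = []
    for i, event in enumerate(events):
        try:
            handler(event)
            outcomes.append((i, None))
        except Exception as err:  # broad catch: this is only a sketch
            outcomes.append((i, err))
    return outcomes

def flaky(event):
    # Simulate a failure on the second event of the batch.
    if event["id"] == "1":
        raise RuntimeError("boom")

outcomes = process_batch([{"id": "0"}, {"id": "1"}, {"id": "2"}], flaky)
failed = [i for i, err in outcomes if err is not None]
```

This dodges "event 10 of 100 fails, now what?" only by pushing the question up a level; the application still has to decide what a partial failure means for the response it sends.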
Well, there's a workaround in the SDK, sorry, in the specification, which is that anything that supports JSON structured mode can support batched mode, meaning you can always send an array of CloudEvents. It's not very efficient for things that don't natively support a batched content mode, like Kafka. What Francesco is trying to propose is a stream of events over HTTP, so that we don't have to do JSON marshaling and produce this JSON array, which is very inefficient. The JSON streaming proposal goes in that direction.

Francesco, did we lose you? Or is that someone else talking? Was that the end?

Yeah, yeah, the JSON streaming proposal goes in the direction of avoiding a full parse and being easy to implement with JSON.

Okay, we'll talk about that one in a sec, but first, Christophe, your hand's up. Do you want to say something?

I wanted to come back to Scott's earlier comment on processing a batch. I think if you look at other protocols, like Kafka or Pub/Sub or whatever, if you send a batch of events, you basically get, as Doug said, a 202 response, which means: okay, you sent me 10 messages or events, I got them. That's it, that's all I'm saying: I got them. And afterwards they're delivered to the consumer and processed, and if an error happens there, that's not for the producer to worry about, or even know about, anymore.

So, yeah, I'm not talking about the producer needing to worry about this. I'm talking about the case where the SDK is acting on behalf of middleware. So it's not the producer; maybe it's some other thing that's taken a batch off Kafka and is trying to deliver it as a batch over HTTP. That consumer is going to take in that whole array, and what is the processing story for that situation?
And it became very unclear.

Sorry, okay, maybe I'm wrong, but it's different for a producer sending to a broker that has persistence, because you get that 202 because the broker says, cool, I saved it, even though it didn't really do any processing. On the other side, on the consumer side, the SDK is listening on an HTTP port, you get this block of JSON in, that's 10 events, and that connection is going to stay open until the consumer says: I've stored it. Right? You don't want to lose messages, because it's either going to ack or nack upstream, and an ack means I'm going to move the index, or I'm going to drop the events, or I'm going to erase them from disk. So there's more at stake on HTTP, because it doesn't have that centralized broker that's going to stream things out for you.

That's true, yeah. So what you're saying is: I need to keep the HTTP connection open until I'm sure I processed all of the events, whatever "processed" means, so that, put another way, I can guarantee at-least-once processing semantics, right?

That's right. And where I don't know what to do is: if you're processing that batch of some number of events and there's an error in the middle?

Yeah, I think that's a choice for the application developer that wants to integrate, right? If you were using Kafka directly, you would know where to move the index to, and then you would get a redelivery event. So you could ack some and nack some.

Yeah. Okay, I was gonna say, Ryan, you're up. Yeah, I got it, thanks.
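The at-least-once behavior being described can be sketched as: hold the HTTP response open until every event in the batch is durably stored, and only then return a 2xx as the ack; any failure returns a non-2xx so the sender redelivers. All names here are hypothetical, and a real consumer would also need idempotent handling of redelivered events:

```python
def respond_for_batch(events, store):
    """Decide the HTTP status for a batched delivery. 200 is only sent
    after every event has been handed to the (hypothetical) durable
    'store', so the 200 acts as the ack that lets the sender advance
    its index. Any failure yields 500, prompting redelivery."""
    stored = []
    for event in events:
        try:
            store(event)
            stored.append(event["id"])
        except Exception:
            return 500, stored  # nack: sender should redeliver
    return 200, stored          # ack: safe to advance upstream

saved = []
status, acked = respond_for_batch([{"id": "a"}, {"id": "b"}], saved.append)
```

The coarse granularity is the point of the dilemma: HTTP gives you one status code for the whole batch, so partial success either becomes a full nack (and redelivery of already-stored events) or needs a richer response body than the spec defines.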
Yeah, I guess I question, and I don't have a strong opinion here yet, but I question whether we want to prescribe delivery semantics in this spec, because a lot of that's going to be technology- and implementation-dependent. Even within the technologies that do support at-least-once delivery, like Kafka, the cluster can be configured in lots of different ways for how the persistence works, right? There's a per-topic TTL. That is fully application- and implementation-dependent. So I worry a little bit that it can get messy if we're prescribing semantics that are application-dependent in the broader spec.

Which is actually the reason why we haven't touched it, right? We just talked about the format, that's pretty much it. We try to stay as far away from the semantics, or the processing model, I should say, as possible. But John, your hand's up next.

Yeah, I was gonna channel Klaus and basically say the same thing. For people who are new: this is a topic that comes back every so often because of this semantic issue. People bring different semantic assumptions based on their use cases or their tooling, and trying to harmonize this across all these different use cases is seriously quicksand.

Yeah, and I will point out that both of the proposals that we've seen so far from Francesco do not get into semantic processing. They follow the same pattern that the rest of CloudEvents does, which is to just define the format. Which, you know, obviously leads into the question that Scott asked: what do you do with errors and stuff like that?
But at least he didn't try to tackle those, which he probably shouldn't have, because we didn't do it for CloudEvents in general. So he's at least consistent from that perspective.

Right, but there are still these implicit assumptions, right? The assumptions behind "why do I need this?" It's not just efficiency; there are implicit semantic processing assumptions. He directly mentioned this comes as a group and has meaning as a group. That is a direct assumption.

That's interesting, because, Francesco, did you mean to imply that if you do batching there's a meaning between the events, in this binary way, or was your application just assuming that?

My application is assuming that, but this kind of proposal also works for an infinite stream of events not related to each other. In my case, it's an assumption of my application.

Right. Yeah, I think people raised all good points about the semantics of batching. So, we've got about eight minutes left, and I want you to at least quickly talk about your JSON streaming one, and then we'll talk even faster about what to do next with all three. Do you want to quickly summarize this one?

JSON streaming is really the same as batch, with the difference that you don't have to wrap things inside an array; instead you divide events using a record separator and a line feed. Anyway, it's well defined by an RFC; I don't remember the name. If you go to the top you'll see it. Yeah, RFC 7464. And this RFC explains that you can actually send an infinite stream of JSON, just sending the texts divided by those two characters.

Okay. Any high-level questions on this? Okay, so you talked about possibly going off and implementing any of these things.
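RFC 7464 (JSON Text Sequences) is the framing being referenced: each JSON text is preceded by an ASCII record separator (0x1E) and followed by a line feed (0x0A), so a receiver can split an unbounded stream record by record without ever materializing one big array. A minimal sketch:

```python
import json

RS = "\x1e"  # ASCII record separator (0x1E)
LF = "\n"    # line feed (0x0A)

def encode_seq(events):
    """RFC 7464 framing: RS before and LF after every JSON text."""
    return "".join(RS + json.dumps(e) + LF for e in events)

def decode_seq(stream):
    """Split on RS and parse each record; skips empty fragments."""
    return [json.loads(chunk) for chunk in stream.split(RS) if chunk.strip()]

events = [{"id": str(i), "type": "com.example.ping"} for i in range(3)]
wire = encode_seq(events)
```

Because each record is self-delimiting, a sender can emit events one at a time as they are produced, which is exactly the streaming property a JSON array lacks.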
Obviously you can do whatever you want to do. Personally, if I were in your position, I would wait at least a week: since we just presented these on the call today, I'd wait until next week to give people a chance to look at them, think about it, and see what their reaction is. Next week, people might come back and say "I love the idea" or "we hate the idea," and I wouldn't want you going off and coding something that everybody hates, right? So if I were you I'd wait at least a week, and I'll keep these on the agenda for next week as well, to see if people have any more comments, and then we'll try to figure out the next steps. Like I said, I don't want you to do work that may be thrown away, but obviously it's up to you.

Okay, no problem.

Does anybody else have any comments about all three of these? They're all kind of related to this notion of batching in some way or another. Sort of; I mean, the streaming one isn't quite batching, but it's close. Anybody have any high-level comments, questions, or concerns? Does this seem like something we should consider going forward? And maybe it's more a question of how we position it in the specifications. For example, do we include it as a separate spec? That way it's more clear that it's optional and not part of what we consider core. Whereas if it was part of the main spec, it may not be quite as clear that it's optional; people may think they have to do it, that kind of thing. Does anybody have any thoughts about whether the idea is good or bad in general?

When I saw the pull request, that was the first time batching was mentioned at all for CloudEvents; I wasn't expecting that when I read the specification the first time. So I was really wondering: does it really make sense to have batching for small amounts of data, for CloudEvents, at all?
I mean, it's fine if someone wants to go out and implement it for themselves, but referencing it from the specification, I'm really wondering whether that makes sense or not.

Okay, thank you. Thomas, your hand's up.

Scott, I prefer this over trying to make JSON arrays, and that's because it uses sort of standard multipart-type stuff, you know, something at the transport level.

Yeah, plus you could do outbound streaming if you have a very big list of events you want to stream out that maybe doesn't fit in a normal-sized buffer, like, I think, IoT cases. You could stream out a single processed event at a time on the wire, and you wouldn't have to produce that giant list of all the events you're going to stream.

Okay. And from a performance standpoint, parsing giant arrays of JSON is a lot less efficient than small 64K-max chunks.

Yeah, so you guys are both saying you prefer this third JSON streaming one over the other two?

Yes. Sorry, I had a comment: this one only works for structured mode; what would this look like in binary mode?

Sorry, one of the proposals had an option so that you could do multipart streaming with binary mode, right?

I think there aren't three proposals; there are two on the table. There's a multipart for binary and there's a multipart for JSON.

Well, in my opinion the multipart structured proposal can be joined with the multipart binary one; they can be joined, in my opinion, and so those two are almost the same. They are not the same, but they can be joined, because as soon as you know that you are in a multipart CloudEvent, for every part you can send an event as either structured or binary. I mean, you can check which one the sender is using, right? Does that answer your question?

That would be awesome, because then you can receive events encoded in whatever way on HTTP and bridge them to the next phase without having to decode and re-encode them in a different encoding.
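The multipart shape under discussion is not a ratified format, so the boundary and part layout here are purely illustrative: the idea is that each part carries one event, with context attributes as `ce-` prefixed headers in binary mode, so a bridge could forward parts as-is without re-encoding:

```python
import json

BOUNDARY = "ce-batch"  # illustrative; real code would generate a unique one

def binary_part(event):
    """One multipart part in binary content mode: context attributes as
    ce-* headers, the event data as the part body (sketch only)."""
    headers = [f"ce-{k}: {v}" for k, v in event.items() if k != "data"]
    headers.append("content-type: application/json")
    return "\r\n".join(headers) + "\r\n\r\n" + json.dumps(event.get("data"))

def multipart_batch(events):
    """Join the parts with multipart boundary delimiters."""
    parts = [f"--{BOUNDARY}\r\n{binary_part(e)}" for e in events]
    return "\r\n".join(parts) + f"\r\n--{BOUNDARY}--\r\n"

body = multipart_batch([
    {"specversion": "1.0", "id": "1", "type": "com.example.ping",
     "source": "/demo", "data": {"n": 1}},
])
```

A structured-mode part would instead have a single `content-type: application/cloudevents+json` header and the whole event as the body, which is why the two multipart proposals can plausibly be merged: the receiver dispatches per part on the part's content type.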
Yeah. Okay, so we're almost out of time. Any last-minute questions? I think the homework assignment for everybody is to look at the proposals and give some thought to what we want to do in terms of next steps, whether you want to head down this path at all or say, no, we don't want to open up this rathole again. But it'd be nice if we can give Francesco some sort of decision by next week, so we don't keep him lingering.

Anyway, can we put this pull request onto the doc, so that everybody has the pull request link? You mean here? Yes, yes, or somewhere. Yeah, well, I'm going to put it on the agenda for next week as well, so it's still there. Yeah, got it, thank you.

Okay, all right. Thank you, Francesco. Any other topics for this week's call? All right, within the last minute: did I miss anybody on the agenda or for the attendance list? I think I got... Yes, I came late, sorry. Oh, Christoph, sorry. All right, anybody else? All right, cool. Thank you everybody, and good call. We'll talk again next week. Okay, thank you. Bye. Thank you.

We don't have an SDK call, right? No, no SDK call. We'll have the schedule for next week. Okay, okay. Bye-bye. Okay, bye.