Let's see, where am I? Okay. So, I have a couple of action items. Actually, I guess I have all the action items. I'm trying to find time to work on those. Let's see, who's on the call right now? We don't have Mathias. Mathias? Dennis, Fabio. Fabio, is there anything you want to mention relative to your SDK, anything exciting or noteworthy you want to bring up? No, not at the moment. I'm just working on the binary version of the HTTP binding, but just that, nothing new. Okay, sounds good. Clemens, is there anything you want to mention from the C# side of things? No. I haven't done any substantial work, nothing really exceptional. Okay. In that case, I can't remember exactly where this came up, but in some previous discussions there was some question about whether the getter should say getEventType versus getType. And if I remember correctly, I believe the spec right now just says "type" for the property name. But there are some people who, I believe, wanted to keep the word "event" in there. And I thought this might be worthy of discussion, because I think it's important to have some level of consistency across the SDKs. Do you guys have any opinions on this? No. Okay. We changed it from "eventType" to "type," correct? In the specification, yes. So we should mirror that same thing; we shouldn't be putting "event" into the SDKs. What's that? I would prefer "event." Actually, no, I do have an opinion. If you want to make it all the same, then getType conflicts with C#. Oh, interesting. Type and GetType conflict in C#. So I guess the question is, do we want consistency across all SDKs, or are we okay with C# being different for this one property? I mean, I'm okay with it. I can probably work around the Type. It's just a little bit confusing, because I can call a property Type. In the SDK, of course, in C#, all properties are just properties. There are no getters or setters; they're implied. So they look like fields.
And I can name a field Type without getting in the way of the Type type. So yeah, I guess I don't care so much. Let me think out loud. Here's my opinion, which is: as much as possible, we should align things across the SDKs where it makes sense. If there are language-specific or runtime-specific issues that come up, obviously we have to work around that. Right, but I guess I'm still a little unclear. Clemens, you started out saying that there was a problem on the C# SDK side of things, but then you kind of talked yourself out of it. So where did you land on that? So yeah, I'm okay. I think I'm okay with Type. Okay. Granted, we don't have a whole lot of people on the call, but it sounds like the general consensus is to get rid of the word "event" in there, right? Yeah. And I know where you stand, Mark. Scott or Fabio, do you guys have any opinions on this? No. For me, it's okay to use getType or getEventType, either one. I mean, the issue would be if we have to use a naked "type," in which case "type" is typically a reserved word across a lot of languages. As long as it's getType, it should be fine. Yeah. Okay. So tell you what, I'll send a note to... well, we actually don't have an SDK mailing list. That may be something we want to consider at some point. But I'll send a note to the full mailing list and to the Slack channel asking for people's opinions, saying that our preferred choice as of right now is to just remove the word "event," and we'll see what people's reactions are. Sound fair? Sounds good. And I like the idea of sending it to the entire list. There's only a subset working on SDKs, but this has impact across a larger swath of people. And I don't think we have enough volume on the mailing list that there should be a problem. Okay, I agree. That makes a lot of sense. Okay. So Scott, hold on a minute. Let me see if I can unmute you. Oh no, I'm not a host, so I can't do this. Sorry, Scott. You're on your own.
Try to fix that. Okay. SDK demo interop. Okay. Interesting. So Scott, when you come off mute at some point, I'd love to find out more about what you just said there about Knative not using the SDK. Does it work? Yeah. Hey, there you go. So I took a look at the SDK as of about six weeks ago, and I just wasn't comfortable pulling it into Knative, because it lacked most of the test infrastructure that would make me want to pull in a project. And it had a different view of how events are going to get created, compared to how Knative is seeing the world of event creation. Can you elaborate on the last piece? Because I agree testing is an issue, but that shouldn't necessarily be a complete blocker, as much as it's a point-in-time statement. It's a complete blocker for third-party software. No, I get that, but it's a point-in-time statement too; it's easily fixed. I'm more concerned about a design decision, which sounds like the second point. Yeah. My memory is a little fuzzy, because I've been on leave for a couple of weeks, but there are currently three Golang CloudEvents libraries in the mix right now. All three took a slightly different approach, and we were preferring the current one, which uses a little bit of type reflection and a callback that gives you the event. It gets into very deep details of that specific representation, and we just didn't really want to play with this one. I understand having an embedded SDK already and not wanting to rip it out; I have understanding of that. It would be nice if you could file some issues with the current SDK, I'm assuming the Go SDK: at least file an issue that says you need to provide more tests, and another that says, here are the gaps that I see with respect to what we're currently using.
And at least that way we can, you know, see if there is a way to rationalize this: is the direction of the current SDK really so far off that it wouldn't be usable? I mean, it'd be good to just get feedback. Yeah, I can try. You know, we're also trying to move fast, and every time you take an external dependency, you have to do the dance of updating dependencies and making sure that things are working. So Scott, how would you summarize your second concern? There's a different design decision relative to how events are handed off downstream to functions? I can't remember exactly how you phrased it. It's just how the plugin is architected, and it's not a wrong decision; it's just different than what we chose. Okay, can you put a sentence into the notes right here, just to fill it out? Yeah. Thank you. What worries me the most is that if Knative goes in a different direction, are people going to assume that that's the quote-unquote de facto standard, and are we doing the wrong thing with the current SDK? I mean, really what we want is for people to be able to come to our repo and pull an SDK that's in use by some number of companies. It doesn't mean everyone has to, but it'd be nice if we had some usage of it. So I worry about that. I think the only concern is if the two don't interact, and I think they should. Yeah, but I kind of share Mark's concern there a little, in the sense that if, right out of the door, the very first SDK we have for Go isn't used by a lot of people who are within our working group itself, including the Knative folks, that would lead me to believe that something isn't quite aligned, or something like that; I can't think of the right word. Because, as you said, even if it's not that one is right and one is wrong, even if it's just different design choices, I'd like to understand why those design choices were made by each party.
And to see if we can perhaps find some common ground, because I don't think the community around CloudEvents is large enough right now to necessarily justify two completely different implementations from the same small group. So if you can just open up issues so we can have these discussions over there, I think that'd be useful. Okay. And I'm assuming that the quote-unquote SDK that you're using isn't really a separate repo, but is likely embedded inside eventing? It's part of a package. There's a package. Yeah. Okay. Cool. And just to add flavor to this point, I think it's important for us to think about what it is that we want around eventing in Go. If we think about the usages of this SDK, it really has to do with, you know, creating event sources and then transmitting those events. So if we're talking about Knative, obviously we want to make sure that we have something compatible and aligned with a project such as Knative. Yeah, I would agree. I'm not a Go person, but to chime in once: I think one of the goals of the serverless group is to just generally foster interoperability across platforms. And as such, I would certainly welcome it if the core representation of a CloudEvent, not necessarily how you receive it and how you bind into the transport and all those things, but how you actually represent the CloudEvent and how you handle it inside of the application, looked exactly the same; literally compile-compatible. Yeah, so the on-the-wire format should be compatible. No, no, the object model of the CloudEvent itself. And then how you pull this off the wire, and how you integrate that into your stack, and maybe even how you send it in a different way...
That might be different. But I would find it a little weird, really, if Knative as a host of apps had a different object model than if you used it in, you know, any other function framework. I mean, that's a decision you guys need to make, but I just find it a little strange. Has anyone started working on interoperability between these SDKs? That actually gets to the next point. Oh, great. That was a great lead-up. My point is, whatever your reasons are to diverge, you're making a decision to basically just go away from a common path. And I would welcome it if there were convergence on a core library that satisfies the requirements that you have and satisfies the requirements that other people have, because ultimately I don't think this is about you; I think it's about the user, and the user should have, in Go, one way to represent the CloudEvent as an object model, and not 400. Yeah, I guess that goes to the question. Scott, it wasn't clear from an application point of view: if they write a piece of code, and the function itself leverages the Golang SDK, and then they also want to leverage the SDK that's part of Knative, would their application code need to change? I'm guessing the answer might be yes. I believe so. No. Oh, no. The thing that comes in on the other side, that is, the consumer using the CloudEvents Golang SDK, should consume events produced by Knative. All right, but are the APIs, either in or out of the application code, different between the two SDKs? Sorry, which APIs? The APIs, like how you program to send and receive events. Yeah, basically. Right. So an application written for the current Golang SDK would have to change in order to run inside Knative? Oh, no, you can send however you want. Well, but if you want to leverage the SDK that comes with Knative, then it's going to be different, isn't it?
The SDK is different, but what gets produced and goes on the wire is not. The thing gets POSTed to somebody else, and the body of that event gets transcoded back into a CloudEvent. And that is compatible. Okay, I'm sorry, I was mixing things up. Okay. So as long as you're producing the right headers and the right body format, the thing should smush back into whatever wacky object model you have on either side. But if someone pulls the CloudEvent off the wire, and now dispatches that inside of your app to a background process, kind of in memory, then they're always using the Knative API, right? They're events, right? They flow on the wire, but they also flow in memory, because they're also very suitable for that, and they pop out somewhere on the other end. Oh, I see your point. You want to have another app, or another SDK that comes in as a library, that consumes the CloudEvent as an object. There's application code that's hosted inside of Knative, right, and that can be of arbitrary size. How much of Knative needs to bleed into that application code? We just define it at the edges. Well, you're making this SDK, and the SDK apparently now has an object model around the event; that's just how we produce the CloudEvent. Yeah, this is where I got confused. I completely forgot that in the Knative world, the application itself is in essence a monolith. It's a standalone application. And anything that's running inside the Knative infrastructure is completely independent of that application. There's no connection between the two; when they want to talk to one another, they basically do it through, for example, a REST API. Yeah, they're hermetic.
So in essence, you can have a Knative SDK for CloudEvents, which is used by the Knative infrastructure, and then the application can have its own CloudEvents SDK, which is used by the application, and those two worlds never touch, basically. Exactly. So using Knative, you can actually make a compelling demo, coming back to your SDK interop thing, where you have an event produced by something, it goes through a channel, and then you have seven subscribers, each a container in one of these languages using one of these SDKs, and can all of them consume the event? Right. And so if there are seven different applications with seven different SDKs being used, technically we're actually testing not seven SDKs but eight, because the Knative infrastructure uses its own SDK. Yeah, if you use our sources. So let me ask a provocative question. How many date types are there in Go? Oh, man. Okay, then that world is apparently as messy as I thought it would be. Are you talking about Go itself? How many date types does Go have? Two. Okay. Does it need two CloudEvent types? No, because one's a duration and one's a time. So my point is, it could quite well be one CloudEvent type which encapsulates all the data that we have, which is ultimately the core of the spec that we have. Right. Our core spec is doing nothing but defining an object model around, well, a set of properties, effectively the blueprint for an object model around the CloudEvent type. That's all it does. Yeah. And then there are adjacent specs which say: this is how we go and bind these things to the transports. And I can totally see that in particular infrastructures you will want to go and do the bindings in a different way than they are done in an SDK that is built with a bunch of generic assumptions that are not specific to an infrastructure. So I can see that for the binding pieces. But it would be great
if there was exactly one CloudEvent type that everybody who's writing stuff in Golang uses. So, just a little time check here, because people are starting to join the call for the regular serverless thing. I think, Scott, if you open up those issues we talked about in the Golang SDK, we should be able to have some of those discussions as part of those issues. I think that will help flesh it out a little. Another thing you started talking about, Scott, was a demo or interop around the SDKs. That is one thing I want to talk about, though I'm not sure how much time we have here, but I do want to really start brainstorming around whether we want to try to do something. And if so, what would that look like? Is it some sort of formal compatibility-kit kind of thing, or is it just a demo that shows these things being used, as you described, Scott? Let's do a little bit of brainstorming around this and see what you guys are thinking, if anything yet. So that can also target external systems, and potentially this thing could spiderweb out across the entire internet and every major cloud provider. It depends on what demo you want to show. I think maybe the more compelling thing is: we've shown that clouds can talk to each other, but I don't think we've shown languages talking to each other. Yeah, and I'm not sure how you show that, or in your mind is it just having a talk? I don't know that that's exactly true, because even in the demo that we had at KubeCon that Austin did, we had all the functions written in different languages, JavaScript, Python, etc. It just wasn't using these SDKs at the time. That's true. And even in the last one, I used the C# one, and there was a bunch of others, so the language coverage is largely there. But it would be nice to be able to say:
Here's a set of sample code that's using the SDKs that we're using for this demo. That would be interesting for people, then, to be able to crib off of, and it would give them a starter guide for each of the SDKs. If I understood you right there, you're basically suggesting we could take one of the demos we've done in the past, have everybody use the existing SDKs, and make sure all the languages are represented. Right, and then provide the sample code. Okay. In terms of a demo itself, is there one that you think would showcase things better than any of the demos we've done in the past, whether it's the Mad Libs one or Austin's one before? Maybe it's just those two, I can't remember. But do you guys want to look at a new demo, or the existing ones? It would be interesting to actually make a real-world thing: some sort of factory that makes widgets, which then get sold and stocked and shipped. Like what an actual business would do with products. Yeah, I like that. Yeah, I do too. Scott, would you be willing, even if it's just a sentence, to add that to the agenda here in the notes section, just to get the idea going, and then we can get other people to build upon it? Yeah, that sounds great. Or even send a note out to the serverless working group email, and maybe we can have a discussion there. That works. I'll do that. Excellent. Thank you guys very much. Okay, we only have two minutes left before the regular call starts. Is there anything quickly you guys want to bring up? Because we're not going to talk again until two weeks from today, except via Slack. Okay, good discussion. Yeah, thank you guys. This has been good. Let's switch over. Where's my mouse? I need more coffee, so I'll be right back. Okay, I'll just hang on, since it's the same call. Exactly. Okay. Let's see who's on the call.
Christina, are you there? Yes, I'm here. Hello. Klaus, are you there? Yes, I'm here. Okay, you're still there. Yeah, you're still there. Okay, excellent. David Baldwin? Yes. Hello. How about Dan Barker? Okay. Richard? Okay, I'm assuming, Richard, you're on there twice, right? Yeah. That's okay. Excellent. All right. So Scott, while we're waiting, since you took an extra-long vacation, I'm curious: did you get to do anything fun, or did you just hang out at home? Well, I split my time between, you know, hanging out with my kid and trying to build a bunk bed. A bunk bed? How'd that go? I'm still working on it. I found out I'm allergic to walnut wood. Did you break out in hives or something? No, it's just that I thought I was getting a cold, and then I didn't work on it for a couple of days and got better. I'd work on it again and get worse. What's going on? I finally realized it was the dust. It irritates the back of my throat, kind of a dripping nose. Oh, wow. So I was getting this mild sore throat. And yeah, it's just dumb. But I got a respirator now, so it's all good. That's interesting. Yeah. All right, back to the agenda. Hi, Sam, are you there? Hello. Hello. I was on mute. Yeah. All right. What about Kristoff? Hi, I'm here. Hello. And Chad? I'm here. Excellent. Jim Curtis? Yep. Excellent. Ginger? Good morning. Good morning. And Erica, are you there? Yes, I'm here. Excellent. Morning. Rachel? I'm here. I'm here. Here. Roberto? Excellent. John Mitchell? John Mitchell? Yep, that's John. Let's see, anybody else? William, are you there? William? Definitely. Oh, sorry about that, Rachel. Definitely noticing a consistent pattern there with William. Renato, are you there? Yeah. Doug, it's Victor here from Itaú. Could you put me down there, too? Yes, I got you. Thank you. Oh, hi, William. I got you. Okay. Thank you.
Actually, I should mention: if you guys joined the call late or something like that, or you missed the roll call, just go ahead and put a message into the Zoom chat, and I'll take that as good enough. As long as your name is associated with it, obviously, I'll take that as you're there. All right, one more minute and then we'll get started. Oh, Colin, are you there? Colin? Oh, excellent. Thank you. Thank you, everybody. All right, three after. Oh, there's one more person, someone new, I think. Lori, are you there? Lori Brickley? I'm sorry, Lori Brickley? Okay. Oh, there's no audio. Okay, we'll catch up with them later. And Doug M is there, too. Okay, let's go ahead and get started, three after. Two action items. I think the one that's kind of jumping out at me is yours, Rachel. I have not made progress on it, but I will this week, I promise. Excellent. Thank you very much. All right, community time. Just a short time for people who don't normally join the call to bring up any topics they want to discuss with the group. Is there anything people would like to mention? It's typically for newcomers. Good, moving forward then. Okay, SDK subgroup. So, in case you guys missed it, or you had somebody who wanted to join but had the wrong Zoom link in the invite, I apologize for that. I shared my personal Zoom link by mistake, rather than the serverless Zoom link. Hopefully the new invite was sent out with the right one this time. I apologize for that. But there really isn't much to mention in terms of progress. The only thing that was brought up was that there may be a split in the community around the Golang SDK, because Knative is not using the one we're producing. So we're going to try to explore why, and see if we can bring those two worlds back together. If not, then no huge deal, but it'd be nice if we could bring them back together. Also, we're going to be looking at a new interop demo.
Scott has already started some brainstorming ideas here, and I think he's going to send out a note as well. But please, when you guys get a chance, take a look at that, because I think we may try to do something for Barcelona: not just another demo itself, but trying to highlight the fact that we're using our SDKs as part of the demo. So anyway, take a look at what Scott has written here when you guys get a chance, and get some brainstorming going around that. Other than that, I guess I should mention there was some discussion about getEventType versus getType for the getter for the type field. I believe some people were advocating for keeping the word "event" in there, even though we dropped the word "event" from the spec itself. But on the call we just had, everybody seemed to prefer aligning with the spec, so dropping the word "event." I'm going to send a note out to make sure everybody else is okay with that, since we had low attendance on the call. So keep a lookout for that one. If you have an opinion on it, please speak up. Otherwise, we're going to push people to go with just getType instead of getEventType, to align with the spec. Okay? Anybody from the SDK subgroup have anything to mention that I may have forgotten? All right, cool. Let's see, Kathy, are you on the call? I don't see her, so I don't think there's anything to mention about the workflow subgroup. Nothing really happened there. Okay, so moving forward to PRs. Let's see. I don't believe Alan is on the call, but basically, instead of just saying "a 32-bit whole number," he wanted to be explicit about the actual ranges themselves. Does anybody have any questions or comments on this one? In particular, I want to pick on Clemens, since you were heavily involved in the creation of the data type section. Do you have an opinion on this one? Yeah, I'm in favor of this one.
Okay, thank you. Anybody else have any questions, comments, concerns? Okay, any objection then to adopting it? Excellent. Thank you guys. Tapani... I don't think Tapani's on the call. Clemens, this one might be another one for you as well. I believe he just wanted to add "integer" to the list in this sentence. Go. Go as in yes? Go as in yes, yeah. Does that make sense? Okay. Anybody else have any questions or comments about this one? Okay. Any objection to that minor change? What's with that Oxford comma there? Where's that? Oh, this one? I can get that changed if it really bothers you. Actually, the change that I would like to see on this one is to remove all of the "or a," "or an" bits, just put commas between them, and possibly alphabetize them. Okay, one second: commas, remove "or a." Okay. I'll work with him to make that happen. Anybody else have any questions or concerns? Okay. Hopefully that should be easy to get through. I'm assuming those are all just syntactical-type things. If those syntax fixes go in, are we okay with just approving it offline? No need to revisit it from a semantic perspective. Is that a fair statement? That works for me. Okay. Anybody else on the call? Any objection to that? Okay. Jeez, I cannot type today; I apologize. Okay. Thank you guys. Let's see. I believe this was an action item from last week's call. Okay. Hopefully this first one is just escaping the star, because everything else appears as italics; you can see it right there. So that's kind of annoying. The real change is down here on line 118, basically just adding to the release process a step that says we need to send out a note announcing the new release. An obvious thing that we just forgot to do in the past. So, send it to the mailing list, as well as add it to the announcements section of our website, which is still under development as we speak. Okay. Any questions or concerns about that?
Can we have a Twitter account? Oh, we do have a Twitter. Wait, do we? I think we do. Yes. Okay, I will do that as well. Thank you. If it's not on Twitter, it didn't happen. That's one way to view it, yes. Okay, so I will add Twitter to the list of things here. Anything else you guys want to see changed? Okay. I still consider this to be more of a syntactical thing. Are you guys okay with approving this one, conditional on adding Twitter to the list? Okay. Any objections to that? Okay. So hold on. Okay. Thank you guys. All right, this one is a little more significant; hopefully not too controversial. We briefly talked about this last week. I think there may be some confusion about having a property called "contenttype," because some people may confuse it with the Content-Type HTTP header. So I was suggesting that we rename it to "datatype" instead, which I think is actually more accurate anyway, because it's defining the type of our data property. Yes, well, we map it exactly to Content-Type in HTTP. Because it is the Content-Type in the binary format, that is true, yes. But I believe in the structured format we actually have the Content-Type HTTP header as well as the ce-contenttype HTTP header. No, we don't. We don't. Oh, I'm sorry. No, that's actually right. And Content-Type actually has encoding, so there's more. Content-Type is a fairly rich thing. It doesn't only say, you know, that you might have application/json; there's also potentially the character set. So you might have "application/json; charset=..." et cetera. So there's a connotation to Content-Type, and we're actually literally referring to the RFC for it, which I don't have in my head right now. So we should agree whether we want to go and disassociate ourselves from that entire world of MIME types and come up with something else.
But if we don't, then we should stick to that name, because it is the content type. So, okay, let me rephrase this, because maybe there's some misunderstanding here. I'm not suggesting that we deviate from using the MIME type stuff or the current semantics of content type. I can't remember for sure which spec it is; it's one of the ones that's up for discussion. Maybe the OpenMessaging one, or maybe the rocket one. Anyway, one of those two included the ce-contenttype property inside of their message, and I believe they were confusing it with the HTTP Content-Type header itself. Yeah, that should not exist. There's a rule in the HTTP binding that even in that case, right, if content type is being projected onto the message, it is Content-Type; ce-contenttype cannot exist as a header. No, I understand that. But they were including it within the body of the message itself, because they were including a whole bunch of different properties in there. That's correct, though, because it describes the text, or whatever the basic default content is. Yes, I agree from a semantic point of view, but my point was, I think they were actually using the wrong value in there. I think they were grabbing the HTTP Content-Type header by mistake and putting it there, as opposed to describing the data property itself. And what I wanted to do was avoid any possible confusion by saying that wherever the CloudEvents content type property appears, it would cause less ambiguity or confusion if we actually called it "datatype" instead, and let the HTTP binding still map it to the Content-Type HTTP header. But it's the content type of the data; it's not the type of the data. I guess I'm not seeing much of a difference. My point is, content type is a thing in itself; content type is not only the MIME type.
So you have a media type, which is what you see here, which is the application type; and there's a superset of that, which is the content type, which also includes the further parameterization, for instance the charset. And what we're saying here is: here's a binary thing, for instance, that's base64-encoded, and to decode that successfully, we first need to understand that we have to turn that binary into a string using, you know, a UTF-8 decoding, and then we're going to run it through the JSON decoder. I'm confused as to why you think replacing the word "content" with "data" would necessarily change the semantics. So if you want to inject "data" in there, then it needs to be "datacontenttype." What do people think about that? Even though it's a little more verbose, I would actually prefer that; it's a little more clear. I'm still not convinced we need the word "content," but I could live with "datacontenttype." What do other people think about this topic? It looks like there is some room for confusion there, so if "datacontenttype" makes it clearer, I think I'm in favor. I just ran into this problem in Golang. The Go serializer didn't understand, because things were trying to get mushed into "application/cloudevents+json" and got really confused. Is that about the property name, or about the value of the property? The property name. My non-CloudEvents SDK uses this value as both what it sends on the wire and what it writes in the CloudEvent. So you ran into the same confusion point I ran into with the other specs. Interesting. Do you think having a different name than "contenttype," like calling it "datacontenttype," would have avoided that confusion? Yeah, I think so. Okay. So what do people think about "datacontenttype" as the property name? Yes. Okay. I'm going to pick on somebody here who I know had an opinion in the past about this: Jim. Pick on me while I'm muted.
I sensed it. No. Sorry, I was late in. I think data content type is more appropriate if we're going to make the change. Absolutely. And we do need to clearly segregate the transport content type from the content type of the data; it was becoming very redundant there. I guess what would clarify this is an example, and people will hate it, but: a CloudEvents JSON transport with an XML payload in the data. So you've got a JSON document carrying XML data. In that case, your HTTP header would say it's JSON, and the data content type would be text/xml or application/xml. So you'd like to basically take this example we have in the spec and fill it out more, to show the HTTP headers. Yes. The thing is, obviously, that HTTP header only appears in the HTTP transport spec. Yeah. That's something we might be able to look at. Are you okay with that potentially being a follow-on to this? Yeah. What do people think? Does anyone else want to speak up on this one, anything for or against, or just thoughts? Okay. What I think I'm hearing is general consensus on changing it to data content type. What I'll do is make that change, but because this is changing a property name, which is a big deal, I don't want to rush it through. So I'll make that change and then give you guys a week to think about it, and then hopefully we can resolve it on next week's call, assuming no one raises any objections. But I don't want to take the resolution of this one offline without people having a chance to think about it. Does that sound fair? Fantastic. Okay. Thank you for the help on that one: change to data content type. I'm sorry for being so nitpicky on these things; I usually don't care about names until they have semantic implications. That's fine. I think we ended up at a better spot, so thank you. All right, Christoph. Actually, Christoph, which one would you like to talk about first? Yeah, let's talk in this order. In this order? Okay, cool.
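The example being proposed above, a JSON-rendered CloudEvent carrying an XML payload, might look roughly like this. The attribute names (notably `datacontenttype`) follow the direction of the discussion and are assumptions, not a settled spec, and the version number is purely illustrative.

```python
import json

# A structured-mode CloudEvent rendered as JSON, carrying XML data.
# "datacontenttype" describes the *data* attribute, not the envelope.
event = {
    "specversion": "0.2",                    # illustrative value
    "type": "com.example.someevent",
    "source": "/mycontext",
    "id": "A234-1234-1234",
    "datacontenttype": "application/xml",    # proposed attribute name
    "data": "<note><body>Hello</body></note>",
}

# On the wire, the HTTP Content-Type header would describe the JSON
# envelope (e.g. application/cloudevents+json); datacontenttype stays XML.
wire_body = json.dumps(event)
print(json.loads(wire_body)["datacontenttype"])  # application/xml
```

The point of the example is exactly the redundancy being discussed: the transport header says JSON, while the data's own content type says XML.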
So this one we already discussed last week. Basically, what it says is that batching is a good thing, but it's handled at the transport layer. And then Tapani, who I don't think is on the call today, requested that we should mention, which is basically the last sentence I added now, that whether a particular transport supports batching is to be found either in the transport binding or in the transport specification itself. So as an example, for Kafka, all we have to say in the binding is basically: this is a cloud event, and this is how we map it to a Kafka message; and then how Kafka messages are batched is part of Kafka itself, so we don't have to mention it in the transport binding. So this is what the last sentence adds. Okay, any questions on this one? Okay. Any objection to adopting it? Okay. And then we can talk about the HTTP version of this. Exactly. So then during last week's discussion, we said it would be really nice to actually have batching for HTTP, and I agree. So maybe we can scroll down to the examples. Basically, for HTTP we have two modes defined already, the binary and the structured one, and I think for binary it's pretty much impossible to do batching, because we map the attributes to headers and that just won't work. So that leaves the structured mode, and the structured mode can work with basically any content type. So starting with JSON is easy, and then for the rest I don't really have a solution. What I basically did here, if you look at the example, is that the body is now a JSON array, and that JSON array contains two JSON-rendered cloud events. Pretty simple, I'd say. I think the question is the MIME type, right? Exactly, that's the next point I would get to. So if we scroll back up. Which spec is that? That's the HTTP transport binding. So I'm supportive of the format; I think that's exactly the right way to do this.
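A minimal sketch of the batched structured mode being described: the HTTP body is a JSON array of individually JSON-rendered events. The media type shown is an assumption based on this discussion (the MIME type question is still open at this point), and the attribute names are illustrative.

```python
import json

# Two JSON-rendered CloudEvents batched into a single JSON array body.
events = [
    {"specversion": "0.2", "type": "com.example.created", "source": "/a", "id": "1"},
    {"specversion": "0.2", "type": "com.example.deleted", "source": "/a", "id": "2"},
]

# Hypothetical media type signaling "this body is a batch":
headers = {"Content-Type": "application/cloudevents-batch+json"}
body = json.dumps(events)

# A receiver decodes the array and processes each event individually.
decoded = json.loads(body)
print(len(decoded))  # 2
```

Binary mode has no equivalent, since a single set of `ce-*` headers cannot describe two events at once, which is why the sketch only covers structured mode.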
I'm just wondering, because I would still like to get a CBOR spec in. I just haven't made the effort, but I think we should have a nice binary format that mirrors the JSON structure, and CBOR is a good candidate for this, but also obviously Protobuf, and so I think there will be more encoding alternatives. I would love for that batch to be defined in the JSON binding, in the JSON binding spec, so that payload batches are dependent on the encoding. So you send an array of CBOR objects, or you send an array of JSON objects, or you send an array of Protobuf objects, and that sits in the binding. Because from the perspective of binding this all to HTTP, you've effectively done the work here of saying: I'm going to use this structured mode, and now I'm going to send you a payload that contains cloud events, and that payload is either a single cloud event or a batch of cloud events in the following encoding. That's basically what you say with that content type. But it would be great if the actual format, which means the array itself, were defined in the encoding spec, the event format spec.
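The direction sketched above, each event format spec defining its own batch rendering while the HTTP binding only names it through the structured-mode content type, could look something like this. Every media type string here is speculative; only the JSON one is even under discussion in this call.

```python
# Hypothetical: each event format defines its own batch encoding, and the
# HTTP binding merely selects it via the Content-Type.  All names below
# are assumptions for illustration, not agreed values.
BATCH_MEDIA_TYPES = {
    "json": "application/cloudevents-batch+json",
    "cbor": "application/cloudevents-batch+cbor",
    "protobuf": "application/cloudevents-batch+protobuf",
}

def batch_content_type(event_format: str) -> str:
    """Pick the structured-mode media type for a batch in a given format.
    Formats that define no batch rendering simply don't support batching."""
    try:
        return BATCH_MEDIA_TYPES[event_format]
    except KeyError:
        raise ValueError(f"format {event_format!r} defines no batch encoding")

print(batch_content_type("json"))  # application/cloudevents-batch+json
```

This matches the optionality point made later in the call: a format may define a batch rendering, but nothing forces every format, or every receiver, to support one.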
That's kind of where I wanted to start, but then I realized, well, we can discuss it, I think. The thing is that right now we say everybody should support JSON, and if you make the batch JSON part of what JSON is, what we effectively do is force everyone to also accept batches. So what you would now say is that batches are only supported in JSON? Not really; I mean, that's kind of why I opened up the support request, so we can discuss it. In my comments on the PR I also mentioned that this is a shortcoming of it right now, and if we find a good way to support batching for basically anything, then that's good. But one of the problems I had is that in our type system we don't even have arrays, so it's pretty hard to go from there. But maybe it's a good idea to say every format can potentially support a batch, and if it does, then you can use it from there. Maybe that's a good idea. Yeah, I think that's a sane way to go, because it would be great to... I think we haven't seen the end of encodings. Coming from XML, I now have the belief that even JSON is not going to be the end state for everybody, and so having a bit of flexibility in there, not bolting a particular format into the transport specs, is probably good. Okay, then I'll do that for next week: I'll try to move the batch itself as an optional, well, what do we call it, extra format into the JSON specification, and then make this whole thing work. The other thing I want to talk about is the header. If you scroll a little bit back up, in the beginning, in section two or one or something, we talk about how... sorry, a little bit back down, yeah, this section here, sorry. So based on the header, the receiver can distinguish which mode it is getting, so I think that's why we need a new header value. You mean a new media type value? Sorry, yes. Yeah, absolutely, I agree. Okay, cool. And you're fine with the name? Okay, absolutely, it doesn't
come with anything that is existing, so you can choose whatever you like; you can call it Fred. Okay, the name is Fred, let's do that, just for fun. So Jim, I think you have your hand up. I did, just a quick one for Christoph: if you wanted to put this in the transport binding, is there any reason why you didn't just go for, you know, multipart bodies in HTTP? That's potentially a good question. I think last week we talked about doing a JSON array, so that's why I started with a JSON array, but I'm totally open to anything; I just wanted to start a discussion. Yeah, no, that's fine. Multipart is pretty evil to program against. True. Yeah, it's one of those things where, if you don't find any code that someone has written already, you really don't want to. Yeah, I mean, flashbacks to a previous weekly call comment where you said multipart is hard. Yeah, it absolutely is. I guess it's really whether you want to leave this as a transport concern or push it up to an application concern, whether you make it an array or whether you make it a one-of at the application level, at the cloud event level. I'm muddling my words, but you know what I mean. Yeah, too early. All right. So I think, Christoph, you had some things you wanted to change or move around. If nothing else, was there anything else you wanted to mention relative to the overall concepts behind this? No, I think we talked about everything. Well, maybe one other thing is that I made this mode optional. I think I already touched on this, because I don't think we want to... we say everyone should support binary and structured, but I'm not sure everyone should support batching. For example, in a function-as-a-service, if you get a single HTTP request with a single event, that is pretty nice, because it maps perfectly onto the semantics of a function-as-a-service. Yeah, okay. I just want to call out this one "must" right here; I think that's a very good "must" and a very important one for
consistency and ease of processing. Yeah, okay. Anybody have any questions, comments, concerns? Are you okay with this general direction? It's just now a matter of syntactical moving around of stuff, for the most part. Okay, I'm not hearing any concerns, so that's good. Okay, so we'll get those changes in there, and I assume we'll review this again with Christoph next week, right? Yep. Excellent, thank you. All right, cool: size limits. This one should be interesting. Yep. I think we started this one last week, but let's see how far we can get. Want to refresh people's memory on this one? Yeah. So I think the whole issue is that if we look at basically all the messaging technology out there, we know that they have limits in one way or another, and I'm mostly coming from the producer side. Basically, what I want to know is: what are the limits? If I send out a cloud event, is it guaranteed to reach its destination, even if it starts, I don't know, on HTTP, then goes into a Kafka queue, and then goes through HTTP binary to the final consumer? I want to know that whatever event I send, either it goes through all of these steps or it fails somewhere and I know it. So that's what I want to know as a producer. I think if you write a consumer, you also want to know what are the things you have to expect and support, or things you can maybe push back on and not accept. Yeah. So this is basically the main motivation behind this, and I wrote down a couple of points that I think are worth discussing today, general points we can touch on. Just to clarify one thing, though: you're suggesting that we either do this section or this section, but not both? Yes, it's one or the other. Okay, just want to make sure. Yeah. Just a quick question, Jim: is your hand up from the previous PR, or is this a new hand up? No, no, it was on this subject. Okay. And I think Christoph has done a really good job tempering the conversation on the thread. As he mentioned in one of his comments on
that thread, I'm personally leaning more toward saying that an implementation must support messages up to a certain size, but may support stuff larger than that. So if you want end-to-end guarantees, then you make sure your messages are below a certain size; otherwise, if you're in a closed environment with your own larger limits, you should still be able to have a compliant implementation, rather than forcing people to always work within a set of constraints. So it's more of a language change. Sorry, Christoph, were you saying "should" instead of "must"? It's more a provider requirement. What we should be saying, I think, is that if you provide a cloud event transport infrastructure, you must support messages up to a certain size. So you're basically talking about setting minimums rather than maximums. Yes, yes. This is kind of what I do in this size guarantees section, or option. Yeah, and I would tend to go down that road, because I think it's less limiting. Okay, so you've got one vote for the second option. Okay. Before Christoph goes on, this is John. My question is: is there anyone on the call dealing with, whatever you want to call it, IoT kinds of use cases, where the minimum limits we might reasonably talk about in other environments are going to get hosed, or now be out of compliance? I think, John, that's why I was leaning more toward a minimum. Yeah. Rather than mandating that producers always produce events up to a maximum size, what we're really saying is: if you want to guarantee end to end, then if you're below a particular size, it's guaranteed to work; otherwise, you're at the mercy of your particular service provider, because people will vary. Yeah. Does that answer your concern, John? Well, I'm not doing much really constrained embedded systems work these days, so my question is more for people who are in those kinds of markets: what's a realistic minimum that will still work for them and not put
them into an awkward position. Right, okay. So Christoph, why don't we go through the list of things you wanted to bring up as discussion points. Yes. So the first discussion point was basically: should we have a limit, or what I call a guarantee, a minimum guaranteed size? I think we cleared that up. The second thing I want to talk about are the attributes. Basically, here I propose two limits. The first one, which I think is the more important one, says that the whole event, including metadata and including the data itself, should not exceed 128 kilobytes. Then I added another limit on top of that, for the attributes. The reason I did this is that if you write a consumer or a middleware, the attributes are different. The payload, the data itself, I can sort of ignore and just pass on, but the attributes I have to parse; I have to load them into memory, look at them, and understand them if I want to do something like routing. So for a middleware it would potentially be bad if someone sent me 128 kilobytes of top-level attributes and maybe a byte of actual data. I don't know if that is a concern to anyone. If no one thinks we should have these additional limits on the attributes, then we can just kick them out; I would like to hear what the general opinion is on this one. So Christoph, we don't make statements there about the size of an attribute's content, do we? It just says a maximum of 100 attributes. Does it add any value if we don't also go on to say an attribute can't be bigger than a certain length? Yes, you're right on this one. I put these proposals out there, and I hope that people who actually implement, or have to implement, things like these will speak up and say what they want or prefer. Especially if you look at the binary HTTP binding, we already have limits in some HTTP servers, for example ones that would cap the size of an individual header at one kilobyte,
for example. So we could think about introducing those limits as well, but for me the general point is more: should we have a limit on that at all, is that useful to people? And if that's a yes, then we can go and look at what exactly the limit should be. So I have a question on this, because I don't have much experience in this space relative to size issues. Is it the overall size of the HTTP headers in combination, or is it the sheer number of HTTP headers? I always kind of assumed that when people were low on space, they were concerned with the overall size of the headers in aggregate, and it wasn't the exact number of headers that would be an issue. But like I said, I don't have a whole lot of experience here, so I'm curious what people's experience is on this. The HTTP spec is silent about what the sizes really are, and I think the newer ones might have some suggestions, but it's all kind of all over the place. And so people have run into issues where they get a fat JSON Web Token, try to stuff that into a header, and then start finding out that there's a 4k limit or an 8k limit or some other limit in the respective web server. That does happen. But in that particular case, is the 4k limit on a per-header basis or an overall basis? Per header. That's what I was wondering. And most of those limits are governed by security concerns, where you simply want to avoid someone coming and stuffing your server with data through the headers, because headers are stuff that usually gets read and buffered before they touch the body. The body is where people are then concerned about loading up the memory, but there's usually some memory buffer that you keep around to load in the header section of the message, and you don't want to make that too big, so that nobody can basically flood your server with too much data. There are some real limits that are just caused by whatever config. Okay, thank you, Clemens. Anybody
else have any questions or comments on that? Christoph, was there another item on your list of things to discuss? Yes, the final one was one that Jim also brought up. Given there's a limit on the size of the messages, a pattern to work around this is the claim check pattern. Basically, what you do is you say: my payload didn't fit into this message, and you can find it at this URL. Then if you receive such a message and you want the payload, you go and fetch it from there. Typically you would still send all the metadata with it; it's just that the payload itself is fetched from somewhere else. So the question would be, and it depends a little bit on what limit we choose: is that something of interest to people, because they think they will run into this limit? If that is the case, then we can start thinking about whether we want to include such a pattern inside the spec. But I'm actually not so sure it's such a big thing, because I haven't seen a lot of implementations around this, which makes me wonder a bit whether it's a widespread issue, or whether most people just send messages of a few kilobytes and are fine and don't need it, in which case it just blows up the spec a little and is an implementation overhead for a few people, maybe. Yeah, and just to add to that: I was only proposing adding it because, if it was an issue that lots of people were going to face, then rather than ending up with lots of disparate implementations, at least we could say, okay, if you're going to follow this pattern, then this is the attribute name or property name we'd like you to use to store that reference. And that would sort of be the limit of where I think the spec would go in that context, because there are security holes and lots of other problems with that pattern, especially going across provider boundaries. So it was more just providing a mechanism and a pattern, an implementation of a pattern. Yeah, exactly. So if you
think about it: if I'm storing this in, I don't know, an AWS S3 bucket, then maybe I have some concerns about who should access this bucket, and so on. So it's not clear that a public URL is the right answer here. But even if out of this discussion we decide we don't want to do it in the spec, I'm sort of planning to propose an extension, because I think it will make sense at that level at least; I'm just not sure it should be a required part of the spec. Yeah, maybe an extension is a good way to catch that, I'm not sure. So it sounds like the first question for the group is: do we want to try to standardize the claim check pattern in some way, whether it's a first-class property or an extension? Do we want to explore that path at all? And I think the reason I floated it originally was: if we're going to say to people, you can't send messages over a certain size, then we should have some statement as to, well, this is what you can do if you do want to send messages over a certain size. Yeah, that was really it. Because to me it seems like, even without the spec imposing some sort of limitations, the idea of having a consistent way to do a claim check pattern might be something useful for us to consider, even if we don't have hard limits in the spec. Right. But then it gets into the problem, I think one of you mentioned: how far do we have to go down that path? Is it as simple as providing a URL where to go get it? Do we then need to define the security mechanisms around accessing that URL, or is that all left as an exercise for the reader? That's where you get hand-wavy, I think. What do other people think: is a claim check pattern something worthy of exploring for larger messages, for larger events? Certainly, and I think so. We give that guidance for Event Grid generally: if you have a large item that you need to communicate about, always point to it and never include it. But that doesn't necessarily
imply that we have to standardize it as part of our spec; we could say that's an application detail. The question is, is there value in us specifying a consistent way to do it? I don't think it does any harm to say: if you're going to pass a reference, by convention we put the reference in this place. I think that would be the limit of what we would do from a spec perspective. I'm just playing devil's advocate; I actually do think it's a good idea to head down that path. I just want to make sure someone doesn't think we're overstepping the boundaries of a cloud event spec and getting into application-level semantics or responsibilities. Any comments, concerns, questions? So Christoph, I'm wondering whether it would make sense to split this PR into two: one is to pick one of the two choices and crisp it up as you need to, and then open up a second PR to deal with the claim check pattern, because I do think they are individual problems that can be solved separately. Yeah, I think that sounds fair; let's do that. Okay, before we do that though, is there anybody on the call who disagrees with even exploring these two options, or these two different PRs? I want to make sure that the group in general is okay with heading in these directions, so we don't waste Christoph's time. I'm being quiet, but I think this is a great idea. Okay, thank you. I'm going to assume that silence means people are generally okay with the direction of the conversation. So, on my second point, on the limitation on attributes: what is the general feeling here? I didn't get a clear answer. We know that for the HTTP binary transport it will be a problem, but is that something we should take on and make a general problem, or should we not have these limits here, just because it's basically only a problem of the HTTP binary mode? Anybody have any opinion? I just think, as I said, it seems odd to limit the number of attributes without then also making statements about the size of those attributes, so I'm not quite sure if it's
just adding more language that people are going to struggle with: what does it mean, and what do I do with it? And if we just take it out, you could argue that an SDK could switch from binary to structured if it thought the headers were going to be too big or whatever; that would be another way to address it, I guess. Anybody else have any comments? Okay, so I'm getting that no one really wants these limits, so I'll just take them out. Yeah, I'm not quite sure how to interpret silence, but I'm for giving some guidance, certainly. I find the "must" provision a little bit harsh, but in all practicality there will be limits and people will have to deal with them, and making a statement that is a "should" provision and says, here's kind of a corridor that makes sense, isn't terrible. I have to say 128k is well within reason, and if you work with any cloud messaging or cloud event system, you will run into some limit that is at or below one megabyte. I'm trying to figure out, Christoph: do you feel like you have enough guidance from the group in terms of next steps on this? Yeah, though I'm still unsure about one thing; let me put it the other way. If we only have a limit of 128 kilobytes that must be accepted, then it means I can send 128 kilobytes of HTTP headers, and basically every default configuration of an HTTP server will not accept that. Which maybe is fine. The other thing in my head is that we could say, okay, the HTTP binary mode is more of an optional thing, and if the HTTP server gives you back a 413, payload too large, then you just switch back to the structured mode, which you have to support anyway, and then we're also out of that problem. That would be another way to just solve it. But apart from this particular issue, I think I have guidance to go forward with this. Okay, yeah. Okay. I'm trying to figure out if I want to say something from a personal point of view, but I'll
wait; I need to formulate my thoughts here first. Anybody else have any final comments on this one? I think we sort of beat this one to death on this call. Okay, Christoph, you know where you're going with that, and that's the end of the agenda. Are there any other topics people would like to bring up? No? Okay, then the last roll call thing: I've had everybody except for Stevo. Are you there? I'm here. And Laurie, were you able to get a microphone yet? Okay, Laurie, if you're on the call, just put a message into the Zoom chat and I'll get you in there. Oh, hi Laurie. Also, do me a favor and include what company you're from, because I think you might be new to the group, just so I can get your attendance recorded properly. Yes, and thank you. Thank you, Christoph, for pushing the discussion on all of these; I appreciate it. Hey Doug, this is Arun; I joined five minutes late, I'm sorry. Who's that? Varun? Oh yes, I'm sorry, I did see your name; we've got to write it down. Thank you. Obviously I'm here as well. Yeah, don't worry, I've got you; if you spoke up, I definitely got you. All right, last chance: any other topics to bring up for today's call? All right, in that case we're done. Thank you guys very much; we'll talk next week. Thank you all for a very good conversation. Thank you. Bye guys. Bye.