Oh, wow. Okay, it's already three past, so why don't we go ahead and get started? We've got a short list of people today. All right, let's see. Clemens and Rachel, you did your action items; thank you both very much for doing that, even though it comes later on the agenda, since it's on my mind right now. Clemens' action item was to make sure there was a call set up today to discuss PR 218 and the topics around it. I believe it's at 2 p.m. Eastern, 11 a.m. Pacific, using the same Zoom link as this call, so if you want to join the conversation, please remember to join that. Rachel's action item result is on the agenda today, so we'll get to it later.

All right, community time. Are there any community-related topics people would like to bring up? I don't see anybody new on the call, so I'm guessing no. Okay, not hearing any; let's keep moving forward.

Okay, SDK subgroup. We did have a call right before this one. I don't think there's anything earth-shattering worth mentioning; let me just check the notes very quickly. Yeah, Clemens is going to write up some guidance around the possible MIME type confusion that the batching work introduced, so keep an eye out for that PR coming in soon. Other than that, I don't think there's anything. Actually, we had two points; what was the other one? I can't remember. The other one was the serialization guidance for binary mode. Ah, okay. In short, there was an issue raised on the C# SDK where the SDK does not magically turn an object graph into JSON in binary mode. That's intentional, because the assumption is that you bring either readily string-encoded data or binary-encoded data, and that gets put into the payload. The encoder that we have is not really for the payload; it's for the envelope. So also using it for payload encoding doesn't seem right.
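The two SDK topics mentioned above, the batch media-type confusion and the binary-mode payload rule, can be sketched roughly as follows. This is an illustrative Python sketch, not code from any CloudEvents SDK; the function names and error handling are assumptions, while the two media types come from the CloudEvents JSON format.

```python
# Illustrative sketch only; function names and error handling are
# hypothetical, not part of any CloudEvents SDK.

CE_JSON = "application/cloudevents+json"
CE_BATCH_JSON = "application/cloudevents-batch+json"


def is_structured_single(content_type: str) -> bool:
    """Detect a single structured-mode CloudEvent by media type.

    Naively checking content_type.startswith("application/cloudevents")
    would also match the batch media type, since the single-event type
    is a textual prefix of it. Comparing the full media type (with any
    parameters stripped) avoids that pitfall.
    """
    media_type = content_type.split(";", 1)[0].strip().lower()
    return media_type == CE_JSON


def binary_mode_payload(data) -> bytes:
    """In binary mode, the SDK does not serialize object graphs for you.

    The event data must already be string- or binary-encoded; the format
    encoder is for the envelope (the context attributes), not the payload.
    """
    if isinstance(data, bytes):
        return data
    if isinstance(data, str):
        return data.encode("utf-8")
    raise TypeError(
        "binary mode expects pre-encoded data (str or bytes); "
        "serialize your object graph (e.g. to JSON) before setting it"
    )
```

The prefix check is exactly the bug discussed later for the batch format PR: the naive starts-with test would classify a batch message as a single event.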
That's specifically not for binary mode. Right. Yep, sounds good; thank you for that. So that was the SDK. I don't see Kathy on the call, and I don't think anything happened with the workflow subgroup, so nothing to mention there.

So, Scott, on the proposal for the next demo: I apologize, I did not get to look at this to see if there are updates, but is there anything you would like to mention? I think some people may have commented on it. Was there anything worth mentioning, Scott, from your perspective? No, there hasn't been a lot of interest from parties, maybe. So maybe we need to think of another demo if this one doesn't pique people's interest. I'm trying to think of the best way to force that discussion. Do you think it would be best to set up a separate call, or just schedule time, say, on next week's call to discuss it? I'd be happy with a demo call. Okay, let me take the action to set that up. I'll probably do a Doodle poll to find a good time, so people can opt in if they want to participate. Okay, cool. Anything else on that one people would like to bring up? All right, we already talked about the call later today.

Okay, KubeCon + CloudNativeCon EU. We should probably start thinking about who is going to give a talk during our intro and deep-dive sessions and what the various topics will be. I think the intro is probably pretty much what we've done in the past; since it's for newbies who know nothing about us, we can probably reuse a lot of the same material. I do think there's more opportunity for new material or invention in the deep-dive session, because we'll have newer things in the spec to talk about, or other potentially exciting things to mention, so that might be where the more interesting area lies. But we should probably first talk about who's going, and then, of that set, who would want to be part of those presentations.
So let me open the floor and see if someone wants to raise their hand to get on the list. I'll probably go. Would it make sense to have a CloudEvents booth? A CloudEvents booth? You mean like on the showroom floor, where all the companies live? Yeah, like having the interop demo constantly running on the floor. That's an interesting question. What do other people think? I guess there are two aspects: one is whether we think it's worthy of that, but I would also need to check with Dan Kohn and the CNCF folks on whether "sandbox" projects are even allowed to be that prominently featured. That's a different issue. But what do people think? Is this something you'd want to pursue? My main concern is money; don't you pay a lot of money to get a booth? Probably, yes. Yeah, this is Ginger with Synadia. To get a booth at one of the KubeCons you have to be a CNCF and Linux Foundation member, and that costs a decent amount of money. Can we just buy a table? That's a problem. Can you set up a brownie table, put brownies on it, and sell lemonade at the same time? So tell you what, Scott: it's worth at least asking whether it's even possible. I suspect the folks who mentioned this are probably correct; it would probably require a fair amount of money, which I don't know if anybody's going to put forward. The other aspect I'd actually be concerned about is, like I said, whether it's worthy enough to actually be staffed 24/7 on the booth floor. Yeah, I know, and a lot of the vendors already have booths, so maybe we could just have a coordinated demo in each vendor's booth. Yeah, that might be possible. Okay, so if nothing else, I'll ask Dan and wait for him to come back with an answer, even though it's probably going to be no without money. But back to my original question of who might actually want to present.
So, Christoph, I have you; Scott, I have you. Clemens, I wasn't clear whether you were raising your hand to potentially talk or just saying you'd be there. I'll opt for talking; I don't know exactly about what, but sure. I would do a talk as well. Okay, so first let me gather the list of people, and then we can have another call or a separate discussion to figure out what we want to do, because one of the things I'd like to do is give newer folks or other people an opportunity to present. For example, Clemens, you and I presented last time, so maybe it's best if we stand back a little and let some of the others do it if they want to; but we can figure that out later. Okay, so those four people; actually, I'll put my name in there too, because I'll talk if necessary, but we'll see. So think about it: if you might want to talk and you'll be there, let me know. Otherwise, I'll set up another call in the not-too-distant future, because people need to start making travel arrangements and getting permission, and obviously being one of the speakers can help with approvals from your company. So I'll set up a call for us to talk about that. All right. Oh, will we have two sessions like we had at the one in Copenhagen, or do we have another structure yet? Yes, we will definitely have at least the intro and the deep dive; whether there's an additional session, I think that's up to whether someone submitted a call-for-proposals entry around it and whether it gets accepted.
Now, having said that, I just remembered... oh god, I'm blanking on the name... Chris? Christina? Someone posted a message to our Slack channel saying that they're thinking about doing some sort of serverless conference thing, I can't recall the exact phrase they used, at KubeCon EU. I can't remember if it's a half-day or full-day thing; I think it may be a full day. So that may be another opportunity for us to present what we're doing. I don't know exactly what's going on, so I may be completely wrong, but that could be an opportunity as well. I'll try to get more information as we go forward. Okay, all right, anything else relative to KubeCon EU that people want to bring up?

All right, moving forward then: compliance testing. So, Scott, you mentioned this one; do you want to summarize your question, even though it's pretty obvious? Yeah, I would like to propose that we start developing on-the-wire conformance tests to verify that each SDK can speak to the others, and maybe, if another vendor has a library, to test it against what the spec says is good. Okay. Now, I know that in our repo we do have sort of an open-source section with a list of open-source implementations, and I think there are at least one or two CloudEvents verification tools in there, so that might be a good starting point to consider. But I'm assuming you're probably thinking of something a little more formal than just someone putting a thing out there, right? Are you thinking about hackathon-type events as well, or just some sort of hosted thing that people can test against? Did you have any thoughts on that? I actually didn't know about that list; which repo is it in? I think it's in our repo; let me just double-check here. Oops. Yes, I think it's in our repo, maybe under the community open-source section. So there's this one; I thought there was another one. Cool, okay.
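Scott's on-the-wire conformance idea could start with something as small as checking the required context attributes of a received structured-mode event. A minimal Python sketch: the required attribute names (id, source, specversion, type) follow the CloudEvents spec of that era, but the function name and the shape of the error reporting are hypothetical, not part of any official test suite.

```python
import json

# Required context attributes per the CloudEvents spec; the checker
# itself (name, return shape) is a hypothetical sketch.
REQUIRED_ATTRIBUTES = ("id", "source", "specversion", "type")


def check_structured_event(payload: bytes) -> list:
    """Return a list of conformance problems for a structured JSON event.

    An empty list means the event passed these basic checks.
    """
    problems = []
    try:
        event = json.loads(payload)
    except ValueError:
        return ["payload is not valid JSON"]
    if not isinstance(event, dict):
        return ["payload is not a JSON object"]
    for attr in REQUIRED_ATTRIBUTES:
        value = event.get(attr)
        if value is None:
            problems.append("missing required attribute: " + attr)
        elif not isinstance(value, str) or not value:
            problems.append("attribute must be a non-empty string: " + attr)
    return problems
```

A wire-level suite would then drive each SDK to emit events over HTTP and run checks like this on what actually arrives at the other end.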
Yeah, I think this one might be a little bit old; I don't think it's up to the latest version of the spec. The last I heard from that person, they were thinking about updating it but hadn't done it yet. I thought there was more than one, though. Anyway, if you want to flesh this out, and I don't want to put too much work on you, but if you could jot down what you're thinking relative to the compliance testing, then we could take the next steps from there. Okay. Thank you. Okay, cool, thank you.

All right, moving forward on to PRs. Matthew is not on the call; however, we talked about this one last week, and I think we were generally okay with it, but there was one question. Just to refresh your memory: what he wanted to do was modify the Swagger doc to give a minimum length to all of our strings, and at one point he actually had a constant in there of "0.2". I can't remember who it was, but somebody pointed out that that means you can't support anything but 0.2, and that would be problematic for forward and backward compatibility, so he ended up removing it. Other than that, that was really the only change made. So are there any other questions or concerns about this one? We almost approved it last week except for that one comment. All right, any objection to approving it? All right, cool, thank you.

All right, Christoph, number one. Actually, let me ask you, Christoph: is this the order you'd like to go through yours, or does it not matter? I don't know; it doesn't matter, really. Let's just take them in order then. What changed since last time on this one? I can't remember. Since last time, not a lot; I think Clemens made a few small suggestions, which I implemented, but nothing major, if I'm not mistaken. Okay, so did anybody have any questions for Christoph on this one? Do people need more time to review it?

I think this is a great proposal, Christoph; thank you for putting it together. I really like it. I did the implementation of it, and the only thing, and it's one of the things I'm going to put into the SDK notes, is that the new media type we're defining here for the format carries a bug risk. It's not too grave, but it's an easy bug to make, because application/cloudevents is a prefix of application/cloudevents-batch, and since we're using these suffixes for the actual format, my implementation in the beginning checked whether a message is a CloudEvent by doing a starts-with on "application/cloudevents" without including the plus character, and that would have matched the batch type as well, which would have been wrong. So I had to go and correct that, and I'm going to write up two sentences about this for the SDK notes so that nobody makes that mistake. But that's really the only thing I found. Otherwise, from a factoring perspective: since we've now moved this into the JSON spec, JSON effectively has batching, while what HTTP does, and that's really where it lands, is distinguishing the messages by media type; then you're mostly dealing with the JSON decoding and not with anything HTTP-specific. The only HTTP-specific piece is touching and evaluating the content type, so it's the right model. For me this is good, and we should merge it. Okay, any other comments or questions? Sorry, on the type: I mean, we could also call it "batched cloud events" if that makes it better; I don't have a strong opinion on it. Yeah, it's funny; as we were talking about this in the SDK call, I could see that the fact that it starts with "application/cloudevents" can actually be a good or a bad thing, right? Because if you just want to find all CloudEvents, geez, wouldn't it be nice to only have to
search the beginning part of the string. But then, if you only care about non-batch, you have to go a little further and check one more character. So that's interesting, because you get an impact either way. Yeah. So I put a couple of comments in here; I think things like "contenttype" need to be changed to "datacontenttype". I had one other question in here; do you have any comments on this one, Christoph? Yeah, I tried to make this a part of the JSON format, as I said, but I'm not 100% sure about it. What I'm trying to say there is: there's the JSON event format and there's the JSON batch event format, and those are technically completely separate things. Logically they're very similar, but when we say "JSON event format" we mean the non-batch one, and when we say "JSON batch event format" we mean the other thing. One way to make it even clearer would be to split it up into two files, but I'm not sure if that's a step too far; if you have any other ideas for making the distinction really clear, I'm open to them. I generally agree that the distinction should be clear, but it's not really separate, because one embeds the other. Yeah, I know, but in terms of "here's a list of event formats that I support", these are two different things; one builds on top of the other, but in terms of MIME type or whatever, they are different. Yeah, but I still think it's okay to have them in one file. I also think so; that's why I put it in one file, because they are so similar.

So, am I correct in assuming... okay, the wording here, I believe, says you have to support the traditional JSON format, but you do not have to support batch, right? And if so, is that what we want to say, or do we want to force people to also support batch? Personally, I don't want to force everyone to support batch mode. If we take functions-as-a-service as an example, you have the model where you get an HTTP request that should run on a single instance of a function, process the event, and then be done. So you kind of have the model that a single HTTP request should map to one event, and once you make a batch of events, this model breaks. That's one good reason why I don't think we should force everyone to do batching, especially in the context of serverless.

Okay, so then what I'd like to do is go back and think about it, but I'd probably like to modify this paragraph ever so slightly to make it perfectly clear that this requirement only applies to the JSON format. I know it literally says that, but the problem is I think people will very quickly confuse the batch JSON format with the normal JSON format and think they have to support both, so I'd like to augment the sentence just to make that clear. I'm not going to change the semantics; I just want to make it clear that we're only talking about the one format. Is that okay? Sure, that's okay. Okay, does anybody else have any questions or comments on this proposal? So, aside from those editorial tweaks, is there anybody on the call who would object to adopting this? I'm not necessarily calling a vote; what I'm trying to tease out is whether anybody would like more time to review this, because you're all awfully quiet today. So do people want more time to look this over, or do they basically think "nope, it's good to go" and it's just editorial tweaks at this point? This is Robert; looks really good to me. Okay, thank you, Robert. Anybody else? I think it's fine; take the quiet as assent. Okay, just want to make sure; I can sometimes never tell whether silence means "I agree and don't need to say anything" or "I just don't give a crap", so this is good; thank you for speaking up. Okay, so let's go with this. I believe, let's see, where is it... I think there are a couple of spots that have to become "datacontenttype"; those are obviously typo-like kinds of things, and I'd like to treat the tweak to that sentence as editorial. So, I'd like
to, at this point, ask for approval conditioned upon that editorial tweak, and then once I get, say, two LGTMs offline, we can merge it in. But that's all predicated on you approving this as it is today, with agreement on those minor editorial tweaks. So is everybody okay with that? Yes, I am. Okay, anybody else want to speak up? Okay, thank you, guys.

I want to mention a quick thing that came up in the discussion of this, in the discussion about the format; I think it's not in the inline comments but on the other page. Yeah, there was a suggestion about a different shape. If you scroll a little bit further down... oh, Doug made a comment, yes... was it further up? I missed it. Oh, there it is, yes. Effectively, if we want to send time-series data, and we say the time series is all encoded with CloudEvents metadata, which means every single time-series record is a CloudEvent per se, then for, you know, industrial data and so on, those batches can become very large, and then it really makes a lot of sense to use, effectively, a table format where you have headers; close your eyes and imagine CSV, but encoded differently. There are obviously ways to do that in JSON, and there are ways to do that with other encodings, and my suggestion over there is that we should probably look at Avro or Parquet as additional encoding formats specifically to satisfy that requirement, because I think it's a real case. If we want to send these fairly large batches, and I think we're coming to the minimum/maximum size thing, then we should probably have an encoding that's more efficient and actually really good at these time-series batches. So that much is a comment for, probably, a separate issue that
we should go and investigate, but I think this is a real requirement. I find JSON a weird format to do this in, because you have to kind of force it; you're dealing with an idiosyncrasy of JSON, in that it carries the metadata with every field, and dealing with that in a more efficient format is probably better. But I think the net of what you're saying is that this is almost a different issue from batching itself; this is a wholly separate discussion point, right? Yes, I don't think it's a batch of individual records; it's a different kind of thing. It is a batch, but not in the sense of "take an event in a particular shape and now send multiple of those"; you're literally starting from "I want to send time-series data, and here's a bunch of records". Right, so let me ask this question: is there anybody on the call who disagrees that this is a separate issue? I'm not passing judgment on whether it's a good or a bad idea, just on whether it should hold up this particular PR, because I'm hearing from at least Jim and Clemens that they view this as a separate issue and not necessarily a blocking one. Is there anybody on the call who would disagree with that assessment? Okay, so let's do this: I will reach out to Doug and talk to him about that one. All right, is there anything else related to Christoph's PR that people would like to bring up before we move on? All right, cool, thank you.

Christoph, number two. So we talked about this the last two or three times; I'm not sure if I should do an intro again. Basically, what it says is that there is a minimum size that every implementation of CloudEvents has to support, and I think today we want to discuss what the actual size should be. Compared to last time, I added a big paragraph where I'm trying to explain the difference between the size of the event and the size
of the message. What I'm trying to do here is define the size on the event itself, which you can then encode in one way or another. So maybe you take AMQP, and the AMQP message actually comes out above 256 KB, or whatever our limit will be; you still have to accept it, because the event contained in that message is within the limit. But the reverse isn't guaranteed either: if you take a different event, serialize it in AMQP, and it turns out to be below 256 kilobytes, there is no guarantee it will be accepted, because if you serialized it as JSON it might turn out to be bigger. So if you are a middleware that wants to guarantee you can forward the event, what you should always do is measure it in JSON; if you're just concerned about rejecting messages to protect yourself, then you can pick a somewhat arbitrary limit that will fit any 256-kilobyte (or whatever the limit is) JSON event. I hope that made some sense; I struggle a bit with explaining it reasonably, so I'm happy if someone reads it over and makes some suggestions. Apart from that, I think it's basically unchanged from last time. Anybody have any questions or comments? Is everybody okay with this 256k size?

All right, I think Clemens wanted to talk. Yeah. Are you telling me I should push back against it? Well, I want to be sure that you will end up supporting it. Yeah, I find 256k... that is four times the size of what we currently support on Event Grid. These numbers are always totally arbitrary, which makes it kind of hard to argue for or against them; 64k is as arbitrary as 256k, so having a rational argument for or against is difficult. The reason we made it smaller is to force everything that's PII, and everything that's a binary file, either into a callback model or to force a
claim check pattern. So one thing is obviously a scale concern: a torrent of messages floating around all at the same time, and doing that at platform scale, which is the reason we do this in Event Grid, because Event Grid is a platform-level capability that's there for all of Azure in one region. And the second is to literally keep the payload size small, so that you're effectively forcing everything that requires access control back to the source. With that size limitation, we're making it fairly obvious that you should turn around and go back to the place that raised your event to get at the stuff that requires access control. That matters from a privacy perspective, and I'm just explaining the product rationale here: since pub/sub is generally a thing where you raise an event and then have a bunch of parties able to subscribe without differentiated access control, you probably don't want to include too much detail in the event you're raising; you include just enough information for the other party to know whether it's relevant to them, and if they need further detail, they turn around and walk up to the sender, where they run into an access-control gate. If they're not authorized to see whatever personally identifiable data there is, they won't get at it. So based on those considerations, we landed on 64k being a fairly reasonable limit.

Okay, so, Jim, you have your hand up. Yeah, I do; hey, I'm masquerading. So I agree with what Clemens is saying. I guess I'd like to hear from some of the people who were arguing for larger messages originally; I believe they had IoT use cases where, you know, claim checks may not work for them. So I
understand, you know, Clemens' point; I just want to make sure whatever limit we put in is going to support those use cases people were concerned about. And again, I would add that it's better to start with a lower limit and raise it than to start with a higher limit and then wind it back, because that would never work. About that, I want to say I don't think raising it works either, because if it's a minimum supported size and you have pipelines where any middleware in the middle supports only version one, you'll only be guaranteed the smallest size that was ever specified in your major version. That's a fair comment, yeah, absolutely. I can't remember who it was; Doug, can you remember who was asking from an IoT perspective? Unfortunately, I don't remember, I'm sorry. If anybody on the call would like to chime in about that... So I'll sort of raise my hand then. Two things went through my mind. I'm nervous about the first thing I'm about to say, because I actually like having strong words in there, and from the interoperability perspective a MUST is obviously the right way to go. However, on something like this, where we know there are systems out there that can only support smaller sizes, should this be a STRONGLY RECOMMENDED instead of a MUST? Or do we lose something, because people will use the out and then it won't mean anything? That's one thing running through my mind. Sorry, I was going to say: are you advocating for a MUST support 64k, but RECOMMENDED to support 256? Honestly, my original thought was just to replace the MUST with STRONGLY RECOMMENDED; not change the size, just change it to strongly recommended. But then my other question was directed at Clemens: Clemens, are you advocating that we explicitly change it from 256 to 64? Um, I would... so if this were a STRONGLY RECOMMENDED, that would give me a pass on it. I know I
like it and I'm afraid of it at the same time. Well, my product considerations don't go away, but I understand the limit. I like STRONGLY RECOMMENDED better, because there are factually systems which will want to be CloudEvents subscribers; for instance, we have this thing called IoT Edge, and we want to use CloudEvents effectively everywhere, also in the embedded space. In the industrial space I'm also trying to convince some other standards groups to use CloudEvents, and I know there will be constrained areas where we can probably just barely fit a CloudEvent into a transfer frame. Having a recommendation here for the general, broad, everybody-is-in-the-cloud area is certainly better than a MUST, where it then becomes fairly impossible to have a compliant implementation for these cases at the edge, in embedded systems. Right, and those folks are relatively humorless when it comes to normative language. "Humorless", I like that phrase, yes. Christoph here; in that case, shouldn't we just say MUST be 64 kilobytes then? Because I like the strong language combined with the lower limit. I would have said: so, I'm the guy who sends events, and I need to make sure for my customers that they arrive; if they don't arrive, then I'm the one who has the problem. So basically I need to guarantee that my events will flow through the platform, or whatever they put behind my software system. For me, I don't really care what the limit is; I just want to make sure there is one. Yeah, the reason why a really hard rule is difficult is that the further you look down into different systems, the smaller the sizes get that you can support, and I think there's applicability for CloudEvents in those cases. So let's say if
you wanted to go and make CloudEvents work on TSN, which is time-sensitive networking, in case nobody has heard of it: it's a way to create an Ethernet setup that gives you guaranteed hard real time over Ethernet, with an extra layer of hardware and software. If you say "I want a message over at this other place in one microsecond", it happens in exactly one microsecond; the TSN layer does that. But TSN is not IP, it's not TCP, it's none of those things; you basically just get transport frames, and the transport frame you get to use is one and a half K. That fits a CloudEvent, especially if we add a binary encoding, for instance with Protobuf, but then you're literally constrained to whatever else you can fit into that payload, which might be sufficient for some real-time applications. So the further down toward the edge you go, the smaller the sizes get, which makes it really hard for us. I think we need a general interoperability rule which is true for all of us cloud people who do the PaaS and the Kubernetes and all of that, while leaving room for the people who are working on those kinds of systems so they don't feel repelled by rules like that. Are you implying that basically any minimum size would be a problem? Yeah, I mean, if I take the TSN case, then you're effectively at the maximum transfer unit size for Ethernet.

Okay, what do other people think? It seems like we're zeroing in on two possible choices: change the MUST to a STRONGLY RECOMMENDED, or potentially reduce the size. Or is there another option out there? Because I'm not hearing anybody complain too much about this direction in general; it's just tweaks at this point, maybe not big tweaks, but important tweaks. Still, I'm not hearing anybody object to this general direction, so is there another option people can think of, as a point of discussion? Sorry, Christoph, go ahead; I wasn't sure if Kathy was going to speak. In my initial issue months ago, I also laid out the idea, and I'm not sure if it's a good one, that we could have several layers: you support a spec version, and then you support, I don't know, "CloudEvents 64 kilobytes" or "one megabyte" or "one kilobyte". You could more or less explicitly say what you support, and then when you build up a system, you have to make sure you plug in the right parts; it doesn't make sense to plug a producer who does one-megabyte events into a TSN system that only consumes one kilobyte. So instead of having one set limit, you can choose or declare what you support. But it also adds a lot of complexity, and I'm not sure it's worth it; it's just one idea. That basically becomes profiles, is kind of what you're saying. Yeah, that's a good word for it. Yeah, I've heard different people have different reactions to profiles; some people are probably okay with them, and I've had other people go screaming from the room when you mention profiles, because they think back to the days of the WS-* standards, especially around security, when you had different profiles, and that was great: everybody was compliant with their own profile, so you had zero interoperability. I'm sure Clemens remembers those days. Yeah, absolutely, yes; I think the longest constant name in the .NET Framework is one of those profiles.

All right, is there anybody else who hasn't really spoken up who would like to? I'm not quite sure how to move forward here, because, granted, this is my completely biased opinion, but it seems like I'm hearing good arguments on both sides for both possible changes. I don't want to just come down to a vote; that didn't seem like the right thing to do on this one. I mean, obviously, if we have to, we will, but I'm looking for ideas on how you want to move forward. Do you want more time to think about it? Do you want to put the two choices up for a vote and see where the group lands or which it seems to prefer? I would suggest maybe giving more time, one more week, for people to think about this. All right. So, I think the transports and the event consumers are the ones that need to be compliant to support this size, right? This is basically the size that needs to be supported as the event goes along the way, all the way to the eventual event consumer, so all the middleware and the final event consumer need to support the size. I would like to go in the direction where we start with a smaller size and then see how that goes. Okay, so, Kathy, you expressed an opinion, a preference, in there; thank you for that. But you also suggested giving people a little more time to think about it, and I like that idea, mainly because I don't think this is critical or has to go in today. I don't think this is
necessarily going to change implementation code; this is more about setting guidelines for interoperability, so I think we have a little bit of wiggle room on timing. What if we do this: we give people another week to think about it. I will send out a note to the group laying out the two options we discussed on this call, and ask people either to come to next week's call with their preferred choice or, if they can't make the call, to express that opinion through email or through the PR itself. Then on next week's call we could look at some sort of start of a vote, or at a bare minimum get a sense of preference for which direction we'd like to go, and see where that takes us. Like I said, I'd like to avoid too formal a vote on this, because I don't think it's necessarily contentious, and I don't think there's an obvious answer out there. But what do people think about starting with that: come back next week, having had a chance to think about it a little more, and then try to force some sort of decision next week?

Yeah, sounds good. I have a question: when you say two options, do you mean two different sizes? Or is one option that we set a size, and the other option the one mentioned before, where you use some profile, some messaging sync-up, to decide on the size? The two choices that were mentioned were to decrease the size — I think the proposal was to decrease it to 64K — and the other option was to change the MUST to something like a strongly recommended. That way people do have an out if they need it, but we really, really want you to stick to the size we pick, which is 256 as of right now.

Could we also get some input from somebody who's doing IoT and considering CloudEvents? Clemens mentioned the people doing TSN, was it? And I don't
necessarily mean — well, no, what I mean is: if IoT producers are much more likely to use their own system instead of CloudEvents, then setting a lower limit that might impact cloud producers, where we have higher limits, might not be worth it. But if there is actual IoT interest in this, obviously a lower limit is worth considering. It's kind of a mean comment, but we have to take this into consideration too: in IoT and at the edge, producers are more likely to use a proprietary event system and event format. Obviously we do want interoperability, but still.

Okay. We're running long here, but I think the idea you had there is a good one. A lot of people on the call have customers, or work with people, in areas like the IoT sector, so maybe we can use this week to reach out to some of our colleagues and friends, see if they have opinions on this, and bring those to next week's call. Is that fair? Okay. With that, what I'd like to do is move on and see if we can wrap this up next week, giving people the week to think about it.

With the short time that we have, what I'd like to do is ask Christoph to quickly introduce his claim check thing, and then get a minute-or-two overview of the other new PR as well. Both of those are relatively new, so I wasn't going to push for a vote or anything like that today, but I did want to at least get the ideas out there and talk about them, so we can start thinking about them. So, Christoph, maybe you could summarize this one.

Yeah. This is basically the claim check pattern. There's a new attribute next to the data, called dataref, and it's basically a reference to where you can also get the same data; you can have both at the same time, or only one. There are basically three use cases outlined for it. The first is that the content is too large — basically what we discussed before — so if it is too large, you put it
into a different place, and the consumer can then retrieve it; in that case it would probably, or could, be public. The two other cases are more security-related. The first is that you want to verify that the data hasn't been tampered with: you would retrieve it again — maybe it's duplicated — and check that it's really the same, because you trust that source. The other, which Vince also talked about today, is that you have some personally identifiable information in there and you don't trust all the middleware in between, so only a trusted consumer should retrieve it. You don't put the data into your message; you only put in the dataref, and then only the trusted consumer has the secret to view it. I'm being pretty open-ended here: I'm just saying it's a URI reference, and then have fun with that. Also, on purpose, there is nothing inside the message that lets you authenticate at that point; the idea is that, one way or the other, the secret has to be pre-shared. That's the overview, I think.

Okay. As I said, I'm not going to push for a vote, but are there any high-level questions or comments about this that you'd like to share at this point?

I'm not sure why this is a first-class thing, and why it's not just inside the data. The event could be about not only one context but many, or about many aspects of a context, and then you have the distinction between a thing where you can only have one URI versus a payload where you would need two, and then you're breaking it out into something different. So it's not clear to me that this needs to be a first-class object.

I want to make sure I understood you, Clemens. Are you suggesting that another alternative would be to have a data content type that implies a claim check pattern, so that the data itself is basically the URL, or a set of URLs? Yeah: you have an object, and the object
effectively has a pointer to whatever you want to refer to. I find it a little constraining as a first-class construct.

Okay, anybody else have comments? People are commenting in chat: "Is claim check an eventing pattern, or is that messaging?" Oh gosh, the messaging-versus-eventing discussion — that's begging for a seminar from Clemens. No, I think it's legitimate, actually very legitimate. If you have a single large document and you want to tell the world about it, it would be unwise to send a copy of that large document to everybody; rather, you want to inform them that something happened with that large document and point people to it. So I think it's legit as a general pattern. It's just that, for me, the dataref single-URI thing — I'm not sure it adds a lot over just having a data field with a URI inside of it, plus the flexibility to have further metadata that explains what that URI means.

Okay, thirty more seconds: any high-level questions? Clemens, I'm assuming you can turn the comment you just made into a comment on the PR itself, so people can see it and think about what you said.

Could I just bat in one comment, since we've got twenty seconds left? Yeah, please. I'm going to show my ignorance: data is essentially an opaque blob, isn't it? So I guess, following on Clemens's point, my question would be: if it can be either an opaque blob or a reference to something else, doesn't that make it not very well typed? Or maybe I missed the thrust of what you were aiming at.

Well, I think if you receive the event, then what do you do: do you blindly pull the, you know, ten-megabyte payload without knowing what it is, or do
you rather have, in the data, some description of what that might be — which is application-specific — and then make a decision about whether you want to pull it or not? Because that's really the difference here, right? Here we're saying there's a payload that might be an arbitrary size, and we're not going to tell you what it is, but you have to go and get at it. Whereas if you make it a proper part of the event payload and say "aspects of this payload might be elsewhere," you're including URIs to those aspects without making the event itself more expensive. That's why — I'm not in principle opposed to having the payload be external and using a reference to it — just from a practical perspective, I'm not sure I would use it.

Okay, so for you it's more "I need more information as to whether I should want to follow this link at this time." Yeah. So my answer is not based on some protocol principle, but on what I think about as an application use case, and whether it would use that feature. Right. Okay.

All right. With that, please put your comments or your ideas into the PR, and we can start having offline discussions about it. But very quickly, Rachel, I'd like you, if possible, to summarize your PR here on creating a space for specs for proprietary protocols.

Sure. The PR, as I opened it, is extremely permissive. It says anyone who would like to add a spec for their proprietary protocol or encoding can do so by adding a spec that looks like any other spec for any other protocol, in a special place, and explaining what the protocol is used for. The comment I got yesterday, which is an interesting one, is that we should perhaps have a higher bar for proprietary specs: we should ask them to prove that if a CloudEvent goes in and then
comes back out, it is still in the same format. I think that's a pretty high bar, but I'm open to it if people want it. So that's the status. I'm going to leave it open for a week, since it hasn't gotten very many comments yet. Yeah, and from a procedural point of view, the DCO needs to be signed — the commit needs to be signed off. That's right.

Any high-order questions for Rachel? Yeah, I read that comment too, and I found the argument fairly convincing. I made another comment somewhere else, specifically on the RocketMQ PR, that goes in the same direction: I think the question is whether a spec that lives in our repo is useful for anybody outside of that project, and how that spec in our repo helps interoperability. I think that's what the comment was getting at.

When you say anyone outside that project, do you mean anyone who's not developing that project, or anyone who's not using that project? Let me use the RocketMQ example, because it illustrates the point well. You've got to be quick, Clemens, because we're running out of time. They don't seem to have a proper protocol spec; they literally just have code. So there's nothing the binding could really refer to except a version of the code, and then there can't be a compatible, interoperable version of the protocol, because it's only documented in the code. So the question for me is: what is the value of that binding if it only refers to something that's inside their project? It becomes really just an advertising service more than anything else.

Okay, I'm going to cut people off there, because I want to be respectful of people's time; it is the top of the hour. So just a quick last
attendance check. Joe, are you there? Joe Sherman? Okay. What about Christian? Right here. Okay. Ligia? Ligia? Yeah, I'm here. Okay, and what about Eric? I'm here. Okay. Joe, are you there? Joe Sherman, and Ligia? Ligia? I'm here. Okay, thank you. Hey, Dr. Joe Sherman? I'm here. Hey Joe, got you. Okay, anybody else I missed from the attendee list? I think I got everybody. All right, cool. Thank you all very much, and I apologize for running over by one minute. Talk to you next week.

Clemens? So we have this call in an hour, right? I have two calls between now and then, but we'll meet right here where we have it. And we don't postpone it for the other folks who actually opened the issue? Well, we can; I haven't heard from any of them. Have they written something in the actual PR? Let me look at it. Yeah, all right. Well, if we need to, we can postpone then. All right, yeah. Okay, see you later.
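To make the two shapes from the claim-check discussion concrete, here is a minimal Python sketch contrasting an event that uses the proposed dataref attribute with Clemens's alternative of putting a described URI inside the data field, and checking each against the roughly 1.5 KB TSN transport-frame budget mentioned earlier. All field values, URLs, and the frame-size helper are illustrative assumptions for this sketch, not part of any finalized spec.

```python
import json

# Claim-check style event (per the PR under discussion): the payload stays
# external and the event carries only a "dataref" reference to it.
# The attribute name follows the proposal; treat it as hypothetical here.
claim_check_event = {
    "specversion": "1.0",  # assumed spec version, for illustration only
    "type": "com.example.document.updated",
    "source": "/example/source",
    "id": "A234-1234-1234",
    "datacontenttype": "application/json",
    "dataref": "https://storage.example.com/docs/A234",  # where the data lives
}

# Clemens's alternative: no first-class attribute. The small data payload
# itself describes what is external, so a consumer can decide whether the
# large document is worth fetching before blindly pulling it.
embedded_ref_event = {
    "specversion": "1.0",
    "type": "com.example.document.updated",
    "source": "/example/source",
    "id": "A234-1234-1235",
    "datacontenttype": "application/json",
    "data": {
        "description": "quarterly report, ~10 MB PDF",
        "ref": "https://storage.example.com/docs/A234",
    },
}

def fits_tsn_frame(event: dict, frame_bytes: int = 1500) -> bool:
    """Return True if the JSON-encoded event fits in one Ethernet-MTU-sized
    TSN transport frame (~1.5 KB, per the discussion above)."""
    return len(json.dumps(event).encode("utf-8")) <= frame_bytes

# Both reference-style events stay far below the frame budget, which is the
# point of the pattern: the event itself remains small no matter how large
# the referenced document is.
print(fits_tsn_frame(claim_check_event))
print(fits_tsn_frame(embedded_ref_event))
```

Either shape keeps the on-the-wire event small; the trade-off debated on the call is that the dataref form is uniform but tells the consumer nothing about what it would be fetching, while the embedded form lets the application attach metadata (and multiple references) at the cost of not being a first-class, spec-level construct.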