Hey David, good morning, how's it going? It's going quite well. Good. We actually just upgraded to Google Mail corporate-wise, and all the CNCF meetings got dropped off. Oh, no. It's kind of funny, actually. It's not uncommon when mail services merge or change for something to get dropped off, and most of the external meetings did. Anyway, that's TMI, but I had to go back and read the notes to make sure I could get in correctly. Yeah. It is still amusing to me how, after all these years, we claim to have standards around mail protocols and yet the different mail systems don't seem to work quite right together. It's amazing to me. Yeah. It always seems to be a one-off implementation per site as well. Yeah, part of what we ran into is an old config of Exchange customized to corporate things, so that's part of it as well. Yeah. Is today's SDK meeting afterwards? No, this week is interop. Interop? Okay, our discovery interop. Yeah. All right. Hey, Matthew. You actually have a very short agenda today; I was struggling to find topics to add to the list, so maybe a quick call. I'm not sure anybody would mind. Hey, Tommy. Yo. Hey, Scott. Oh, okay, and Eric. Hello. Hey, Ginger. Hey, Doug, how's it going? Good. Oh yeah, I'm fine, made it through the debate and everything last night. I was watching that and I kept thinking, okay, what are people gonna do with the fly? I swear the only thing I could attribute to making it through the whole 90 minutes was watching the live tweeting going on, because it was hilarious. Man. All right, hey, Slinky. And there's someone else on. Manuel, are you there? Yeah, hi. Hello, Thomas. Hey. Hey, Remy, how's it going? Good. You're on time. Yeah, it's always good to be here earlier. We don't actually have a whole lot of topics.
So if you guys can think of things, please let me know, but otherwise we really don't have a whole lot to discuss, just a couple of issues I added to the end of the list that might be interesting if we have time. Mm-hmm. Hey, Lance. Hello. I managed to download it in time. Excellent. Hey, Mark, how's it going? Okay, here we go: one, two, three, four. Is it five sevens, or... Yes, I suppose. See if he gets that. Christoph, how's it going? Good, how are you? Good. See if I can spell your name right. Hey, Brian. Hello, Doug. Hello. Another minute or so until we get started. Mr. Mitchell, howdy. Mm-hmm. Thank you. Very well, I'll try to have the list. Hey, Nick. Hi, Doug. Sorry, I didn't see the person was up there. Keeps you guessing. Hey, Jim, you made it. Yeah, well, you pulled me into it. You're not supposed to say that! Holy cow, it was supposed to be secret, but yes, it worked. I do that to other people as well; you have no idea how many people I ping in the background. No, I'm just joking. It's just that I happened to be on the Slack channel and, I don't know why, but I noticed that your icon looked like it was typing, so I decided to pick on you. So anyway, all right, why don't we go and get started, it's three after... let's see, 18. All right, anything from the community you want to bring up? All right, just a reminder: we do not have the SDK call this week; this week we have the discovery interop call. I haven't seen anything going on with the doc itself, so that might be a quick call; be thinking about whether there is something you guys want to talk about there. I don't see Tim on the call, and offline he didn't mention anything too exciting going on there, so we can probably skip over that. All right, before we jump into PRs, any other topics people think I should have added to the list? All right, let's get underway then. So I did not notice any comments on the bulk import thingy. What do people want to do with this?
Clemens said he could not make the call today, so I did ping him about this when he mentioned that to me. He didn't mention having objections, but I also got the sense he may not have actually read it in full, in full disclosure. If this wasn't my PR, I'd say we'd merge it, because no one's had any complaints for two weeks now, but I'm also aware of the fact that, being the moderator, I don't want to be biased. What do people want to do? Are there any questions on it? This is for the management API mainly? Yes, yes. Well, I'm gonna be honest and say I have not read this one, but it seems fine. For the most part it feels good. Okay. Given the spec is so new, I'm inclined to say let it in and we work through PRs to fix it. I did implement this, so that was a lot of the driving force behind some of the changes I made as I was doing the PR. I mean, to be honest, even if we go so far as to say screw it, we don't want to do imports at all, or we want to rip it all out, I'm okay with that eventually. I just feel like I want to make some forward progress, because I would like to test this as part of the interop event if possible, and I believe the interop event is scheduled for, what, November 2nd or something like that, which is less than a month away. I didn't read it either; curious now, but I agree with your statement, I think we can merge and fix after. Okay, so I guess I'll just formally ask the question: is there any objection then to accepting this for the draft? Is it required? Is it required? I believe so, because, if I'm not incorrect, I think part of the scenario that we're going to talk about in the interop is setting up a circular list, or at least some sort of linked list, of discovery endpoints, right? And when you start doing that, you need some way to possibly mass-import stuff. But is that in the push versus pull model? This is definitely the pull model.
Yes, that seems fine, the pull... the push model? Well, no, this is the push model. Well, okay, yes, you're right, I was thinking of it backwards. You're right, this is the client pushing a whole bunch of things into a server, so this is more for initial-loading scenarios. Yes. Yeah, I mean, there's symmetry with the pull model as well, so it seems fine. Yeah, okay. All right, in that case, moving forward. Slinky, do you want to talk about your WebSocket one? I know you made some changes, so maybe it's too soon to merge, or maybe you could update people on it. Well, I didn't make any changes for five days. Oh, just one change. I did just one change to fix some conflicts with master, that's it. Oh, that's it. Okay, sorry then, I didn't realize. Yeah, and for me it's fine and you can go ahead and merge it. I think Clemens looked at it. Well, remember the name of the other guy that it looked good to? That's Thomas, I believe. Yeah, Thomas. Okay, um, you may want to take a look at the Travis build. No, the Travis build is failing because I do a link in the README, a link to the spec, and that fails because the spec is still not there. Okay, that makes perfect sense. Okay, in that case, um, I'll take your word for it that Clemens is okay; he hasn't mentioned anything to me offline. Does anybody have any questions about this, or comments? I am excited to use this. Do you feel comfortable, Scott, with us merging it without people having played with... actually, let me back up, that's probably an incorrect assumption. So, Slinky, you wrote this up: have you actually coded it up and verified from a coding perspective that everything sounds right? Of course, of course. There is a sample in the JavaScript SDK, if you go into the README, that basically implements this, except for the subprotocols part, and I implemented the subprotocols part and it's like five lines of code.
So cool. Feel free to submit a PR to the JavaScript SDK with those changes, for sure. Sure, and I'm not sure where they fit, honestly, because in that sample you don't have, like, a client. Well, yeah, we can think about it. All right, cool. Any other questions or comments? Okay. Oh, Jim, your hand's up. Yeah, just a very quick one; I must admit I've not read this all the way through. Is there a handshake? So, you know, when you've sent one of these events down the wire, do you know that it's actually been accepted, or is it like a fire-and-forget model? No, no, no, there is a handshake, but only to agree on the event format to use. Okay, so when I publish something, it's sort of in the wind at that point; I get no indication that it ever got anywhere? No, no, that's more the semantics of what you do with that. I mean, I think, if I'm not incorrect, this is out of the scope of the CloudEvents protocol bindings; we don't define semantics, we define only how to smash stuff inside. Right, yeah. Well, I think it's sort of right. I mean, we know from an HTTP perspective there's a webhook spec which sort of tells you how to handle statuses, and I assume that when the AMQP binding was written there was sort of this underlying assumption that AMQP is going to tell you when stuff got delivered or not. But does it need to be made clear that this is just a fire-and-forget protocol, or am I overthinking it? To be honest, I don't know.
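The handshake being described agrees only on the event format, not on delivery semantics, which maps naturally onto WebSocket subprotocol negotiation (RFC 6455): the client offers the formats it can speak, and the server picks one or rejects the connection. A minimal sketch of that selection logic, with hypothetical subprotocol names that are not taken from the PR under discussion:

```python
def negotiate_subprotocol(offered, supported):
    """Pick the first client-offered subprotocol that the server also
    supports, mirroring RFC 6455 negotiation: the server must answer
    with exactly one subprotocol, or fail the handshake (None here).
    Only the event format is agreed on; nothing about delivery
    acknowledgement is implied, matching the fire-and-forget point
    raised above."""
    for proto in offered:
        if proto in supported:
            return proto
    return None

# Illustrative names only; the actual registered names would come
# from the WebSocket binding spec itself.
chosen = negotiate_subprotocol(
    ["cloudevents.json", "cloudevents.avro"],  # client's offer, in preference order
    {"cloudevents.avro"},                      # formats this server can parse
)
```

Here `chosen` would be `"cloudevents.avro"`; with no overlap the handshake is rejected rather than silently falling back.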
I mean, a hypothetical application could decide to define, I would say, a protocol on top of it, so for example to say one end sends the messages and the other end replies back with an ack or a nack in the shape of a CloudEvent. But that's something that you define when you create the application. And the reason I asked that is because I was mentally preparing myself to look at a gRPC transport, and that same sort of thought was resonating with me as to whether that transport should have some sort of acknowledgement capability in it. Okay. That would be my only comment, and I'm not sure it's in scope for a transport. Any other questions or comments? Any objection to merging? Then we're approving. Thank you for that, Slinky. All right, that's it for the open PRs. A couple of issues I thought might be interesting, mainly because I was trying to think of which ones might actually impact the coding effort. This one, I think I opened because of a comment Scott may have made on a previous call, which is: should the epoch value actually be global, and not just specific to one particular service, so that we can do something like query a discovery endpoint and say, give me all the services that have been updated since a particular epoch value? Obviously, that's only going to work if you have sort of an increasing epoch value that goes across all services and isn't just local to one particular service. I actually like this idea a lot. I don't think it's a huge burden, because even though it does require some sort of locking or consistency mechanism across all the services, I don't think it's that big of a challenge for people to have an ever-increasing number across all of them. But what do people think? Good idea, bad idea? Need more time to think about it? I don't think you can do this. Really? Why?
Because if you have a chain of producers and you do that aggregation model, each link in that chain would have to increment that value, because each producer would have to be in charge of whatever that value is. So if you end up with that ring situation, you would get an ever-increasing epoch value as it synchronizes, because the value would always be different to that producer. Yeah, I wonder what the real-world scenario for that ring would be. I think accidental complex systems. Yeah, well, if I look at DNS and all these other maybe somehow related technologies, I'm not familiar with any concept there where they have something like a ring. Well, actually, wait a minute. I'm sorry, Scott, maybe that isn't a problem, because I think the way you described it is that if you have a ring, you would have to sort of synchronize the epochs, the next available or the highest epoch value, across the whole ring, and I'm not sure that's true. Because with the PR that we just merged, of mine, the mass-import thing, I specifically say that when you import something, the epoch value gets reset based upon what that discovery endpoint wants to set it to; you don't retain the epoch value. So I don't actually think you need consistency of epochs across the entire ring. I think you can still deal with things on a service-by-service basis, in the sense that if you want to pull something in and it has a different epoch value, you're gonna import it. I'm trying to say something here; I had something in my head, but it's just left. I apologize. I think even with your previous PR, in the push model, the producer you're pushing the new services to is going to get confused around the epoch, because it might not be... maybe it's not global to it.
I think you're gonna end up with a wrong value on the producer you're pushing to. But if the producer you're pushing to resets the epoch value, and basically ignores the incoming one because you're doing an import, he's gonna assign the next highest value, right? Yeah, but the whole point of having the epoch is to be able to compare what you have to what you're getting, right? I was specifically talking about import, not update. But in the pull model you do need that value, and you need to trust where it came from, so if you're doing, like, a service that does a combination of push and pull, now you're confused again. Okay, so let's walk through that. If you're doing a pull model and it's doing an update, and the epoch value of the thing you're pulling is less, I would assume you'd ignore it, wouldn't you? That assumes that your producer and the downstream producer have epochs that are equatable, like time, maybe. But if you turn it around and you're doing an update via push, the discovery endpoint should reject the request if the epoch is smaller, right? Because that means they're trying to update based upon an old thing and they need to refresh their copy. Well, okay, let's say, of two nodes in that ring, one increments epoch by one and one increments epoch by a hundred. They're gonna get out of sync, and the source of truth won't be able to push to the downstream, because its epoch will be significantly smaller than the thing it's trying to push to. Yeah, I see where you're going with it. Okay, never mind, let me think more about it. Maybe it's something I have to kill. Yeah.
The epoch is probably unique for every producer, and it's only comparable for a particular producer, unless it's something very complicated, like a Kubernetes service where you're using resource versions. I'm kind of wondering whether the entire idea of synchronizing between discovery endpoints might require its whole different set of semantics. I'm wondering whether uploading a set of services from an end-user perspective is different than uploading a bunch of services because you're trying to sync between two discovery endpoints. Yeah. Well, so, to give more context for the group, what I'm trying to do, what I would like to do with the two specs of discovery and subscription, is to be able to do upstream and downstream subscription propagation, so that I have a complex system that's delivering events with subscriptions onto it. As new subscriptions get added, it propagates the fact that there's now a requester of a certain filter downstream to the upstream producers, so you can block events if they're not being listened to, as far up the chain as possible. But I also want to bring down what's available from that chain to the downstream consumer perspective, so if there's a long chain, what's available to make subscriptions on comes from the fact that the discovery endpoint has aggregated all of the services all the way down the chain. Right, makes sense. Thank you. Do people have a different use case?
I would assume that usually there is somewhere a source of truth where someone deploys a new version of the service, or provides this discovery content, and from there it's propagated in one direction. Yeah, that's right, but you need to understand which service that thing came from, so that as it comes down the chain, larger epochs of the same service can be trusted. So I kind of feel like you can't change the epoch of things you've seen, even if they try to make it relative to yourself. And where the loops come in is, once you do have that long chain, it becomes very easy to make little eddies in the loop, where in the middle of your chain something branches off, goes a couple of hops, and then goes back into an upper part of the chain, and now you have a loop. So, one thing about your use case: does it apply? I mean, what I added here as a remark, does that apply to your use case as well, that along the way some data in that discovery has to be changed, like the subscription URL or something like this, or will it all be propagated unchanged? I think that might be up to each node in the chain. So you might want to say, actually, no, you get subscriptions from me and I'll delegate up the chain, or you might say, no, you go reach out directly to that consumer, or sorry, that producer. But I think it might depend. Okay, okay. I think more thinking needs to be done on this.
I'm starting to wonder how complicated the discovery endpoint synchronization problem is going to be, and whether that is something we need to address immediately, or whether we should first focus on just a simple administrative API. I would keep it simple to start and not worry about the semantics, because I think it's going to get complicated and probably unique per environment. Yeah. Well, if we did that simple approach, then does that mean that this issue becomes more possible? But I don't know what global means in this case. I just meant global within the discovery endpoint, as opposed to service-specific, which I think is what it is today. Oh, I think the only way to trust it is knowing which service it came from, right? Like, it's a relative value you're comparing between different versions of that instance of the service entry; it's nothing more than that. Epoch has no actual meaning. Right, right, and that's the way it's defined today, right? You can only compare it against a different version of the same service. But what this doesn't allow you to then do is to say, give me all services that have been updated since a particular epoch value. And I'm wondering, how useful is that scenario? I thought it was useful, but if it's too difficult to do, then we can drop it. I just thought that was kind of an interesting thing to do for somebody who wants to sort of monitor a discovery endpoint but isn't doing it through notifications. I think we should punt on that, to be honest. Let's kind of get it up and running, and then once somebody comes back with the problem of, hey, I have this discovery endpoint that has a thousand services and it's too difficult to understand what's updated, then we solve this problem. My opinion. Okay, anybody else have an opinion on that?
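The "changed since" query that a global epoch would enable is simple to state in code. This is a sketch of the proposal under discussion, not an agreed feature: it assumes a single monotonically increasing epoch that is global to the endpoint (the open question above), and a hypothetical entry shape where each service carries its `epoch` value.

```python
def services_updated_since(services, since_epoch):
    """Hypothetical 'give me everything updated since epoch N' query.
    Only meaningful if epochs are comparable across all services on
    the endpoint; with today's per-service epochs this filter would
    compare unrelated counters and return garbage."""
    return [svc for svc in services if svc["epoch"] > since_epoch]

catalog = [
    {"id": "orders",  "epoch": 3},
    {"id": "billing", "epoch": 10},
]
changed = services_updated_since(catalog, 5)  # only 'billing' qualifies
```

A polling monitor would remember the highest epoch it has seen and pass it back on the next call, avoiding a full re-download of a thousand-service catalog.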
Okay, I'm okay with holding off and waiting, so we can do that. All right, in that case, one of the other ones I thought was interesting was from you, Scott: you were suggesting that it might be nice to have labels, the third one. Yeah, labels. So, for example, here — and as I was looking through the issues today to see which ones might be of interest, it dawned on me that I'm having horrible flashbacks to buckets for extensions in the CE spec. And I'm wondering whether... go ahead. That was a terrible name for them, but yes. So technically, what is the difference between this label versus an extension at the top level called "prod" or whatever it is? What's the difference to you? I think the difference is that labels actually have semantic meaning, meaning it's an identifier. Right, it's metadata, instead of something that is an actual property. Jim, your hand's up. Yeah, I guess I would echo Scott. It seems that if you want to look for labels, you want to look somewhere specific, not just at random stuff appearing in the higher-level object. It's just a way of logically grouping things, so you know where to find stuff. Right, I kind of get that, but I guess, if labels were only the way of sort of adding tags to things, like a GitHub label — strictly a tag, right, and that's the only thing GitHub does with it: it's just tags, you can search on them, period — however, if you look at how they're used inside something like Kubernetes in particular, annotations and labels are kind of done the same way, where people use them to sometimes change the semantics of what goes on behind the scenes, right? So they're not simply a tagging mechanism or a searching thing, right? Okay, and that's when I start wondering, well, okay, at what point, how do you distinguish whether that's just a tagging thing versus a semantic thing? And to say, oh, well, you shouldn't use a label as a semantic thing, it's a property then...
It's a property then It gets very very fuzzy to me between the between the line Which is why we killed off the entire cons of the buckets to begin with in the ce spec it isn't they It's I get the difference between See, I don't think these are these are tags anything that's not A key and a value is almost like well. It's a pair. Yeah tags for me are just a list of random things but At the end of the day, aren't these only of value to the To anybody that has to understand them. They don't You know, they don't need to have value for anybody else Right and I would I would claim the same thing is true for top-level extension properties Right. This is this is one of the things that keeps running through my mind is is Anything anything you anything anybody could possibly say about what is special at a label versus what's special about an extension I bet someone could make the exact same argument and switch it and say no I'm going to use label for exactly what you want to use type property for or the other way around Because I think I think I'll let I'll you know yield after this one But I think the point is that if you don't use the word bucket if you group stuff like that at least it's safe Yeah, um, you're not going to get future collisions at the At the outer layer, you know if I if I Add an extension of a particular with a particular label And then you come along and change the spec later on now. Now there's a collision. Yeah, because you haven't You haven't got a namespace for those For those tags or labels They're repeating history here. Yes, we are No, no, it's fine. It's it's bound to happen. 
So, Thomas, your hand's up next. It's actually funny, because we just introduced that. We plan to use CloudEvents, and we just added labels to it as optional things to mark certain attributes and give flexibility to the teams implementing that, and there I see the huge advantage: you give so much more flexibility. There's always a trade-off to this, of course, because then you're out a little bit in the wild, but it gives a lot of flexibility. That's what I see, and we see it more like your label issues, I think, in GitHub — that's also available, right, where you say, oh, this is a bug, or this is a to-do, or this is this and that. I see it more that way, more in the way of a tagging thing, and you use it to group or something like that. Okay, Scott, your hand's up. Just wanted to point out that using labels and annotations in Kubernetes for stuff that should be in the spec is an anti-pattern. Shame on you, Doug. I'm thinking about Knative. Yeah, we use it to turn on and off features of how to interpret things, but really it's not a great pattern, because there's a bunch of other things you don't get. If you change the label, you don't know how to compare the spec, and you don't know which version of the labels is reconciled currently and which one's failing; it causes all sorts of problems. For this, I think maybe a distinction we make is that as you're importing things, there's no requirement to persist the labels from the downstream, so those labels are yours, to be able to understand that record, and they're maybe not linked to the epoch. Now you're adding a whole bunch of complexity to it. Right, I want these to be metadata: if I'm a producer, I have this list of services, I have applied some labels, maybe I've allowed some labels to propagate down and I've appended some more, but it's for my consumers, to kind of give context to the service entry. So are you actually suggesting that
on an import command we actually don't retain the labels? I think it's the judgment of the producer to choose whether it wants to save or project the labels. Right, because I could assume you would do some sort of blocklist; maybe there are sanctioned services that get propagated down, and maybe you use those labels to be able to restrict the propagation of that service, or the import of that service. I'd think about that one. Okay, well, we don't have a PR either way on this one. I just wanted to get a sense of where people's thoughts are, because I gotta be honest with you, I still see them as being no different than extensions. And I know that it's hard to think of it that way for some people, because of the semantics that go along with labels, but it's just a name/value pair to me, and where it sits doesn't matter. But okay, so if somebody wants to make a PR either way, feel free to; I just wanted to get the discussion going so we could try to resolve the issue one way or the other. Yeah, I raised this issue just to ask the group: is it worthwhile to try to introduce this before the effort of writing the PR? Well, there at least are some people who are interested in labels, is what I heard. Take that for whatever it's worth, Scott. Okay. Next, um, I don't want to talk about the extensions ones; let's talk about the one that Manuel brought up here in chat. Who is this? So, Alex Collins. So, Manuel, since you wanted to talk about this one, do you want to introduce it to the group? Uh, yeah, the title is a little bit misleading, but here's what I got from Alex Collins.
He reached out to us and asked about, yeah, standardizing this. When you do webhooks from GitHub and GitLab, you get different headers set that try to authenticate with whoever receives the webhook, and he sees this across the different kinds of event sources they are getting data from. I think what he wants to have is a somewhat unified way of how these sources authenticate. But the interesting thing that came up here is this: Alex Collins is from Argo, and Argo uses an event gateway that talks CloudEvents. When you use the standard webhook, you get an HTTPS channel to your receiver, so that is a confidential channel, and you can use the JWT — the Authorization token, the bearer kind, with a JSON Web Token in it — and that one would authenticate with the receiver. But since they are introducing this gateway, since they have this intermediary, what it does not guarantee is that, from the producer to the eventual consumer of the payload, the content is not messed with. Yeah, this is something for which you would want a message or payload signature. Um, I think this problem might have been solved with the use of the Authorization JWT in GitHub and GitLab, but what we don't have in CloudEvents is a message signature, or any word on how to use signatures, if usable, from transport layers or whatever. So I wanted to bring this up and ask: how do people feel about message signatures, or am I maybe overlooking something? Is there maybe, in the JSON Web Token or the OAuth specification, a way to also introduce a signature of the payload, that is, of the data transported in the HTTP webhook? Anybody want to chime in?
I know it's something that has been nagging at me for a while, and I think there's a need. I can understand how we can add signatures when we're using, I don't know, the base64-encoding stuff. I get a little bit concerned as to how we would do signatures in JSON, in sort of structured mode, because that's going to be a bit interesting. But I think there needs to be a way to do sort of signing and verification. In GitHub you actually get both, with their own header. So what they do is they sign the HTTP message payload, and then whoever receives it can check, with the header they received, whether the signature is correct. In GitLab you don't get that; they only send a token. So, really, it's about end-to-end, producer-to-consumer message signatures. I'm not sure how to feel about this either, but I thought it might be an interesting topic. There is Java... oh, sorry, sorry, JSON Web Signatures, and it's used as part of JSON Web Tokens to sign the token. There is this JSON Web Signature standard, and I think this is a signature that works on JSON; it could be used for structured mode. The only thing is that when the transport is, what is it, the HTTP transport where CloudEvents parameters are put in the HTTP headers, you'd have to recreate the JSON structure first before you can verify the signature; that might be a bit of an overhead. And then I don't know if that should sign the entire CloudEvent, or if selected headers should be excluded from it. So, really, anybody have a use case?
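The GitHub mechanism described above is a concrete reference point: GitHub sends an `X-Hub-Signature-256` header whose value is `sha256=` followed by the hex HMAC-SHA256 of the raw request body, keyed with the shared webhook secret. A minimal verification sketch — note this only authenticates one hop between two parties sharing a secret; it is not the end-to-end, intermediary-proof signature the group is debating:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, header_value: str) -> bool:
    """Verify a GitHub-style webhook signature.

    Recomputes HMAC-SHA256 over the raw (unparsed) request body and
    compares it to the received header in constant time. Any
    intermediary that rewrites the body, or any re-encoding between
    structured and binary mode, invalidates the signature, which is
    exactly the transformation problem raised in the discussion.
    """
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header_value)
```

GitLab's token-only approach skips the digest entirely: possession of the shared token proves the sender's identity but says nothing about payload integrity, which is the gap being pointed out.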
It seems like a pretty interesting extension. So, in Knative, we're looking at similar ideas, but nothing formal yet. Basically, we want to know who is authorized to receive a certain event, and so we want some way for the producer to be able to say, I've made this thing; send it down a bunch of middlewares, and the middlewares can filter based on subscriber authorization. Do you happen to know if the JSON Web Signature is flexible enough to select only parts of the JSON structure to verify with the signature? Because in HTTP, in structured mode, you can also promote member fields into the headers. But it's important to make it work with every protocol, to make it lossless. So, I just want to make sure I understand: he's just looking for us to standardize it, or... Yeah, standardize a particular signature header, right? It seems like... I think with CloudEvents, checksumming might be, uh, not a great pattern, because it could change format and still technically be the same message, or it could change transports and still be the same message. So we'd have to think about how we do signing for the consumers to understand that a producer produced this variant of the message, while we allow for extensions to get globbed on in middleware. So, like, how do we deal with that? Yeah, and I seem to recall that in the past we purposely kind of avoided security, because it's a whole rat hole all by itself, and I'm wondering whether we want to dive into that at all. Jim, your hand's up. Yeah, sorry, I completely had to drop off, um, so I missed the last couple of minutes. But when we talk about signing, are we talking about the event data, not the whole CloudEvent? Is that correct? Just the business-data side of it. Oh, that's interesting.
Yeah... that would solve my concern, because if you wanted to use JWT, then you just would: you would send it as a JWT, or some sort of signed construct, in the data payload. If you wanted to make sure that, I don't know, the source is not replaced, or other fields of the event, wouldn't you want to select those and make a signature for them as well? Yeah. Yes, and I think that's the trouble. Yeah, there are two levels of signing: one is the sort of enveloping construct, and one is the data itself. And probably they need to be done independently, because I don't think CloudEvents needs to make any statements about how you choose to sign or secure your content — that's really up to you — but it's more concerned about the enveloping aspects and the attributes. But middlewares are allowed to change the envelope. Right, so the middle bit is the contract then, between the producer, the consumer, and the intermediary. Yeah, they're the ones that need to know that the enveloping wasn't being messed about with, and the data is always passed without interpretation, or untransformed, potentially. And if you want to transform it between formats, then that would have to be a trusted party, and then your relationship from a signing perspective would be with that translator, not with the end producer. The trust relationship, yeah. I sense someone's digging a gigantic hole. Yeah, yeah, I've got a spade. Okay, I'm not saying that, but I'm gonna ask a question. I'm not suggesting that if the answer's no we should necessarily close the issue, but I am curious: does anybody actually want to head down this path? I think we need statements about it. I think we need principles or something around, you know, where the responsibility lies, if nothing else. Yeah, and if we want to ensure that those CloudEvent attributes have not been messed with, then we will have to address it.
And sort of formalize how those should be signed, I think. Well, that's kind of what I'm asking, right? Do we even want to touch that? Because it is a whole big ball of wax. It is. And there are lots of different specs out there that already talk about how to handle security.

And getting agreement on interop... I'm just having flashbacks to my web services days, right? I mean, we tried to create these web service specs, but everybody wanted to do security slightly differently, so we created a framework, but there was zero interop. But hey, we could claim interop because we all adhered to the WS-Security spec. Yet each exact mechanism within the WS-Security spec was implemented by just one company, so you had technically zero interop. And I think the fact that we got zero interop is telling: maybe people say they want interop but can't get there.

But I mean, do you get to the point where maybe your statement is simply, you know, the signing of the payload is out of scope? That's somebody else's problem, and you have a trust relationship with the endpoint that you're delivering events to, and it's that trust relationship that implies those headers are not going to be mutated along the way. It sounds very hand-wavy, but I think you need to make a statement one way or the other and say it's either definitely in scope or out of scope. And I don't know where that statement lives. I just want there to be something somewhere, maybe it's in the primer, that says, hey, we're not going to tackle security, that's out of scope. But I can check.

You were a little hard to hear there, Eric. I think you said there's something someplace where we decided to punt on it. Is that what you said?
Sorry, yeah, I was part of that earlier discussion. Clemens was the main objector at that point, and even though he's not here, I've actually been expecting him to speak up because of it. It sounds silly in my head, but here we go anyway: we basically said that we were going to punt on it, at least until we ever dealt with it.

I think there's another concern. If something like this needs the original producer of an event to sign off on that event, and then an intermediary wants to add to that event, you probably need to leave that original signature in place, because in some ways it's going to have to be something only that original producer can produce. And then the augmenter of the event is going to have to add some kind of additional signature and specify what they added, or something like that. I don't know, maybe I'm off on a weird tangent, but I think there's some really weird stuff that can come out of this.

Okay, so is there anything that people want to suggest in terms of a next course of action on this one? Oh, Klaus, your hand's up. Yeah, so I remember that in the early days I at one point asked if the context attributes may be modified by intermediaries. I think it was for that discussion that we originally introduced the term intermediary, and the result was that yes, that's possible. So if you want to allow this, why would you now create a signature mechanism to prevent it? I think the signature doesn't prevent the modification.
It's just to secure certain fields. So you could still have a lot added to the event, but you wouldn't want the source to be changed, for example. Yeah, so that's what we added in the primer or somewhere: that if you change certain fields like source and id, then this is technically a new event and not the same anymore. But yeah, so that was a better example. In the primer, I think we talk about how the envelope properties should be regeneratable from the payload, although that's a recommendation, not a requirement. Maybe, I don't know. Hold on, let me see if I can bring up the spec or the primer.

So by the way, the spec has a section about security. It just mentions that the context attributes shouldn't contain sensitive information, because at that time I think we always assumed that just the payload would be encrypted. It doesn't really touch the signature topic. Yeah, I wrote that, I think.

Isn't that why we introduced dataref? So that we could delegate encryption to a second party, just in case some stream of events gets replayed. But then you get this trouble: if you encode the key in the CloudEvent, then you can't do key rotation if you have a historical event stream. I think that was one of the use cases. Yeah, the other one, the primary one for that, I believe, was sort of large payloads. Yeah, I thought the main driving force for that one was large payloads. Well, you could certainly use it that way, because that's another interesting scenario.

I'm trying to find that section you were just talking about, Scott, but nothing's jumping out at me. Okay, but as long as it's in the data, that is an end-to-end or application problem. So the application should deal with it.
That is, between the producer and the consumer. I think we agree on that. Or should CloudEvents provide a field to store a signature? Is that really off the table, a signature of the payload? So I think if you were to add a new optional property called signature, you'd then have to define what it was for. I don't think anything stops us adding it, but you'd have to then be prescriptive about what that signature was. This is a very good case for our extension model: it's a formal extension that's not part of the spec, but if it gets adoption, then it gets promoted into the spec. Good, cool.

Scott, was this the text you were thinking of, the stuff I highlighted? Yeah, that looks right. Because that's not quite the same thing as saying, hey, we recommend you be able to recreate all of the CE metadata. It just says it may be duplicated in some cases. Yeah, I might be quoting an old version.

Okay, so we're almost out of time, but back to the issue. Is there somebody who actually wants to try to take a next step on this, or is it still unclear whether we want to do anything? Well, since you're the one that mentioned this: does somebody actually want to, like, create a PR, or add more discussion to the issue? How do you guys want to move forward on this? I'd wait on this one for Alex to get back. If you want, I can ping him and ask him if he needs anything added or if he wants to drive this forward. And if anybody wants to pick up the signing work... I personally don't have a use case for it, but if anybody has a use case, they can bring it up again. Okay, yeah, that'd be great if you can poke at him and see what his thoughts are.

Okay, with that, on to the end of the list. Klaus, do you want to talk about this one at all, or defer it? We're almost out of time. Which one? The null values one. Yeah, the null value one. I just wasn't sure if you wanted to talk about it, since you had some thoughts. I don't know.
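The structured-versus-binary point raised earlier in the call matters for any extension attribute such as the proposed signature: in the HTTP binding's binary mode, context attributes and extensions travel as `ce-` prefixed headers, `datacontenttype` becomes `Content-Type`, and the data becomes the body. A minimal sketch of that mapping, assuming a JSON data payload and a hypothetical `signature` extension:

```python
import json

# Sketch of the CloudEvents HTTP binary-mode mapping: context attributes
# (including extensions such as a hypothetical "signature") become ce-
# prefixed headers, datacontenttype maps to Content-Type, data is the body.
def to_binary_http(event: dict):
    headers = {}
    body = b""
    for name, value in event.items():
        if name == "data":
            body = json.dumps(value).encode()
        elif name == "datacontenttype":
            headers["Content-Type"] = value
        else:
            headers["ce-" + name] = str(value)
    return headers, body
```

This is why the "lossless across protocols" requirement keeps coming up: whatever signing scheme is chosen has to survive this promotion of attributes into headers and back.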
Okay. We don't have to; we're almost out of time anyway. That was that old discussion where I found that link to Slack somewhere down in the discussion... or was it that one? Yeah, the one we tried to merge. Ah, yeah, okay. I'm not sure if I click on it whether it'll show up properly, so I won't click on that.

I just remember that while preparing the demo for Barcelona, during the debugging session the night before that demo, we encountered some problems, and it was originally due to some null values in the attributes. Then we had that discussion about how to handle it, and whether an attribute not being present would be the same as no value, as a distinction from the empty value, of course. I'm not sure what else was discussed in that issue; I mean, I didn't open it. It was also an SDK discussion, I think. Yeah, how this is handled in the SDKs, I suppose.

Yeah, I was going to ask: how do people, either in the SDKs or just in general, feel about this? Should null be semantically equivalent to absent? What are the SDKs doing with this stuff? Ginger or Jim, your hands are up. I don't think so. I know I would have to go back to the CloudEvents spec and see whether it even mentions this. Absent any particular encoding scheme, does it actually say that an attribute can be present but empty? Yeah, well, what's interesting is that the spec specifically says, for almost every single attribute that it defines except for extensions, that if it's present it must be a non-empty string, or something like that. Right. But what's funny is Clemens, you know, he's been here from the beginning, and he still interpreted that as giving you the freedom to say, oh, it could still be null. Now we can come back and say Clemens is wrong. If it's the word null in a string... No, no, he means nil. Yeah, sorry, nil, whatever. Which to me, you know, is very JSON-centric. Yeah, it doesn't follow the spec.
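The null-versus-absent-versus-empty distinction being debated can be made concrete. This is a sketch of one possible SDK policy, the "null is semantically equivalent to absent" reading, not what any particular SDK actually does: null-valued attributes are dropped at parse time, while empty strings are kept as present-but-empty (which the spec then rejects for most standard attributes).

```python
import json

# One possible policy for a structured-mode JSON event: treat a JSON null
# exactly like a missing key, but preserve empty strings as present-but-empty.
def parse_structured(raw: bytes) -> dict:
    event = json.loads(raw)
    return {k: v for k, v in event.items() if v is not None}
```

For example, `{"id": "1", "subject": null, "type": ""}` parses to an event where `subject` behaves as if it was never sent, while `type` is present with an empty value and would fail the spec's non-empty-string validation.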
It doesn't follow the spirit of the spec, to me. Okay, Slinky, your hand's up. For me, there should be a difference. If you take HTTP, for example: an HTTP header cannot be empty, so this doesn't apply to an event that comes from HTTP. In general, in the SDK case, like in Go, we don't make any distinction. In Rust and in Java we make the distinction, but only because the language allows us to. But you will never get an empty attribute in the Rust SDK, for example, when you receive the event from HTTP.

So I just looked at this for Go, and it looks like we can't support the JSON nil value without custom marshaling. I think there shouldn't be a nil value, but of course empty values should be possible, although the standard attributes usually don't allow it. Well, if I receive a null value, and now I'm thinking about sdk-java, if I receive a null value in JSON, it's just null in the CloudEvent. If I receive an empty value in JSON, the attribute is an empty value, so it's an empty string, for example. While from HTTP it's always, I mean, null or something; it cannot be empty.

If the attribute is not present at all, what does the consumer of the CloudEvent see? Null. It's null just because that's the semantics of Java. In Rust it's None, because that's the semantics of the language. I think that's what Clemens stated as well: of course, for strongly typed languages, internally you will have null values.

So how, if I had a structured JSON CloudEvent with the word nil against an attribute value, what happens when I want to turn it around and send it over as a binary payload? Am I meant now to put an HTTP header in with the string null next to it? No, and it's a good thing you mentioned that, because it should be absent. The header shouldn't be there, you're right. But my point is, why is it?
I still don't understand why it's even in that JSON structured document in the first place. Well, maybe it shouldn't be there. I mean, maybe in the JSON format spec we should say that null is not allowed as an attribute value. Well, in JSON it's very important for patching, because you want to know if you want to clear out a value on some struct. No, I mean, why do you need that? Because if the attribute name is not present in the patch, you don't know to clear that particular field. So if the attribute is present and the value is nil, then you know that that update is asking for that property to be deleted.

Okay, so I'm going to have to call time here, because, I apologize, I didn't realize it was already after the top of the hour. So let's try to continue the discussion in the issue itself, because I do think we need to resolve this one; whether null means absent is a little bit ambiguous. Okay, bye Scott.

So thank you all for joining. I guess there's one... hold on a minute here before we let people go. I think I only missed one person. Asashi, are you there? No, they left. Okay. Please make sure I got your name for the attendance list.

Does anybody have a topic for the discovery interop call? Because I know Scott had to run, and Slinky's running. Is there anybody who was doing the interop stuff who has a topic? If not, we'll cancel the call for the next hour. I do not, I think. Okay, that's fair. Yeah, I didn't have anything myself either, so we can just cancel the call. Okay, in that case, we will cancel the call.

Thank you, everybody, for joining today, and please do comment on some of the issues we talked about here. Try to get a discussion going, whether you want to close the issue or someone wants to PR. Please try to get some discussion going. All right, and with that, thank you, everybody, for joining. We'll talk again next week. Thanks. Bye, everybody.
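As a footnote to the patching point raised just before the call ended: the reason JSON cannot blanketly treat null as absent is JSON Merge Patch (RFC 7396), where a null member in the patch means "delete this field" while an absent member means "leave it alone". A minimal sketch of that algorithm:

```python
# RFC 7396 JSON Merge Patch: a null value in the patch deletes the target
# member; an absent member leaves the target untouched. This is why null
# and absent carry different meanings in JSON documents.
def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means: delete this member
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result
```

So patching `{"a": 1, "b": 2}` with `{"b": null, "c": 3}` deletes `b`, adds `c`, and leaves `a` alone, which is exactly the distinction the speaker was defending.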