Hey, Mark, gotcha. All right, it's three after. Why don't we go ahead and get started? Let's see. So just two AIs (action items) I want to bring people's attention to. Rachel, whoa, hold on, what did I do there? Rachel, any updates on your PR? Yeah, I pushed up a branch. But Jem Day had reached out and said he wanted to collaborate on this. So I'm going to give it a beat so he can look over it before making a PR to the main branch or to the main repo. All right, cool. Thank you very much. And Clemens, this PR is the one about, what is it, the key thing or something like that? Yeah. Can you take the action item to really, really pester and nag people on it? Yes, yes, I will do it this week. I just had zero cycles this week. That's fine. I just wanted to make sure that we don't stall too long on that one, so I appreciate it. I'll take care of this. I'll try to get people on the phone sometime Monday or Tuesday. Otherwise, we're just going to let it sit. OK, thank you guys. I think everything else is minor. We can keep moving forward then. Community time: any community-related issues people would like to bring up? All right, not hearing any. Sorry, go ahead. Community related. I had talked about the CNCF meet-up on Kubernetes, doing the demo there. But unfortunately, we don't have a meet-up in February. We have the next one on March 21st. But since that's such a long time away, I was wondering whether there's a preference in this group, whether we should keep the demo endpoints up longer or just wait until Barcelona to do that demo at meet-ups. I personally have no problem keeping up my endpoint. What about other people on the call? Anybody else have endpoints on the call? Yeah, we have one. We have no problem keeping ours up. OK, I would assume people are going to keep their endpoints up. Every now and then, I do kind of run it just to poke everybody, just to make sure they're up and running.
And what we can do is we can just assume people will keep them up until I start seeing some drop-off. And then we can revisit the issue, if that's OK. Yeah, that's fine. But before the end of February, it would be nice to lock in whether we will have that demo or not in the meet-up, so they can plan their schedule. Yeah, obviously, if you know for sure you're going to need, you're going to want to demo something, make sure everybody knows, so they don't take it down by accident. Yeah. OK, anything else? All right, SDK. We did not have a work group meeting, or SDK subgroup meeting. So I don't think there's anything to bring up there, unless there's somebody on the call who can think of something that they want to bring up. OK, moving forward then. Kathy, is there anything on the workflow subgroup you'd like to mention? Sorry, I was on mute. Not really. But I think we probably need to start working on cleaning up the document a little bit to make it more consistent. So I think I'm going to probably start working on this maybe next week, and then propose some PRs for review. OK, that sounds good. Yeah, I think at some point we probably need to figure out what the next steps are going to be with this document. Let's talk offline about that, because I think we need to figure out how we want to move forward with this thing. So you and I can talk, and then we'll bring back the proposal for the group. How's that? OK, yeah. Sure. All right, so Scott sent out a rough draft document for the next demo idea. And the link is in the agenda doc right here. Scott, is there anything you'd like to say about this one, other than just asking for people's feedback? Scott, you still there? Yeah, it's going to happen shortly. No, just please review and do feel free to make changes or comment or whatever. It's an idea. If you have a better idea, please bring it up. Yep. OK, any high-level questions for Scott before we move on? OK, yep.
So please, everybody, when you get a chance, take a look at that, because I think right now the current goal that we talked about is trying to get this demo out there in time for Barcelona. Which is, when is Barcelona? May? I can't remember, for sure. So it's the end of May. End of May. OK, there you go. The 20th through the 23rd or something like that. OK, there you go. So we've got a couple of months, but time does fly very, very quickly. I'm sure everybody is aware of that. So please review when you get a chance so people can start coding things up, so that we don't feel too much pressure as the day gets closer. All right, moving forward, PR review. So Tapani, you did make a change to this one. Thank you very much. The changes look good to me, but I wanted one more LGTM before I merge it in. So everybody take a quick look. I believe everybody was OK with the general direction. It's just Mark wanted some editorial tweaks to remove a whole bunch of "or"s from before. So we just changed that in two different spots now. You guys take a quick second to look that over. All right, any questions or concerns with this? Any objections to approval? All right, easy one. I like those. It looks like it's not changing the meaning, right? It looks like it's just rearranging words. Did I miss something? I think there were two things. One is, yeah, he added integer, and then the rest of it was just wording, yes. There are no normative changes. It was just missing integer from the description of the Any type. It was only in the list. Gotcha. All right, so one more time. Any objection to approval? No. All right, thank you guys. All right, Fabio, I don't think Fabio... Actually, relating to that PR, if you open it again, there's also a line above it that says this specification does not define numeric or logical types, but we do now define a numeric type, integer. So let me ask this. Should we just remove that line entirely? In my opinion, yes.
Clemens, I think you might have written this line originally. Would you object if we just removed that line? We can take that out, yes. OK, anybody else on the call, any objections to removing it? I mean, it's completely non-normative anyway. Do I vote with a "true"? But I'm bummed. Thank you, Mark. Sure thing. I'll be here all day. A little levity. That's good. Give me a second there, Tim. OK, any objection to removing that line, since it's technically incorrect now? OK, Tapani, would you mind squeezing that one into this PR as well? Sure. OK, I'm assuming everybody's OK with approving that once we remove that line? OK, let me just make a note of this so I can remind myself once we remove it. All right, cool. I'll fix that later. All right, thank you, Tapani. Fabio is not on the call, but this one looks fairly straightforward to me. I think we can deal with it. So I believe all we did is add minimum length to our string types, because I believe in every single case where we have a string, we say it has to be a non-empty string. And so that's why it has minLength in a whole bunch of different places. On this particular field, which is specversion, he set it to be a const value of 0.2, our current version. And there was one other change he made. What was it? Here we go. In schema URL, we added a reference to schema URL in the definitions section. Now I'm not a Swagger expert, so I can't say for sure whether this is 100% correct, but it looked right to me. Anybody have any questions or comments on this one? Any objection to adopting it? Only a question about the const 0.2. Is that meant to be changed every time there's a new version? I would assume so, yes. I think that's true of the spec itself. I think the spec mentions 0.2 in it as well. So I think this would be one of those spots we'd have to catch. Yeah, we started to talk about the release process. Yeah, hold on a minute. Let me just double-check here. Yeah, so we don't specify which exact files.
We do say to change all specifications that include a version string. So that should automatically include it. All right, cool. Should be good. No change needed there. All right, any other questions or comments on this one? I have a question. So this minimum length, is it a mandatory field? If there is a string type without it, the length could be 0, or what? Yeah, hold on. Let me show you what we're talking about here. The spec. So let's go ahead and take the type string. OK, so I believe what's happening here is every single attribute that is of type string, whether it's required or optional, I'm pretty sure they all say must be a non-empty string. And all he's doing is adding that minLength equal to 1 to represent that. So let's see, where... I think, string. Yeah, but that's per that, so that's non-empty. I think that's it. So actually, we don't have any optional string types per se, other than content type, which is further constrained by RFC 2046. So it is just for the required fields. But it doesn't matter what it is. The fact that it's required is not really relevant. All the strings are non-empty. Does that help you, Kathy? So that means for all the string types, the user must satisfy this minimum length. If they're going to have the property there at all, then yes, it must be non-empty. This avoids the problem of, well, does not being there mean the same thing as an empty string or not? We avoid that entire problem. So if you're going to specify the value, then you have to give it at least one character. OK? Yeah, I'm just OK. I'm just thinking, if you define the string type, yeah, the user can define a non-zero value, a non-empty string, right? Right. OK. OK, I think that's fine. I'd just say that's OK. Yeah, thanks. Any other questions or comments on this one? Sorry, just to come back to the version number. Actually, is it meant for everybody to use a snapshot of a release version or a release commit of this spec?
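To make the schema discussion above concrete, here is a hedged sketch (in Python, since the exact schema file isn't quoted on the call) of what the minLength and const keywords amount to. The attribute names follow the 0.2 spec as discussed; the `check_string` helper is purely illustrative and not part of any spec or SDK:

```python
# A sketch of the two JSON Schema keywords just discussed: "minLength": 1
# on string attributes, and "const" pinning specversion to "0.2".
SCHEMA_FRAGMENT = {
    "properties": {
        "specversion": {"type": "string", "minLength": 1, "const": "0.2"},
        "id": {"type": "string", "minLength": 1},
    }
}

def check_string(schema, value):
    """Minimal validation of just the keywords under discussion."""
    if not isinstance(value, str):
        return False
    if len(value) < schema.get("minLength", 0):
        return False
    # const means the value must equal exactly this -- hence the question
    # raised above about whether "default" was intended instead.
    if "const" in schema and value != schema["const"]:
        return False
    return True
```

With const, an event carrying specversion "0.1" would be rejected outright, which is exactly the snapshot-versus-default concern raised in the discussion.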
Because that means every single event that is validated with this spec must have version 0.2. It doesn't support any other versions. That is an excellent question. Does const mean that it's the only value it can be? I guess it must, because it's a constant. I mean, it must. Yeah, I'm wondering if maybe it should have been "default", if there is any such keyword. That's an interesting question. So can you do me a favor? Can you ask that on the PR, and we won't resolve it right now? Wait till we get an answer? Yeah, sure. Thank you. OK, so that's an excellent question. Any other questions or concerns on this one? OK, so we'll hold off on that one. OK, cool. Moving forward then, swapping out, excuse me, switching out the content type for, I believe it's data content type. We talked about this one last week, and then Clemens convinced us all of the brilliance of keeping the word content in there. So it's not just data type anymore. So I went through and made the change last week. Any questions or concerns on this one? OK, any objection to the change? Going once, because it is a name change. I'll make sure everybody's OK with it. All right, we are done, and I will send a note to the group alerting people of that. Cool, thank you very much. Christoph, now this one, I think you made some changes either today or yesterday. So maybe it's a little too soon to vote on it. However, I do think it's important to talk about the changes you made so people can understand where you're headed with this and have a brief discussion about it. Is that OK? Yeah, so for context, last week we approved the PR that said the spec itself, the main spec, doesn't specify how batching is done, but HTTP or transport bindings in general can define if and how they want to do batching. And this pull request, which you already discussed last week, tries to implement batching for HTTP.
So last week we discussed it a little, and we decided that we want to have it similar to the structured mode, in that it works with event formats. So the change I did compared to last week is that I moved basically the JSON array into the JSON format file and then referenced it from the HTTP transport binding. So in the future, if there's a different format than JSON that wants to support batching, that can also be used with this new batch mode. Yeah, otherwise we can go through it all, if that makes sense. The only maybe thing I thought about when writing this is that, if you can maybe scroll down a little to the JSON itself, I think the JSON format is further down. The JSON format is kind of one, and then the batch format is the second. Sorry, if you scroll up a little bit again, in the second paragraph of the intro it says, although the JSON batch format builds on top of the JSON format, it is considered a separate format. A valid implementation of the JSON format doesn't need to support it. So I'm not sure if it should really go into the same file as the JSON format, or if it should be its own file, to really make that distinction more clear that they should really be considered separate formats. Apart from that, I'm pretty happy with it, I think. OK. Any questions for Christoph on this one? I like the change a lot. OK, that's good. And so instead of having a vote on this today, because it's a large one, I would try to kind of carve out some time between now and the next call to actually go and implement that in the C# SDK, to see whether I find issues with it. But I mean, it looks good. That would be excellent if you could do that. Yeah, that's what I would be trying to do, just go and validate that with an implementation. Yes. Oh, no. Because it's a chunky, chunky change that we've had for a while. Christian, were you going to say something in there? Or maybe it's Christoph, somebody? I want to state that it sounds good.
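A minimal sketch of what the batch mode under discussion implies: the body is simply a JSON array of individual event objects, carried under its own media type. The `application/cloudevents-batch+json` name here is an assumption following the pattern of the existing `application/cloudevents+json` type, and all attribute values are made up for illustration:

```python
import json

# Two individual events; values are invented for illustration.
events = [
    {"specversion": "0.2", "type": "com.example.created", "id": "1", "source": "/demo"},
    {"specversion": "0.2", "type": "com.example.updated", "id": "2", "source": "/demo"},
]

# Batch mode: the HTTP body is simply a JSON array of event objects...
body = json.dumps(events)

# ...and the binding tells a batch apart from a single structured event by
# its media type (name assumed, mirroring application/cloudevents+json).
headers = {"Content-Type": "application/cloudevents-batch+json"}

# A receiver unpacks the batch back into individual events.
received = json.loads(body)
```

Keeping the array definition in the format file, as Christoph describes, means a future non-JSON event format could define its own batch representation and reuse the same HTTP batch mode.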
That's great if Clemens gets around to implementing it. That will really validate that it works. Yep. OK, it seems fairly straightforward when I looked at it this morning, but I didn't do a deep dive into it. Anybody on the call have any questions or concerns with heading this direction? So I want to point out we do have a new content type, which makes sense. All right, last chance. Anybody have any other questions? Comments? I like it. OK. All right, in that case, OK. First of all, Kathy, can you go on mute? I think we might be hearing your typing. Thank you. Oh, hi. Yep, not a problem. OK, so please, everybody, review that. Hopefully, we can approve it on next week's call, unless people find something wrong with it. But I think that'd be a good step forward. Thank you. And then, Christoph, you have another one here. Yeah, this one we also talked about last week. The basic issue is that there are limitations on all sorts of technologies that we use. So we'd better define what we're going to settle on. We want to define a minimum event size that everyone has to support. So as a sender, if I'm sending an event that is smaller than the certain size we're going to settle on, then everybody has to accept it. Otherwise, they're not a valid cloud event implementation. So the text itself is kind of small. It's a fairly small change. One thing we can discuss is the actual size. So I kind of recommend 256 kilobytes. But I don't have a strong opinion on this one. OK, what do people think about this one? No comments? What do people think about the size? Too big, too small? Are there going to be some consumers out there that can't handle something that large? I would imagine that most constraints for things wanting to be really, really small might be more on the producer side than the consumer side. But yeah, an Arduino can't deal with that. I think that's mostly producer, even though, you know. But we can't make it fit an Arduino. So that's kind of difficult.
We're not going to vote on it today. I think it's too soon, or it's too new. But please look at it when you get a chance. So the other thing that Tapani also brought up two weeks ago is that it has an influence on the HTTP binary mode. So if we just say the event can be up to 256 kilobytes, it easily can consume more kilobytes in header data than what most HTTP servers accept. So it basically means that a lot of implementations will not, or at least will have trouble, implementing it well. So I was wondering if we want to change that. So right now it says you should support both the HTTP binary and the HTTP structured mode. So one way would be to say you should support structured mode and you may support binary mode. Another option would be to figure out if somehow sender and receiver can agree on what the size should be, which is a bit tricky. Yeah, I was going to ask about this, because 256 kilobytes sounds good for the event. But with the metadata around the actual data of the event, we will have problems with HTTP implementations. I mean, if people just end up putting most of their things there instead of inside data. Because with the binary mode, the headers can't be more than eight kilobytes, basically, and no more than 100 attributes. So it's interesting that you accept the messages up here, but then they may reject cloud events above that size. Yeah, I think it's a bit difficult to measure the size of the cloud event itself. So I'm trying to measure it by transforming the event itself into a message in the JSON format and then using that size as the way to measure the size of the cloud event. Yeah, I'm just wondering whether this "cloud event" right here should be "message". As it currently stands, people may say, oh, OK, 256 applies to the entire message. But down here, it kind of implies, well, maybe 256 applies to the cloud event itself and not the entire message. Yeah, that makes sense. Let me make that change.
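The measuring convention Christoph describes — sizing the event by rendering it in the JSON format, rather than counting transport bytes — can be sketched like this. The 256-kilobyte figure is the number being debated on the call, not a settled limit, and the event values are invented:

```python
import json

# The figure still under debate on the call -- not a settled limit.
PROPOSED_MIN_SIZE = 256 * 1024  # bytes

def event_size_bytes(event):
    """Measure an event by serializing it in the JSON format, so the number
    is the same whether the event later travels in HTTP binary mode
    (attributes in headers) or structured mode (everything in the body)."""
    return len(json.dumps(event, separators=(",", ":")).encode("utf-8"))

event = {
    "specversion": "0.2",
    "type": "com.example.sample",
    "id": "1",
    "source": "/demo",
    "data": "x" * 1000,
}

# A compliant receiver would have to accept anything at or under the limit;
# it is a guarantee of a minimum, not a cap.
must_accept = event_size_bytes(event) <= PROPOSED_MIN_SIZE
```

Measuring this way sidesteps the binary-mode problem that the same event costs very different amounts of HTTP header space depending on how attributes are mapped.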
Anybody else have any other comments, concerns, or questions on this? I think it's good to define the size. I think this is really a good thing to do. We need this. OK, thank you, Kathy. Hi, this is Vladimir. I have mixed feelings about the size. On one side, I see it defining a standard that can be easily followed. But in practice, I'm afraid that we will hit some edge cases. And the question is, what do we do then? What do we do if we have such cases where the size turns out to be larger? And it may be valid in a particular problem domain. So I don't have an answer to that. But maybe we could have some kind of a path or a policy for what we do when the size does not match. Thanks. You said when the size is larger; did you mean when the size is smaller? Because this is just defining the minimum size that someone has to accept. Yes, I mean when the messages are larger. And there is a justification in the domain for them to be that way. Yes, I think the first thing is that we settled on this not being a hard limit; it's just a guarantee on a size. So you can always go above this limit. And if you control all parts of your system, then you can make sure that all parts of your system accept messages that are larger. So if you know I'm putting my cloud events into Kafka, and I know Kafka accepts one megabyte, and you yourself have validated that everything will work, then it will be fine. That's one answer. And the second answer is that we will have a follow-up PR, which Jem Day from PayPal also asked for, which is that we are going to build in the claim check pattern. At least, I'm going to open a PR that implements the claim check pattern on cloud events. So basically what it is: you send an event, you would send all the metadata as you would before, but you would not include the payload, the data object. Instead you would say, okay, my payload is too big, but here's a way for you to get this payload anyway. So this is also commonly used in other systems.
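A rough sketch of the claim check pattern as just described: the event keeps its metadata but swaps the payload for a pointer to where the data can be fetched. The `dataref` attribute name, the URL scheme, and the in-memory store are all illustrative assumptions; the real shape would come from the PR Christoph mentions:

```python
# Toy in-memory "store" standing in for whatever blob storage a real
# system would use; it returns a URL-like claim ticket.
blobs = {}

def store(data):
    key = "blob-%d" % len(blobs)
    blobs[key] = data
    return "https://example.com/claims/" + key

def apply_claim_check(event):
    """Strip the payload out of the event and replace it with a pointer.
    'dataref' is a hypothetical attribute name chosen for illustration."""
    checked = dict(event)
    if "data" in checked:
        checked["dataref"] = store(checked.pop("data"))
    return checked

big_event = {
    "specversion": "0.2",
    "type": "com.example.big",
    "id": "1",
    "source": "/demo",
    "data": {"payload": "x" * 500000},
}
small_event = apply_claim_check(big_event)
```

The resulting event stays well under any size guarantee regardless of how large the original payload was, which is why it answers Vladimir's "what if the domain needs bigger messages" question.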
I think it's a good pattern to implement. And then hopefully we have SDKs and so on that can kind of automate this process, so it gets a bit hidden from the consumer. Does that make sense? Yes, absolutely, absolutely. Thanks. Okay, just because I think we have way too much time on the call now, since we're reaching the bottom of the agenda, let me just pick on one person in particular: Austin, since we haven't talked to you in such a long time. Let me pick on you for a sec. In your experience, I think you've interacted with lots of different clouds, given your product. Do you see any concerns with this size limitation? As the minimum? I'm not up to date on what the size limits are for all the various FaaS products out there. Wasn't there an issue with a list of limits? I can't remember. Yeah, I'm not sure off the top of my head, Doug. I'll have to look into it and follow up. Okay, that's fine. I was just curious if you might have something. Issue 257. At the top. Oh, there we go. Yeah, so this is the list that I compiled from things that I interacted with, mostly. So it's maybe not the most complete list ever. But Lambda has recently, as I wrote there, recently increased their limits to 256 kilobytes. So with the exception of Azure Event Grid, everyone is at or above 256 kilobytes. So what does it then mean, if we've mandated that you have to support at least 256, but then there are some that have a maximum of 256? Yeah. So this is listing out the maximum, and we're considering the minimum. I'm not sure that answers the question, right? It sounds like it actually might be an interesting situation. It does, in the sense that, for example, in the case of Event Grid, you couldn't pass a valid, like, 100-kilobyte cloud event through it. Well, it's interesting.
Actually, I guess we should say the text here doesn't say you can't have something smaller; it just says you have to accept messages up to that size, at least that size. But then for somebody like this who has a maximum size of exactly that, it seems like... I think this PR is about the idea that the consumer should accept messages up to that size. If it's larger than that size, then the consumer of the event can reject that message. I think it's good to have a size for that, because if we do not set a limit, it could be huge, and just transporting the messages takes a long time. Latency is another consideration in a serverless application. So of course, the actual size, whether it's 256K or one meg or 128, we can discuss, but I think it's good to set a size. So since Event Grid is our thing: the principle that lies behind Event Grid just supporting 64K is that we're basically forcing all the publishers to think about pointing back to the source of the event. Basically, like, "get details here," and encouraging them to just include effectively metadata, like enough descriptive information included in the body that is sufficient to say what happened, but then give them a link so that if they need to go and dig further, they go to the original source. Like, if the address of a customer changed, the mailing address, then it's actually not right to go and include all that personally identifiable data in that event; rather, just give a link, and then you have to go and fetch that data yourself. So that's kind of the rationale behind this. That's why we made it so small. And obviously there are architectural consequences from having a constraint like this. But I think, so 128K, which was the proposal last week, I don't think that would make things really bad. 256K, that's further beyond.
So does this mean that Event Grid cannot be cloud event compliant? It means that if we set that limit to 256K, that must be supported, then it'll be a little bit more difficult for us. That's literally a discussion I would have to have, and will then have, with my dev manager this week, to ask how firm the reason is that we picked this. Because that actually has its impact. I mean, ultimately it's not going to be like this, that everybody starts sending maxed-out messages. But then we would have to go and support the occasional one. And then the question is, you know, how bad can that possibly be? Yeah. And it's simply a discussion I should have. Before I say anything or agree, that's a discussion I should have. And if I get a no, with the reason, then I should come back to you and tell you what that reason looks like. Okay. Makes sense. Hey, Clemens, does Event Grid have some type of dead letter queuing capability or retries automatically built in? So retries it does. If it can't deliver the message, then it has an automatic back-off. So it just keeps hitting whatever the target is until it gets a 200-class response code. Dead lettering is something that we just added. So there's effectively an associated storage account, and messages that we can't deliver, we basically drop into that storage account. Yeah. Okay. And then you can obviously have another grid looking at that storage account, et cetera. And are there any limits for the payload size in that storage account, in that storage option? What do you mean? I mean, how much can be dead-lettered? Yeah, for individual payload sizes. So there's one way to look at this, and that's, like, what the size of the payload is that the FaaS product can accept. But then a lot of these FaaS products also have retry functionality built in, and maybe some dead letter queue functionality.
So AWS Lambda, for example, for asynchronous event invocation, automatically has two retries, and a dead letter queue option available. And that's built on SQS, which has that 256-kilobyte limit. So... Oh no, that's where that limit comes from. So that's something else we should look into to answer this question. Yeah. We don't have an interaction like this. Like, we have a gate at this: if you give us an event, we're enforcing the limit; you can't give us an event that's bigger than this. And then we pass that through, we do a request. And if you give us an error, in effect, we will keep trying for a while. And I think if you give us a 500, we'll fail earlier and then go dead letter, but dead letter is effectively just writing that into the storage account. And that's of virtually unlimited size, at least for those sorts of messages. There's no further constraint where stuff can get stuck just because, like, you put something very large in here and then it can't be dead-lettered because the dead letter mechanism doesn't have enough capacity. That problem can't happen. Okay. So just another dimension to this problem: some storage mechanism that helps them do retries. A lot of these FaaS products come with retries built in. Yeah. We do the retries, too. Event Grid effectively runs on its own, separate from everything. We have our own queuing mechanisms inside of Event Grid, and we also have fail-safes included. And so all these queues are replicated in the cluster. So there's a full internal mechanism that's behind this, and that's backing up the retry mechanism. Like, you can literally shoot down half of the cluster while we're doing deliveries and retries, and we keep the counts right. Yep. By the way, does the... I don't recall anything coming up in that space now. Not yet.
Yeah. Yeah. It might be an interesting feature optimization later down the road. Sounds like an extension. Yep. Right. All right. So, a little bit: I just wanted to point out, like, Chris, if you do go for the claim check pattern, and you talk about metadata still being included in the event, then you will end up separating the metadata and the actual data of the event into different sizes anyway for the claim check pattern. And then I don't see a reason why they wouldn't be limited separately anyway. Well... hmm. I think that gives you an option. If you have, let's say, your payload is, I don't know, 200 kilobytes and your metadata is a hundred kilobytes, which it really shouldn't be, it kind of gives you the option of still sending a hundred kilobytes of metadata and then not sending the payload, but having the claim check pattern for it. So in the end, what you could end up with is really 256 kilobytes of metadata as sort of the worst case that this ends up being. Yeah, that's true. Okay. I think we've reached the end of the discussion on this one, unless somebody has something else to bring up. Sounds like people need to go off and think about this more and do some investigation. Any last-minute discussion points on this one? Okay. So we will revisit this next week. Thank you, Christoph, very much for that. And I think that's it in terms of open PRs. What I wanted to briefly do is talk about our roadmap for a sec. Because for version 0.3, this one is obviously something we'll just get to as we look through the list of open PRs and stuff. But I really wanted to focus on those security-related issues. Now, I went through the list of open issues today, and these are the three that I thought were related to security.
And what I'd like to do is start having some discussions around these to, one, decide whether there are other aspects related to security that we want to deal with in the spec in the 1.0 time frame, and to just get people to start thinking about these issues in particular and start getting some discussions going, to see if we want to close them with no action or open up a PR to actually address the issue, or the concern, brought up in the issues. Okay. So I don't necessarily want to discuss these here today. Let people go off and read these on their own. But please be thinking about the security-related concerns that have been mentioned in the past, and please bring them up either as new issues, or, if you feel strongly about one of these issues right here, go ahead and open up a pull request to try to address it. Now, I should point out that in the past, we tagged the second one as not required for version one, and I'm okay with that, but I think you guys should look it over to make sure everybody's still okay with it not being resolved in the version one time frame. So anyway, please get a chance to look at these and think about security in general, because that is a requirement for version 0.3. Yes, go ahead, Tim. Hi, yeah, I'm not sure if it's in scope, but in the cases where a publisher might want to have the data encrypted, do we want to have some discussions around that, or recommendations? You know, where the event data may traverse through a pipeline, and they may not want the middleware to be able to inspect the data. Yeah, I actually think that has come up in the past. I just can't remember where we landed with it, other than to think about it. My first thought was, you must be new here. Relatively, yes. So my standard answer to this is, I would try to avoid this for as long as we can. Okay.
Out of historical context: because the last time something had good momentum, an interoperability standard that was built on an abstraction like this, it started adding security, and that ended up sinking the ship. So I would like to avoid that complication, because that gets very bad very fast, and rather figure out a way to externalize the problem by saying, you know, use JSON Web Encryption or something else, as a note, but not be too specific about it, because as soon as you drag all that context in here... And if you look at JSON Web Encryption, if you look at the entire set of specs, JOSE, JWE, et cetera, they're in the IETF. I frankly don't see a lot of uptake on this. And the precursor was WS-Security, for SOAP. And it's a very complicated set of things to do end-to-end encryption. And they add a ton of weight, because doing that sort of end-to-end security requires a ton of negotiation of parameters, of algorithms. You need to do a bunch of handshaking to make it even work. What we have here is a one-way mechanism, so we can't even negotiate session tokens. So it gets really hard. And it becomes kind of a multi-year exercise in how we're even going to go and do this. And that's why I'm kind of in favor of: let us get to 1.0 without end-to-end encryption. And if then there are some folks who really can't live without end-to-end encryption, then let's go take a look at it. Sure. Thanks. It's just really so hard, end to end, that I'm just afraid of it, because it's going to dominate this call. It's going to dominate the work for probably a year. It's going to delay 1.0 indefinitely. And that's why I don't want it. So I put some notes into the agenda, or the meeting notes, whatever you call it.
Because whether the group agrees with Clemens's opinion there or not is something we need to actually discuss at some point. I think it would be worthwhile to formalize that discussion a little bit more. So what I'd like to do is I'll open up an issue to at least get the discussion going, and then, Clemens, you can put everything you just said into the issue, so we have some historical context for people to go back and read. I'll be happy to recite the entire history. Thank you. And then what we can do is, let's say, for example, we decide not to actually add anything to the spec at this time. I do think it would be worthwhile to add something to the primer to explain why we chose not to, just so people can understand our reasoning, because our primer doc is supposed to be for informational purposes, for people to understand why we made the decisions we did. And this is obviously going to be a very important decision for people to understand. So I think, if nothing else, that issue will lead to a change to the primer doc, but we can hold off on that and figure out what we want to do later, whether it's a PR to the spec or just an update to the primer. But Tim, I will open up an issue for that discussion. Is that okay? Yeah. Okay. There's one other aspect of this, and that's that we have this great extensions concept, right, where people can start experimenting with different kinds of encryption methodologies and security methods and stuff via extensions. And, you know, one of the big goals in my mind for extensions and any type of plugin architecture is that when you're bringing a product to market, you always want to kind of focus on the MVP and keep it pretty lean. You just want to get it in the hands of users right away so that they can start using it, and they'll actually teach you about your product.
And the one great thing that extensions and any plugin architecture provide is that, as long as there's an easy way to extend the thing and add functionality to it, users will start doing that: they'll start creating extensions, creating plugins. And then you'll start seeing demand increase for the plugins that are really solving important problems. And what this shows is, basically, it just guides you as to what should be in the core eventually. So I recommend we keep thinking about that: keep this thing lean, get it out to the market, get it in the hands of users, and make sure that there's a straightforward way to extend it and a place where people can post their extensions so we can keep an eye on them, because when some of those take off, it's a clear signal from the market that that thing may need to be in core. And I can imagine an extension that defines a few fields: here's the initialization vector; here's a pointer or reference to the key of some sort; the crypto algorithm; the HMAC algorithm; and then the data of the event becomes the ciphertext, and the content type points to it and says encrypted data, blah, blah. And that's how you then express a one-way encrypted event, in a particular way that that one extension can go and handle. Right? Much, much easier than trying to create a model that works for everything under all circumstances, where we have to go in and deal with all the complications that come with that. Right. So we have a couple of other hands up. I don't know who was first, but Christoph, you're first as I see it. Yeah. So my question to Tim is, are you talking about the payload or also about the metadata? Just the payload. Just the payload. Okay. Because then, like, there is technically nothing that stops you from encrypting it, right?
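The extension idea sketched above could look roughly like the following. The extension attribute names here ("encryptioniv", "keyref", "hmacalg", "hmactag") and the content type are invented for illustration; nothing in the CloudEvents spec defines them, the ciphertext is a stand-in for output from some real cipher, and real key distribution would happen out of band.

```python
# Hedged sketch: a one-way "encrypted" event expressed via hypothetical
# extension attributes. The attribute names are invented for illustration.
import base64
import hashlib
import hmac

shared_key = b"out-of-band-shared-key"  # key exchange happens outside the event
ciphertext = b"opaque-bytes-from-some-cipher"  # stand-in; real cipher out of scope

event = {
    "specversion": "0.2",
    "type": "com.example.someevent",
    "source": "/example/source",
    "id": "A234-1234-1234",
    # Hypothetical extension attributes carrying the crypto parameters:
    "encryptioniv": base64.b64encode(b"\x00" * 16).decode("ascii"),
    "keyref": "https://example.com/keys/42",  # pointer/reference to the key
    "hmacalg": "HMAC-SHA256",
    "hmactag": hmac.new(shared_key, ciphertext, hashlib.sha256).hexdigest(),
    # The data is just the opaque ciphertext; the content type flags it as such:
    "contenttype": "application/encrypted+octet-stream",
    "data": base64.b64encode(ciphertext).decode("ascii"),
}

# A consumer holding the key can verify integrity before trying to decrypt:
tag = hmac.new(shared_key, base64.b64decode(event["data"]), hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, event["hmactag"])
```

The design point this illustrates is the one made in the discussion: all of the hard choices live inside one opt-in extension, so the core event format carries none of the negotiation weight.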
So the consumer reads the payload and kind of has to understand what it will get anyway. So I think it's sort of implicit. If you control both the producer and consumer, you can do that, right? Definitely. I just wondered if we wanted to put some recommendations around, you know, the specifics of the encryption. Definitely. I think it's worth having the discussion to see how people feel about it, so we can get that going. And Vlad, I think you're next. Yeah. Also, as some history on this: we also got into encrypted payloads when we were discussing extensions and whether we wanted a bag or not. And that went down a rabbit hole of whether we wanted extensions to also be encrypted, and whether we wanted to guarantee the event wasn't tampered with, stuff like that. It went really far down the rabbit hole, as far as I remember. And that's when we decided that, no, we're not going to consider it really seriously for 1.0. But I might be remembering this wrong. It does sound painfully familiar. Yes. Okay. So anyway, please be thinking about security. We need to get at least the issues identified, so we know exactly what we need to tackle for 0.3, because I think the security-related items are probably the biggest work for 0.3. We've already started to look at some of these other things down here, but to be honest, when I look at these, I think they're smaller in scale or complexity relative to the possible security-related ones. So I'd like to get the security ones going first on our plate. So if you think about that, open up issues for new things you can think of, and we'll get those discussions going. Okay. Is there anything else related to security or the 0.3 roadmap that you guys would like to discuss before we move forward? What is your rough timetable for 0.3? Will it be by the next KubeCon? Honestly, I personally have not thought about that. I mean, if we can make it for KubeCon, I think that'd be great.
I think it all depends on whether we get through the security-related issues. Like I said, I think the other ones are relatively small, so if we could put security behind us, I think 0.3 would not be too far after that. But that's just my opinion. I want to move as quickly as possible, though. Okay. Anything else? Okay. We don't have a whole lot of time, so I just wanted to draw people's attention to these three issues here. These are just ones that I personally thought were interesting. Obviously, people are free to add items to the agenda as they see fit, but I thought these were interesting because they could potentially either add new attributes or make normative changes to the spec in non-trivial ways. So please look at them when you get a chance; I think at least the first two are probably the more interesting ones. This one, the last one about deprecated events, I think might actually just be an extension, but I'd like to get people's opinion on that, to see whether they're okay with leaving it as an extension for later or whether they actually think it should be in 1.0. But if nothing else, I think, Clemens, if you could take a look at the first one in particular; I thought that one might pique your interest. Okay. And so I may try to force discussions on these sooner rather than later. I just wanted to give you guys a heads-up. Is that, is that the first one? It might be. That sounds familiar. Hold on. Yes. Yes, it is. Okay. Yes. There you go. Okay. Because that sounds like an issue from me. Yeah. All right. With that, I think we're at the end of the agenda. Are there any other topics people would like to bring up? All right, not hearing any. Let me just do a last-minute roll call. Richard, are you there? Richard. Yes. Excellent. And Michael. Michael Pain. Yes, I'm here. Okay. Thank you. Renato. Renato. Yes, I'm here. And Vladimir. Did we hear from you? Your camera... I apologize. Yes, I'm here.
Is there anybody else I'm missing from the agenda or from the attendee list? All right. In that case, you guys get back a whole five minutes of your day. Thank you very much for a very productive call. I appreciate it. We'll talk to you guys next week. Thanks, everyone. Oh, okay. Cool. Excellent. I'll take a look at that and get that merged. Thank you very much. No problem. All right. Bye, everybody. Bye. Thank you. Bye.