Doug, are you there? Yes. There we go. All right. OK. All right. Three past the hour, let's go ahead and get started. Skipping the action items, community time: anything from the community anyone wants to bring up? Let's look at KubeCon EU now. Oh, we talked a little bit about that last time. I think it went fairly well. The demo seemed to work, which was good. I can't think of anything else worth mentioning. I got a lot of good feedback from the serverless session, I'm sorry, the serverless working group session, in terms of why people may be resistant to serverless and stuff like that. If you check back in the meeting minutes from last week, there should be a pointer to the notes that I took on that. You can read up on that if you want. Got it. Good to know. Yep. Anybody else have any other comments on KubeCon China? I'm sorry, not China, EU. OK, go back and check the meeting minutes from last week for the link. Let's see, SDK work. We didn't have a meeting, but does anybody want to mention anything from the SDK side? I actually know there's quite a bit of activity going on in the repos there. So is there anything anybody wants to mention? OK, moving forward then. KubeCon China is in, I think, just under two weeks. I have put together some initial drafts of some slides, so you guys can take a look at those. Obviously, if you have any comments or questions, please ping me. There should not be any surprises in there. They're basically, for the most part, a copy of what we did at the European KubeCon. And just a reminder, if you haven't gotten the demo running yet, please get it running. I am planning on showing the demo there, assuming I don't run out of time. Any questions or comments on that? All right, cool. Thank you. So in terms of incubator status, last week we talked about how they gave us the definition of end user, which is three users of products that have implemented CloudEvents. So high-level question for people: do we want to go forward with incubator status?
I did check, and basically all it means is we get on the TOC agenda, and we present our case for why we think we meet the criteria. And the biggest bit of criteria is meeting the three independent end users requirement. So we would need to actually name names, basically, of at least three different people who are willing to say, yes, we use CloudEvents in this particular product. I don't think that'd be a problem, based upon who I've heard supports it so far. But I wanted to know whether you guys want to start that process now, which I can start doing, or would you rather wait until we get closer to 1.0? Any comments either way? I would just wait for 1.0; that's just a gut feeling. OK, Jim, I think your hand is up. Yeah, is there any problem with holding off until 1.0? I mean, I'm not quite sure what you'd say if you're saying people are using it, but it's not the final version, or not a prime-time version. Yeah, so there is no requirement for us to be at 1.0. We did check on that. And I believe the only real difference between being a sandbox versus an incubator project is, obviously, bragging rights, because you're a little higher up in the food chain. But we do get more marketing material available to us. So for example, we'd be allowed to have our name mentioned in keynotes and stuff like that. Right now, they really frown upon even mentioning sandbox projects in keynotes, that kind of stuff. And I think there is some other official marketing support available to us at that point. But from a work perspective, it doesn't change anything for us from that angle. OK, in that case, I would vote to go for the promotion then. We may have a use case, but I'm not sure whether I can talk to it; I need to get clearance for that. OK, anybody else want to weigh in on whether we should go forward now or wait?
Since the CNCF is a very marketing-happy organization, getting higher up the flagpole and getting more attention from the marketing machine is not bad. And I don't think that's necessarily a maturity statement either. So whether we're at 0.3 or 1.0 makes no difference to me. We should just go and climb the ladder as quickly as we can. OK, I apologize, I missed it: who was the person that spoke up earlier who said they'd rather wait for 1.0? I'd like to hear your opinion in terms of why you'd rather wait. Not for a big reason. OK. If we go for the incubator status, right, and we get more eyes on the spec while we're not at version one yet, that means it will delay getting to version one even more. That's because of more, what do you call it, divergent ideas about how things work. So we'd rather have version one as a kind of concrete step that's set in stone and then have updates come in, right? So at least version 1.0 is not delayed. OK. So let me play devil's advocate there for a second. If you believe that raising our status would get more eyes on it, would that not be a good thing? Definitely a good thing. But it should just be a result of getting to version one. Right, but my point is, if those eyes result in changes that really should be done before we go to 1.0, wouldn't it be better to see that sooner rather than later? Definitely. I mean, yeah, the argument goes both ways. OK. OK, well, thank you for clarifying. Anybody else want to voice an opinion? OK. Since it's not unanimous, what I'd like to do, and I don't think we have to rush this either, is put out more of a formal vote, if that's OK with people, and see how that turns out and let that decide. If it's not unanimous, then we don't have a choice but to go for a vote kind of thing. Is that OK with people? OK.
But keep in mind that if we do decide to go forward, I will need names of people, in terms of customers who are actually using it, because the TOC will want that. OK. All right. Moving forward then: 0.3. I know Microsoft voted through email, thank you, Clemens. Is there anybody else who would like to formally vote one way or the other? Otherwise I'm just going to ask if there's any objection, like we do on most normal votes. OK. Is there any objection then to approving the current version of all of our documents as 0.3? All right, cool. Thank you guys very much. I'll start the process of pushing that out the door. All right, 1.0 discussion. I tried to summarize where we are relative to the pull requests. And keep in mind that these categories are based upon my initial take on where the different issues and PRs fall, right? Either required for 1.0; or we should try to do it for 1.0 but it's not necessarily required; versus definitely a post-1.0 thing. Hopefully you guys have had a chance to look at that categorization that I did. And assuming it's more right than wrong, I think this is the current layout of where things lie. So we only have basically 14 things in front of us for 1.0. And I can't remember who it was last week, but somebody asked for a little more concreteness relative to when we're actually going to ship 1.0. What I'd like to do is see how quickly we can actually attack these 14 PRs and issues, and really, really try hard to get them done within, say, two or three weeks. And what that would do is put us in a position where we can use the remaining time to sort of test out the specifications and documents, do final reviews and stuff, and start working on the "try for 1.0" items. And once we feel like we're done with the testing period, then we can decide whether we should say, nope, we're going to ship right now even though we haven't finished all those 18 open issues, or we're going to wait to resolve those.
So it gives us a little bit of time to do a little deeper analysis on those 18. So basically, I guess the net of what I'm trying to suggest here is that we really push hard to resolve these 14 issues and PRs for 1.0 within the next, say, two to three weeks, and then see where we are after that relative to the outstanding "try for 1.0" ones. But that's just an initial thought I had on the process. Anybody else want to voice an opinion? Yeah, the proposal sounds good, Doug. I think that's the way we should go. I also agree, that's the right way to go. Okay, so in order to make that happen, I think the biggest issue I have is that we have six unassigned issues. As for the four that are assigned to these four people here, whether you four know it or not, I did assign those four to you, mainly because I think you guys were the ones who opened the issues, so I tagged you as working on them. So out of the six that are there, I would really appreciate it if you guys in general looked at those and volunteered. Just keep in mind, volunteering does not mean you necessarily have to come up with a pull request yourself. It just means you're willing to do the driving, pushing, nagging, or whatever you want to call it, to get it over the finish line. And the finish line could also mean suggesting we close it and do nothing with it. I'm looking for six volunteers, and I won't take up time on this call to do it. But please, if you're on the call, please look at those six and try to volunteer for them. Okay, if I don't start seeing people volunteer over the next couple of days, I'll start doing some nagging myself, but I'd really prefer not to have to nag. Okay, anything else relative to 1.0 you want to discuss? I guess I should mention there are four PRs tagged for 1.0 that need to be updated. I know, I think, Clemens, you have one or two. I think I have one. I can't remember who the other one is. But if you do have a PR out there that needs to be updated, please take a look at it.
Yeah, I have one, which is the SDK guidelines thing. And I think that's something we need to talk through in the SDK group. Yeah, I think we have a meeting scheduled for the week after this call. Yeah. So we should definitely discuss it then if we don't get it resolved before that. All right. Anything else on 1.0 you want to talk about? All right, moving forward then. So during my review, or categorization exercise, for those three groups, I came across these four issues that I'm proposing we close, either because they no longer apply or because, based upon the sentiment of the group, I didn't think people were going to buy into them. In particular, the last two: I don't think people wanted a method attribute, because that sort of exposes the transport layer, and based on everything we've done in the past, that didn't seem like the right way to go. "Receive queue" sounded awfully close to the partition key extension we defined, so it seemed like a duplicate. I don't think we need a system architecture doc anymore; I think we're pretty much underway, and our primer covers a lot of that. And I haven't heard anybody complain about the fact that we do the plus-JSON thing in our MIME types, so I'm assuming we can probably close that one. And if someone does decide it is an issue, they can reopen it if they want. So anyway, that was my reasoning for those four. Any comments or questions on those four issues? Okay. Anybody who would like more time, or who objects to closing those four? Okay, cool. In that case, just a reminder: if you believe that we closed one incorrectly, we can reopen it. Don't feel like we can't. Usually we do require a little bit of a bar, which is that there's new information, right? If there's nothing new and you just want to reopen it, that might be a hard sell. But if you have new information that makes it so we should revisit our previous decision, then that's usually a good enough reason to reopen. All right.
All right, moving forward. Now this one is... oh, before we get into the PRs, are there any high-level other topics people want to bring up? All right, cool. Now, this PR is not technically tagged as 1.0. However, it's been out there for such a long time, I'd feel really, really guilty not at least bringing it up here. So it's the Kafka transport binding. I don't want to go through it completely right now, but I did want to ask Clemens in particular, because I think you have done the most recent review of it: do you have anything in here that you think is worthy of bringing to people's attention? I think most of the issues, or I'm sorry, most of the comments you made were relatively minor, more syntactical in nature than anything else. Are there any high-level issues or concerns you think might be worth bringing up to the group at this point? No. I think this has been updated with a hint for the callback mechanism. I do like how the partition key is being created, and that's sufficient, I think, yeah. And then maybe the prefixes are a little long, but that's cosmetic; otherwise it seems fine. The only thing is, since the Kafka message structures are relatively simple and don't even have a notion of content type, we're introducing that with a custom header. I think that's all very reasonable. So I think people who are implementing a Kafka client on top of existing Kafka libraries will be able to go and implement using this, and that's the bar that we have. So that's fine. Okay. And I apologize, Neil, I completely forgot that you were on. Is there anything you'd like to mention relative to the PR that people might want to think about as they're doing a review? Yeah, no, thanks, thanks, Clemens and Doug, for making the updates today. I had one question, probably for you, Clemens: in the key attribute section there,
I don't actually define any precedence on whether a partition key should override the use of a key extractor, or whether they're mutually exclusive. I kind of presumed, and I guess this is a bit of a gray area, I presumed that the partition key would take precedence if one was provided, falling back to the partition key extractor otherwise, but I don't really know what the semantics should be. That's just my presumption. I think Gunnar mentioned that on the other partition key PR as well. So what are your thoughts on that? So I would construct it in a way that you always have a function, and the default function looks for that key and makes it the partition key. And then you can write another extractor function that goes and looks at different criteria. That's how I'd probably go and do that. Yep, that makes sense. Which means the specification per se will always talk about the extractor function. And then you can effectively point to the extension and say, this is what a function could do, but not be prescriptive. And you might go and expand that a little bit more and add an example here. That's how I would approach it. There's no preconceived notion of what that key should be. It's always the product of a function, and the proposal is that there's a default function that looks at that key, and that's it. Okay, so I'll put a note on the key attribute about a default key extractor that looks for the partition key, and then people can provide their own to get other kinds of semantics. And for the partition key, I wonder in that case whether an implementation would actually go and strip the attribute once it has evaluated the partition key, because it's then used up. Yeah, I do wonder that too. So I think you're right, you could put a little bit more meat in that section to explain how that works.
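The extractor-function approach Clemens describes could be sketched roughly as follows. All the names here are hypothetical, made up for illustration; this is not the actual Kafka binding text or any SDK's API:

```python
from typing import Callable, Optional

# A CloudEvent is modeled here as a plain dict of attributes/extensions.
Event = dict

def default_extractor(event: Event) -> Optional[str]:
    """The default function: look for the 'partitionkey' extension
    and make that value the Kafka partition key."""
    return event.get("partitionkey")

def kafka_key(event: Event,
              extractor: Callable[[Event], Optional[str]] = default_extractor
              ) -> Optional[str]:
    """The spec would only ever talk about the extractor function; the
    partitionkey extension is just what the default extractor reads."""
    return extractor(event)

event = {"id": "1", "type": "com.example.ping",
         "source": "/demo", "partitionkey": "user-42"}

print(kafka_key(event))                             # default: the extension value
print(kafka_key(event, lambda e: e.get("source")))  # custom criteria instead
```

This keeps the mechanism (the function) separate from the default policy (read the extension), which is the distinction the discussion settles on.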
And that would make the function the mechanism, and then there are multiple implementations, and one of the default implementations is to rely on that key. Yeah, okay. Otherwise, I would just go and tighten up the prefix, because I think we shortened that to "ce" for HTTP. I'm not sure whether I already did that for AMQP; I might go and do that. Yeah. But otherwise it looks good. Cool. All right. Anything else, Neil? No, I'm good. I'll make those changes. It's UK evening here, so I'll have them first thing tomorrow. Okay, not a problem. Anybody else on the call have any questions or comments about this PR? Okay. Is there any reason to think that once those edits are done, people will need more time or will have an objection? I want to get these things out sooner rather than later. Okay. So Neil, if you can make those changes, we should be able to pretty much approve this one really quickly next week, I would assume, assuming no one finds anything major, which would be really cool. Okay. All right. Cool. Thank you guys. All right, 1.0 PRs. So let's get this one going. So I believe his name is Gunnar; that's just a typo. He wanted to change the definition of "Any" ever so slightly. I think his biggest concern was that it didn't include the URI reference or timestamp types. Now, I know this is probably going to overlap with a hopefully-soon-to-come PR from an issue that James Roper mentioned, but in the meantime, I wanted to at least get this one out there to see what people thought. In particular, Clemens, since I think you wrote the type section, I wanted to get your take on something like this. Clemens, are you still there? Yeah, yeah, I have it. Let me read it. You guys don't read this stuff before you go to bed every night? I'm shocked. No, it's not like I have it all in my head all at the same time. What's the change?
Like I said, I think the biggest change is the URI reference and timestamp. I believe almost everything else is the same; it's just moving the text around more than anything else after that. Yeah. Yeah, okay. I mean, instead, we could basically say this can take the shape of any of the other types, so that we don't have to keep updating this section, but this is certainly right. Yeah, and actually what's interesting is, I think that's directly related to the issue that James Roper opened. So we'll talk about that, I think, in another pull request, to pretty much broaden this to be just any binary thing. Okay, but in the meantime, I think this one sounds okay from your perspective. Is there anybody else on the call who has any concerns or comments on this one? Okay, any objections to approving it? All right, cool. Thank you guys. All right, next one. Okay, this one's mine. As I was going through the spec over the weekend, I noticed that the description of timestamp is actually kind of vague. In particular, with the use of the word "event", it wasn't clear to me whether that meant when the occurrence happened or when the CloudEvent producer converted the original event into a CloudEvent. So what I tried to do is make it clear that ideally it would be the timestamp of when the occurrence actually happened. However, I did want to leave it open for people who maybe cannot determine that but still want to include timestamps, so the receiver can do some sort of time-based ordering if it makes sense for them to do so. And so I basically said that you're allowed to use other things, such as the current time, but the event producer must be consistent for the same event source.
Because what you don't want to have happen is this: if you have two event producers for the same event source, where one is using current time and one is using the occurrence time, which is typically going to be before the current time, then things are potentially going to be out of order, or at least inconsistent, relative to the times the receivers see. And so I tried to make it clear that they have to be consistent in the algorithm they use for determining the time. I want to see what you guys think about this. I like this, Doug. You sound surprised, Clemens. No, but it's such a great improvement. Okay, well, thank you. Thank you, Jude, for the plus one. Anybody else have any comments? I know there are a couple of LGTMs in the PR itself, so thank you for those. Anybody have any questions or concerns? Okay, any objection to approving then? All right, cool, thank you guys. Glad that was easy. All right. Now, the normal pattern we have in these calls is that I usually focus on PRs, because obviously those are the most important: people put in work to actually make spec changes, and I want to get those in there and not make them wait. But just a heads up, because we're running low on PRs for 1.0; in fact, those are the only two we've actually had in front of us. What I may do next week, depending on the list of issues, is actually start discussion on some of the bigger 1.0-tagged issues, just to help move the issue discussion along if I feel like it needs it. Obviously, if you think I included something on the list by mistake, or I missed something that should be there, feel free to add it yourself to the agenda. But I just want to give you a heads up that we may start talking about issues next week and not just pull requests, even though I really, really prefer talking about pull requests. I have a question for the group. I see on the next item here the Avro format, which I'm supportive of.
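The producer-consistency rule being discussed could be sketched like this. The helper below is hypothetical, not spec text: the point is that the timestamp policy is fixed once per source, so every event from that source uses the same algorithm:

```python
from datetime import datetime, timezone
from typing import Callable, Optional

def make_producer(source: str,
                  occurrence_clock: Optional[Callable[[dict], datetime]] = None):
    """Pick the timestamp algorithm once, per source: occurrence time when
    the producer can determine it, current time otherwise. Every event from
    this source then uses the same choice, so receivers doing time-based
    ordering never see the two mixed for one source."""
    def produce(data: dict) -> dict:
        ts = occurrence_clock(data) if occurrence_clock else datetime.now(timezone.utc)
        return {"source": source, "time": ts.isoformat(), "data": data}
    return produce

# A source whose occurrences carry their own timestamp...
sensor = make_producer("/sensors/1", occurrence_clock=lambda d: d["observed_at"])
# ...and one that can only use current time; each is internally consistent.
logger = make_producer("/logs/app")

event = sensor({"observed_at": datetime(2019, 5, 1, tzinfo=timezone.utc)})
```

The failure mode in the discussion is exactly two producers for `/sensors/1` constructed with different clock policies.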
Does anybody care about having something that's a tagged binary format, like CBOR, and having a spec for it? Because that's the most direct mapping we would have from JSON into something that's binary and compact without needing any schema, et cetera. So if anybody would be interested in that, that's a spec I would still be interested in getting into the spec set. Anybody have any comments on that? Not hearing any, so I'm going to interpret that as no objection. Ha ha ha. So there you go. If you want to do the work, I think adding more formats like that, or any more transports, would be good. Yes, it's one of those things where I think having a compact little schemaless encoding to complement JSON as an option would be super useful. And there are a few options we have for that: MessagePack and CBOR. But CBOR is an IETF RFC. So even though it's less popular, it's well-implemented, and it's a standard that we can rely on rather than just being a project thing. I like things that have specs to point to. And now there's a question: what about gRPC and Apache Thrift? So gRPC doesn't have an encoding of its own, it uses protobuf, and Thrift has one. And we can always add some, but specifically, something that's a fairly direct binary representation of the JSON model would be something I think is useful. So yes, I'll try to find the time to write the necessary variation; it's effectively going to be a variation of the JSON format for CBOR. Cool, sounds good. All right, and with that, let's jump into the Avro one. Now, I've got to be honest with you, I know nothing about this format. Does this fall into the category of a community standard, or is this going to fall into the proprietary group? Does the author want to speak to this? Does anybody know? I don't think Fabio's on the call. I do know, but I'm just... Yeah, I don't think Fabio's on the call, so go for it.
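Clemens's point that CBOR is the most direct binary mapping of the JSON model can be made concrete with a toy encoder for a small map of text attributes, following CBOR's initial-byte scheme for major types 3 (text string) and 5 (map) from RFC 7049. This is only a sketch to show the compactness; a real format spec would rely on a proper CBOR library:

```python
import json

def cbor_text(s: str) -> bytes:
    """Major type 3 (UTF-8 text string); this toy handles length < 24 only."""
    b = s.encode("utf-8")
    assert len(b) < 24, "sketch handles short strings only"
    return bytes([0x60 | len(b)]) + b

def cbor_map(d: dict) -> bytes:
    """Major type 5 (map) with text keys and values, size < 24 entries."""
    assert len(d) < 24
    out = bytearray([0xA0 | len(d)])
    for k, v in d.items():
        out += cbor_text(k) + cbor_text(v)
    return bytes(out)

attrs = {
    "specversion": "0.3",
    "type": "com.example.ping",
    "source": "/demo",
    "id": "A1",
}
encoded = cbor_map(attrs)
# CBOR drops the quotes, colons, commas and braces that JSON needs,
# so the same map is noticeably smaller than its JSON text.
assert len(encoded) < len(json.dumps(attrs).encode("utf-8"))
```

No schema is needed to decode: the structure is self-describing, which is exactly the property that makes it a direct JSON-model counterpart.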
Okay, so Apache Avro is the format that the entire Hadoop stack uses for encoding data and for RPC internally. So Avro is both an encoding format and an RPC framework. As an encoding format, for instance, we use it in our product to archive events. Our Event Hub is Kafka-aligned and also has a Kafka protocol head, and we spool events out of our log into binary packages and store them in storage, and we do that, for instance, using Avro. Avro is a super compact format that's really good for time series data, because it ends up very small, which is also why it's popular for analytics. So effectively, if you're using anything from the Hadoop stable, you will be able to use Avro as it is, and the advantage of Avro is that it is quite compact. It also has the advantage that it can carry its own schema. They have a schema language, and you can go and create a container with Avro where you package the schema up front, and once you've read the schema, you have everything you need to decode the following binary. So you can use it with an embedded schema or without; it's fairly flexible. And there are multiple implementations of it. Yeah, in Apache there are implementations for a bunch of languages, and that's mostly where that all happens. It's a project in Apache. Okay. All right, in that case, is there anybody on the call who'd like to comment on the PR itself? I haven't reviewed it yet. That was my next question: has anybody had a chance to review it? So I'm sympathetic to it, but I haven't reviewed it yet. So I can't say how much I like the details. I like the intent; I don't know how much I like the details. Okay. Because I'd like to at least have one person admit that they reviewed it before we approve it. I would insist that we do that.
At least one, yes. Is there anybody on the call who would like to admit that they reviewed it? Okay. Well then, we're going to have to wait. And I guess the only request here is that people take the time to at least review it. Personally, I apologize, I have not had a chance to review it myself either. There's a typo right on line 94. 24. Oh. Thank you. I'm with Clemens on this. I mean, yeah, I completely agree we need it. I'm completely maxed out for the next week or so, otherwise I would have jumped in. Okay, I appreciate that. I don't think we necessarily have to review this or approve it right away. Obviously I tagged it as post-1.0, I'm sorry, as "try for 1.0", I believe, because it's obviously not necessarily required, but the more we get in, the better. So, you know, obviously fit it in as you guys can, but it's not urgent for 1.0. Right. Any other things we want to talk about relative to this, then? Okay. So it sounds like we just need to find time. Cool. All right. The next one is mine. So I'll let you guys read this. I just wanted to add a little bit of clarity around type. I'm trying to remember why I wrote this one. Oh, I think I wrote this because, hold on a minute, I was trying to address issue 188, and I think they were a little confused as to whether type was related to the actual occurrence or to the CloudEvent itself, or something like that. So I just wanted to add a little clarity here that this is related to the occurrence itself, and there could actually be more than one event related to the occurrence. But anyway, this attribute contains the value describing the type of the event related to the original occurrence. So I tried to tie it back to the occurrence itself. There's nothing normative in there; there's no MUST or anything like that. But I tried to address it as best I could. If you want, what I could do is go back to the original issue, if you guys want to see what they were questioning.
Yeah, so you can read this. There was no question. I don't know, what do you guys think? Did it address his concern? Is it okay? Hate it? Keep it the way it was? What do you guys think? Jude, you're up. I like the fact that it specifies that one or more events can be generated with the same type. Yeah. That was not part of the previous description, so I like the new description much better. Okay, thank you. Yeah, it's funny. If anything, I thought that was actually the more critical piece, because an occurrence wasn't necessarily a one-to-one relationship. But there is the source, and multiple things can happen to the source, right? And so you now state that the differentiation between different events is necessary because one occurrence can cause multiple events. I think the differentiation between events is necessary because multiple things can happen to the source. You mean multiple occurrences, right? Yeah, multiple kinds of occurrences can happen inside the source, each having its own kind of event. And yes, there might be a special case where a single occurrence fires off multiple different events. But what you've formulated here seems to motivate the type by a single occurrence having different events fire because of it. Okay, I'm not quite sure I see that. But is there a wording change you'd like to see, or do you think the original text is more clear? What would you suggest? Well, you're introducing what seems like a constraint. You're anchoring it to, how should I say... What's the constraint you think I'm introducing? Because that definitely was not my intent. Yeah, no, it's like you're tying... it now seems to me like one or more events might be generated relative to an occurrence, and now you're trying to differentiate events because of the fact that you have multiple events resulting from that occurrence; that's why you need to have the type. And I don't think that's true. That's what it now telegraphs to me.
I see what you're saying. Okay, so let me rephrase to make sure I understand. You're saying that the current wording implies to you that the entire reason we have type is because we have more than one event from an occurrence. Yes, that's what it says to me now. And it shouldn't. Okay. Just out of curiosity, if I were to reorder those two sentences that I added, would that change it? Jim has a proposal; I like that better. Okay, I'm okay with that. What do other people think? So Jim, I assume you are suggesting replacing these two sentences with that one, correct? Well, I guess I was okay with the original language, but if we wanted to extend it a bit more... the more I was listening to Clemens, I sort of understood where he was going. Yeah, what we're trying to say is that stuff happens, this is how you identify what that stuff is, and a source can emit lots of stuff. Yep. So just a quick question: the MAY here. Yeah, all right, sure. Yeah, maybe. That's what I was wondering, whether you wanted it to be normative or not. Okay. Yeah. "The type is used", yeah. Okay, or yeah, "is used", I like that. What do people think about that new wording? You can read it there in the bold, all right? Any objection to that new wording? Okay, actually, hold on. So I was going to say, I mean, I still think we need the language of: it is to define the type of the event. Yeah, I mean, I think that's the fundamental purpose, yeah. I think I agree with that statement, though; this is Mehmet. I don't think this statement is really adequate. You have to define the event type; whether it is emitted or it is received doesn't matter, you should have the event type. Mehmet, you cut out a little bit on me there. Which part of the sentence are you worried about? I think you're saying that it's used to distinguish why the event was emitted, right? And I assume this is one useful use of type.
In other words, type could be used for other things too. So the question is really, have we defined the event type somewhere to begin with? And therefore it doesn't matter whether it is emitted or received; it doesn't matter what you do with the event. I think I see what's going on here. Maybe it's more that the type is used to sort of categorize the event rather than distinguish it; maybe "distinguish" is a strongly loaded term. I think "categorize" is a better word. That's for sure, yeah. What do people think about that? The use of the word "why" was kind of bothering me. Clemens, are you okay with that? Yes, yes, I am. Okay, anybody else have any comments? Maybe switch those statements around: the type is used to categorize the event, and sources may emit multiple events. Well, I don't know, I'll let you wordsmith that. Well, we're going to have to approve it, because I think it's small enough that we could probably approve it right now one way or the other. Anybody... okay, everybody okay with that? Okay, any... oh, Jude, your hand's up. Yeah, if you read the sentence out loud, it doesn't read correctly. "The type is used to categorize the event; sources may emit multiple events." Would that help? Yeah. What about that? Yeah, I want to drop the word "occurrence". Yeah, really? The word "occurrence". Yeah, that's interesting. Why? Well, I think this comes back to the categorization. Say you have, I don't know, an IoT sensor, and that sensor is going to emit events when something happens to it. Now, you could either say that the type says "activated" or "deactivated", or the type could just be "sensor". Yeah, so sometimes it's categorizing, which enables you to interpret what's going on, and sometimes it's very, very prescriptive of what's going on. But an occurrence, a thing, probably only emits one event. A source may have multiple occurrences, but an occurrence is a singular thing. Okay, pushing it down.
Well, okay, I think Scott's right, we should probably take this to GitHub, because it's not as easy as I thought it was gonna be. I don't think it's just a matter of a wording change, because the more I think about it, the less happy I am with this. In general, obviously it's true: sources can emit multiple events, period. I thought the "multiple" aspect of this was gonna be related to a single occurrence, but if that's gonna be up for discussion, then we should take it back to the GitHub issue. So let's defer this, because we have other PRs that don't necessarily require wordsmithing. Okay, but that was a good discussion. So the next one I don't necessarily want to try to approve today; I just wanna draw your attention to it. Some of us were talking at KubeCon EU about how it would be really nice if we actually gave some guidance on how to write what I was calling adapters, meaning: for popular events that are generated out there today, how do you convert them into CloudEvents? That way, in case there are multiple implementations of those adapters, we at least have some consistency across them, so they can interoperate. And that way, regardless of which adapter a particular receiver gets their messages sent through, they should hopefully get the same CloudEvent on the receiving end. So what I did is I wrote three different adapters: one for GitHub, one for GitLab, and one for AWS SNS. And I basically put down what I thought would be the right way to map each one based upon the data that was being sent along. As I said, I'm not gonna push to do this today, because two of these files are rather large; the GitHub one didn't even show unless you hit the "Load diff" button. But please take a look when you get a chance. In particular, the source and subject mappings were not 100% clear to me.
There were times when I had an option of being consistent with other events that are kind of related versus being more purist in terms of what value I chose there, and I wasn't quite sure which way to go. I did comment on this in the issue, or in the PR description itself, so please read that when you get a chance. Anyway, like I said, I'm not gonna ask people to review it right now, but I do wanna get it in there sooner rather than later if possible. So please review it when you get a chance. Jude, your hand's up. Over time, we'd have hundreds or thousands of different adapters. Is that the intention? Say that one more time? You cut out a little there for me. Over time, right, we'd have a ton of adapters, like hundreds or a thousand. Is that the intention for the adapters spot? So are you asking whether we're gonna try to have all possible adapters in the world put into our repo? Yes, but more than that: many people can contribute adapters, which is really good, but soon it will grow and become like a thousand, right? Yeah, so it is not my intent that every single adapter in the world should try to push a specification to our repo. I mean, if they want to, that'd be great; I don't think that hurts anything. I at least wanted to get some of the common ones out there, for two purposes. One is because they're very, very common, and I thought there actually might be multiple implementations of these types of adapters. For example, a GitHub one, I think, is a very, very popular one, and I think there might be multiple of those. But the other piece of it is that I found writing the PR to be an incredible learning experience.
It sort of highlighted some of the problems. For example, I think that's the reason why I wrote the PR around time earlier today: because it wasn't clear to me when I should be using the current time versus trying to dig through the event that I'm receiving from GitHub and find a particular time in there to use. And I realized that if I'd made that decision differently per event type, then the receiver is gonna get different data, right? That's why I pushed for that consistency aspect that we just approved earlier. So I thought doing this was a great exercise, one for me to make sure the spec made sense for these attributes, but also so that somebody who actually might write an adapter one day could look at the spec and understand exactly what goes into fields like subject. Because, as I said, as I was writing this, it wasn't clear to me when I should use subject versus source for a particular field. And so I think this, if nothing else, can help educate people if they go off to write their own adapters, even if they don't submit them to us. This will help them understand what we meant when we said to fill this field in with this type of value. Because we have examples, but I think this type of example is even more useful than just the one or two examples we have in the spec itself. Does that make sense? Yep, definitely. Okay, I like that. Okay. Mehmet, did you wanna say something, or were you just off mute? No, I'm okay. Okay, just double-checking. Like I said, please review this when you get a chance. In particular, I wanna pick on Scott, because I know you've been very heavily involved in the Knative adapters, and I wanna make sure that you're okay with this direction, because there are definitely some choices in here that I'm not sure you'd necessarily agree with. But I wanted to draw your attention to it and pick on you a little.
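The kind of adapter being described, mapping a native webhook into CloudEvents attributes, might look roughly like this. The specific mapping choices below (which payload field feeds source versus subject, preferring a payload timestamp over the current time, and the `com.github.` type prefix) are illustrative assumptions, not the mappings from the actual PR; the GitHub header names are the ones GitHub documents for webhook deliveries:

```python
from datetime import datetime, timezone

def github_push_to_cloudevent(headers, payload):
    """Hypothetical adapter: map a GitHub push webhook to CloudEvents attributes.

    The field choices here are exactly the judgment calls discussed above:
    what goes into source vs. subject, and whether "time" comes from the
    payload or from the clock. These picks are illustrative only.
    """
    return {
        "specversion": "1.0",
        # Namespacing the type under the producer's domain is one of the
        # open questions in this discussion; shown here only as one option.
        "type": "com.github." + headers.get("X-GitHub-Event", "unknown"),
        "source": payload["repository"]["html_url"],  # the repo as the source
        "subject": payload.get("ref"),                # e.g. "refs/heads/main"
        "id": headers["X-GitHub-Delivery"],
        # Prefer a timestamp from the payload when present; fall back to "now".
        "time": payload.get("head_commit", {}).get("timestamp")
                or datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": payload,
    }

event = github_push_to_cloudevent(
    {"X-GitHub-Event": "push", "X-GitHub-Delivery": "abc-123"},
    {"repository": {"html_url": "https://github.com/example/repo"},
     "ref": "refs/heads/main",
     "head_commit": {"timestamp": "2019-05-01T12:00:00Z"}},
)
assert event["type"] == "com.github.push"
assert event["time"] == "2019-05-01T12:00:00Z"
```

Two adapter implementations that make these choices differently would hand receivers different events for the same webhook, which is the consistency problem the proposed adapter docs are meant to prevent.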
Yeah, it's the same problem we've talked about before, where we're namespacing the adapter's type on behalf of GitHub. Yeah, but that's a lie, right? It's not GitLab that defined this type; it's you that made the choice when you did this adaptation. Right, and I think that's one of the things I would like us to discuss at some point, probably initially through the PR itself, unless we have a lot of free time on these calls. But I will mention that I did reach out to GitLab and GitHub directly to see if they'd be willing to collaborate with us on this. And in my mind, if we can get them to buy into the notion of saying, yeah, of course it should be GitLab or GitHub, then that's their way of saying it's okay for us to use their namespace, because that's exactly what they would do if they supported this themselves. So that was another sort of backward intention of all this: to try to get their buy-in as well. Have you gotten responses? Let's see, GitLab is definitely interested in participating; I think it's more about their time right now. And in fact, they went one step further, and one of the guys suggested that maybe we should submit a pull request to have GitLab support it natively, which I thought was really cool. As for the GitHub folks, I made initial contact with them, thanks to Clemens, thank you very much. And they didn't seem to object to the idea, but they also didn't jump up and down and say yeah, yeah, yeah either. In fairness, I just talked to them last Friday, so I'm still waiting for them to get back to us, and I suspect they're just busy. We'll see how it plays out. And I need to actually hit up Tim to make sure he's okay with this mapping I did here, obviously. I did send him a note, but he hasn't had a chance to respond yet; he may be busy. Was it Tim who opened the big issue about AWS adoption of CloudEvents? Did you see that one? I did see that, and we just talked about it, I think, last week.
Yeah, well, we talked about it briefly. And that's another reason why I reached out to him: because I thought they may have already done this mapping, and so he could tell me where I went wrong. Cool, okay. So anyway, I'm not gonna rush this through, other than to say I do think it's a great learning exercise, and I think it'd be useful for the community in general to have this kind of thing. All right, any last-minute comments on that before we move on? All right, in that case, I think this might be the last one for today. Eric, would you like to talk about your primer change for persistence? Sure, I can do that. A while ago, Doug, you asked that we bring up questions that we had. In particular, I was thinking a lot about event sourcing, and about writing the CloudEvents that I was receiving down into a log. And I was thinking that there are a number of things that don't get addressed when that happens, or that kind of get subtracted from the context. Things like: who sent this, and what rights do they have? Was it modified in transit? And in order to receive it, I've removed the confidentiality that was used to send it, the encrypted connection, all these sorts of things. There are a lot of very deep considerations we could go down there. But particularly for CloudEvents, and particularly in the discussions that have resulted, it seems like that's not something for the core spec to address, certainly not in 1.0. What I asserted here is that it is expected that there will be extension attributes (and I kind of left that extension part implied) that will help individuals using CloudEvents to address these concerns, without actually trying to address them in the spec itself. Security and everything related to it is a frequently evolving and changing dimension of the field.
So trying to canonicalize that within this spec, when it's really just trying to declare how communication happens and what context metadata is important for that, seemed like it was out of scope. So I have questions related to this. Let's say we're implementing an event store, where events, similar to Kafka, get persisted. And we have a mechanism today that encrypts data at rest. And in a particular SKU we also have a way for you to give us a key to store, to control how the encryption works. And we have input authorization and output authorization for who can get to that data. So we effectively have several angles of security around this. None of those, however, are informed by the event per se and what's in that event. So what do you expect there to be as attributes that control the behavior of data on disk? I could be wrong, but it seems to me that that's possible for you to do because you control the entire ecosystem, and the definitions of how to encrypt and decrypt and manage the permissions have all been made consistent through the software. If the writing and the reading were being done by different parties, then there would need to be some kind of agreement over the way that's done. And it could be that that's all informally agreed outside of this kind of specification and outside of any standardization. But if the parties don't necessarily know, as a writer, what parties are going to read, or as a reader, what parties are going to write, they may want that kind of support. Go ahead.
So I wonder, I really wonder, whether you are not trying to hint at mechanisms that kind of exist for messaging systems, like end-to-end encryption and end-to-end identity, because that seems to be where this is going. Because, at least in my mind, there's no difference between whether you keep a message on the wire or whether you put that message onto a disk, store it, and do store-and-forward. The wire-established context is something that, in common pub/sub systems, kind of terminates at the gateway. And then, as you hand the message out to subscribers, you don't carry that established context forward, like the connection context; you don't hand that forward to the subscribers. Perhaps another way for me to say this is: I think it's a declaration that this is an exercise left for implementers, and that we're not going to try and solve it within the CloudEvents spec. Though I don't want to declare that we'll never provide support. It seems like there's enough of a need for solving this that support could be helpful, but for version one we're certainly not going to dip our toes in. Yeah, that's something I completely agree with: we shouldn't be tackling this. But I think this is a variation of the discussion we've had two or three times about identity and encryption, because we effectively had either PRs or issues specifically proposing that we introduce some notion of end-to-end encryption or end-to-end identity. And we scoped both of those out, at least early on, knowing what introducing those things would do to other standards, and then decided that we might want to take them on post-1.0 at some point, if there's enough interest. And I think it's a worthy thing to ultimately deal with, but we don't have the necessary standardized infrastructure right now to do it, because we need to have key registries, and we need to be able to go and talk to those key registries.
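The extension-attribute escape hatch the primer text leans on can be sketched concretely. CloudEvents only defines the extension mechanism itself; the attribute names below (`authclaims`, `transportsecured`) and the `annotate_for_persistence` helper are hypothetical, invented here purely to illustrate how transport-time security context might be captured before an event is written to a log:

```python
# Sketch: preserving transport-time security context via hypothetical
# CloudEvents extension attributes when persisting an event. The spec
# defines the extension mechanism, not these particular attributes.

def annotate_for_persistence(event, sender_identity, was_encrypted_in_transit):
    """Return a copy of the event with hypothetical security extensions added.

    Once the event leaves the connection that carried it (e.g. it is written
    to an event store), facts like the sender's identity and whether the hop
    was encrypted are otherwise lost; extensions can carry them forward.
    """
    persisted = dict(event)
    persisted["authclaims"] = sender_identity            # who sent it
    persisted["transportsecured"] = was_encrypted_in_transit
    return persisted

stored = annotate_for_persistence(
    {"specversion": "1.0", "type": "com.example.order",
     "source": "/orders", "id": "9"},
    sender_identity="client-42",
    was_encrypted_in_transit=True,
)
assert stored["transportsecured"] is True
assert "authclaims" in stored
```

This is exactly the "exercise left for implementers" framing: writers and readers would still need an out-of-band agreement on what such attributes mean, which is why the primer stops short of standardizing them.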
We need to have key rolling, and there are all kinds of complications there. So I'm wondering, reading that text, I'm not sure how much it helps me, but maybe it helps others. So are there specific requirements that you'd like to see changed? Maybe it's my mindset: for me, persistence is something that happens all the time, so I don't find it special. So maybe I'm just not the audience for that text. Okay. Is there anybody else on the call who'd like to voice an opinion? We have about 30 seconds left. I'm trying to decide whether we should vote now or wait. I'm inclined to say we wait just a little, because I'm not sure everybody's had a chance to review it yet. Eric, would you be okay if we wait till next week? Eric, did we lose you? I'm here, I'm happy to wait. And I wouldn't mind the feedback at all. Okay, cool. In that case, please, when you guys get a chance, take a look at this. It's in the primer, so it's non-normative, but we do want to make sure the text accurately reflects our current thinking. All right, with that, it is the top of the hour. So one last little check: did I miss anybody in the attendance list? Wow, this is odd. Your name is on my... there it is, Kristoff. I was wondering why your name wasn't up here; it took a second to be updated. Anybody else missing from the attendance? All right, cool. In that case, we are done. Thank you guys very much, very productive meeting. Thank you. All right, we'll talk again next week. Bye, everybody.