Oh, actually, let me share my screen. I forgot to do that. One sec. All right, it's three after; I think I have everybody on the attendee list. All right, let's go ahead and get started. This is Joe Sherman, I just called in. Oh, hey Joe, missed you, thank you very much. Thanks. All right, so first up on the agenda, community time. Is there anybody on the call from the community who'd like to bring up a topic for discussion, or an issue, or just something? I don't think we have anybody who's not a normal participant, unfortunately. Hi, I'm here to represent the community today regarding expansions and what we talked about. I'm currently helping a startup, and they're exploring the idea of using CloudEvents inside the application too, as in events between functions could be CloudEvents too. And we're right now exploring how the payload, the data field in CloudEvents, should be structured. And I wanted to ask the community if anybody's done any research into this and if they have any schemas or examples they could share. I know this was also brought up during the SDK effort, because there was a talk about converting events to CloudEvents, converting S3 file events, put-object and stuff like that, to CloudEvents. Is there a schema for how data should look in that context? Is there anything else? Yeah, I want to avoid the terrible XML/SOAP slope and whatever, so that's my last thing. There's a very slippery slope that you're on towards exactly that. Where, like, talking about common schemas means you need to have a way to anchor them. You're kind of right on the slippery slope towards WSDL and all that complexity. So just be aware. Yeah, that's what I'm afraid of. Because as long as the character is preserved, and this is just my opinion, as long as the character is preserved of these events being one-way things and you don't start building an RPC framework on top of the CloudEvents envelope, I think that's okay.
But from a standards perspective, I think we explicitly don't want to be concerned with what the data contains; that's really just up to you. Does that help at all? I'm wondering whether this should be started as, like, an issue or a Google Doc or something that lets people brainstorm and have a back-and-forth discussion someplace. I deeply agree that CloudEvents shouldn't add any restrictions regarding the payload. But I was just curious; there are obviously no best practices because this is super early. There is not even an SDK yet. But I was curious what ideas people have, and I want to avoid as many mistakes as I can. At least I should make new mistakes. Maybe something will help here. Since our implementations are compliant by way of a mapper, we do have some internal guidelines on how we think about structuring event payloads. There's a common set of properties at the outer level, then the emitting service has a set of properties that it defines, and then the respective function has its own. We call this the ABC model. The A is, effectively, you define for the platform what payload parameters always need to be in an event; that might also define for yourself how you map those from your payload into the CloudEvents envelope, what you promote. Then B is, effectively, for the respective module that emits: all the functions need to go and emit the same metadata for an event so that you can go and parse them together. And then C: the individual function that emits stuff can have its own data. So inside the cloud event itself, you have your own substructure, which gives you a way to enforce commonality across a system and commonality across a module. Clemens, would that kind of stuff that you guys are working on be useful to contribute to our primer or something along those lines that could help provide that guidance for everybody?
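As an illustrative aside: the A/B/C layering described above might be sketched like this. All field names inside `data` are hypothetical; only the outer envelope attributes (`specversion`, `type`, `source`, `id`, `data`) come from the CloudEvents spec.

```python
def make_event(platform_attrs, module_attrs, function_payload):
    """Compose an event from three layers (hypothetical convention):
    A: platform-wide attributes every event must carry,
    B: metadata shared by all functions in the emitting module,
    C: data specific to the individual emitting function."""
    return {
        # CloudEvents envelope attributes (real, from the spec)
        "specversion": "1.0",
        "type": module_attrs["event_type"],
        "source": platform_attrs["source"],
        "id": platform_attrs["event_id"],
        # Layered substructure (assumed internal convention, not spec)
        "data": {
            "platform": platform_attrs,   # A: enforced system-wide
            "module": module_attrs,       # B: enforced per module
            "payload": function_payload,  # C: free-form per function
        },
    }

event = make_event(
    {"source": "/billing", "event_id": "1234", "tenant": "acme"},
    {"event_type": "com.example.invoice.created", "module_version": "2"},
    {"invoice_id": "INV-42", "amount": 99.5},
)
```

The point of the layering is that a consumer can validate the A and B layers against shared schemas while leaving the C layer entirely to the emitting function.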
Yeah, I have to, I think we even have a bunch of, like, there's a stack of documents inside of Microsoft that describe, there's a whole philosophy of how our event pipelines work. And I don't think they're really secret. It's just that they're not made for public consumption; they're made for internal consumption. I mean, that's still 30,000 people. Yeah, I'll have to dig them up and find who the author is and whether they're okay with it. But that's something that I should go and take a look at. Yeah, make a note in the minutes that I'll go and see whether I can find the ABC event schema rules, because I think the abstract rules are really useful. Even if we don't wanna be specific here about how they should look, I think some best practices would be useful. Yeah. Yeah, I can confirm, I did look at what Azure is doing with CloudEvents and how you're structuring events. And I looked at what AWS had on the blog regarding how their internal events look. And I was looking for some more, like, best practices, mistakes that were made and stuff like that. Anything like that would be welcome. Yeah, you're already looking at the result of several years of weeding out the mistakes. But yeah, I'll take a look once I'm back from my break. I'll take a look at what I can mine in terms of driving principles. I think that's the valuable stuff. All right. Thank you. All right. Thank you very much, Vlad. All right, Clemens. Any other community-time-related topics? All right, cool. Moving forward then. Logo. So Austin is not able to make it today. However, I believe this link he pasted here may be pointing to the latest version. It's weird that it's seven days old, though. I'm guessing he pasted the wrong link, and I think this last comment is actually the latest one he has. I believe he did tweak some of the spacing around the C and the E.
And when I was talking to him through Slack earlier, because he couldn't make the call, the suggestion was to present it to you guys here, look it over, think about it. And then I think at some point, whether it's next week or during the week or something, we'll do some kind of vote on it. Other than that, I'm not quite sure what you guys want to do with it. Any comments on this, suggestions on different ways to proceed, or commentary in general? Do people prefer this to the current one we have? I mean, does it look like it's heading in the right direction? Do people not care? Clearly, no one cares. I think it looks a little bit better with the updated font. I still think that we shouldn't have a common way to describe event data. Yeah, I think the sticker's gonna be just the first two bits, not the tagline, yeah. Okay, well, I'll tell Austin that at least one person thinks it's heading in the right direction, no other complaints, but obviously take time to think about it. And we'll probably review it again on next week's call, if not through the email list or something like that. But, all right, any other comments before we move on to the next topic? Alrighty, cool. So Austin obviously is not on to talk about any SDK updates, but I don't think there are any, other than to say he did a Doodle poll, and according to the Doodle poll he believes that the next call is gonna be July 18th at 1 p.m. Pacific. I will nag him to send out a calendar invite to make sure that gets put out there for everybody to accept. Anybody have any other comments or issues to bring up related to the SDK work? Okay, moving forward then. Kathy, I believe you're on the call. Is there anything you'd like to update this working group on relative to your workflow sub-working-group? Oh yeah, hi. So we discussed, I think the meeting minutes were posted to the same document.
We discussed multiple use cases and then derived some requirements for what should be put into the workflow specification. Yeah, that's about it, basically. And also some consensus on what would be needed for a workflow specification. And those are documented in the meeting minutes. It's right here, basically, right? In the very next section. Yeah, if you scroll down, you are going to see... scroll down a little bit more. A little bit more. Yeah, here, you're going to see some... oh, scroll up. Sorry. Okay. Up a little bit more. More, okay. Yeah, here. So here we have the consensus, and then we have some potential specification targets. Something like that. People can take a look. And then I think in the next meeting, we're going to have people present different models and then go through comments on the spec. Yeah. So if people have any comments, feel free to post them on the Google Doc. All right. Sounds like you're definitely in the good stage of requirements gathering and figuring out exactly the scope of what you're doing. So that's all goodness. Any questions for Kathy? Yeah, I would like to ask if there are some questions on the workflow from the team. Yeah. From the members. You guys are awfully quiet today. I'm not hearing anything. Okay. Thank you, Kathy. We'll keep moving forward then. I don't believe there's anything issue-maintenance-wise we need to deal with today. At least, I apologize if there is something I missed. I'm traveling, so I didn't get a chance to do the normal scrub. So why don't we go ahead and jump into PRs? Clemens, would you like to talk about your qualifying protocols PR? Do you remember this one? Yeah, no, I do. I keep looking at it. Yeah, so basically this is about how do we achieve the goal of interoperability?
And what are the criteria for inclusion of protocols into the core of the specification set? And when does it make sense for those protocols to be included? And I tried to make a basic, fair rule for when protocols qualify, without reading the text to you, which you're all very capable of doing. I think the general direction is that the specification needs to be useful. So if we define a transport binding, and if we define encoding mappings, those specifications must be useful for more than one implementer, because otherwise it just becomes an advertising surface. And that's something that I think this project should avoid, promoting more protocols, because that's not helping interoperability. Everybody can go and build their own protocol, of course. And everybody can go and define a CloudEvents binding if they want to. The question here is whether that CloudEvents binding becomes an official one, which means that its specification lives in the main CloudEvents repository. So this is what this is about. And I think it can only do so if there is a realistic expectation that there will be other parties (I'm not sure whether it's at least one party or at least two) who are unrelated to that project and who will be able to implement a mapping to the protocol using that specification, without necessarily having to talk to the originators of that project and its protocols. So for instance, I'll give the example of Kafka. Kafka is not an open standard in the sense that the protocol is an open standard; it's a project, but it's broadly deployed. And there are actually secondary implementations of the Kafka protocol that are not made by the Kafka team. And my team, as a party that implements the Kafka protocol without having a close relationship with the Kafka project, just reading their specs, we would actually benefit from having such a definition.
Without a project having a formal protocol definition that is a pure wire specification, one that is locked down so that it doesn't change when the code of the project changes (and Kafka qualifies for that), it's difficult to see how a CloudEvents specification, a general one that sits in the main repository, helps. I think with formal interoperability standards, protocol standards, that story is a very different one. MQTT and HTTP and AMQP are effectively protocol-first efforts, where you first define the protocol and then you define a variety of different products that all use it. And there, additional specifications and extension specifications that clarify their use on that protocol clearly help. In the concrete case of Pulsar, and I have to go and pick on that one because that's the example (the other PR that's there is OpenMessaging), it's not clear to me how the existence of those specifications helps anybody but that project, and they are therefore effectively just advertising vehicles for those projects inside of the CloudEvents repository. So I'm trying to write a rule that is driven by the usefulness of those specifications, and it's also driven by the spirit of: not everybody ought to go off and just create their own protocol, because that doesn't help interoperability. What we're trying to do is unify things. We're not trying to accommodate 28 different protocols, but rather really pick favorites so that we achieve the highest possible interoperability. And effectively the favorites that we pick are different enough that they cover the broadest possible area. We have HTTP for request/response web messaging, we have MQTT for lightweight pub/sub, and we have AMQP for transactional messaging, and there are other protocols that fill similar roles.
But we should not allow for the proliferation of an arbitrary number of protocols, because that's actually counter to the goal that we ought to have here, and that is to drive interoperability for eventing. So that's what I tried to do with that. Any questions for Clemens? Comments? I have a question. So currently, which ones are considered the standard, I mean the protocol standards, for this cloud event spec? So we have MQTT, we have AMQP, and we have HTTP as the canonical ones. And NATS; I'm not sure how many implementations of the NATS protocol there are, because we have NATS in the repo, and I'm not sure whether that slipped through the cracks. But my belief is that there is actually more than one, because NATS has a pretty well-defined protocol specification. So I'm assuming that there's more than one implementation of the NATS protocol, and that would also make NATS kind of qualify. This is Colin; there are. And with NATS, we also are doing some pretty heavy Kubernetes integration with operators, and we also integrate with Prometheus. So I think that will help with the CloudEvents efforts in the CNCF. Yeah, and so that's why, I think NATS is one that didn't trigger this problem with me. But if there are efforts, like open source projects, who then choose to go and define their own protocols rather than adopting existing ones, and it's not clear how that new protocol is rationalized versus the existing ones, then blessing those with official support from the CloudEvents project is actually not helping our joint goals of better interoperability.
If the protocol itself is already category-defining, as Apache Kafka is, that's a different story. But otherwise, if it's not a proper protocol definition that stands on its own and has different implementations, then I'm more skeptical, because my goal is to drive interoperability through protocols and not interoperability through names. So I see here, one of the statements mentioned WebSockets, or events on the web; are those standard? Hang on. HTTP, I think that's, yeah, that's good, right? I think everyone is using that. MQTT is fine. How about WebSockets; is that generic enough, is everyone using it? So WebSockets is a very broadly adopted mechanism. And inasmuch as we're mapping to HTTP, our mapping to HTTP is actually generic; it works for HTTP 1.1 and HTTP/2. I took WebSockets in, even though we don't have an official binding, as an example of a protocol that everybody's using, and everybody means it's very broadly used. And there will be the need, and people will go and put cloud events onto a WebSocket. The question is whether we need to have a proper binding for it; maybe we'll have to eventually. But that's a protocol that clearly qualifies for us to go and do work for, because it is so broadly adopted and because it is not tied to one single particular project; it's an interoperability protocol. I have a quick question. My understanding is that the events would be part of the payload. And in most of these messaging protocols, the payload is actually a serialization problem for interoperability, versus the transport problem. If I can get it to the producers and the consumers, regardless of the transport, I still need to have a common serialization.
And this is where most of these are: how do I represent a stream, a map? Do I use just a binary transport and use something like Google Protocol Buffers or some other serialization to represent it, where I don't care about the language or the operating system? Or do I just keep it as JSON and do a string serialization? How is that being addressed to make sure you will get that interoperability, where it might come in as a WebSocket and then go out potentially as AMQP, where I want the payload of the event to be the same? So I recommend that you read the AMQP specification and the MQTT specification, because they are actually doing exactly that. So those two specs take a cloud event, our infoset, which is our abstract model, and actually project it onto an AMQP message and onto the MQTT PUBLISH packet. They basically take things from our abstract model, like for instance the content type, and project them onto the appropriate property in the MQTT PUBLISH packet and onto the appropriate property in the AMQP message. So it's not just that you take the event and stuff it into the payload. We're actually defining rules for how that works. So we have media types that are appropriate, which indicate to the receiver that this is a cloud event that's carried in the payload, if we're using the structured mapping. And if you don't want to use the JSON serialization because you're actually carrying significant-size data payloads that are binary, we have a binary mapping that we define. So basically, what the transport mappings that we have in the repository today do is define rules for how to take a cloud event and project it out onto the HTTP message, onto the AMQP message, and onto the MQTT PUBLISH packet. Okay, I'll definitely have a closer look at that.
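As an illustrative aside, the two projection styles being described can be sketched for the HTTP case. The `ce-` header prefix and the `application/cloudevents+json` media type come from the CloudEvents HTTP binding; the helper functions and the sample event are otherwise hypothetical, a rough sketch rather than the binding's full rules.

```python
import json

def to_binary_http(event):
    """Binary mode sketch: attributes become ce-* headers, data is the body."""
    headers = {"ce-" + k: str(v)
               for k, v in event.items()
               if k not in ("data", "datacontenttype")}
    headers["Content-Type"] = event.get("datacontenttype", "application/json")
    return headers, json.dumps(event["data"]).encode()

def to_structured_http(event):
    """Structured mode sketch: the whole event, attributes and data, is the body."""
    headers = {"Content-Type": "application/cloudevents+json"}
    return headers, json.dumps(event).encode()

event = {
    "specversion": "1.0",
    "type": "com.example.object.created",
    "source": "/storage/bucket-1",
    "id": "abc-123",
    "datacontenttype": "application/json",
    "data": {"key": "photo.png", "size": 1024},
}

bin_headers, bin_body = to_binary_http(event)
struct_headers, struct_body = to_structured_http(event)
```

Either way, an intermediary that understands the binding can reconstruct the same abstract event and re-project it onto a different transport, which is the multi-hop integrity point made below.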
Yeah, so it is really explicit, and the reason we're doing that is because we want to have broad interoperability, and we just want to make sure that if a cloud event gets stuffed into an HTTP request on one side, it comes out understood as a cloud event on the other side. And more importantly, if you have multi-hop routes over messaging systems where the message also gets stored on disk (and sorry for the church bells in the background), if it gets stored on disk and then gets resurrected and then read by someone else, that basically the integrity of the message is preserved and never gets lost. And for that, it's required that we actually have a clear notion of what the projection looks like. So take a look at the specs. Okay. All right, any other questions or comments? To me, this is one of those things where it's a potentially very touchy subject, political in nature. But if we are gonna have some sort of bar at all in terms of what we accept for specifications into our repo, then obviously we'd have to very clearly define that bar, and I think this is a good step in defining that bar. So let me ask the higher-order question. Do people object to defining a bar at all, or should we allow everything in? This is John. I totally agree that there needs to be a bar. And I think the rationale was just explained. I think that's very important to the goals of interoperability. Okay. All right. Anybody else wanna comment on that? Okay. I'm not hearing any objections to the idea. Oh, I'm curious if there's past experience of specifications being used for advertising efforts. I can think of one or two, but I don't really wanna mention them by name because I don't wanna pick on people. Yes, this has happened. And I've seen it more than once. It's not unusual that people would try to piggyback their things onto larger trains. I also don't wanna necessarily mention names. This is a very common occurrence.
All right. Well, thank you for addressing my ignorance. All right. Yeah, I guess I can chime in, relative to this organization at this time, right? I'm more on the usage side of the fence. And for sure, the number of vendors that come to me pitching their stuff, and how that gets pitched... yes, I see variations of this regularly. Everybody wants to promote their stuff, and they take every opportunity they can to do so, yes. Exactly. I think it's good to define some criteria, but I think it's probably better for people to look through this to see whether, you know, all these descriptions are good for everyone. Yes, I was not gonna push for a vote today, so don't worry about that, Kathy, yeah. The consequence of not having a rule on this, of not setting that bar now, is that down the road, when the hawk has left the building, which means Doug is no longer our fearless leader because he's moved on to greener pastures, effectively you can get into compliance with CloudEvents by submitting a spec that maps to whatever proprietary thing you have, and then you can go and put the compliant badge onto your thing because you had a mechanism to sneak it into the repo, and that is just not right. Yeah. Okay, so I'm not hearing any objections to defining a bar. So now the question is: is the bar that Clemens has defined here the appropriate one for us to have at this time? As I said, I'm not gonna push for a vote on this today, because I suspect most people have not had a chance to read it. But given what Clemens has described, does it sound like it's the right general direction for people, or do people have concerns with the way he described it? Because if they're concerned, I'd like to try to bring those up now and see if we can address them quickly. The only concern I potentially have here, and it's not that people wouldn't, but it feels like we're making a bit of an exception for Apache Kafka specifically.
And I understand the reasoning behind it here. I'm just wondering, are we concerned at all about that slipping in? I'm struggling with that, too. I think Kafka is special, and I don't currently see, at least from where I sit, anything that is even close to it right now in the messaging space in terms of being category-defining. Like, there's a pattern that is currently served effectively by, if you look at large installations, just the volume of stuff that goes through them, three large implementations, right? There's Kinesis, there's Event Hubs, and then there's Kafka, which is what everybody else is using. So that protocol has become de facto the way you do event ingest and reading. There wasn't a formal standards body formed that first made that protocol; it kind of emerged out of a multi-company collaboration that is the Apache Kafka project. And they have had the discipline to put up a protocol spec that actually holds up to third-party implementation, as we've proven. So that's why I'm taking that as the example, and actually the sole example I can think of that I would consider worthy. But I agree that it isn't a protocol-first effort. So I agree that it might look a little iffy for there to be an exception, but I just think, because it's category-defining, and it's hard to contest that it's category-defining, it's the thing I call out as special. And as I said, I'm happy to take any corrections to that stance. And if we think we need to go and make another rule, or we're not gonna make an official binding for Kafka but we treat Kafka as an extension, where we treat effectively anything that's project-proprietary as an extension, a thing where the respective project needs to go and define its own mapping, that's fine with me as well.
So if we want to have the rule that there's got to be a standard protocol, and the standard protocol needs to come out of a multi-party standardization effort, and only then will we go and do the mappings, that will work for me as well. I just think that we will help ourselves by carving out that extra little corner for Kafka specifically. So, did you try and approach the Confluent team to try and potentially maybe even create a standard? Because I assume you guys have a Kafka-compatible implementation; every version change is probably also a great challenge for you. We also have a Kafka-compatible implementation. So would it make sense to try and get them to write some protocol? So I would approach the Kafka project per se and not necessarily one company that contributes to it. We actually have committers in Microsoft, who came from LinkedIn, for the Kafka project. And yes, we're contemplating whether we wanna go and propose making that a standard. We certainly have the intent of showing up in the Kafka project and raising our voice on the protocol side of this, but that's in the early stages. So, as a third party with an implementation of it, a server implementation of it, we certainly have an interest in having that go in an orderly fashion and not just go wherever the Kafka project wants things to go. At the same time, they actually have a pretty good community process now, where they have proposals; they have a real process that they're following, which is pretty involved. And Confluent, which is a big driver, is now shipping services, which is gonna curb their exuberance in adding things to the protocol. And so I'm happy with where things stand, but kind of promoting that into a standard thing would certainly be helpful. So we'll have to have a conversation. It's a conversation that we want to have but we haven't had yet.
Yeah, that's why I'm not sure you want to give such a discount and make it an exception. Maybe you actually want the Kafka group to come and form a real standard, or even a draft of the protocol. You may be right. And I think you're probably making a good case there. Yeah, you want to motivate them to form a draft protocol versus giving them the discount so that they can do whatever they want. So how do people feel about that? Because that's something that, so I want to have a Kafka binding eventually, because I think that's important for us. It's just the question of where does that go? Do we make a draft and never, because that's the other way to do it, right? You make a draft, you put it into the repo, but never make it part of the official standard; you just have something for people to look at. Or you say, we don't care about Kafka. The problem is, it is such a dominant thing that I wonder whether we are damaging our own effort if we don't have a mapping for it. Yeah, I worry about other very, very popular things popping up in the ecosystem that have no intention of ever going to an official standards body, where it'd be really foolish for us not to have a specification for them. Yeah, and this is the only case right now that I can think of where we would exclude ourselves from the ecosystem, and we have no leverage, or very little, against the Kafka project, because they're happy even without us. So, yeah, as I said, I'm torn. Yeah, the problem here, of course, is that it gets very, very subjective in terms of what's a de facto standard versus just one project's thing. Yes, that's true. Yeah, that's the difficult one. So clearly people probably should think about, I guess, that sentence I highlighted there, or the thought in general. Yes. Are there any other high-level concerns or questions for Clemens? Because he's gonna be vanishing for, I think, three weeks. That's correct. Yeah.
A lot of people just need time to look through the actual text itself. This is Colin. So let's say you have another protocol, another messaging system, that wants to integrate down the road in a few years. Is there gonna be some sort of defined thing that this messaging system or transport needs to meet in general, just generically, just for forward thinking, or would they have to go through the same process we're going through now? I think they would have to meet the same bar in terms of principles. Like, their protocol... so you can't come with a messaging system. How about that? You can't come with a messaging system. You actually need to come with a protocol, and that protocol needs to be implemented not just by that messaging system but by at least, you know, two or three independent efforts. That's actually the bar that we put up in OASIS. If you want to promote a specification into an official, you know, released spec, you must prove, as the standards project, that you have at least three independent implementations of that thing. And I think that's a good measure for interoperability: you can't come with a system, you can't come with an implementation, you really need to come with an interop spec. So yeah, I think that's a good point. I think it's better to state those criteria clearly, spell them out, one, two, three, or one, two, three, four, five, so that in the future, if someone would like to propose a new transport or protocol, you know, we can see whether it satisfies all those criteria. Kathy, I'm assuming you're asking, in essence, that a numbered list appear somewhere in this document. Is that correct? Yeah, yeah. So it's very clear and objective. Is that okay with you, Clemens? Oh, the number of implementations? No, well, just the list of criteria, like the number of implementations. Oh, yeah, yeah, yeah. Yeah, I can go and add that.
What I wrote here is effectively what's meant to be put into the primer. I don't know exactly where it goes, but yeah, I think I can go and make a bullet list of principles. Yeah, I think that'd be useful, because while the text you have here is good, whoops, fudge, while the text you have is all good, it's a lot to read, and I think narrowing it down to a bullet list that people can digest very, very quickly would be really helpful for understanding purposes. Yeah, the TL;DR. Yes, exactly. Yes, for those of us who can't read. And I think it's important that there's actually a there there. We had a discussion on the PRs and in the issues just this week, on the particular PR, the one about messaging, where there's actually no protocol and no encoding to map to. And if that is not given, then you can't make a spec, because that spec is not gonna help anybody. A protocol architecture has emerged here, where we have an abstract data model, and that maps into formats, and it maps into protocols. And if there's no protocol to map to, and if there's no event format to map to, well, then you can't write a mapping. And that's not me being evil by giving that feedback; it's just the fact that it doesn't fit into the architecture. So I'll summarize that into a few bullets as an extension to this one. That'd be helpful. Thank you. All right. Yeah, I think in the next meeting we can go through those bullets and see if everyone agrees with them. We'll be good. This is like we're defining the criteria, right? We're not just defining a specific protocol. I think the criteria are very important. Yeah. So before the next meeting, even though I will not be participating, you will have that list. I will go and make that my homework for tomorrow. Excellent. Thank you. Sure. All right. Any last comments or questions?
I think we've completed our deep dive on this one. All right, cool. Thank you very much. So please, everybody, when you get a chance, please review the document. We'll try to see what we can do in terms of voting on or approving it next week, or discussing it next week. If anything big comes up, we may have to defer it for a couple of weeks, as Clemens will be on vacation, but if it's not controversial, then maybe we can get it in. We'll see how it goes. All right, cool. Thank you, Clemens. Now, next on the agenda was this transport binding. I put this here only because it's kind of related to the previous one Clemens has talked about, but I'm wondering whether it makes more sense to actually defer this until after we resolve Clemens' PR, before we start reviewing another transport binding to let in or out. Does that sound fair, to defer it? I think that's the right way to do it, because the prior one exists because of this one. Right, exactly. Well, and the next one, yes, these two. These two. Yeah. So is there any objection, then, to deferring these two until we have the well-defined bar in place? Okay, we'll do that then. All right, now next on the agenda, as I mentioned, it's a little bit scatterbrained because we don't have anything really earth-shattering that we need to discuss. Kathy, for your correlation label PR, I decided to try to defer that because I want to have some more offline discussions with you. I know we didn't get a chance to sync up this week, but I'd like to try to see if we can sync up before next week's call. That's why I put that a little bit further down the agenda, if that's okay with you. What I'd like to do instead is to talk about extensions in general, because I think there may be some disagreement in terms of where we want to allow extensions in our serializations, in particular things like the JSON format, stuff like that.
And what I wanted to do, if it's okay with people, is to quickly walk through some scenarios or use cases to see if we can get agreement on whether we as a working group want to support those types of use cases or not, because I think that will help solidify the extensibility mechanisms that we want to support in our specification. Is that okay with people? Okay, we actually only have about seven minutes. Go ahead, Kathy, sorry. Sorry, I think we already have some use cases and usage scenarios in the repository, right? Are you copying those over, or is this something new? This is different. This isn't use cases for using CloudEvents; this is use cases around extensibility points, right? So for example, it's a question of: do we want to allow extensions at all, or do all CloudEvents have only the properties that we define in our spec? That's one use case. If the answer is yes, we want to allow extensions, then okay, do we want to allow extensions only in an extensions bag, or do we want to allow them in this other spot over here? Those are the kinds of use cases I wanted to see if we can get agreement on, because I don't think everybody in the working group is on the same page. And I think we have to answer those questions before we can go too much further on some of the other topics, like your correlation label bag. So first, probably, you know, I think we need to think about what the criteria are for the group to make a decision on what goes into the official spec and what goes into the non-official extensions. I'm not sure everyone is clear, is on the same page, as to that criteria. Go ahead, Tyler. And then, yeah, go ahead. Yeah, so the question here really is: what are the mechanisms that we have in place today? I think ultimately the question is about the one that you're using in your example, the room and the floor and the building example, right?
Whether the room and the floor are extensions or whether they're not. And I think, so I've been thinking about this a bit, and I actually commented on your PR. And I'm coming to a similar conclusion to the one Doug comes to, I think. And that is that your correlation bag more or less already exists, if you allow yourself to say the room and the floor and the building are really extensions to the event format, and if you allow them to be at the outer level of the event. Then you don't need to have that extra bag, which means your correlation properties bag, per se, already exists, because we allow arbitrary extensibility on the event per se. And what we need is a set of rules to effectively deal with potential collisions. But we can go and use the event in a way where we say you can put whatever you like onto that event, and the only thing we're concerned about is future collisions. And ultimately, what's important to note here, and this is something to internalize, is that the only party that can ever create collisions is the publisher. The publisher creates an event, and the publisher basically sets all the properties on the event, and then the event flows. And if the publisher doesn't see any collision risk, which means it takes the standard CloudEvents properties and then adds room and floor and building, that's fine. And then it's just up to the receiver at the other end to go and take a look at the event type and then basically expect those three fields to exist at the outer level of the event. Now, there's a difference here, and that's also something I talked about with Doug separately, for cases where you have an external standard that you need to adhere to, like you have a set of properties that are defined from somewhere on the outside. I'll pick OPC UA as a standard; it could also be AutomationML, or any sort of existing standard.
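The publisher-side collision argument above can be made concrete with a short sketch. Everything here is hypothetical: the `publish` helper, the event type, and the room/floor/building values are illustrations, and only the reserved attribute names come from the CloudEvents spec.

```python
# Attribute names reserved by the spec; extensions must not reuse them.
SPEC_ATTRIBUTES = {"specversion", "type", "source", "id", "time",
                   "datacontenttype", "dataschema", "subject", "data"}

def publish(event_type, source, event_id, data, **extensions):
    """Build an event with top-level extension attributes.

    The publisher is the only party that can introduce a collision,
    so the collision check lives here, at publish time.
    """
    clashes = SPEC_ATTRIBUTES & extensions.keys()
    if clashes:
        raise ValueError(f"extension names collide with spec attributes: {clashes}")
    return {"specversion": "1.0", "type": event_type, "source": source,
            "id": event_id, "data": data, **extensions}

evt = publish("com.example.sensor.reading", "/building-7", "42",
              {"temp": 21.5}, room="101", floor="1", building="7")

# The receiver keys off the event type and then simply expects the
# three correlation fields at the outer level of the event.
if evt["type"] == "com.example.sensor.reading":
    location = (evt["building"], evt["floor"], evt["room"])
```

This is the "no extra bag needed" position: room, floor, and building ride as ordinary top-level attributes, and the only rule required is the name-collision check at the publisher.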
And let's say they are defining a source, or they're defining a property that's colliding. Then with the mapping rules we already have in place, the one that was accepted into the HTTP spec just yesterday, we actually have a mechanism where you can go and add a property, let's say we call it opcua, where you can put all the OPC UA properties into a bag, and they get serialized appropriately into transports, and they get included as a bag in the serialization. But it's a top-level extension, and it's a top-level extension where you can avoid any clashes with the existing CloudEvents schema by sticking them into the bag. But you can do this today with the mechanisms that we have, without having that extra property bag, which means the requirement that you have, of adding extensibility items that you need for correlation, is already satisfied by the spec per se, as we have it, if we say we are allowing arbitrary top-level extensibility. So I think, first, I'm not sure whether we should allow arbitrary top-level anything, where they're competing at the top level. I don't know whether that's the right way to go. Let's see. There could be thousands of them. The building number is just an example, so I'm not sure whether that's a little misleading or not. It's just an example; it could be a travel request ID. Yeah. It could be anything. But for a specific event source, it will not be, you know, a lot; it will be just one or maybe two or three. But, you know, for a different event the correlation label, or we call it the identification label, could be different. So that's why I think, you know, it's good to put it into a bag, because I don't see why this bag becomes such an issue, because this bag is clearly defined. It's not just a very generic bag. So, Kathy, let me jump in here, and I apologize; I should have actually jumped in a lot sooner.
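The external-standard "bag" idea can be sketched like this. The attribute name `opcua` and the nested property values are hypothetical examples, not anything defined by the CloudEvents or OPC UA specifications; the point is only the namespacing structure.

```python
# Sketch: properties defined by an external standard are nested under a
# single top-level extension attribute (the "bag"), so names the external
# standard defines (e.g. its own "source") cannot clash with the
# CloudEvents attributes at the outer level.
event = {
    "specversion": "1.0",
    "type": "com.example.machine.telemetry",
    "source": "/plant/line-1",          # CloudEvents source
    "id": "7f0a",
    "opcua": {                          # the bag: one extension attribute
        "source": "ns=2;s=Machine1",    # would clash if promoted to top level
        "statuscode": 0,
    },
    "data": {"rpm": 1200},
}

# Only the bag's single name can ever collide with the CloudEvents schema;
# everything inside it is namespaced away.
assert event["source"] != event["opcua"]["source"]
```

The trade-off being debated in the call follows directly: the bag avoids clashes by construction, while top-level extensions avoid the extra nesting but rely on publisher-side collision rules.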
I didn't want to discuss your particular PR yet; I was hoping to discuss the broader issue of extensions, and I apologize, we got a little bit off track. But then I realized we're running out of time anyway, so I decided to let it go, since we're probably not going to have the time to deep dive into extensions in general. So, because I don't think we have time right now to talk about either issue, I'd almost rather not start and run the risk of running over time and people, you know, being late for their next meetings. Yeah, I know I'm going to be late for my next meeting, so it would be great if we could. Yeah, right. So let me do this. I put this document out here for people to put comments on and give their opinion, like Rachel just did here. So please go through this particular Google Doc and add your opinion about whether those particular use cases relative to extensions in general are ones that you'd like to see us support or not. Kathy, I'd like to get together with you probably next week, because I'll be on the West Coast, so time-wise we should be able to sync up much better than we were able to this week, to have a more in-depth discussion offline about your PR if possible, and I'd like to at least have you understand better why there are concerns. And then we'll try to revisit this again on next week's call. Is that okay with people? Because I don't think we have time right now to deep dive on any of this stuff. Okay, so I'm not hearing any objections. Let me just quickly do the final roll call. Eric asked me to paste the Google Docs link into the chat. Okay, hold on a sec. It is in the agenda, but... I got it, got it. Oh, you got it, cool. Okay, thank you. All right, quickly then. Obviously I heard Kathy. So Rachel, I heard. Clinton Reeves, are you there? Clinton? What about Stanley? Hello? Fraud? Hi, I'm here. Hello, Louie, are you there? Yes. And, oops, there you go, Louie, I got you. Eric, are you there? Yes, I'm here.
Okay, Brian, I think I heard you already. Dan Barker, are you there? Yep, I'm here. Okay, and Chris, Christoph? Yeah, I'm here. Excellent. And back to Clinton Reeves, are you there? All right, is there anybody I missed on roll call? Okay, so as homework assignments, please make sure you review Clemens' PR for the bar that we're going to be setting for allowing new specifications into our group, and the extensions use case doc; I'd like to get some feedback on that. And I believe those are the two homework assignments. Oh, and review the logo and comment on that one too; I think it's an issue, not a PR. Please comment on that if you have any feedback on it. We're going to have three weeks to break stuff. There you go. With you gone, Clemens, we're going to be able to get so much in. All right, any last-minute questions or comments? We've got about a minute. All right, cool. Thank you guys very much, and we'll talk to you again next week. Bye, everybody. Thank you. Bye.