Anyway, it's three past the hour. Why don't we go ahead and get started? People can join as they join. Let's just think here — anything on the agenda worth mentioning? I think Austin's not gonna make the call today, and I haven't heard back from him on the logo yet, so I'm assuming he's still working on it. He mentioned they're going through some deadlines internally, which is why he's been so busy, but I think we'll get to it eventually. Okay, community time. Before we get to Thomas's Knative Eventing demo, are there any other community-related topics people would like to bring up? All right, not hearing any. In that case, I'll stop sharing, and Thomas, you can go ahead and steal the share. Sure. Forgive me if I do something stupid — this is the first time I've done screen share on Zoom.us. Am I presenting? Yes, you are. Cool. Okay, so for those of you who are not aware, Google just revealed that we've been working with a handful of industry partners on Knative, to look at this whole serverless thing. It has three pillars that are usable independently: build, serving, and eventing. Today I'm gonna be talking about the eventing repo. One of the things I'll do that you did not see on stage is that I will not be using the build repo. One of the design principles is that all three of these components can come together for more value, but they are independent. So instead, today I'll use a tool I like called ko. For those of you who are not familiar with Kubernetes: you apply YAML files, and normally you have to already have a Docker image available somewhere. ko understands Go plus Kubernetes. If you check out function.yaml, you'll see that where it's supposed to have an image, it actually just lists a Go import path. And if I run ko apply instead of kubectl apply, it will actually build and create the Docker image, then deploy. For me, even when I'm just doing a tech demo, that's an even easier mode.
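The trick ko plays can be sketched with a hypothetical manifest — the resource name and Go import path below are made up for illustration. Where a normal Deployment carries a registry image reference, a ko manifest carries a Go import path, and ko builds the binary, pushes the image, and substitutes the resulting reference before applying:

```yaml
# Hypothetical function.yaml for use with ko. Note the image field holds a
# Go import path, not a registry reference (newer ko versions prefix it
# with ko://).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-function
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-function}
  template:
    metadata:
      labels: {app: demo-function}
    spec:
      containers:
      - name: function
        image: github.com/example/demo/cmd/function  # ko builds and substitutes this
```

Applied with `ko apply -f function.yaml` in place of `kubectl apply -f function.yaml`.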
So I don't have to use the build product in order to use the eventing product. Let me quickly figure out why I haven't cleaned up my demo yet — kubectl get revisions. Okay, it looks like this is just old. Yep. Okay, so first I'm going to show that the system is fully extensible. If I do kubectl get eventtypes, it doesn't know any event types — it doesn't actually ship with any event types or event sources. But we have some sample ones already available. So kubectl apply — actually ko apply, since it has some build steps — on the k8s events package under sources. Now this has created these pluggable event sources and event types. The whole system now knows what a dev.knative — since we're following the CloudEvents standard, we namespace the things we made up ourselves — a dev.knative k8s events type is. So if I do kubectl get eventtypes again, it now knows there is one available. What this does is teach future commands: if I have a flow, which is our binding between an event source and an action, I can now reference those same event types, and I can have a declarative system where I declare my interest and Knative Eventing knows the Docker images for the Kubernetes jobs that have to run inside this infrastructure. So let's actually create an action. I have function.yaml, and this deploys an actual autoscaling function. For this we created a Go SDK. It's an evolution of the SDK we open-sourced for the KubeCon Copenhagen demo, which, to the best of my knowledge, was secretly the first live demo of Knative — at Copenhagen we were actually running Knative for the Google sample. But we rewrote it to be a bit more Go-idiomatic. So right here we have something that takes a Go context — for anyone who's familiar with Go, this will be a very familiar shape.
Pretty much every handler takes a context as its first parameter, and the second parameter is whatever the heck you want. Your handler can have zero, one, or two parameters; if it has two, they're in this order, and we do the automatic unmarshaling. You can also return nothing, an error, or anything you want — which would be used for continuation — plus an error. So you don't actually have to worry about the nuances of CloudEvents handling at all: you say event.Handler, and all of a sudden you have an HTTP handler that follows the CloudEvents HTTP transport bindings. Now that we have that created, let's go ahead and deploy our flow to bind the two together. I have flow.yaml. Once we do that, we should suddenly get some new subscriptions and feeds as well, and that will be what actually makes the cloud events flow from the Kubernetes-native event stream to your cloud function. Thomas, quick question for you. Earlier you said you created an event source. Can you elaborate a little on what that event source actually is inside the Kubernetes environment? What is the resource, and what does it represent? So, there is a new Kubernetes type. We have this new thing called k8s events, and an event type can say that it is served by it — that its delegate for handling operations is k8s events. That's basically our strategy pattern for how to actually handle that a flow has been created or deleted. This k8s events configuration is what fires off the Kubernetes job that runs, having created this new binding between my event stream — my local Kubernetes instance — and this cloud function. So the event source is not the flow that binds the sources — it's the thing that says this thing is gonna have events, right? Yeah, it's the strategy for how to enact your declared interest. Does that make sense at all? Yeah, sorry, I was on mute. Yes. Okay.
Okay, events should start. Although it looks like I have — let me figure out what went wrong here real quick. The image can't be pulled. I might have to back up and figure out what happened with this demo — somehow it's not able to download my Docker image for the feed. I should have had a pre-baked one. Oh, kubectl describe. This is the danger with ko: you might accidentally run kubectl when you meant ko. Okay, so those are — so while you're doing that: Sarah, that was you in the background, right? Yeah. That's what I thought. Okay, gotcha. Okay, we'll see if this can — k8s deployments, just a moment. Okay, let's try re-running that kubectl apply. Oh, not kubectl — ko apply -f. So we should see a subscription getting created. We can also see what's — okay, I might follow up on Slack if I figure out why the image isn't being pulled. If it's failing to download the image, I'm not 100% sure what's going on right now. I apologize about that — this is a real live demo. Yeah, I will come with a pre-baked one with the conclusion next week. Yeah, maybe we can record a little video if the camera team isn't already. Sure. Did you want to do that other demo, or do you think that covers it? I can try, as long as it doesn't take too long, yeah. And for this one, you can see that I have a couple of Pub/Sub topics in my project. I'm going to use the event source one as my subscription. I should see that the subscription is created. If I were — I have an idea why it might be failing. Oh yeah, I filed the bug, and there's a single line of code you have to change. If you have this locally, you can install again, but first change this line: the feed controller is using the Always pull policy. It defines the feedlet pod container with Always; it should be IfNotPresent. So, yeah. Is that something — I shared the link, I don't know if you see it now. Where? Here in Zoom. Oh, I don't actually get to see that, I think, when I'm presenting. Then: in the eventing repo, issue 284.
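In pod-spec terms, the one-line fix described here amounts to switching the feedlet container's pull policy so locally built images get used — a sketch, with the image reference elided:

```yaml
containers:
- name: feedlet
  image: ...                      # unchanged; elided here
  imagePullPolicy: IfNotPresent   # was: Always
```

With Always, the kubelet tries to pull from a registry even when the image already exists on the node, which is exactly what breaks a locally built demo image.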
If this is going to have us rebuilding a couple of files, I don't want to hold the whole call hostage. I will either do a recording or show off a pre-baked demo at the end. Okay, that sounds great. Thank you very much. Are there any questions for Thomas before we move on? I have a question. So Thomas, is the goal of this Knative Eventing to produce an event, to bind the event to a function, or to create an event? I would assume it's to bind the event to the function, right? You create the event just for simulation — you have a canned source here, but in a real situation the event source could be anything, right? Could be an IoT device, or middleware, or any storage or streaming source. Yeah. So there's multiple personas involved in this. If someone wanted to make their event source available, they could create the YAML files that describe their event source and the Docker images that actually execute that intent — that can create a new registration or clean it up. Then any cluster operator could add that capability to their entire cluster. And the — oh, I think I just also remembered one of these things: I think I forgot to install the bus. That's actually one of the things the cluster operator is supposed to do; I should follow the demo scripts. Anyway, then any developer inside that Kubernetes cluster would be able to receive that event feed. So it's in some sense a matchmaking service. We are not trying to create all the event sources; we're trying to make sure there's a platform where someone can create an event source easily and developers on those clusters can handle those event sources. And we've also been working with a number of companies that may base their cloud products on this, which means, hypothetically, if you target the Knative framework, your event source might be accessible in, say, riff and OpenWhisk and Google's offerings.
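To make the matchmaking concrete: an event-source author ships the YAML and images that enact registrations, and a developer then declares interest with a Flow. A hypothetical sketch — the resource names, type name, and fields are illustrative of the 2018-era API being demoed, not authoritative:

```yaml
# Illustrative only: a Flow binding a registered event type to a target.
apiVersion: flows.knative.dev/v1alpha1
kind: Flow
metadata:
  name: k8s-events-flow
spec:
  trigger:
    eventType: dev.knative.k8sevents.receive   # made-up type name
    service: k8sevents                          # the event source registered earlier
  action:
    target:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Route
      name: demo-function                       # the autoscaling function
```

Applying a Flow like this is what triggers the event source's registration job behind the scenes.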
Okay, so you're trying to bridge the event source to the event consumer — like a serverless platform, something like that, right? Yep. Yeah. So you are going to — go ahead. No, I'm trying to say, I assume you're going to provide functionality for transport — for transporting the event from the event source to the event consumer, is that right? This is actually the thing I think I forgot to set up correctly, which is why I think the demos are failing: just like event sources are pluggable, transports are pluggable as well. Flow is the top-level, friendliest abstraction. Underneath it there are different buses you can install, and buses can have instances of channels that deliver the actual events. We're trying to stay completely agnostic about which transport is actually used. We have plugins right now for the stub bus, which is just an in-memory transport, plus Cloud Pub/Sub and Kafka. Okay, I see. Yeah. That Flow YAML file is the key functionality in the Knative product, right? Okay. So the event source itself is outside of this Knative project, right? Correct. You could interface with any event source, okay. Yes, it's an explicit goal that Knative will support event sources without recompiling Knative. Similarly, it will support different transports without recompiling Knative. And the cluster operator can make decisions about how that cluster operates by default — you could even theoretically, using the lower-level components, set up exactly one event stream to use Cloud Pub/Sub, for example, while the rest of them use Kafka. And there can be in-cluster event sources as well. Mm-hmm. Okay. Yeah. Okay. So I have another question. It looks like there are quite a lot of CLI steps the user needs to do to enable this.
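The layering just described — Flow on top, a pluggable Bus underneath, Channels carrying the events — might look roughly like this for the in-memory stub bus. These are illustrative names in the style of the era's API, not an authoritative manifest:

```yaml
# A Bus selects the transport; Channels are instances on that bus.
apiVersion: channels.knative.dev/v1alpha1
kind: Bus
metadata:
  name: stub     # in-memory; Kafka or Cloud Pub/Sub buses plug in the same way
spec:
  dispatcher:
    name: dispatcher
    image: github.com/knative/eventing/pkg/buses/stub   # hypothetical ko-style reference
---
apiVersion: channels.knative.dev/v1alpha1
kind: Channel
metadata:
  name: demo-events
spec:
  bus: stub
```

Swapping transports then means installing a different Bus, without touching the Flows above it.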
Are you thinking about automating all this, or making the whole process simpler, easier? I would say easier. Well, we have an alpha you can sign up for right now for the serverless add-on for GKE, so it's a one-click install on the hosted solution to get all of this. Otherwise, Helm is typically the installer tool for Kubernetes. Sorry, but I think part of the question, something you're wondering, is: is Knative meant to be used by end users, or is it meant to be infrastructure on which function platforms are built? Honestly, I think both. I personally like those abstractions; I could see myself sticking at that level, especially because Knative has, even on its own, an above-and-below-the-fold set of abstractions. On serving, for example, there's the Service type, which is this nice friendly wrapper — you only have to deploy one thing. But underneath it there are things like the Route, which is the load-balancing URI; the Configuration, which is your current state of what's been deployed; and the Revisions. So if, for example, your shop became more advanced and you decided that devs can push any revision out but only the operators control which one serves traffic live, you wouldn't have to add or replace infrastructure to do that. So just to rehash one last thing before we end on this: I guess one of the main goals you guys are looking for is to make sure that cloud events are compatible across multiple clouds, those that do adopt Knative. Is that correct? Yeah, I've been pushing very hard to make sure that CloudEvents is the lingua franca of this system. That's why — I apologize, I've been out of all of these meetings — I've been rewriting the demos to use CloudEvents end to end. Got it, thanks. All right, I think we need to wrap it up now, but very short questions for Thomas only, otherwise we'll move on. All right, cool. Go ahead, Kathy.
Sorry, I have a short question. So basically, if the event source format is not the CloudEvents format, you are going to translate it, right? Is that part of your functionality? The event source in the YAML sense, yes, is responsible for making sure that the thing that comes out is a cloud event — though conceptually, something like a database, if that was an event source, could very easily just support CloudEvents natively. Oh yeah, if it's native, yeah, right. But if not, you are going to do the mapping, right? Correct, that's generally the expectation for someone who wants to create an event source: they might, for example, subscribe to some legacy webhook, but then they add the eventing semantics and wrap it in the standard envelope. Okay, thanks, thank you. All right, cool. So thank you very much, Thomas. Okay, moving forward. I know Austin isn't on the call and there was no SDK call this week at all, so I don't think there's anything to update there. But Kathy, is there anything you'd like to update the group on relative to your workflow work group? Okay, so in this week's meeting we went through all the comments and resolved almost all of them, and then we specifically discussed a parallel state, which was proposed based on a comment. We discussed that parallel state, and a use case was presented that shows the need for it. We also discussed the filter mechanism at different points of the workflow — for example, a filter on the event information passed to the function. And we can also have a filter on the information passed between functions and between states. So we discussed that, and we have several action items, which are listed in the workflow meeting minutes.
So if people are interested, you can go take a look. Yeah. All right, thank you. Any questions for Kathy? All right, cool. Thank you, Kathy. Moving forward, I don't think there's anything on issue maintenance we need to deal with, so we can jump right into PRs. Last week we were talking about Clemens's PR about qualifying profiles — basically setting the bar for when we're gonna accept new specifications around serializations and transport bindings. And Ryan had a suggestion for some changes. Ryan, you wanna talk through your suggested changes here? Yeah, so I basically just put the first paragraph of Clemens's description into bullets and did away with the second paragraph. To me it was a lot of description, not very clean. So I just leave it with the first paragraph turned into bullets — in short, it's really straightforward: either it's already a formal standard, or it's a de facto standard. Pretty much like that. Just to be clear, let me hide the comments here for a sec so it's easier for you to see. I think what you're proposing is to replace both of these paragraphs, correct? Yes. So both of those go away in favor of just this paragraph and the bulleted list. Yeah. Okay. Yeah — it's either a formal standard or a de facto standard, to put it shortly. Right. All right, any questions for Ryan on that? I don't think it changes the meaning behind Clemens's PR, it just puts it into a different, more forceful syntax, right? Yeah. Kathy, were you gonna say something? I have a question here. The list of standardization bodies — is it exhaustive, are those the only standards, or are these just some examples? And another question is: how do we define a de facto standard?
Yeah, the de facto standard — to be honest, it's also not that clean-cut, but as we discussed, things like Kafka, or Kubernetes in terms of container orchestration, those are the type of things that become, in the long term, something any reasonable person would call a standard. It's not as clean-cut as I want, but it seems like we cannot get away from that. And for the first question: I'm no expert on all these standards bodies — I just copied whatever Clemens listed there. He seems to know all these standards bodies, and if he thinks that's the exhaustive list, then that's it. If it's not, I would like to see an exhaustive list, because that at least makes it clean. My purpose is really just to make it clean-cut so there's less argument down the road — so people can't say, oh, this is a standards body too. That's the type of conversation I'm trying to avoid. Yeah, I'm not sure how realistic it is to list all of them in there. That's why I was wondering whether you should just put an "e.g." in front of it. Oh, I see, yeah. An "e.g.", I see, yeah. Just like the Kafka one. That would be good. Yeah, because for example ISO isn't in the list, and that's a recognized one as well, right? So the list may change over time. Yeah, yeah. Yeah, that I think makes sense. Okay. What do other people think about this proposed change in text? Yeah, I'd agree with that — there's a lot of standards out there. Okay. But in general, aside from the missing "e.g.", what do people think? So I just added a comment to propose — I think somebody said that maybe we should attempt a little more of a definition of de facto. So I added a comment while we were talking: something that has an open-source implementation AND is in use by services or products from independent vendors. Meaning that it's out there and different people are using it.
It's just a suggestion — it might be a little vague to say "accepted in its ecosystem"; what does that mean? I hope that helps. Oh, I see. I agree with this as at least a minimum bar. For example, I could argue that Google Cloud Pub/Sub is a pretty de facto standard in the industry; however, I don't think it meets the bar we would imply with this wording. Yeah, now it gets to the — hello? Yep, go ahead. No, yeah — I was worried about — actually I spent some time looking at the third paragraph, and I tried to rephrase it to make it more inclusive but still clean-cut, but in the end I couldn't do that, so I did away with it. Because it seems like Thomas's point belongs to that third paragraph, and I'm not too sure — I also struggled with that. I thought about these things, but I cannot find a very clean cut. And I think the reason Clemens included that third paragraph is for those types of services. But there you go. When we say third paragraph, do you mean this one? I'm sorry? When you say third paragraph, do you mean this one? Oh, the second paragraph, sorry. Yeah, the second paragraph. Yeah, it kind of becomes a slippery slope. Initially I thought we could say any service that is used by at least three or more independent parties, something like that. But then it's still kind of hard to define "used by three" — what does "use" mean? Would you need to find three different projects? So I'm not 100% sure. Okay, before we go to the next person: folks, if you're typing, can you go on mute, please? I heard some typing in the background. And Dan, are you trying to say something? Yeah, I think that third paragraph is actually kind of important, because it's trying to make sure — I mean, I know we all love Kafka and stuff, but the thing to remember about Kafka is there's part of it that's an open protocol and part of it that's actually an implementation.
And that works because that protocol allows alternate implementations. So I think that's the gist of Clemens's third paragraph, and I do wanna make sure we don't lose that, because the idea is to make sure that we're only letting things into the set that aren't gonna lead to vendor lock-in at some point. Just to be clear, we're talking about this paragraph here. Was someone else gonna say something on that? This is Colin. I missed some of the early history of this — why are protocols even blessed or defined in the first place? Thinking in terms of separation of concerns, why not just a specification of what a binding protocol is required to do? And then, organically, just let people use whatever they wanna use, and the most popular will be the most popular, the favorite will be the favorite. So I think the big question is whether we give them space in our formal GitHub for advertising. To bring up Cloud Pub/Sub again: we are going to have a formal spec for how Google says cloud events should go on Cloud Pub/Sub, but Google's gonna have to use its own website to advertise that documentation. We can have a link from here, but not a full markdown page. But yeah — nobody owns HTTP, right? And I think there are other ones where, if it's a de facto standard, the community may want to create a protocol expression for cloud events without whatever entity originally made that thing being involved, right? And then there are other things where the company wants to promote their company, so we want to be like, okay, that's your business, not the community's business, right? So I'm trying to figure out the best way to make progress here. And I think there are a couple of different things here. One is to stick an "e.g." in front of the list here. Two is to see if we can firm up the definition of de facto standard.
And I think Ryan — or, I guess, everybody else on the call — needs to think about whether Sarah's proposed text here is sufficient to define that. But Dan, your comment about the Kafka stuff — it wasn't clear to me what you were looking for on that. Was there a particular sentence in this paragraph that you want to make sure we keep? Well, I mean, I think that last comment was spot on: we don't want this to be an advertising billboard for any one vendor, right? True. And that's why the alternate-implementations piece is kind of important. The Kafka thing was kind of a sidebar — it's a distraction. I'm not saying change that part of the text. Okay, so if we were to adopt language similar to what Sarah had proposed, would that address your concern? I think it would, yes. Okay. Yeah, I have a quick question on that. My understanding was that the bindings are to a transport, where something like Kafka, to me, is more of a product than a transport. You're not going to see somebody saying, I'm going to do pub/sub using the Kafka transport, without all the other Kafka components. So shouldn't this be about a de facto standard for a transport as opposed to a product? I think that's true — it's in the ecosystem category, because he didn't actually give it a formal title, right? Well, and I think you've actually hit my concern on the head: Kafka means a lot now. It doesn't just mean a transport anymore; it doesn't just mean topics. So I think it's important, when we carve out things like that, to be careful that we're talking about the part that's relevant for cloud events and not necessarily the rest of the whole platform. So is there some alternative text you guys would like to see? I think it's probably touching on the text that Sarah wrote. Yeah, I agree — Thomas and somebody else pointed out that this is a minimum bar; it's probably not sufficient. Yeah, yeah.
So I was thinking it should say "used by" — not just by other services, but put it quantitatively: say, more than five services. Kafka again — whether it's the original Kafka or the pub/sub product — is definitely used very widely; ten is a clear bar. We can say more than ten different companies or communities are using it. It seems to me we should put a quantitative number there, not just "used by others". Otherwise people can say, I have one other body using me, and then they can clear that bar. I agree we could have that bar, but my original thought was just to put something more quantitative than "someone is using it". Would people be okay with that — just putting an actual number in there? But again, is it talking about a transport which we're binding to, or a product? Because nobody's gonna implement, for example, a Kafka transport by itself, but there are transports emerging, for example in the IoT world, that are becoming big — like CoAP and all these others — which really are transports that are not necessarily standardized, but there are a lot of products out there using them as transports, as opposed to a full suite of products that have to work together. Because without, for example, a Kafka client — where everything is in the client — it's really not a transport. That seems to me a kind of different angle, but the examples you raised are good examples. Those, to me, are de facto standards. Yep, exactly — and transport standards as well, which is what I assume you wanna bind to. Transport — that's the part I'm not 100% sure about. I wanna hear others' thoughts. I don't know if this is transport-only, or if it actually applies to some other things. For the Kafka version, the reason I think Clemens mentioned it — I also thought — was because it's pub/sub, so when the events are coming in and out, that's the format we wanna use; we can work with Kafka's pub and sub events.
So I don't think this is the best use of our time to wordsmith this exactly here, but Ryan, I do get the sense that you have a general idea of where people would like to see this go. Yeah, I can give it another shot. Okay, and then people can come back. And people should keep in mind that the odds of us getting the exact right text — 100% accurate, perfect — are pretty slim. So we need to settle on what's good enough to get the point across, so that when we come back to say no to a proposed spec somebody wants to add, we can clearly say: look, you didn't meet the bar we defined here. Yeah, yeah. That's what we're trying to do. Yeah. All right, cool. Any last-minute questions or comments for Ryan? And I think Clemens might be back next week — I don't know for sure, but maybe he'll be back to comment on this as well. All right, cool. Thank you, Ryan. Moving forward. All right, so we can skip these two. Let's see if we can quickly look at David's schema validation one. Whoops, sorry — he opened up an alternative one. I don't think David is on the call. So what he was basically trying to do is add a JSON schema to our JSON spec. So we add this little bit there, and then he added the JSON schema itself for a cloud event as a separate document. I believe this change may have actually gone in, like, yesterday or something, so it may be too soon to formally approve, but I wanted to get people's general sense of whether they're okay with this direction or whether they don't like the idea at all. Any concerns with going this direction? Okay, let me ask the bold question: do people want more time to review this, or does it seem obvious enough that we can accept it? I'm assuming we need more time to review. Yeah, need more time to review it. Okay, just wanted to double-check. Do we expect to have people adding Avro schemas and the like also? Adding which kind of schemas? Avro.
That I don't know. What do people think? Sorry — the bigger question is: is there a set of arbitrary schema specification types that we could be opening a door for? Well, if we have a spec that defines how to serialize into a particular format, I think it makes sense to have a schema for that format, right? Sorry, I was muted. Yeah, you're right. All right, any other questions? Okay, so please review this when you get a chance; we'll see if we can vote on it next week, assuming there are no major comments. Okay, next. Hold on a second — sorry, I had to clear the thing I threw up on screen. All right, so let's talk about extensions. I'd opened up a PR earlier and then closed it in favor of this one; it's almost the exact same thing, I just thought it'd be cleaner to do a new PR. So this PR does a couple of things. First, it takes a step towards making our core spec more of an "infoset", as Clemens likes to call it, where we define in abstract form the properties we're going to define, and it leaves the entire notion of serialization concerns to the other specifications. So the JSON spec and the HTTP spec will deal with serialization of the infoset that's defined in spec.md. That's the first thing it does. It also adds some text to the primer to try to make it clear when people should consider adding things to the cloud event as an extension versus sticking them inside the data — meaning the real event itself — to give some guidance there. That way people don't think that everything must go into the cloud event extensions when really it's not used for the interoperability layer we're trying to solve — it's really part of the event data itself. It tries to put some clarity around that.
Next, I heard some confusion about "extensions" as a file name, because some people seem to think that when we talk about extensions for our specification, those extensions only come from the extensions.md file, as opposed to coming from pretty much anywhere. To try to address that, I thought maybe we rename it "experimental" or something like that. Thomas expressed some concerns about that. Personally, I don't care about the name, but if there is confusion around it, we may wanna consider an alternative name to "extensions". So I'd like to get some feedback about whether that's a concern for people or not. Thomas, let me just finish going through the list and then I'll go back to you in a second. As I said, I left the serialization to the transport and binding specs. And then, based on the discussion that was going on in that Google doc that was put forward — about which use cases around extensions we wanna support — while it wasn't unanimous, I did get the sense that a fair number of people were saying it's okay to put extensions at the top level for the JSON serialization. That may be a contentious decision, but I flipped the coin and that's where it landed, so we can discuss it. That's another part of the PR. So those are the main points of the PR itself. Thomas, I think you might have had a question. Oh, just a very quick suggestion: if it's the actual file name that's an issue, we could just call it documented-extensions.md or something like that, just to hint that, yes, there are undocumented ones too. Yeah, and we could add to the text on top of it. Yeah, I like that, other than it's really long. I like it. Yep. I hear bytes are cheap nowadays. Yeah. All right. Before I start going through a little more detail, are there any questions about the high-level goals of the PR? Okay. Now, I'm not gonna walk through it line by line, but I do wanna point out some of the key things.
So in the primer, again, I talk a little bit about when things might appear as extensions versus in the data: the data is meant to be used by the component processing the event, while extensions are mainly used for transport-type operations. That's what we're concerned about. And I go a little bit into extensions themselves in the extensions section, to make it clear that you can put pretty much anything you want into the extensions that we allow for. There's something else, but I can't remember what it was. Anyway, if it comes back to me, I'll mention it. In the spec itself, I deleted the extensions property and basically added text that says producers may add additional extensions, and that the other specs will define how they get serialized. That way we don't get into this whole question of whether our extensions go under a bag or not, because at the info-set level that bag isn't part of our discussion. Now, that's not to say that a serialization could not add a bag for them; that's completely up to the serialization. But at the info-set level, we don't talk about it. And I think that's the bulk of it. Are there any questions on that? Comments? Yeah, go ahead, Sarah. So this is saying that anybody can add an arbitrary extension at the top level? That's the gist of it. There are two different aspects to it. One: can anybody add an extension? Yes. Two: for the JSON serialization, yes, it will appear at the top level. I think I have an example. But then there's no way to differentiate what is the core spec and what is an extension, except through the documentation of the spec. True, and I actually put together a document explaining why that's all goodness.
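That last exchange can be made concrete with a small sketch. The attribute names and the extension name below are invented for illustration, not taken from the spec: with top-level extensions in the JSON serialization, an extension is structurally indistinguishable from a core attribute, and a consumer can only separate the two by consulting the spec's documented attribute list.

```python
import json

# Illustrative only: a stand-in for "whatever spec.md documents".
CORE_ATTRIBUTES = {"eventType", "source", "eventID", "data"}

serialized = json.dumps({
    "eventType": "com.example.created",
    "source": "/my/source",
    "eventID": "A001",
    "myvendorTraceID": "abc-123",  # extension, sitting at the top level
    "data": {"x": 1},
})

event = json.loads(serialized)
# Nothing in the JSON shape marks "myvendorTraceID" as an extension;
# the only way to tell is the spec's documented list of core attributes:
extensions = {k: v for k, v in event.items() if k not in CORE_ATTRIBUTES}
print(extensions)  # {'myvendorTraceID': 'abc-123'}
```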
I'm not sure we have time to walk through the whole thing, but in the agenda doc, under this extensions item, I put a link to a document; I put three little pages together to explain why that's actually a good thing. If you want, I can quickly walk through some of the highlights. We only have five minutes left, so I don't think we're gonna get through another deep topic anyway, so let me just quickly touch on it. First, I wanna make sure we level-set a little on what extensions are. I wanna make sure people understand, or agree, that for an extension to be used properly, the person who defines it typically has to do certain things. First of all, they have to define its name: how is it gonna appear? Is it foo versus foo-property? What is it called, and how does it appear when it's serialized? That stuff. They obviously need to define its semantics: what does this thing actually mean, and what does its value represent? If it's just random stuff, it's not gonna be very useful to people. And of course they have to define its type: is it an integer, a string, a complex structure? What are the valid values, if there's a fixed list? Those kinds of things. They basically need to fully define what this property is, the same way we've done for our core spec. Otherwise, there's a very good chance it won't be widely used at all, or won't be used properly, and people will misinterpret it. They really should be clear about this stuff; in my mind, they almost need to write a spec, okay? Now, extensions from the cloud-events perspective are basically anything that's not defined in spec.md, right? So that's our extensions.md file, third parties, vendors, middleware, whoever: basically other stuff that producers may decide to include inside our cloud events. Those are what our extensions are.
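That definition checklist (name, semantics, type, valid values) could be sketched in code like this. The extension name "sampledRate", its rules, and the helper function are all hypothetical, made up for illustration:

```python
# Hedged sketch of "fully defining" an extension: its serialized name,
# its type, and its valid values. Semantics live in documentation, since
# code can't enforce meaning.
EXTENSION_SPEC = {
    "name": "sampledRate",         # how it appears when serialized
    "type": int,                   # its declared type
    "validate": lambda v: v >= 1,  # its valid values
    # semantics (documented prose, e.g. "number of similar events
    # this single event represents")
}

def check_extension(spec, value):
    """Reject values that don't match the extension's declared spec."""
    if not isinstance(value, spec["type"]):
        raise TypeError(f"{spec['name']} must be {spec['type'].__name__}")
    if not spec["validate"](value):
        raise ValueError(f"invalid value for {spec['name']}: {value!r}")
    return value

print(check_extension(EXTENSION_SPEC, 10))  # 10
```

The point of the checklist is exactly this: without the name, type, and validity rules pinned down, two consumers of the same extension have no common ground to validate or interpret it against.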
So, anything outside of spec.md. The reason I'm mentioning this is that I've gotten the sense there was some confusion that only certain properties are, quote, extensions, and that there may be different classifications of extensions. I wanna make it clear that from our spec's perspective, extensions are extensions are extensions. There is no specialization of extensions. If it's not in spec.md, it's an extension. That's what this last paragraph down here is basically trying to say: from our point of view, as of right now, all we have are spec-defined properties and extensions. That's it, okay? Now. If I can ask a follow-up question: is there any such thing as invalid data, then? Invalid data. Aside from the case where you actually put bad values inside a spec-defined property or something like that, I'm kind of saying no. Okay, so I think that's one of the things where there are probably different camps. So can you elaborate a little on why you think there, well, can you give an example of what invalid data might be? There are obviously cases where you take an existing well-defined field and put in a value that doesn't comply with the spec. But there are also different theories on whether someone conforms to our spec if they include any superset of the data we describe, or, say, if our spec has a map property or a structured object property, whether they're allowed to add new fields that are part of that structured property. I would strongly, strongly, strongly say no, because I would like to say that when we define in our spec that an object has this format, we should be able to add new fields to that object later and not worry about conflicting with things elsewhere in the world. So, if I'm understanding you correctly: if we define a property in the spec that is a map, that means there's a natural extensibility point in there that people may think exists.
And you're saying we should not allow people to add additional properties to that map, because we own that map, if I understand you correctly. And I think, if so, that's up for us to decide: when we define that property, it's our job to be clear about whether people can add additional properties to that map or not. Does that answer your question? Sorry, I have a dog. Yeah. I have another question, okay, about all of this. So, I think your PR is trying to define the serialization, right? I think we need to separate these things. There are some attributes that belong to the main spec, which is spec.md, and then there are some attributes which are extensions, which means they do not have official standing; they do not belong to the main spec. That's one thing we need to discuss or decide, right, to be clear to everyone. And then another thing is how we serialize those attributes. And I think the serialization should be the same no matter whether the attribute is in the main spec or is an extension; it doesn't matter whether the attribute is in the main spec or in an extension spec, the serialization should be the same. So I would like the serialization PR to just address the serialization, and then maybe there's another extension PR to address the criteria for deciding what kinds of attributes should be in the main spec, just like I think we have a PR to define the criteria for which serialization protocols will be part of this working group's standard. I think we should have two PRs so they don't get confused. Otherwise, if we confuse the serialization with whether the information goes in the extensions or in the main spec, that's why we've discussed this so many times across multiple meetings and still have not sorted it out: people coming from different perspectives understand it differently. That's my one comment. Yeah, no, I understand.
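The collision risk behind that "we own the map" position can be sketched as follows. The attribute and key names are invented for illustration: if producers quietly add their own keys to a spec-owned map, a later spec revision that claims one of those names collides with events already in the wild.

```python
# A spec-defined, map-valued attribute as of a hypothetical spec "v1":
spec_owned_map_v1 = {"contentType": "application/json"}

# A producer treats the spec-owned map as an extensibility point and
# sneaks in an unofficial key:
event_attrs = dict(spec_owned_map_v1)
event_attrs["priority"] = "vendor-high"

# A hypothetical spec "v2" later defines "priority" with its own meaning
# and valid values; events that squatted on the name now conflict:
SPEC_V2_KEYS = {"contentType", "priority"}

new_spec_keys = SPEC_V2_KEYS - set(spec_owned_map_v1)
collisions = set(event_attrs) & new_spec_keys
print(collisions)  # {'priority'}
```

This is the argument for the spec stating explicitly, per map-valued attribute, whether third parties may add keys to it: if the answer is no, the spec keeps the freedom to grow the map later.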
Let me just address that first. I know I put the PR in there probably too late in the week for people to really review it, but I actually think that's pretty much what I did, in the sense that I left the serialization out of the main spec and pushed it off to the other specs, meaning the JSON spec and the HTTP spec, and I don't treat extensions any differently than spec-defined properties. So I think that actually is exactly what you were asking for. The one thing this PR did not do, and I agree with you that we should deal with it in a separate PR, is make the rules clear as to when we will include a property in the main spec versus when we're going to push it out and say, no, it's an extension. I think that should be a separate PR, if the text in the primer doesn't address it today. Okay, yeah, okay. So another comment I have is about the format of the attribute. I think we should not let the format of a metadata attribute decide whether it's in the main spec or out of it. That's my thought. For example, take an identification label for correlation. If it's needed by a lot of use cases, it's going to be commonly used, and it doesn't matter what format we decide on as a group; I think it should be in the main spec. But if it's not widely used, if it's only used by one very specific use case, or only used by one vendor, then it should be out of the main spec. I'm just taking that as an example; there could be other examples down the road. I'm not sure there's disagreement on that point, but take a look at my PR and see if you think I say anything different; I don't think that's inconsistent with what the PR says. I think that opens up another question, though. For some attributes, the format could be a map, but we cannot pre-define the content of the key-value pairs themselves.
It's just like this identification label, right? For different use cases, you have different identification labels associated with the event. Or I would say not just for different use cases but for different event sources, because those event sources are all different, right? The additional identification label associated with an event could be different: for one event, it could be a house address; for another, a travel request ID; for another, a stock trade ID; or it could be an employee ID or a department ID. There are so many, and they're all different. So Kathy, what you're saying is that all of these ultimately fall under identification labels: they're different attributes, but they still fall under identification. Is that right? Yeah, that's right. And I think as long as you define the identification-label attribute clearly, we're good, right? Because there are some attributes that, by their nature... Kathy, I'm gonna have to stop you just because we're out of time. Everybody, please take a look at the PR; I think I actually did try to address your concerns in there, Kathy. And in particular, also take a look at the short little PowerPoint thing I put together explaining why I think top-level extensions are the right way to go. But very quickly, let me just get the final roll call. I think, Louie, I heard you. Dan, I heard you. Chris Borchers, you still there? I'm here. All right, Stanley. I'm here. Colin? I think I heard Colin already, right? Yep, I'm here. Sivam, I'm sorry, my apologies. Okay, Simon, you there? Yes, sir. And Baram, you there? Baram, you still there? Yep. Yep, got it. And I know Dan was on; he pinged me offline, so I know he was there. All right, did I miss anybody on the roll call? All right, cool. Thank you guys very much.
And please do take a look at those documents, and please don't wait for next week's call to comment positively or negatively. Please comment on any PR that's open, but in particular the extensions one. Let's see if we can get this one behind us. And I apologize for running over. Okay, we should talk about the extensions one, because it's not moving forward. I know; I'm trying to figure out some way to move it forward, so if you have ideas, please suggest them. But the first step is probably to comment on the PRs, because silence doesn't help. All right, cool. Thank you guys. We'll talk next week. Thanks, Brad. Bye.