…usually administrative details. And let's jump right into the extension stuff. Now, on last week's call, I think we left it where there were a couple of people who wanted to speak to certain aspects of the issue, mainly from the perspective of why they believe a bag is needed. And I believe Rachel from Google was going to speak for the gRPC side, or Protobufs, I can't remember. Would you like to start now? Yeah, if you guys want to, yeah. Do you want to share the screen or just talk to it? I can. Or Kailash, if you want to share and you're able to, you're welcome to. Just let me know if you want me to stop sharing. It seems like I will share. OK. There you go. All right, let me see your screen. OK, I can see it. Cool. Kailash, do you want to present this? He's staying awfully silent. We were just chatting, so I know he's around. OK. Is he in the meeting? Maybe he's having trouble joining. What's his name again? I apologize. K-A-I-L-A-S-H. Hi. Oh, OK, cool. Sorry, yeah, I just came from another CNCF call, so, OK. Yeah, hi, everyone. So can I just jump right in? Yeah, totally. I have the slides up, or do you want to present? If you have the slides up, then we can walk through them. So basically, I wanted to follow up from the last meeting, where we took the two possible approaches for extensions that were discussed, ran a little exercise on how each would look in Protocol Buffers, and I wanted to share what we found. But before we get to that, for the benefit of everyone in the room, I just wanted to give a couple of slides of overview of what a protocol buffer is. So to recap, or to see it for the first time, Protocol Buffers is basically an interface definition language that we use to define strongly typed messages that can be serialized to a binary format across the wire. It can also be used to define service definitions. That's why RPC systems, not just at Google and not just gRPC but in many different places, use such a format to define both the messages and the RPC services. Thrift is another example that works very similarly to this. Yeah. So in the previous slide, we saw how to define messages. In this slide you can see how to define services, and how those services can accept a message as an argument and return a message as the return value. So this is the basic way in which we define services and messages across the board. And our intent, at least the gRPC and Protocol Buffers teams' intent here, is to define CloudEvents in a similar way. That's basically our goal in trying to work with this group and come up with a conforming format. So, since we have been on the topic of extensions, I wanted to dive right in. There are two language versions of Protocol Buffers: version two and version three. I am starting with Protocol Buffers version two. On the right side, you can see that the definition of a protocol buffer in version two is quite a bit more verbose than the messages you saw defined in the previous two slides, which used Protocol Buffers version three.
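For reference, a minimal sketch, with hypothetical names, of the kind of definitions described above: a strongly typed proto3 message and a service whose RPC accepts one message and returns another.

```proto
// Hypothetical proto3 definitions; not taken from the slides.
syntax = "proto3";

package example.events;

// Request message: field numbers identify fields in the binary wire format.
message PublishRequest {
  string id = 1;
  string source = 2;
  bytes data = 3;
}

// Response message.
message PublishResponse {
  bool accepted = 1;
}

// Service definition: the RPC accepts a message and returns a message.
service EventService {
  rpc Publish(PublishRequest) returns (PublishResponse);
}
```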
The reason I wanted to talk about Protocol Buffers version two is that it has a feature, also called extensions, that allows people to define extensions to a message. And I'm presenting this to this group because this is where there's a lot of internal experience in building these kinds of things. So in Protocol Buffers you have field numbers; on the right side, you can see the 0, 1, 2, and so on. These field numbers cannot be changed, because if you change the field numbers you change how the binary format looks on the wire and how it's going to be deserialized. What extensions in Protocol Buffers do is reserve a few field numbers for third-party extensions of your proto. So you can say that extensions to this particular proto are defined somewhere else. And basically what you would do is take the original proto, say you call it proto A, import it into proto B, and extend proto A there. When I say import, that's an actual import statement that you write. And then the wire format will look identical to what it would look like if proto A had this new field defined within it. So that's how extensions work. As you can see, this is a bit of a roundabout way of doing it. But once you get the hang of it and you're in this particular way of thinking, it's OK. So how does this work for older clients? Say a newer version of a proto comes in and you have an older client library or an older service library running. When that library unmarshals a proto, the unknown fields are not discarded, so if the same message is later re-serialized, those unknown fields are passed along. This allows services to upgrade at different paces through the proto definitions. And it's one of the big points of Protocol Buffers, in the sense that when you upgrade a proto, not everybody has to upgrade their implementations immediately; they can upgrade as and when they want to take advantage of the new feature. So that's how extensions work. Now, extensions have a lot of problems. Extensions are not future proof, because proto3 does not support extensions. So this is kind of a dead end if you want to pick up this particular way of defining things. The other thing is that Protocol Buffers have a canonical mapping to JSON, and extensions don't have that, so things break when you try to do that. The other thing is that, as you saw, there is a range of field numbers that you need to reserve for extensions. Because there are field numbers in Protocol Buffers, and also in things like Thrift and Cap'n Proto and so on, any approach like this would require global coordination. Yeah. If you could go to the next slide. So the other thing we took an action item on is how you actually promote extensions up, just to conclude the thought exercise on proto2. Now you're talking not about proto extensions, you're talking about CloudEvents extensions. Right. Yes. Sorry. Yeah. So this slide talks about CloudEvents extensions.
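To make the mechanism concrete, here is a sketch, with hypothetical names, of the proto2 extensions described above: the original proto reserves a range of field numbers, and an extender (typically in a separate file that imports the original) claims a number in that range; on the wire, the result looks as if the original message had declared the field itself.

```proto
// Hypothetical proto2 sketch; extensions are usually declared in a separate
// file that imports the original proto, shown inline here for brevity.
syntax = "proto2";

package example;

message EventA {
  required string id = 1;
  optional string source = 2;
  // Field numbers 100 to 199 are reserved for third-party extensions.
  extensions 100 to 199;
}

extend EventA {
  // The extender must pick a number in the reserved range and ensure no other
  // extender picks the same one -- the coordination problem mentioned above.
  optional string my_extension = 100;
}
```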
What happens when something that was implemented as a proto extension has to become an official part of the proto, because we've decided to make that CloudEvents extension part of the official spec? You have to redefine a known extension into the official protocol buffer. That's OK. But the other problem is that any difference between the initial and the final message becomes completely incompatible. So if you decide that, OK, this works reasonably well, but let's promote a slightly different variation of it, that becomes a big problem, because you are essentially defining a different message. And by difference, you mean a different type? Yeah. For all intents and purposes, you're defining a different type in that case. So in Protocol Buffers v3, we got rid of extensions and replaced them with the Any type. Protocol Buffers have a series of well-known types, types that implement a particular piece of functionality and are encapsulated in a proto definition themselves; they let you build on the basic types in Protocol Buffers. Any is one of these well-known types. It lets you use messages as embedded types, and it lets you specify a URL for how to decode the particular byte string that Any wraps. So this has a JSON mapping, but the JSON mapping persists the URL, which contains the definition of how to unwrap this, which makes sense if you're coming from a proto-first world into JSON. The thing is that, yes, this is not going to map into a canonical CloudEvents extension if you use Any in a proto, but at least it gives you a path forward to map into JSON in a consistent way. In terms of unknown fields, proto3 behaves the same way as proto2, as long as the library version is 3.5 or later. The other problem with Any is that if we are trying to use Any to model extensions at the top level, and when I say extensions here, I mean CloudEvents extensions, you still have the global coordination problem of field numbers that I mentioned earlier. I feel like this is a pretty important point, and it might be worth explaining in a little more detail. So if, say, I create an extension called Rachel's extension, and someone starts using that, and it gets assigned a field number like 100, and then say you create an extension and you also start with the tag number 100, we would collide, and my data would end up becoming the value for your extension. That's the problem. Yep. I'm not sure I followed that. Can you elaborate one more time? Because with the mapping to numbers, I'm trying to figure out if that's just an interesting implementation detail or something we actually need to think about. It sounds like you're saying if two different people define an extension, they're both going to get assigned a number, but then how do you make sure they're not both assigned the same number, because they're working independently? Yeah. That's right. That's exactly the problem. But then that's just a problem for the protocol; that has nothing to do with us. It's a problem if anyone's going to use a binary format. It depends on what we want to be compatible with. Sorry, I might be jumping ahead of your presentation. I think that's a good question, though. The thing is, yes, this is a problem because of an implementation detail in the protocol buffer format.
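Here is a sketch, with hypothetical names, of the proto3 Any approach just described: Any wraps a serialized message together with a type URL that says how to unpack it, and the JSON mapping preserves that URL as "@type".

```proto
// Hypothetical proto3 sketch using the google.protobuf.Any well-known type.
syntax = "proto3";

package example;

import "google/protobuf/any.proto";

message CloudEventSketch {
  string id = 1;
  string source = 2;
  // Any carries a type URL plus the packed bytes. Modeling each CloudEvents
  // extension as its own top-level Any field would still require a globally
  // coordinated field number per extension, which is the collision problem
  // discussed here.
  google.protobuf.Any extension_data = 3;
}
```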
But it is a problem that we would run into with a lot of binary formats, because binary formats use field numbers to tell people how to structure a particular message. So from a Protocol Buffers perspective, all we're saying is that this is a problem, and we see that it's going to be a problem for other binary formats as well. That's the point we're trying to argue to this group. Just to explain why we're bringing this up: if ours were the only binary format that was going to implement this, this would not be a problem. We could just do it on our own, and it would be fine. But we're trying to make something that's compatible with anyone else who wants to have a binary format as well, and that's why this is a problem. So what's interesting is that we actually don't have a specific... By the way, hello, Clemens. I was wondering if you'd be able to make it. Hey, we actually don't have a protocol buffer format specification to look at. So you making an argument about extensibility for protobuf is a little odd, because we don't have a format specification for protobuf, per se. Well, we have one written, and it's circulating internally before we submit it here. So it's a little odd that, before anybody has seen it and could give input on it, and also provide input on how to possibly do this, we're discussing a detailed aspect of it. So that's a fair point, and I think the last slide of this presentation directly addresses it. Basically what has happened is that when we started trying to design this particular format, we got caught in the middle of, OK, how do we handle extensions? Because that becomes an interesting point for how to define this particular spec for Protobuf. While doing this, we jumped into this discussion at the same time as that other discussion. So we're in the midst of making sure we can handle this right, and at the same time we want to make sure that extensions are designed in such a way that it helps do this correctly for binary formats. So you're absolutely right that in the ideal world... Let's talk about some prior art here. Well, hold on. Clemens, hold on just a second. Let me just finish this sentence. Go ahead. Yeah, so basically I think that is the state we're in. All we're trying to do here is to say, hey, we are trying to implement this spec, and we want to get this out as a pull request. But at the same time, there are some fundamental decisions that are also being voted on at this particular point that are going to drastically affect how that looks. And we think there are a few different issues here that pertain to a lot of binary formats, and we want to bring that information to the group. So if you're talking about a lot of binary formats, it'd be interesting to see a survey of that. Second, there is prior art for how to do extensions in a broader number of serializers. I pasted two links that are .NET; there are Java serializers which effectively all do the same strategy, and that is to stash everything that's not known by the schema into an extension dictionary. So the wire format, let me spell it out, the wire format is flat. And this is something that has been done before.
This comes out of very good practice in XML, where you have XML structures built with an open schema: you put an any element at the end, so you effectively define your version 1 with an any element appended at the bottom of your complex type. Then you define your version 1.1, and you effectively add elements in the place of the any and shift the any down. The nice thing about that is that the 1.1 serializer and the 1.0 serializer will both be able to round-trip that document, because the serializer supports these extension bags if you have an open schema. So I pasted two examples of this. One is for JSON.NET, which does that sort of round-tripping through a serializer that maps JSON to an object. The other one does the same thing with the XML serializer of the .NET framework, and that also round-trips the data; and there's another bag, since it's XML, to do that for attributes. So the strategy, hang on, the strategy you can use for your protobuf structures is that you define a protobuf structure for version 1.0 of CloudEvents that contains all the well-defined fields and then contains this extension bag. How JSON maps to protobuf is completely irrelevant, because what you want is a proper representation of the CloudEvent on the wire using protobuf, and the way you do this is you serialize it into a format where everything that you don't know about goes into a dictionary, and you serialize the dictionary out. Then, when you upgrade to 1.1 and you're adding well-known fields, you add them, and you still have your dictionary. Now, that dictionary bag doesn't need to appear in the base spec, just as it doesn't in all these XML and JSON objects that are on the wire as flat objects; the bag only appears in serialization. So while I appreciate all the complexity and features of the Google protobuf capabilities, I just don't think they're needed. I think you can literally do an explicit object or structure that has all the well-known properties plus a dictionary that then gets everything you don't know, and round-trip that. Yeah, so I just wanted to let you know that there's one more example of exactly what you described in the next slide. So, Rachel, you stopped sharing, so we can't see what the next slide looks like. Yeah, so basically what I'm trying to say is, Clemens, basically what happened in the last meeting was that there was a discussion about whether we should have these extensions directly at the top level or have them in a bag as you described. So I took an action item to explain, in the Protocol Buffers world, what putting extensions at the top level would look like and what putting extensions in a bag would look like, and the pros and cons of each. So I'm not... You may have misunderstood what I said. I'm not saying the extensions go into a bag in CloudEvents; they go into a bag in the specific format that you define for protobuf. So the point here is that the CloudEvents abstract data model is flat, at least we're advocating for it being flat, and I think Doug and I and a few other folks agree on that being flat, and that bag may appear in a particular format mapping because you need it. In JSON you don't, in XML you don't; in protobuf, because you are schema-bound, you probably do.
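A sketch, with hypothetical names, of the "bag only in the format mapping" strategy described above, expressed in proto3: the well-defined CloudEvents attributes are explicit fields, and anything the schema does not know about rides in a map so it can round-trip between older and newer serializers.

```proto
// Hypothetical proto3 sketch; the bag exists only because the protobuf
// schema is closed, while the JSON or XML mappings can stay flat on the wire.
syntax = "proto3";

package example;

import "google/protobuf/any.proto";

message CloudEvent {
  // Well-known attributes for this version of the spec.
  string id = 1;
  string source = 2;
  string type = 3;
  bytes data = 4;

  // Everything not recognized by this schema version goes here and is
  // written back out on re-serialization.
  map<string, google.protobuf.Any> extensions = 5;
}
```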
But that doesn't mean that the specification per se needs to even have that bag. Effectively, the bag constitutes itself out of everything that is not explicitly known, everything you can't explicitly put in the schema because you don't know it. So the any element effectively is the extensibility point, similar to the two examples that I pasted, and that's where you put that stuff. I mean, that's prior art to look at. Okay, Clemens, I totally take your point. But how does that take into account and cover the case of multiple binary formats needing to interoperate? Well, wait, binary formats don't interoperate. Like protobuf doesn't interop with Thrift, does it? Well, we want a format where, if I send something from protobuf, someone that's using Thrift can pick it up and use it as well. Yeah, you do this via some object. I mean, you have a memory representation, because you can't do a straight protocol mapping from one to the next. You always have some memory representation that you go to and that you can make lossless. Can I ask a question? Rachel, this is all fine. Can I just have a couple of minutes to run through these slides so everybody has all the information? Sure, go ahead. And then let me actually go to the last slide, because I feel that we're not really presenting a solution to this group, and I also want to show sort of where we are here. So the idea is that the current stage of the proposal is still a work in progress, and this answers the initial question of why there is not a pull request yet: because we want to solve these problems first. And these are the four concerns that we're trying to address here, right? One is that we want to follow messaging format best practices, and I deliberately put messaging format here, because I want to define a CloudEvents spec based on Protocol Buffers that follows Protocol Buffers best practices. And I think all of us want to do that: we want the JSON to look like idiomatic JSON and the next person who defines it for Thrift to make it look like idiomatic Thrift. The other thing is: is it important to maintain some sort of automatic mapping from one message format to the other? It sounds like your opinion at this particular point is not necessarily; we can leave that to whoever wants to do that mapping to take that information and write the custom logic to map back and forth. That's good for us to know, because that makes us a lot more free to define the implementation the way we want to. But that was a question I wanted to ask this group. The other thing is that as we define this spec, one of the things we're trying to do with Protocol Buffers is to make sure that what we do here works for other formats that are similar as well. Yeah. Cool. Thank you for all that. I think there are a couple of slides before this that actually propose that we put the Protocol Buffers attributes in a flat namespace but keep the extensions in a map, a key-value store. So that's something that's been missed in this discussion, but that's okay, because we can share the slides out and folks can look at the details if they need to. So Kailash. Can you show that last slide again?
So yes, I'm struggling here, because if I hadn't had the history of the previous conversations, I mean previous phone calls and such, and all I heard was your presentation here, I've got to be honest, it wouldn't be 100% clear to me what you're advocating. You made it clear at the end that you're still doing some work-in-progress stuff and you don't necessarily have a proposal or something like that to share with the group. And so I'm struggling with how to move forward with this information, because you presented a lot of really cool stuff, but I've got to be honest with you, and maybe I'm just getting old, it was very difficult for me to follow all of it, because there's a lot of information there and it wasn't clear to me which parts were background material versus directly related to bag or no bag. And so I'm trying to figure out how we can make forward progress here, because people are itching to do a vote, right? Yeah, and I think we agreed to do it. Let me finish. People are itching to do a vote, but at the same time you guys have this work in progress, and there are some things that are sort of private to you that we haven't been able to see and digest to help make informed decisions. And so my initial gut reaction is to have people vote based upon incomplete information, with the assumption that everything is changeable in all of our specs until we reach 1.0. So we can make a decision today, and then next week, based upon you guys sharing your work in progress with us, we can completely change our minds. That's fine, but people want to have something set to go with for right now. And I'm trying to figure out the best way forward here, and I'm not quite sure what to do with your information. Does that make any sense? Yeah, but I think, on the other hand, there was a lot of discussion in the last meeting, and the set of information presented here was the exact response to, hey, this is the set of information we're missing that augments all the discussion that happened in the last meeting. I'm not sure I can summarize the last meeting adequately for you at this point. I get what you're trying to say, though. But in summary, here's my question: what we want to do is have a reasonable mapping from the protocol buffer implementation to, say, the JSON implementation and so on. Do we need to maintain such a mapping? If we do need to maintain such a mapping, then I think we need to dig a little deeper and concern ourselves with how this information is structured in these different places. Actually, actually, Clemens, hold on. Just from a moderator point of view, we have lots of people in the chat saying they want to speak up. So I know we haven't done this in the past, but for this meeting, if you would like to speak up, put a plus hand into the chat and I'll do my best to keep track of who's in the speaker queue. We do want to make sure everybody gets a chance to talk. So with that in mind, I believe Derek, you implicitly put a plus hand in before. I think yours might be the first one. So, Derek. Sure, thanks Doug, I appreciate it. And I apologize to the group. I was on some of the earlier calls and I haven't been on some of the more recent ones. So Kailash and Sarah, I might be missing some things. And this is simply from an old guy doing this since the early 1990s.
Most of the time, what I've seen is that when a producer is producing a format that wants to be generally consumed, which I believe is kind of the purpose of CloudEvents, there needs to be a single definition of what that looks like on the wire. I don't believe it should be transport bound myself, but I know there are some conversations around that as well. However, there are consumers that might want to consume that data in a different format. And again, at least from stuff done on Wall Street and elsewhere, they always want to consume it the way they want to see it, for example, in this case, as a protobuf. And usually what happens is that someone then steps up and says, okay, I know the standard is this, but I'm going to define a mapping over to this protocol format on the wire. And then they actually own and define that mapping and keep it up to date. And of course, I know you guys were talking about how we do future revisions and stay compatible as the standard changes. But it feels like Google would step up and say, hey, we're part of the CloudEvents group, and here's the mapping defined by the standard, which I believe, if I'm not incorrect, is supposed to be JSON today, and here's Google's mapping into Protobufs for anyone who wants to consume these events as protocol buffers. Now, again, I think the producer should not have to worry about that. So there's going to be an intermediate layer that says, oh, I know that there are consumers interested in this type of cloud event but in a protobuf format, and I'll do the conversion for them. I'll stop talking there, but I just kind of wanted to throw that out there. And if I'm way, way off, I apologize to the group. Okay, thank you, Derek. Srikath, you're next. Yeah, thank you for that. Again, I've not participated earlier, but I've been tracking what's going on in this space. There are a couple of points which I want to mention, but I'll be very brief about it. One is that for the data being passed, it's been enforced that it has to be JSON; I think everyone seems to agree, at least at the protocol level, that there should be JSON support or an implementation available when you put it together. So that's one aspect. The second aspect is that the structure of the message, as I think Derek rightfully mentioned, should be understood by the producer and the consumer. So it makes more sense to have some format that producers and consumers can agree on, right? To put something in the bag, expecting that at some point a producer or a consumer would want to interpret it, seems like we're just putting in something hoping that eventually someone will be able to understand it. So maybe one way to look at it, and this is a bit of a left turn, or a little bit further away to think about, is having a mechanism by which you can define cloud events and a mechanism to extend them, right? There I can think of the work being done in linked data as one way to go about it. And there's already JSON-LD, which enables you to represent linked data in JSON format, so that's kind of taken care of. And linked data enables producers to describe what event they are creating, and they can also create new events saying, okay, this event is a subtype of some other event and these are the extra attributes I'm adding to it. That way it makes it easier for everyone to understand what is being passed through. Okay, thank you very much.
And I believe Clemens you're next, and I have you, Kathy, on the list. Yeah, so if we assume the CloudEvents base spec is an abstract data model, and I think that's what we've defined it as, you can go and make a data structure inside of your application that reflects exactly that, and then has, effectively, a way to store anything you read from whatever the serialization is into a dictionary. Let's assume that for a second. You can now take that data structure, which exists in your SDK, in your world, with your runtime, be that Scala or C# or Node, whatever it is; you're going to have some SDK, and there's going to be a CloudEvents 1.0 structure, and that will have some explicit properties, and then it's going to have, effectively, a container where all the extension stuff is, with the mappings that we have. So the standard is not JSON. JSON is just the one format that we have. You can take a serializer and serialize that out into JSON. You can serialize it out into XML. You can serialize it out to protobuf. You can serialize it out to Thrift, and it pops out on the other side, and you can now deserialize it in another runtime into a similar structure. The question that we're really discussing here is whether we want to have a closed schema, where the top level is all defined and extensions always go into a bag, even in the abstract data model, or whether we want to have an open schema, where we simply don't have that bag in the abstract data model, in the document, but rather allow for arbitrary extensibility at that level; on the wire, data flows flat, but then in serialized state the data basically goes into well-known properties versus a dictionary. If you go and deserialize the event in a scripting language, so basically you just take a JSON parser and take the event in, you're simply going to look at a dictionary, and everything is going to be flat, which is going to be much easier, for me at least, to handle than a more structured way where we have the extensions bag. Sometimes in version 1.0 you have field X in the extension bag, and then suddenly you're in version 1.1 and that thing now marches over to the top level, and for everybody who's writing these scripts and basically just letting them sit there while the events get upgraded, that makes the script stuff harder, while the serialized stuff is effectively unaffected, because you're going to bind against your 1.1 version and you have type checking and everything. So basically, for anything that happens in memory, the reference specification is the core spec, and for everything that happens on the wire, the reference specification is the formatting spec that depends on it. And I think the formatting specs can have bags, and they sometimes need bags, because it's not expressible in any other way, for instance in protobuf. But the JSON binding, for instance, has a very clear set of rules for how it can serialize all that stuff out without needing a bag, which keeps it easier. And JSON serializers which are mapping into objects, like the one example that I pasted, will still go and put the stuff into a bag, because that's the construct they have. So that's why I prefer to have it flat. All right, cool. Thank you, Clemens. Kathy, I believe you're next.
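A sketch, with hypothetical names, of the upgrade scenario described above: an attribute that lived in the extension bag of a 1.0 schema is promoted to an explicit field in 1.1. Schema-bound consumers rebind against the new version with type checking, while a consumer reading a flat representation never sees the attribute move.

```proto
// Hypothetical 1.1 schema; in the 1.0 schema the promoted attribute would
// have been carried only as extensions["samplerate"].
syntax = "proto3";

package example.v1_1;

message CloudEvent {
  string id = 1;
  string source = 2;

  // Promoted to a first-class field in 1.1.
  int32 samplerate = 3;

  // Remaining, still-unknown attributes.
  map<string, string> extensions = 4;
}
```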
Yes, so my understanding is that this discussion is about how to map the information in the CloudEvents onto a transport, right? How we serialize that. And we're concentrating on how to map the extension information to the transport. Shouldn't we treat them the same, whether that information is in the main spec or it is an extension, and use the same way to map them to a transport? That's my question, because if we use a different mechanism, and if later an information field is promoted from an extension to the main spec, then we have to change it, right? Is there someone that would like to answer Kathy's question? That's actually a point I just mentioned: if you have a field that you start using, because you want to promote it effectively as a standard element, then the scripting clients which are not relying on serialization will be able to just continue to use it without it marching from one place to the next place. Okay. Now I'm just thinking, you know, whether, no matter which transport or which serialization mechanism we're going to use for the transport, I think it's better to treat the information fields in the main spec and in the extension the same way in how we serialize them, instead of treating them differently. So I like the way that a couple of people have been attempting to frame this as, what are our first principles? I think Kailash did that in the final slide, which I actually transcribed into the notes so that we would have them here. And I think there's dissent there, at least; somebody said that we have collectively decided that it's not important to maintain a mapping from one message format to another, and I thought that we did want to do that. And then the other question is, are we comfortable with a situation where there would be two things flying around in the open, in the flat world, where there was a collision of naming and they mean different things, right? And you just know every implementer keeps a list of the core attributes, and it's like, well, I decide which ones I care about and I implement them differently because I'm paying attention to some segment of the community, not realizing that somebody else has invented a same-named extension. But Sarah, question: if you have a bag, how does that change the collision? It means that I know that the things in this bag might collide and the things at the top level won't. It's a way of basically sorting that list. So instead of every implementer keeping a list of the spec things, right, what's in the spec, and tracking that, you have this clear sorting mechanism, right? So we're just saying it's implicit in the written documentation of the spec which attributes are commonly accepted or not, versus having it actually be in the structure. But it's clear, because we have a CloudEvents version in there, and once you have the CloudEvents version you know exactly which fields are in spec, which means you interpret them by spec. The only collision that can happen in that scenario is extensions stepping on top of each other, and they would step on top of each other at the top level just as they would in the extension bag. Yes, but then if somebody implemented an extension, right? So you have two different implementers, right, who, unbeknownst to each other, both implement the same extension, right? Because you're sort of encouraging people to use general terms, right?
So somebody says, okay, I'm going to have a logging extension. I'm not proposing that we do this, right? But two different people say, okay, well, I have the great idea for generalized logging, and they name it the same thing, but the value is of a different type and the value means something different. And then you have a collision, right? And one of them gets picked, and then the other one, like, loses out. Yeah, but what's the difference between that collision happening at the top level or in a bag that's called extensions? You still have the collision. So if the collisions happen inside the extensions bag, we know that those are not official and we can choose to ignore them; as long as we honor everything that's defined in the top-level spec, we get the base level of functionality. But now we're saying people can put anything at the top level, and at any time we might decide to interpret it one way for everyone who's conformant to the spec. But there's a well-defined schema for version 1.0 and a well-defined schema for version 1.1, and you know exactly which fields are valid, so you know exactly what's different and what must be an extension. So it's fine. Sorry, this is Derek. I think I understand the problem, and I think it's been solved, or at least there's been an attempt to solve it, through what JWTs have done, where they say these are the required fields and this is the spec; if you want to add extensions, you have to make sure you don't collide. So for example, in this case, I would imagine that if Google wants to introduce an extension that they want to propose to be eventually uplifted into the spec, they would namespace-qualify it, so that we know it's a specialized extension from Google, meaning it won't collide with themselves, with Microsoft, or Amazon, or DigitalOcean. And then if there's a vote that says, hey, we're going to take logging and uplift it, because DigitalOcean, Google and Amazon all have it and we figured out how to make it standard, then that feels like the right path there. I might be missing something, but it feels like this should solve itself, and JWTs do kind of try to address it, maybe not perfectly. So I'm really speaking as somebody, not from Google's perspective, right? Because Google, Microsoft, Amazon, we've got some heavyweights in this space who could, without working with our working group, just decide, hey, we're going to define a top-level thing, and then it would be very hard for this group to say, actually, we're going to define it differently than what some cloud provider has in practice turned into the de facto standard, and then there's no process of promotion. It is de facto promoted, because nobody wants to be incompatible with the big heavyweight. But it's the same thing. So guys, I raised my hand, not as an IBMer but still as moderator. We have 15 minutes left here, and I'm not a hundred percent sure people are bringing up new information. Am I wrong on that? Does someone have something that they feel is brand new information? Because I'd rather not rehash things that have been said before; I'd like to see if we can come to some sort of resolution today, because I'm hearing antsiness from people. They want to put this behind us one way or another. So does someone have something new that they'd like to bring to the table? Well, I think that I had a question about what are our first, like, do we have first principles?
I think everybody's going to need to decide that for themselves, because I've yet to hear someone use that term until, you know, this call. I stuck them in the notes. There are a few things there as questions. If I could, and again, maybe I'm misspeaking, but first principles for me, for an eventing system where it's kind of a publish-subscribe type of paradigm, the first principle is you decide the schema, what's required, and the wire format for all producers. That doesn't mean that all consumers can't consume in a different format, but producers have to all be the same. So if it's an event coming out of Google, Amazon, IBM, whoever, it's by the spec. So it's: these fields are required, these are optional, and it's JSON on the wire or whatever, but they all have to be the same. I cannot see a world where I have an event producer in Google that now has to produce five different versions so that everyone can consume it. And again, when you produce something, you probably know the first consumer, but you have no clue who else is going to want to view that data later on. So it has to be consistent, in my opinion. Okay, so I'm inclined to say that we're probably at the point where people aren't really introducing new material, and in terms of making forward progress, I'm wondering what people think about just doing a vote. I have one more question, which is, I missed a few meetings in the early part of this extensions fray, and I'm a little confused about what this decision is blocking. Like, for the people who are proponents of the flat extensions, how is this prohibiting you from adopting the spec? So I believe the biggest roadblock is probably the SDK sub-work group that we have. They're, and correct me if I'm wrong here, Austin, but I believe they're basically on hold trying to figure out how they want to deal with extensions as part of their SDK work. Is that correct, Austin? Correct. The SDK working group, sorry, sub-group, we determined that extensions were something that we wanted to approach in the initial design, because we feel that they're very important. And until we know where those extensions are going to end up, it's hard to move forward on the SDKs in general. Additionally, as a stakeholder with a middleware product, having some clarity on extensions would also be very, very helpful to help us navigate that project and do it in accordance with CloudEvents. Yeah, I mean, I think certainly if this is going to change, then changing it will help everybody. If it's not going to, you know, like, this is the tension, right? That's why there's urgency for a sub-group to do it. But the SDK thing, I think that was something that I missed, that this is where this originated from. And so my question to the SDK group is: just like we've talked about how the people implementing wire protocols could track the list of known attributes, right, and create their own bag concept, likewise the SDK group could take the extensions bag and promote it to the top level. So I don't understand why that wouldn't be a great solution to this. Yeah, you know, for clarity, I don't have a super strong opinion here. I'm leaning towards having it at the top level, because it seems to actually graduate an extension into the specification. There's a good graduation story there, and I favor that. It's just some resolution. Some resolution is what we're looking for.
And just to be perfectly clear, as I said earlier, any decision we make on any day is not set in stone until we get to 1.0. So we can completely reverse or change our minds on next week's call if we want to. We have that prerogative. Right, which is why I was a little confused about this, because we have a spec that people feel like they could implement, and there are things that are open which will actually, like, make it so that this is not complete. And so, like, I just, you know... Yeah, but the problem here, Sarah, is that people do think that they'd like to see if we can come to some formal decision about extensions, where in the past it may have been more just sort of implicit, because that's just sort of the way it was, but we never had much of a formal discussion about it, and that's what we're trying to do here. Okay. Okay. Yeah. So. So Doug? Oh yeah, yeah. Kathy, try to make it quick. Yeah, so I'd like to try to get to a vote. Yeah, I know. I have a proposal; in the last meeting I mentioned it. So I can also present my proposal too. I think mine addresses some of the problems and issues people raised, like the issue Sarah raised, if I understand her problem, I mean her question. So how much time is this going to take, Kathy? Because we only have like seven minutes left. Okay. You can do your other proposal and I can present mine, the last one. That's the last one to vote. What? Well, I can present the first one. That's the first one. Is it the workflow proposal? No, no, no, it's not. It's, you know, it's how we do this, you know, these property bags or these extension things. So, oh, maybe I can just present. I feel, if you... Kathy, can you do it in like two minutes? Okay, sure. Okay, go ahead. You can share the screen now. But you gotta... I sent you this link, the Google Doc link. You can just... Okay, hold on. Two minutes, maybe three minutes. I know, because we're running out of time, Kathy. I know, because, you know... Okay. Okay, so here. So I think, you know, we should have no restriction to... We should not put a restriction in the spec saying, you know, there's only a single format of key-value pair. We should allow the metadata in the format of either a key-value pair or in the format of, you know, a classification bag or property bag. Because there are some situations, you know, where we have to have that bag at the root level, in both the spec and the extension spec. So no matter whether it's in the base spec, in the main spec, or in some extension spec, I think we should allow, you know, these two different formats. Because, you know, the reason I... Maybe I will not just explain. I think one thing... My reason is, if we give this flexibility, we'll have more people use this working group's spec. Because there are so many different event producer types out there, we cannot predict that, you know, every piece of metadata information can only be modeled as a key-value pair format. Another quick thing is, you know, the serialization, which I think I mentioned before. So no matter whether it's in the main spec or in the extension spec, I was thinking, you know, if we can define the same serialization method, that would be good, because that avoids the backward compatibility problem, you know, the problem when an attribute is promoted from an extension to the main spec. And the third, I think, might address, you know, Sarah's question, I'm not sure whether it will.
So if a property will be used by a large number of use cases, no matter whether it's a property, a classification-name bag, or a key-value pair, I think the work group should define a consistent name for that bag in the main spec. This name will prevent different event producers from giving different, you know, random names that result in many property names for what is actually the same information. Also, it will simplify the event consumer's implementation. So instead of having to parse all these different company names... All right, can you go on, go on? What's that? Go ahead. Someone is not talking to themselves. Instead of, okay, I'm gonna continue for three minutes. Instead of having to parse different property names from different event producers for the same type of information, the consumer can just parse that one name. And also, there are existing standard protocols that define the names for property bags. So I don't think there's anything wrong, if there's a need, with defining that. We shouldn't, you know, put a restriction saying, you know, no one can define any bags, you know, for a bucket of information. And so, what I'm proposing is, if we see there's a large use case for that bucket, we should define the name. But, you know, the information inside the bucket does not need to be defined, and the bucket itself can be optional. So we can define that in the main spec. So I'll give an example, maybe you scroll down, and, yeah, for example, say we need identity labels. We define this so that instead of, you know, some people calling it identification label, some people calling it identification property, or some people calling it identity bucket, or other names, we just say identity labels. Whatever name, it's not necessarily this name, we can just choose a name, and then people can put that identity information inside it. For a specific event producer, I don't think there will be, you know, thousands of identity labels, maybe just a few. And then it's up to the event producer, you know, to put the information there, and how the event consumer uses it, that's not something we need to define in the event spec, because we only need to define what is needed for the event consumer, for the event producer, sorry. Yeah, that's it. Okay, so Kathy, correct me if I'm wrong here, and I'm not asking this to try to sway you one way or the other, but I believe you originally talked about this as a proposal, and I believe that this proposal is actually consistent with the PR that I was hoping for us to vote on. And the reason I'm saying that is because I only want people to have one choice in front of them and say yes or no, not have multiple choices in front of them. So am I correct that this is consistent with the PR that we're going to be voting on? No, it's not. Because I think here, we say, you know, we will allow, we do not restrict the format to just key-value pairs. In the PR, you just say it's only key-value pairs at the root level. I think we should allow a property bag. So we call it an object classifier. I think that's minor; I consider that to be more of a typo. The intention was not to say it's only name-value pairs. It was just extensions in general. But we discussed a lot, you know, saying, you know, we want to exclude, you know, any definition of property bags in the main spec or in the extension spec.
So my proposal is we should not have that. We should not have that restriction. No, I don't think that restriction is implied by the PR, and if it is, I consider that to be a mistake. Yeah, I don't either. I consider value to be value. It can be anything. It doesn't say string value or primitive value. Right. If that's what we mean, we should say that. So that's fine. But I think that's a minor tweak. So let me ask the question, because we only have three minutes left: is there any objection to having a vote on the PR? I think another point is that we should define the name of the property bags if we think there's a large number of use cases. For example, if Google, Microsoft, Huawei, you know, IBM all use the same information, we should define it in the main spec so that there's no conflict. Yeah, that's fine. And I don't think that is inconsistent with the PR. The PR specifically allows you to do that. It's an extension. An extension could be anything. It's not just a single name-value. No, no, no, this is not an extension. I'm saying in the main spec. Okay. That's fine. If there's a large number of use cases, we should define that in the main spec. So I'm not, I don't, I think now we're all concentrating on extensions. But I think these issues are not just for extensions. It's more for addressing... Kathy, I don't believe the PR blocks you from doing what you're suggesting. I believe it is completely consistent. Okay. I think then we should modify that PR, because that PR does not give this information. That's why I gave this presentation. Okay. So tell you what, I feel very uncomfortable doing a vote in 30 seconds. I apologize. We were not able to come to a resolution, but what I want to do is, within the first 15 minutes of next week's call, have a vote. I think we've gone back and forth on this too many times, and I don't want to do a vote in 30 seconds. I just think that's not appropriate from a process perspective. We could also, like, vote virtually, like the TSC does. They have a presentation and then everybody votes in the PR. Okay. So it's okay. Tell you what, let's do this. I'm hearing people actually do want to start a vote now, and that's fine. And we're good, because anybody not on the call, we're going to let them do an email vote anyway. So if you're on the call and you want to vote now, I'm going to go through the names quickly and you can just say yes or no to the PR in question. If you don't want to vote right now and you're going to do it through email, then you have until, how does end of day Friday sound? Because I don't want this thing to linger. Well, I think, yeah. I mean, I wasn't suggesting that we have, like, a two-day thing with, like, somebody who missed this meeting, then, well, I don't know. I shouldn't have proposed a process in the last 30 seconds of the meeting. Well, there's the problem, right? So why don't we... Okay, so, okay, okay. I don't think we should change process in the last 30 seconds of a meeting. Right, so I said I was going to start it, I'm going to start it. And I apologize we're running a little over. So if you don't want to vote and you want to do it by email, just say that and we'll skip your vote. Adobe? So actually I would like to, if people are okay with my proposal, I would like the PR to incorporate this information before we do the vote. Otherwise we're just doing a vote, you know. Okay, I'm hearing an objection. We can do a vote quickly in the next meeting, right?
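For reference, a sketch, with hypothetical names, of the named-bag part of Kathy's proposal: when many producers need the same grouped metadata, the main spec defines one canonical, optional bag name so that producers do not each invent their own name for the same information, while the contents of the bag stay up to the producer.

```proto
// Hypothetical proto3 sketch; "identity_labels" stands in for whatever name
// the working group would standardize.
syntax = "proto3";

package example;

message CloudEvent {
  string id = 1;
  string source = 2;

  // Spec-defined bag name; the keys and values inside it are not defined by
  // the spec and are up to the event producer.
  map<string, string> identity_labels = 3;
}
```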
Yeah, I mean, I think if Kathy wants to adjust the language so that it's inclusive of her proposal, we should let her do that. That's right. That's fine. We can just do a quick vote in the next meeting. So Kathy, do me a favor: suggest an edit to the PR today, and then we'll try to get that incorporated over the next day or so, and then we will do a vote first thing next Thursday. Okay, sounds good. I can give you the proposal. Yeah, the comments. Okay, I apologize, guys, for running over, and I know some people may have left, but let me just do a quick roll call for anybody who's brave enough to actually stay on the call. So hold on a sec. Austin, I heard you. Sarah, I heard you. Mark, are you there? I'm here. Thank you. Hot wise, my thing's not working. William, are you there? Yep, I'm here. You're on? Yes. Okay. David Baldwin, Michael Payne? David's here, sorry. David, Michael? Yeah, I'm here. Okay, what about Klaus? Yes, I'm here. Okay, anybody that I missed on the roll call? Matt. Oh, hey, Matt. Anybody else? All right, thank you guys very much. I appreciate it, and I apologize for running over, but it's obviously a very hot topic. So, all right, thanks guys. Be prepared for a vote next week. Thank you. Bye.