Okay, we'll catch up with John later. All right, I'm gonna go ahead and get started. Skipping the AIs, the only thing I wanna point out, although I don't see Kathy on the call, so it doesn't do any good to do that, so let's skip that one. Okay, so first of all, before we get to the meat of the discussions, we had a doodle poll for the face-to-face meeting at the OSS summit. We only had four people say they could make it, so we are not gonna have an official meeting there. We may choose to just sort of get together informally, but it definitely will not be an official meeting if we do decide to meet, so you guys don't need to worry about that one. I did start a doodle poll, however, for KubeCon in Shanghai. I think we've had quite a few people sign up for that one, but I wanna draw your attention to it because we actually do have an intro and face-to-face session signed up for us. In the coming weeks, I will start a Google Doc or something to try to gather ideas for what people would like to discuss there and who will be there to help present or whatever, so we'll sort out the logistics later. But for right now, there is a doodle poll for people to sign up, so we at least know who's gonna be there. Let's try to get people to sign up by next week's phone call if possible, please, just so we can get an accurate list of people, all right? All right, moving on then, let's talk about PRs. Rachel, would you like to present? For which PR? I'm sorry. For the extension. What other PR is there? Yes, yes. Okay. Hey, real quick, I used the word "just" for this last week and I didn't really understand the negative context, I don't think. So I just wanted to make sure that everyone knew that I didn't mean that anybody had negative thoughts or didn't do anything. Probably wouldn't have been more accurate. So I just wanted to clarify that. Okay, cool, thank you Dan, I appreciate that. All right, so I thought that was a yes, you wanna share, right?
Yeah, it says I cannot start sharing. There you go, I had to stop sharing, so there you go. Okay, can you see my screen? Yeah, it's a little on the small side. There we go, much better, thank you. Okay. So I wanna start by acknowledging that we are slowing down progress that everyone is very eager to make, and I'm sorry about that. The thing that I do wanna say is that we're not doing it to be capricious, or because we're acting in bad faith and we just wanna stop the spec. To point to that, Spentures opened up a PR proposing a proto implementation, and that's the open source format that is used extensively inside Google and outside Google. Thomas has created a sample repo that we're going to walk through as part of this demo, but we also open sourced it, so if anyone wants to go look at it after this in more detail, or read through how it's working, you can totally do that. And we are slowing it down, but we're doing it because we think that if we make some changes now, it will be more broadly useful. So that's what we're doing here. And if you can hold your questions to the end, we have lots of people on the call who are happy to answer all your questions, and we just wanna get through this presentation as fast as possible. So to start, proto means roughly two things. There's a human-readable dot-proto file. It's written in the protobuf interface definition language, and that's used to generate multiple consumers of it: libraries, encodings, that kind of thing. There's an example of what this looks like here on the right side. One of those encodings is the proto binary format, and that is the other thing that we can use proto to mean. So to avoid confusion in this, I'm going to use protoling to refer to the language. That's the thing that is human readable that you can write. That also generates a JSON encoding, and we are bending over backwards right now to define a cloud events message in protoling that will generate JSON that is spec compliant.
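As a rough illustration of the kind of protoling definition Rachel is describing (this is a hypothetical sketch, not the contents of the actual PR; the message name, field names, and field numbers are all illustrative):

```proto
// Hypothetical sketch of a cloud event defined in the proto IDL.
// Field names and numbers here are illustrative, not from the real PR.
syntax = "proto3";

import "google/protobuf/struct.proto";

message CloudEvent {
  string event_id = 1;    // spec-required attribute
  string event_type = 2;
  string source = 3;
  string event_time = 4;  // optional attribute

  // Arbitrary extension attributes live in one well-known bag of
  // arbitrary JSON-like values, so the generated JSON encoding stays
  // expressible within protoling's guardrails.
  google.protobuf.Struct extensions = 5;
}
```

The `google.protobuf.Struct` well-known type is what lets a single declared field carry arbitrary key-value data in both the binary and JSON encodings.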
And you can read more about how it converts to JSON. And I'm going to refer to proto binary, or binary. So we can define a cloud events message in protoling and it will generate serializations for both JSON and proto. The JSON serialization I've linked to is not incredibly restrictive, but there are some restrictions. A source of confusion in previous conversations that we've had has been that the proto binary and the JSON serializations are totally independent. That's not true if they're both generated by a common protoling definition. Only a subset of valid JSON can be generated by protoling. It's pretty permissive, but it enforces some guardrails, and moving extensions to top-level properties does make it impossible for us to define a spec-compliant cloud event in protoling. So we acknowledge up front that this extensions bag is just a slightly clunkier API. It makes the promotion process for JSON-only systems more complicated than it would be if everything was at the top level, and we're still asking for this because we think it makes the spec more reliable and more useful, and we have a demo to show why we think that's true. The other thing I want to say about proto is that it's used extensively by other CNCF projects like Kubernetes and gRPC. And if we want our spec to be a CNCF project, we think that it should play well with other CNCF projects. Proto is used at companies that have lots of popular APIs, and we would love for them to be able to support cloud events. I just, I wasn't sure how many people were using it, so I just looked at the public mailing list and there are about 4,000 users, and just scrolling through, lots of them are big companies. Publishing a cloud events definition in protoling would let all those companies quickly start using cloud events within their existing proto-based systems.
And if a cloud events JSON format cannot be expressed in protoling, then every single company that's using proto internally is going to have a higher cost to start sending or receiving events. So that's all I want to say about that. It's not proprietary. So I'm going to stop sharing, and I'm going to let Thomas share to walk through his demo. This demo is using Doug's JSONX tool, and it's going to show that we get non-deterministic behavior if we circumvent protoling's guardrails for JSON. Thank you, Rachel. So one of the things that I wanted to focus on was that I think, to a large extent, the discussions that we've been doing where we use proto as an example aren't just about proto. They're about those guardrails, which I think are there for a reason. So I have a Git repo that I published yesterday called versioning is hard. I try to act as three different actors: our working group, a smart library vendor, and an application developer. And I try to do this all on the JSON spec as proposed, where it violates some of these kinds of guardrails, and see what could possibly happen. So if you look at the Git history, basically- Thomas, can you share your screen? Oh, am I not doing that? I'm sorry. I'm being technically inept, which is very embarrassing for a presentation. It says share screen, where am I? Oh, desktop, there we go. I missed one of the dialogue boxes. Can everyone see it now? Yep, yep. Okay, cool. So anyways, each of these is poking a little bit of fun at our working group. It's all signed off; each commit acts as one person. But we can go ahead and look at some of the final product. We have some fake cloud events, and we will run the application at version 1.0 of our spec where, let's see, we can see that the spec currently says that anything with an event ID property is a cloud event and anything else can be present. They're just extensions, outside of the scope of the spec. And then this is the simple JSON library.
Unfortunately, the JSON that is being proposed is incompatible with the default Go JSON parser. There has been an outstanding bug, that I linked to, that has been around for many years. Go still cannot, with the default parser, handle unstructured properties. So thankfully Doug tried to work around that, and so he has his special parser that I use. And then my application just basically loads it and does a debug print. It prints out the event ID, and we're pretending that this is also using the sampled rate extension that we ratified yesterday, or not yesterday, last week. And it says it was either sampled at a rate of one in something, or it was not sampled. So if you run it at 1.0, it says event 1.2.3 was sampled at a rate of one in 10. But now if we check out the 1.1. So what happened is spec 1.1 got announced. It said, hey, we also have an event time property. And by the way, we've also noticed that everyone likes to use sampled rates. So we're going to now formally define it. So the response for that is to just add those fields to the JSON, or the struct, in our library. And then suddenly when I try to run that code though, um, let me double check. I may have done my best to correct it. Did you pay homage to the demo gods first? There's the problem. I may not have. Man, I've had my bad luck lately. You're still sending event 1.0.json, did you have to send event 1.1? I wonder if... Yeah, you're getting paid a lot. That's the point, is that you shouldn't; this breaks even when you don't change the data payload. But let me double check what's going on. Oh, for some reason. While you work through that, is it useful if I talk about what we should expect to see? I can even just edit it on the spot. It seems for some reason my git state got messed up. I didn't have the change in the struct. Lib cloud events, cloud events.go. If I just add the definition for this thing, I think we called it sampled rate.
Editing the library to actually match, or adding to the library to match the new spec, means that if I run this, it now breaks. My application hasn't changed. My spec changed in what we thought was a forwards-compatible way. We used the latest and greatest tools from our own working group members. And there was nothing that could actually be done to avoid breaking this application. So whether we actually choose to do this extension or not, I think my request is that either A, we treat extensions separately always, just so it's a very clear idea that it's an extension or it's not an extension. Or B, at least recognize that when something is a well-known extension, if we promote it, that there are some formats and some libraries where this is a breaking change, and that we would do it as a breaking change. So I will yield my time. Okay, now I'm going to jump back to presenting. Okay, so the thing that we want to take away from that demo is that, oh, I can't advance my slides. Okay, so you can see my slides aren't right. I guess we can. So the thing we want to take away from this is that we need to balance forward compatibility and extensibility and structured data. And we have a proposal for how to do that, one that is flexible for all formats. So diving in for a second into forward compatibility, that's about adding new attributes to the spec in future iterations without breaking the existing event consumers that are still using the old version. In JSON, this is really straightforward because JSON keys are strings. If the keys are uniquely named, there is no risk of collisions. All the values are JSON primitives. And future iterations of the spec that add new properties are assumed to be non-breaking changes for existing JSON consumers. So what do you mean by primitives? JSON primitives? Yeah. That's defined in the spec. So we have a list. Okay, okay. That's okay. I can send you a link to it later on. So forward compatibility in proto binary is a little bit different.
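The failure mode Thomas demoed can be sketched in a few lines. This is a minimal Python sketch, not his actual Go repo, under the assumption that the 1.0 producer encoded the sampledrate extension as a string while the 1.1 library promotes it to a strictly typed integer field; the attribute names are illustrative:

```python
# Sketch of how promoting a well-known extension into a typed struct can
# break an unchanged application consuming unchanged event bytes.
# Assumption: the 1.0 event carried "sampledrate" as a string extension.
import json

EVENT_1_0 = '{"eventid": "1.2.3", "sampledrate": "10"}'

def parse_v1_0(raw: str) -> dict:
    """Spec 1.0 library: only eventid is known; the rest are extensions."""
    doc = json.loads(raw)
    return {"eventid": doc.pop("eventid"), "extensions": doc}

def parse_v1_1(raw: str) -> dict:
    """Spec 1.1 library: sampledrate was promoted to a typed known field."""
    doc = json.loads(raw)
    rate = doc.pop("sampledrate", None)
    if rate is not None and not isinstance(rate, int):
        # A strictly typed parser (like a Go struct) rejects the old encoding.
        raise TypeError(f"sampledrate must be an integer, got {rate!r}")
    return {"eventid": doc.pop("eventid"), "sampledrate": rate,
            "extensions": doc}

old = parse_v1_0(EVENT_1_0)
print(old["extensions"]["sampledrate"])  # the 1.0 app reads the extension fine

try:
    parse_v1_1(EVENT_1_0)  # same bytes, upgraded library
except TypeError as err:
    print("broken:", err)
```

The point being illustrated is that nothing in the event payload changed; only the library's idea of which fields are "known" changed, and that alone is enough to break a typed consumer.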
Why should event consumers, this is a question that we've gotten from a few people, so I want to just address this directly. Why should event consumers using proto binary not use the unknown fields for extensions? And there are a few reasons. Unknown fields don't provide forward compatibility, because the top-level keys are integers. To avoid collisions, an extension is going to have to use a high-numbered integer rather than the normal low-index integers. So when it is upgraded, we're going to have to flip to a low-index integer when it's promoted into the top-level spec. It's very hard for us to combine two extensions in proto binary. If they're at the top level, event consumers would have to individually write an additional protoling definition for the base spec and then combine the two extensions into a new type. And if the extensions are in a property bag instead of unknown fields, then promoting them from the property bag to a known field is a major change, not a minor change, because, like, we're checking for it in one spot and now we're going to check for it in another spot. So even though it's a minor change for JSON to add a new property, it's going to be a major change for binary formats. And this is a question I really want to draw people's attention to: will the working group be incrementing their semantic versioning only when a change breaks JSON-only event consumers, or when it breaks anybody? So diving into extensibility, and that's about allowing arbitrary attributes, either at the top level or in a property bag, so things that are not defined. So extensibility via property bags: I'm going to talk about the pros first. For event consumers using a binary format, extensions can be used without special handling of vendor-specific extensions. For example, sample rate is added to the extensions property bag. Event consumers can assign extensions to be a Struct, and there's a link to how that is defined.
Struct is defined; there's a lot of work that's gone into making sure that it can handle arbitrary values and that the conversion between JSON and proto binary is smooth, so we can take advantage of that. And for event consumers that are using a JSON-only format, extensions can be used without special handling too. So the cons of the property bag, this is what I think you're all very familiar with, but I'm going to tell you what you believe anyway. Promoting an extension to a top-level attribute is a breaking change for both the binary and the JSON formats. For example, if sample rate is widely used and then it gets promoted to be a top-level attribute, to be backwards compatible, event consumers are going to need to accept the 1.0 events where sample rate's still in the extension bag, and then to support 2.0 events the consumers are going to need to start looking for sample rate as a top-level property. So avoiding that breaking change is the primary motivation, I think, that most people in this group feel for moving away from the property bag to put all extensions at the top level. So if we use extensibility via top-level properties, for JSON-only implementations this is fine as long as they're uniquely named, and the promotion path is absolutely seamless. Event consumers will see no change between the attribute being an extension and being a known attribute. But for protoling implementations we can't easily handle top-level attributes, so there are two workarounds. The first one is to handcraft the proto binary, adding the known attributes as integer-keyed top-level attributes and adding unknown properties to a property bag. This is going to require abandoning the built-in conversion between JSON and proto binary that's provided by protoling, and special-casing that for cloud events is going to be an uphill fight for every system that we want to support this, both inside Google and for everything that's using protoling.
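The promotion cost Rachel describes can be made concrete with a short sketch. This is a hedged Python illustration, not spec text; the attribute names and the "extensions" key are assumptions for the example. A consumer that accepts both pre- and post-promotion events has to check two places:

```python
# Sketch: once "samplerate" is promoted from the extensions bag (1.x
# events) to a top-level attribute (2.0 events), a backwards-compatible
# consumer must look in both locations.
import json

def get_samplerate(raw: str):
    doc = json.loads(raw)
    if "samplerate" in doc:  # 2.0 events: promoted top-level attribute
        return doc["samplerate"]
    # 1.x events: still sitting in the extensions bag
    return doc.get("extensions", {}).get("samplerate")

v1_event = '{"specversion": "1.0", "id": "a", "extensions": {"samplerate": 10}}'
v2_event = '{"specversion": "2.0", "id": "a", "samplerate": 10}'

print(get_samplerate(v1_event))  # 10
print(get_samplerate(v2_event))  # 10
```

With a property bag, every promotion forces this dual lookup into consumers; with everything at the top level, the same line of code works before and after promotion, which is exactly the trade-off being debated.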
So the second workaround is that event consumers could drop unknown attributes, but that is effectively dropping extensibility, so we don't like that. So the conclusions and our requests are: conclusion one, if the extension bag is removed, the JSON format cannot be expressed by protoling, and Google will be unable to avoid fracturing the spec, because we're going to have to duplicate the JSON format and it will differ, and so we're going to break into a format that is compatible with protoling and a format that's compatible with the spec. We know that it's a sacrifice; everyone that's in a JSON-only system is going to be giving up the cleaner JSON and the seamless property promotion path for JSON-only systems, and we think that in giving that up, it also makes the spec useful to a much larger group. So here are our requests. We know that if you're not using protoling, it's kind of like we're asking you to go out on a limb and think about how things will work in other situations that you're not dealing with right now, and we ask that you do that. There's a conflict that we see coming down the road between forward compatibility, extensibility, and the ability to structure your data. And so we would like the working group to decide which goals we value and which we're willing to sacrifice. If the working group wants to sacrifice support for structs, or if the working group has a plan to only make breaking changes, or to increment a major version, when it breaks JSON-only systems, that's something we would like to know sooner rather than later. So that's it, that's all I have, and we're happy to answer your questions. Okay, so like we did last week, I think it's important to have an ordered discussion here. So if you want to raise your hand to say something, put a plus hand into the chat and I'll do my best to notice it.
And the only reason I put my hand up first is because you guys mentioned my tool, so if you can stop sharing for a sec, I want to show one thing. Thank you. Can I ask a question? Kathy, hold on a sec, let me finish what I was gonna say first, then you're next in line. Let me just find my, I think that's it. You guys can see that, right? Yep. Okay, so since Thomas was using my tool, and if I heard him correctly, I think he made a claim about how when you upgrade from one version to the next and you add a new optional property that was once an extension, it's a breaking change, there's no way around that. And that's technically not true. In fact, in that repo you can actually see, I have an example where I have a person version one with just name, and a person version two with name and address, with the assumption that for version one, address was in extra, meaning it was just that extra place where extensions go. And so when you take this, the JSON gets parsed. The example shows you parsing into a version one person and a version two person. And you can access it, because the toolkit that I'm using, which is the same thing that Thomas was using, my little ext thing, I can actually access address from both structures with the exact same line of code. And when you run it, you actually get the exact same value coming out. So my point here is not to say that there won't ever be any problems. My point was just that it depends on what tooling you're using going forward. And that's the only thing I just wanted to point out, because I didn't want people to be misled, at least for my tooling, that there's this problem that it can't be done when it can be done. There's an example in there that shows it. So with that, Kathy, I believe you're next in the queue. So go ahead. So I have a question for you, Rachel. So you mentioned this extensions bag, and do you mean the extension keyword we used to have? Are you referring to that?
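The access pattern Doug is describing can be sketched as follows. This is a Python illustration, not his actual Go tooling; the person/address example follows his description, but the parsing helpers and field names here are hypothetical:

```python
# Sketch of tooling where the same accessor line reads "address" whether
# it is a declared field (version two) or still an unknown extra
# (version one), so promotion does not break the caller.
import json

class PersonV1:
    KNOWN = {"name"}            # version one: address is not declared

class PersonV2:
    KNOWN = {"name", "address"}  # version two: address was promoted

def parse(raw: str, version) -> dict:
    """Split a JSON document into declared fields and unknown extras."""
    doc = json.loads(raw)
    known = {k: v for k, v in doc.items() if k in version.KNOWN}
    extra = {k: v for k, v in doc.items() if k not in version.KNOWN}
    return {"known": known, "extra": extra}

def get(person: dict, key: str):
    """Uniform accessor: declared fields first, then the extras bag."""
    return person["known"].get(key, person["extra"].get(key))

raw = '{"name": "Ada", "address": "123 Main St"}'
p1 = parse(raw, PersonV1)  # address lands in the extras bag here
p2 = parse(raw, PersonV2)  # address is a declared field here

print(get(p1, "address"))  # 123 Main St
print(get(p2, "address"))  # 123 Main St, same line of code, same value
```

The design choice is that the accessor, not the caller, knows where a property currently lives, which is how this style of tooling smooths over promotion for its users.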
Yes, that is what I'm referring to. So, okay. Yeah, I just want to make sure: you're opposing top-level properties. It's not that extension keyword, but you propose some attribute which is of the map format. Yeah, so we're saying that it is not compatible with protoling-generated JSON to have arbitrary attributes at the top level, but if they are in an extensions bag, or in a known property that we have a type for, then we can handle it. So your point is, if we put those into a known, well-defined bag, it's better. Is that what you mean, or is it worse? Yes, yes. That's what we're saying. It's better? Yes. Okay, I see. Cool, and I think I'm next. I'm not sure if I'm supposed to raise my hand to respond to these comments or not. Yeah, that's not good. So in response to the thing with Doug, I do try to be, if you check that GitHub repo, as intellectually honest as possible. I did actually disclaim this in the footnote that Sarah pointed out. So the point of the discussion was supposed to be about structured formats of any kind, not just proto. And there is a workaround, with certain usage patterns, that is okay. I personally would not recommend the one that was just shown, because I'd actually recommend using two structs, because now, with the example in the futures section, anybody who says event.extras will invariably be broken. You can escape out into a string-only system. This is how some libraries do it for HTTP headers, where you access everything the same way. There are no unknown headers; everything is just treated the same. I would argue at that point, we're not talking about the trade-offs of the structured system, because you've gone to a map-based system again. I think it's an important note, though, that throughout this demo, we're fighting tools at many steps of the way. And I would like to have a spec that is not just possible to use correctly, but easy to use correctly.
My personal trade-off, and it doesn't have to represent the group's, is that I would rather have something slightly more verbose, but very, very hard to use incorrectly. Very hard to walk into a trap where you've accidentally made something not forward compatible. All right, thank you. Clemens, I believe you're next. Yeah, so I have a few observations. First, what I find interesting is that the entire protobuf discussion is done through the lens of existing tooling. And it's really not, it's not a protocol discussion. And I find that a little odd, because we're really trying to build interop, and trying to build that at the protocol level. What I'm seeing is effectively lots of discussion about tooling particularities and idiosyncrasies, specifically around JSON, which I frankly don't understand, because we have a canonical spec for JSON. And if the particular tooling that you're choosing is unable to handle the JSON that we have, which is a very straightforward flat structure, then the question is whether the tooling is right. I don't think there's any need for using a... What I see is that you're defining a protocol document, and you seem to be insisting on using that particular document to also do all of the JSON serialization work that you're doing. And I find that choice rather odd. And since we have a normative specification for how JSON cloud events work, it's unclear why that's even made a topic. I would understand if you were to have an argument about how it's impossible to serialize unknown fields at the top level purely in the protobuf binary format, but I don't understand the JSON angle at all. So that's one point. Second point: I did a bit of the math in a comment on your protobuf submission in the repo. We currently have eight well-defined fields. Of those, four are required.
If you were taking the road of basically making the entire message not strongly typed, that is, if you were not doing positional encoding but putting it all into a dictionary, which would make it completely equivalent to JSON, then what I came up with is effectively an overhead of four bytes per field. And if you squish it down and really make it positional, and use effectively the capabilities of the wire format that protobuf gives you, you could get to two bytes of overhead per field. That adds up to between eight and 16 bytes, and that's still within the AES 16-byte padding range, which means if you sent this all over TLS, overall the gain that you have by using positional encoding with protobuf might completely go away just because you're using encryption. So for me, the question is, why are we going through all these exercises? We can take the key-value-pair model, the maps, arrays, and values model that JSON has, which is also used by several other binary encodings (MessagePack, EXI if you take the XML version, or BSON), rather than the rather particular case of a constrained, hard-wired type model as proto has it and also as Thrift has it. Why are we making all this effort for really, effectively, very minimal gain on the wire? All right, I have a comment. I believe Mark is here next. Yes, so the presentation talked a lot about versioning, and my main question is, it didn't really address the difference between optional and required fields, where we had felt that optional fields would only require a minor release, as opposed to required fields, which would require a major release. It seemed like the presentation was really just talking about major releases and required fields. Rachel, can you address that? Should I address it now, or should we let people, I don't know.
No, go ahead and respond, because otherwise people may forget what the question was. Just go ahead and respond. Okay, so we focused on required fields because we think that's a use case where it's breaking for everyone. We did. We can talk more about minor releases that only have optional fields. Since I'm next anyways, I can just add on to that. Actually, you're not next. Oh, I thought I was, sorry. But go ahead, I'll defer my slot to you, go ahead. Oh, I missed that you had done it twice. I was just gonna say we're not changing our stance at all about adding brand new, from-whole-cloth optional fields. Those should be acceptable as a minor revision. I would say, for example, if we added something that is already in the well-known extensions list, we are quite aware that this thing does exist and there is usage, and that ratifying an extension, not just adding a new property that we've never seen before, but ratifying an extension, is a breaking change. That was my suggestion. And I think that's why we focused so much on major versions. I'll give it back to Doug. Okay, thank you. I'm speaking now strictly from my point of view, not as someone who's trying to run the meeting, but I did wanna point out something. Thomas was talking about my tool, and I think you were implying my tool was kind of flattening everything into name-value pairs and stuff, and that you're not able to use structured processing. That's actually not true. It only does that for unknown properties, not the ones that are well-known to the structure itself. Those you can still always access by direct reference to the structured elements. But I guess my question, I have two questions for you guys. One is, in the stuff that you guys posted yesterday, it made it sound like you guys were saying the existing JSON that we have in our spec is not compatible with proto, and that you guys are going to be requesting changes to our existing JSON. And I'd like clarification whether that's true or not.
And my second question is, a little bit of a question, a little bit of a statement: this notion of ratifying extensions is something I think we need to get clarity on, because the spec by definition does not know about extensions, period, nor do we know about all possible extensions out there. So this notion of ratifying an extension being, I'm sorry, the notion of creating a brand new optional top-level property as an act that is different from ratifying an extension is just false. They're both the exact same thing. The spec doesn't know about extensions to the spec. Everything is a brand new thing, and we can never know for sure what is being used as an extension someplace. So to say that we're gonna add something brand new to the spec, and therefore we don't have to worry about that existing thing being used as an extension, just is not true. Everything could technically be used as an extension. So I'm not sure we can differentiate between those two. So I'd like to get an answer to my original question though: what in our JSON needs to change, according to your request? Okay, so it does not, as it stands now. But if your PR that merges extensions, so extensions can be in top-level properties, goes in, then it is not compliant. That's why this is coming to a head now. Okay, because the PR that's out there right now for your proto stuff says it's incompatible. It says that? What's incompatible? It says the JSON from the spec is incompatible with the JSON generated by proto. I think that may have been partially misspeaking; we've been very worried, obviously, about the extensions bag. One case where we would actually still not be able to be fully compliant, and I've been trying to pull in the core proto people on it, is data: the fact that data is so variable, and that if it is JSON inside a JSON spec, it has one type of encoding, and if it's bytes, for example, it's a byte string.
We currently got around it by saying, if it's JSON-like, we call it data. And then we had to say something like bytes data for raw bytes. Those are the two places I've seen. Effectively, the three things that we can't do in a proto-compatible JSON are: distinguish between a zero value and an empty value; have multiple indeterminate top-level fields from outside the main hierarchy (if it's just the working group that releases a new version, there's version 1 and version 1.1 and they are forwards and reverse compatible, but as Rachel covered, if there are 20 people adding their extensions, it's very hard to coordinate with proto); and then the third thing was just that mutable-shape idea, like something is either A or B. So we had to say that data is JSON-like when it is called data, and it is binary when it's called bytes data. I'm only speaking up because no one else is in the queue. But that sounds like what you're implying is you're asking us to defer our JSON serialization format to whatever proto does. Not exactly, like, I don't think that that's exact. Sorry, can I answer first, Thomas? Yeah, please. Okay, I would characterize it as writing the JSON that we want and then trying to stay within boundaries that exist, because we have found bugs in the past because of them. Yeah, I have been trying to take the JSON spec that the committee has come up with and treat that as gospel, reverse engineering it and reporting back issues. Obviously my life would be easier if we stayed within those confines. I know that to do that just because it makes people's lives easier is not appropriate in the group. But I think to some extent it may be appropriate to say, why were those decisions made by another group, and are those lessons that we can learn from? Okay, Clemens, I think your hand's up.
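The first limitation Thomas lists, not being able to distinguish a zero value from an empty value, can be sketched briefly. This is a hedged Python simulation of proto3-style JSON emission (which by default omits scalar fields at their default value), not the real protobuf library; the field names are illustrative:

```python
# Sketch of the zero-vs-absent ambiguity: a proto3-style JSON emitter
# drops scalar fields equal to their default, so a consumer cannot tell
# "samplerate explicitly set to 0" from "samplerate never set".
import json

def emit_proto3_style(fields: dict) -> str:
    # proto3's JSON mapping omits fields at their default value (0, "", ...)
    return json.dumps({k: v for k, v in fields.items()
                       if v not in (0, "", None)})

explicit_zero = emit_proto3_style({"id": "a", "samplerate": 0})
never_set = emit_proto3_style({"id": "a"})

print(explicit_zero == never_set)  # True: the two events are indistinguishable
```

In the real library there are workarounds (wrapper types, optional fields), but the default scalar behavior is the guardrail being referred to here.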
I'll just observe that I find that trying to change an open protocol specification to the particular limitations of a proprietary serialization stack, and making demands out of that in an open standards forum, is highly inappropriate. I appreciate the moral high ground that you're trying to take. No, there's no, no, no, no, no, don't do that. It is not a proprietary format. Let me be very clear, right? We have principles for what's eligible and what's not in this working group. Protobuf, your entire stack, might be used fairly widely, but your stack is Google proprietary. It is not under the umbrella of an open source foundation, and you can change it and you control it as much as you like. And Google does change it as it likes, because we got proto2 and then we got proto3, and you basically control the destiny of that stack without anybody else influencing it. So as such, it is entirely proprietary. There is no moral, there's no moral argument here. I'm making an argument for principles of how open standards are being created. So what you're trying to do right now is to go and say, we're Google, we're great, we're big, we have the stack. And now this group needs to bend its formats to the will of the particular artifact that we're using, without even looking at the reality of the wire, and without the focus on what you really ought to bring to the table, and that's your binary format. The JSON format that we have here is 100% orthogonal to your tooling, because you can use, like everybody else, just another library for doing JSON parsing; nobody's forcing you to use your library. Your library is really good for creating the binary format, but if your library is not good for creating the JSON format, well, then that's so. But I don't see how the working result of this working group should bend to the needs of a particular library like protobuf. Okay, so let me just keep going through the queue. Sarah, I believe you're next.
I just wanted to follow up on a comment you made, Doug, where you said that having the extensions at the top level would mean that we would never ratify an extension. No? I thought that was like the whole point, so I was confused by that. Maybe you could clarify what you said. Yeah, no, that's definitely not what I intended. My point was that there was a distinction being made between us adding a new property in version 1.1 where that thing being added is completely brand new, versus it being a known extension that's being promoted. And my point was that from a spec perspective there is no differentiation. The spec knows about no extensions, period. All we can do is add new optional properties in version 1.1; whether a property is already being used or not is unknown to us and doesn't influence anything, in my mind. That's all I was trying to say. Yeah, I think you're right. I mean, you might use an implementation in order to make a case for something being promoted, or being ratified in the spec, but yes, it shouldn't influence our decision. However, from a technical perspective, I think this whole conversation is to talk about, well, we expect that people will be using extensions in order to try stuff out. And then we expect it will be common that stuff people have used, with implementations in the wild that have extensions, will later be ratified in the spec, and how do we technically deal with that? That is, I think, the subject of the discussion, right? I guess, yeah. Because we expect that to occur, we have to reason about how our spec handles it. And so we need to think through: either we go the route of saying this is a JSON-only spec and we don't support any other things and we're not interested in binary protocols, or we need to reason through these concerns.
And I think that Thomas and Rachel have made good efforts to show that these are issues that would happen with any structured binary protocol. And it just makes it more difficult, not impossible, to implement. And we have to think about what we value as a community in forming the specification. Okay, thank you. I believe I'm next. So the net effect, though, of trying to make a distinction like what was being proposed here, of known extensions being promoted, is that you basically have to treat every single new property that gets added... Can someone go on mute if they're typing? You basically have to treat every new property that gets added to our spec as a known extension, because obviously someone's probably been using it, otherwise they wouldn't be adding it. And then that's gonna be a major bump according to what you guys are asking for. So any change to our spec in terms of new properties that we add will result in a major version bump regardless of this PR, which is what you guys are suggesting. Now, the other thing is, the reason I really raised my hand was that I'm really confused about why this specification is so problematic, because there are a ton of other specs out there with JSON serializations that allow new properties to appear at the top level. And I have never heard of one that tries to ban new properties at the top level like this, or forces people to put things in certain buckets, or says you can't use binary proto formats with it. I've never come across another spec like that. Maybe my scope is just very limited. So I'd really like it if someone can point me to another spec that has these types of problems and was forced to make the choices that we're being asked to make, because I cannot find one and I really wanna see another example of it, so that I can see why they made those choices. Okay, who's next in the queue? I believe I'm right after you. Okay, go for it.
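The point above about specs that tolerate new top-level properties can be sketched as follows: a consumer partitions top-level members into spec-defined attributes and extensions, and promoting an extension in a later spec version is just growing the known set, with no change to the wire format. The attribute names here are hypothetical, not taken from the actual spec.

```python
# Hypothetical set of spec-defined top-level attributes; the real names
# and membership would come from the CloudEvents spec itself.
KNOWN_ATTRIBUTES = {"specversion", "type", "source", "id"}


def split_attributes(event, known=KNOWN_ATTRIBUTES):
    """Partition a parsed JSON event into spec-defined attributes and
    extension attributes. Anything not in `known` is an extension, the
    way HTTP headers or XML elements tolerate unknown names."""
    spec = {k: v for k, v in event.items() if k in known}
    ext = {k: v for k, v in event.items() if k not in known}
    return spec, ext


event = {"specversion": "1.0", "type": "t", "myext": 5}

# Under version 1.0, "myext" lands in the extensions partition.
spec_v10, ext_v10 = split_attributes(event)

# A later version "promotes" myext by growing the known set; the event
# on the wire is byte-for-byte unchanged.
spec_v11, ext_v11 = split_attributes(event, KNOWN_ATTRIBUTES | {"myext"})
```

The design point is that promotion is purely a consumer-side vocabulary change, which is what makes it a non-breaking, minor-version event for formats that allow top-level extensions.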
And to be clear, I am not trying to put my foot down and say that our JSON must be dictated by proto. I am, A, trying to raise awareness of what the tooling is. I think we should have a goal of considering the practicality of implementing things; certain features of a language or spec are easier to implement than others, and JSON is no exception. The other thing is effectively putting the choice about Google's roadmap in the committee's hands: when we eventually try to add support for gRPC, having published a proto file in open source will de facto imply a JSON spec, and we are stuck between a rock and a hard place. We cannot prohibit someone from taking our proto and using the generated JSON version, and at the same time we can't, without help, reconcile them. And so we're trying to put it in the committee's hands: which do you prefer? And it's absolutely okay if there's a formal declaration of "we understand Google said this would cause this implication, go ahead." We just want to make sure that that's actually in some minutes somewhere, that we're trying as hard as we can to not fracture the spec. Okay. Is there anybody else in the queue? I don't think so. Would anybody like to make any other comments, questions? Okay, we have 12 minutes left. Now, on last week's call, not Thomas, Clemens, you said there was some proof-of-concept stuff. Do you want to go over that in five minutes, or skip it? I would prefer, so, this basically makes a similar point to the one you made in your POC: that we can follow the robustness principle of accepting more, and have a specification that allows extra stuff. Also, as Tim just pointed out, JSON and XML have been super, super successful with that. It's very possible to make binary formats also use that. And so my prototype, I don't want to spend much time on it.
I pasted the link. It basically illustrates that you can have an extensions bag if you need it in a binary format, and still make that compatible with having no extensions bag in the core spec, and have the formats, including the binary ones, deal without an extensions bag. And not having an extensions bag formalized actually allows for better extensibility, as proven in HTTP, as proven in JSON, as proven in XML. Okay, any last comments or questions? Okay, in that case, what I'd like to do, I'm sorry, go ahead. Yeah, there are two comments in the chat, one from Steve and one from me. So Steve was asking, where is the proto-to-JSON need coming from, because the spec currently requires JSON format support, and that needs answering. And I'd like some context around the fourth footnote in the "versioning is hard" doc; doesn't it just basically say that this could be built using a map? Again, I'm not familiar with proto, so I might be wrong, but I'd like to get some clarification on that. Thomas or Rachel? I didn't catch that. Could you restate it? Sorry, the connection's not very good. Sorry. So if you have the chat, there's a question from Steve, the second-to-last message: where is the proto-to-JSON need coming from? Okay, I can take that. So if we define a message format in proto lang, then it will let us take incoming JSON-format cloud events, handle them easily, and then convert them to proto binary. And this is not just an internal need; this is a thing that we think is useful. Does that capture what you're looking for? Yes, I think so. Okay, and Vlad, maybe we can get an answer on your question then? Yeah, the fourth footnote in the Thomas and Rachel doc says the library would have to be built off a map. Wouldn't that fix the issue you're describing, or am I not understanding it? So the problem is not that it is impossible to create a correct library, it's that the intuitive thing that many reasonable developers would do will put them in a trap.
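The prototype idea mentioned above, an extensions bag that exists only in the rigid binary shape while the JSON wire format keeps extensions at the top level, can be sketched like this. Attribute and field names are illustrative, not taken from the prototype repo.

```python
# Hypothetical spec-defined attribute names, for illustration only.
KNOWN = {"specversion", "type", "source", "id"}


def bag_for_binary(json_event):
    """Collect top-level extension members into an "extensions" map so
    a fixed binary schema can carry them; the bag itself never appears
    on the JSON wire."""
    out = {k: v for k, v in json_event.items() if k in KNOWN}
    ext = {k: v for k, v in json_event.items() if k not in KNOWN}
    if ext:
        out["extensions"] = ext
    return out


def unbag_for_json(binary_shape):
    """Inverse mapping: flatten the bag back to top-level members when
    emitting the JSON format."""
    out = {k: v for k, v in binary_shape.items() if k != "extensions"}
    out.update(binary_shape.get("extensions", {}))
    return out
```

The round trip `unbag_for_json(bag_for_binary(e)) == e` holds for any event whose top level has no literal "extensions" member, which is the compatibility claim the prototype is making: the bag is a format-local encoding detail, not part of the core spec.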
So the problem is, if you look at the repo, there's a git commit from each author, with the persona labeled in each one. And it's really hard to say who was the bad person or who was the junior developer who made this mistake. And I'm trying to, as I say, raise awareness about the fact that a good tool should be easy to use and hard to misuse. A good spec should be easy to implement and hard to implement in a bad way. And I think that we can do better as far as making sure that we have a spec with fewer rough edges around the library implementation. Also, with respect to the JSON question, this is actually a fairly common use of proto. We even use proto internally for in-memory structures, because we know we can hand it off to anything, even a sidecar server. And so the proto compiler, the code generation that it has, generates objects that can be read from both proto binary and from JSON. Okay, so I think that we're basically done, plus we're almost out of time. So here's what I'd like to do. We're gonna kick off the vote right now. I'm gonna go through the list of companies that have voting rights and ask if they wanna vote right now, and basically what you're doing is voting yes on the PR or no on the PR. This is the PR to basically remove the bag. It's 277, right? So a yes vote means you wanna approve the PR, a no vote means you don't wanna approve the PR. You can choose to abstain. If you are either not on the call, obviously, or you choose to defer your vote until later, just say so, that's fine. We'll give people who cannot vote now, or don't wanna vote now, until the beginning of next week's phone call, noon Eastern time, when the vote ends. But for people who wanna vote right now on this call, I'd like to go through the roll call, okay? So Adobe, is anybody from Adobe on the call? Yes, I'm here, Doug, and I vote yes. Okay, Alibaba, is Ryan on the call? Okay, Rachel for Google? I vote no. Okay, Huawei?
Sorry, yeah, I was on mute. So this is a vote for what, for removing the... This is for PR 277, which removes the bag. And then you're also saying that people can vote asynchronously if they wanna absorb what's happened? Yes, if you want more time to think, you have until noon Eastern next Thursday to vote. So you don't have to vote right now if you don't want to; you can defer until later. Okay, I would like to understand more about the concerns. And then, you know, so this just removes that extensions bag, right? I suggest, Kathy, this is just 277, I suggest you don't vote now, and that you go look at 277 and decide, because I don't want you to feel rushed. I would suggest that you take time to review the PR. Is that okay? You have until noon next week to vote. Okay. IBM is gonna vote yes. Ken, I'm sorry, Chris, or someone from the JS Foundation? Okay, Microsoft? Microsoft votes yes. Dan, for NAIC? I'll defer. You'll what? Defer until later. I'll vote after thinking about this a little bit longer. Okay, got it. Someone from NATS or Synadia? We'll defer until later. Okay, Nordstrom? Eric? Steve, okay. Oracle? We will vote, but we will defer. Okay, Collins, or Serverless? I don't think they're on the call. Let's double-check. Anybody from Serverless? Okay, I think Mark had to leave early, but let me just double-check. Mark, you still there? Okay, SAP? Yeah, we will defer. Okay, Vlad? I'll defer. I'd like to look a bit more into the process, and I'm leaning towards a yes, but I'll defer for now. That's fine, deferring is fine. Shivram? I'll defer, I'll leave it until the morning. Okay, so just to be clear, you guys have until noon Eastern next Thursday to vote.
What I'm gonna do is take the voting that's here right now and put it into a comment inside the PR, from these, what is that, five companies, and then if you guys can vote by putting an LGTM or a not-LGTM into the PR itself, we will close it down next Thursday at noon Eastern. Sound okay to everybody? Yep. Could you put this link into the meeting minutes? You don't need the link to this; this is in the minutes already, you know, at the top is the attendance tracker, but what I'm gonna do is take the current voting and stick it into the issue itself, or the PR itself. Yes. Okay, so with that we're almost out of time. Obviously we don't have time to hit another issue. Let me go ahead and do the roll call again. We actually had quite a few people join, larger than normal. John Ballester, are you there? And if you're typing, can you go on mute? So John Ballester, are you there? John Mitchell? I'm here. Okay. Zun Zun Shang? Yes. Thank you. David Lyle? Yes, I'm here. And Joe Sturman? Joe, are you there? Okay, Matt Rikowski? Here. Thank you. Klaus, are you there? He is, I'm here. Okay, Kathy, I heard. Erika? Erika Diaz? What about Rocky? I'm here. Excellent, I love that name, by the way. Renato? Renato, are you there? Hey, what about Ying? Ying Li? I'm here. Thank you. What about Luciano? Luciano? And there was someone on the call with Chris. Chris's iPhone, I think, is the thing. Chris with a Y, are you there? Okay, is there anybody on the call that I missed from the attendance? Oh, I'm sorry, Doug. Doug Miglore? There you go, thank you, Doug, I gotcha. Okay, anybody else I missed in the attendance? Okay, we have a whole two minutes left. Is there any really quick topic people would like to bring up? Okay, in that case, I have one thing. Kathy, just a reminder, you have an AI to open a PR to upload the workflow doc to the serverless working group repo. Oh, yes, I'll do that. Okay, cool, thank you, just wanted to remind you of that.
All right, in that case, we are done. Thank you guys very much. Very exciting. Thanks guys. Okay, bye. Thank you.