Hey, Eric, how's it going? It's good. How are you? Pretty good. And Randy, are you there? Hey, yeah. Excellent. This isn't your first time, right? You've been here before. Yeah. Okay. A couple of times in the past and just last week. Okay, that's what I thought. I just wanted to make sure. Cool. Yeah, I apologize. You know what? I meant to send out a note reminding people that we now have a password. Hopefully that didn't mess you guys up. Caught me for a second. I had to go to the specs. Yeah. Busy, busy day. Hey, Tommy. Hey. Hey, Timmer. Hey, Doug, how are you? Good. And Randy. I'm sorry, not Randy. Yeah, Brian. Sorry. Sorry, getting you guys mixed up. Hello. Hello, sorry. Mr. Mark, you there? Hello, Matt. Yes, you're there. Mr. Clemens, you there? I have arrived indeed. Excellent. Hi. So this is Matias. Sorry. And we're so excited that you're here, Clemens. Hey, Lucas. Hello. Morning, David. Good morning, Doug. I knew someone was going to ask me. What's the password? 777777. Somewhat. Yeah. It's the first time it's actually done this to me. It's actually required a password. They just turned it on, like, earlier this week or late last week, something like that. Yeah. I'm not quite sure why. Hey, Ginger. I was very confused that the link disappeared from the README page, because that's what I've been using. What did I do? I just pointed you to the agenda doc instead. Ginger, are you there? Yes, I am. Thank you. Cool. All these new security features. Yeah. It's wonderful. I have to admit, I keep hearing about them turning on security features on all these other conference calls and stuff because, you know, bad things happen at times. I've yet to be on one where anything actually weird happens. Just once I'd like to actually see it for myself. Not that I actually really want to. It's just morbid curiosity more than anything else. I have. I've been Zoom-bombed once, and that was the most disgusting thing I've ever seen. It was terrible.
I was going to say, I'm sure we could get somebody to Zoom-bomb us. We need to liven this call up a little bit. That would definitely do it. Yes. I'm not, I'm not going to tell you what happened, but it was, it was awful. It was traumatizing. And so I'm very glad that they're doing that. Interesting. So offline, you're going to have to tell me what exactly you saw. Cause I obviously don't want it on a real Zoom call, especially where it's recorded. But like I said, morbid curiosity. I just. Cause I'm expecting the usual stuff. I'm expecting usual stuff like. Say it again. You could probably Google it. I'm sure there'd be lots of stuff. No, see, see, that'd be different, right? Cause then you're going to get some really sick stuff. But, you know, stuff that you got from a CNCF call, I was hoping wouldn't be too bad. You know, the normal stuff you might expect, you know, swearing, or just interrupting, or even, even porn you might almost expect. But coming into this, I was like, you're talking about stuff that's even more disgusting. It was terrible. I don't want to think about it. Okay. I apologize for the slight diversion there. Let's go back. There are, there are Reddit groups and 4chan groups where kind of open Zoom calls are being posted, and the kids just go and storm them. Wow. Okay. Let's get back to the fun. We're at time anyway. So Lance, you there? Yes, I'm here. All right. And Michael. Michael. I apologize. I butchered that, I'm sure. Michael. Okay. What about Thomas? Hello, everybody. Hi. Hello. All right. We'll circle back around. That's stuff we'll get to later. Let's get on with the fun stuff. All right. Community time. Anything from the community people would like to bring up that's not on the agenda? Okay. All right. Not hearing any. Excuse me.
We do have an SDK call this week. Let me just double-check. I don't think we have anything on the agenda. But let's just go see. Yeah. Nothing on the agenda. So if we don't have anything by the end of this call, we may just cancel that call. Although I do wonder whether this question here from Grant is an SDK issue or not. So we'll talk about that one later. Discovery interop? Not this week. Okay. Okay. As a reminder, we did agree on November 2nd for interop. So please take a look at the interop doc itself, help fill it out, and start coding away. Right. Timmer, anything you want to mention from the workflow side of the house? Yeah. Thanks, Doug. From the workflow side, we finally finished all the logo stuff. We updated the website. CNCF pages are updated. And then we're going to go through the documents. And then people came up with the take that the logo looks like an open source USB logo. That was funny actually. But other than that, from the spec side itself, we're kind of trying to enforce OpenAPI for some sort of a service invocation. So we're going through that and trying to figure out if we should start preparing or not for the KubeCon stuff. All right, before we jump into PRs and stuff, any other topics you want to add to the agenda? Anything we should talk about before this? Let's jump into it. One minute. Okay, so this one is for me. I don't want to vote on it today, even though it was put out there before Tuesday, mainly because I think it needs a lot of thought put into it. Basically what I did is, first I tweaked the asynchronous text to make it a little bit more generic. There was a little too much logic in there about what to do in certain situations, and I found myself repeating text that really was already described in the synchronous response case. So I basically just said, the async should look almost exactly like the synchronous case. You're just getting the results from a different endpoint.
So I tweaked that a little. Hopefully I didn't mess anything up, so please take a look at that. I did change it so that POST can take a list of services instead of just one. And I'm very nervous about this, because I think doing a POST one at a time, especially if you're doing some sort of mass import, is just a real pain in the butt. I think having the ability to do a batch upload is very important. However, that introduces a whole bunch of interesting problems, like, what if one of them fails? Do you kill the whole thing? Or do you just say that one failed and let everything else go through? I took very much an all-or-nothing kind of approach to it, because I didn't want to get into partial-error-reporting type stuff. But even aside from that, you then have to figure out, well, how do you tell the caller what the various IDs are for all the things that were created? So I have a mechanism to return that in the response, but I had to worry about whether the response is gonna be too large or not, because then you get into pagination. So I decided to go with just returning the list of IDs in the same order in which the POST came in. So lots of interesting choices I made. I'm not convinced that they're all necessarily right. So please look that section over in particular, but I did get some feedback on it. I also explicitly pulled out the support for import to be separate from the normal PUT and POST operations. I think before this PR, I basically said, you could do a PUT or a POST and then you could put like a query parameter there. And inside that same description I described how you handle import versus a normal PUT or POST. And I found that I was jumbling up the text a lot. I thought it was a little bit hard to follow. So I decided to pull out the import logic, meaning the query parameter, into completely separate APIs. So that way we can talk just about import separate from a normal PUT or POST. And I think that makes it easier to read.
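The all-or-nothing batch create described above could be sketched roughly like this. This is purely an illustrative toy, not code from the PR; the `DiscoveryEndpoint` class and `create_batch` name are my own, and the real proposal is an HTTP API, not an in-memory object.

```python
import uuid

class DiscoveryEndpoint:
    """Toy in-memory stand-in for a discovery endpoint (illustrative only)."""

    def __init__(self):
        self.services = {}  # id -> service definition

    def create_batch(self, services):
        """All-or-nothing batch create.

        Validates every entry first; if any entry is bad, nothing is
        stored (no partial-error reporting). On success, returns the
        server-assigned IDs in the same order the entries came in on,
        which is how the caller learns which ID belongs to which entry.
        """
        for svc in services:
            if "name" not in svc:
                raise ValueError("invalid service entry: missing 'name'")
        ids = []
        for svc in services:
            new_id = str(uuid.uuid4())  # server assigns the identity
            self.services[new_id] = dict(svc)
            ids.append(new_id)
        return ids
```

The order-preserving ID list avoids per-item status reporting and pagination at the cost of the coarse all-or-nothing failure mode discussed above.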
If people want, I can obviously scroll through this and talk to anything in particular, but let me just stop there and ask if there are any high-level questions. And keep in mind, like I said, I don't want to vote on this today. I think we need a lot more thought put into this besides just this quick review. So let me pause there. Nothing? Okay, in that case, I'm assuming that means most people did not have a chance to review it, because I doubt everybody is in violent agreement with what I wrote there. So I'm going to assume you guys just need more time to review it. Clemens, you keep going off mute. Is there something you want to say? Yeah, I looked at the associated bug but not the PR. I think this idea with the import is good. I'm generally leaning towards pulling these sorts of... Walking up to a service and changing 500 things, it's a little strange to me, and I would rather want that service to go to the source and pull. So I'm not sure I like the whole notion of this kind of bulk push at all, but I will admit that that's taste. And so I think the solution you have here, to say there's a way to go and do inserts where you are assigning numbers, and then you do effectively replication where you keep all the numbers. Where a client is giving you something and then you go and take ownership of it, or another service is giving you something and you want to go and retain the entire shape of it. I think that distinction is good. Yeah, I... But I have not reviewed all the text. Okay, that's fine. I did notice your comment about the push versus pull, and I thought that was interesting, but I hadn't thought about that. Do you think that in all cases where there's going to be the equivalent of some sort of batch upload of stuff, the discovery endpoint will always be able to reach out to the source of that bulk? It might not, yeah. Well, it might not be.
That's the one thing that worries me about that, because it certainly would be easier if you could pull it, right? Then you don't have to worry about partial success and partial failure type stuff, right? Okay, but something to think about, it is an interesting approach I hadn't thought of. Yeah, okay. Any other comments, questions? Okay, so I'll just assume people need time to review it, which obviously was my plan. So please, in particular, read over the batch stuff. That's the part that worries me most. I think most everything else is fairly straightforward. At least I think, if I remember correctly. Okay, I'm not hearing any other questions, comments. We'll keep moving through the agenda. Doug, could I ask one thing? Of course. Something that's sort of been rolling around my head since last time. Can you sort of describe the thought process behind the POST approach, where you have this kind of non-idempotent, the-server-back-end-assigns-the-identity model, versus like a PUT approach, where you have more of an idempotent one, you can do it over and over and over again and the identity is stable. And if you've got distribution, it's very easy to reconcile, versus multi-master updates in a kind of a POST world. Yeah, so in my mind, so let's ignore the import case for a sec. In my mind, I keep it very simple. To me, POST is for create and PUT is for update. Okay, because if you ignore the import case, then when you're talking about a brand new service coming out of the discovery endpoint, my assumption is that it does not have an ID yet, so it's a brand new thing and you're just sticking it in there. And therefore, POST is the right way to go. PUT is when you already have an ID associated with it, so you're just gonna be updating that resource. Now, import throws an interesting twist into that whole thing, right? Because with import, you wanna keep the ID that you had before.
And that's why, as part of my PR here, the difference between PUT versus POST for import, I think, is very, very slim. They're almost the exact same thing, I think, in most cases. And that's something else that I wasn't 100% sure about, I'll be perfectly honest. And the reason I kind of did it that way is because I was trying to make it as easy as possible on the user of the system, right? So for example, somewhere in here, I talk about how on a POST for, yeah, I think on a normal POST, not for the import case, you can include an ID and an epoch value, and they'll be ignored, right? Even though they're gonna be ignored, I allow them to be in there because I wanna make it easy for somebody to do a GET to one discovery endpoint, basically an export, and then turn around and do a create on this other discovery endpoint, and not have to go through and modify the entire chunk of JSON to remove all these fields that are just gonna get ignored, right? I want their life to be easy, right? So I'm not sure I'm completely answering your question, but those are the kinds of things that were going around in my head as I was writing this up. And to your original question, though, to me, PUT versus POST is update versus create. That's what it really comes down to for the most part. Does that help? I have a slightly different view on that. Like, I agree that in general that's how they work, but when you think about, like, where is the entity being created? Who is responsible for the creation of the entity, and who's identifying it? If it's the client that needs to be able to maintain stability of that ID across a set of masters, then the notional object is being created by the client, and of course PUT is idempotent. And so I can PUT that ID over and over and over again. I don't end up creating multiple things.
Even when I'm doing that across multiple masters and in a distributed scenario. I just wonder if that wouldn't simplify some of the challenges for reconciling and things like that. And yeah, you know. Yeah, so that's interesting, because I think, with what I put in here, I do support that notion, where basically what you're saying, if I understand it correctly, is the client kind of picks the ID, right? And I think you can do that today with what I have here, even through a PUT, but you have to tell us it's an import, right? Because by default, I assume PUT is more of an update. So if the object doesn't already exist, it's going to give you an error, a 404. But if you give us the import flag, then we're basically saying, okay, yeah, this ID doesn't exist in the system, we'll create it for you and we'll use the ID you passed in, right? But I think what's interesting is, if I understand it, you're kind of touching on something that's a bigger topic, which is either where's the source of truth, or who actually owns these things, right? And we haven't actually touched on that yet. And I think that's a good topic, because I don't know the answer. I've been kind of assuming that each discovery endpoint thinks it's the source of truth, at least for the stuff that it knows about. I'm not 100% convinced that's right. Yeah, interesting area of thought for sure. Yeah. And of course that then goes into some of the stuff I mentioned in Slack, right? As I was coding this up, I kind of redid Scott's little sample that he showed us last week or the week before, right? We had a ring of discovery endpoints, each treating the next guy in the chain as upstream, right? And as he's running his demo, you can see services propagate through the ring.
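The create/update/import split discussed just above might look something like the following toy model. This is a sketch of the semantics as described in the call, not the actual API; `Registry`, `post`, `put`, and the `import_mode` flag are my own names standing in for the real endpoints and query parameter.

```python
import uuid

class Registry:
    """Toy model of the POST=create / PUT=update / import semantics."""

    def __init__(self):
        self.items = {}

    def post(self, body):
        """POST = create. Caller-supplied 'id'/'epoch' are ignored outside
        import mode, so an exported document can be re-POSTed as-is
        without stripping those fields first."""
        body = {k: v for k, v in body.items() if k not in ("id", "epoch")}
        new_id = str(uuid.uuid4())  # server assigns the identity
        self.items[new_id] = body
        return new_id

    def put(self, item_id, body, import_mode=False):
        """PUT = update. An unknown ID is an error (think 404), unless the
        import flag says: keep the ID I gave you, create it if needed."""
        if item_id not in self.items and not import_mode:
            raise KeyError("404: unknown id " + item_id)
        self.items[item_id] = body
        return item_id
```

Note how the import-mode PUT gives the idempotent, client-assigned-ID behavior asked about: repeating the same call with the same ID never creates a second object.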
Well, depending on how you choose to do the propagation, or who chooses to query who at what point in time, you can get into really weird situations where somebody deleted a service from one endpoint, but then that service doesn't get deleted through the entire ring. Rather, it gets recreated, because the one you just deleted it from ends up querying another guy that says, hey, what do you have? And he says, oh, I don't know about that one that you have, because I don't have it, but it just got deleted, so he ends up recreating it, thinking the other guy recreated it or the other guy created it fresh, right? So you get into really weird situations, and I don't know what to do about those things yet. So those are the things I wanna talk about at some point. Okay, any other questions or comments? Okay, so as I said, please look this one over. I'm not so much concerned about getting this one in soon for any reason other than I want people to know what to code up for the interop, because I know people are very short on time and very, very busy. So I'm trying to keep the spec changes down to a minimum, or get them in as soon as possible. That's the reason I'd like to get this one in sooner rather than later. Not because I wanna close down the discussion, so please do get a chance to review it if you can. That way people can start coding it up. Okay, thank you all. With that one, Lance, I hope you're okay, I wanted to talk about your question here because I thought it would be a good one. We didn't have a whole lot on the agenda anyway. You okay with talking about this one? Yeah, I'm fine. I added it to the SDK agenda, but we can talk about it here. Okay, did you file it as an SDK issue or whatever? Because if there actually is an issue, then there may be a spec issue to fix. So, okay, I'll let you talk about it. So this is the issue, for everybody that's looking.
So the situation is that it's possible to receive an event that has a data_base64 property, indicating that the data itself is binary, and that has a data content type of application/json. So is it legitimate? Is it against the spec for an SDK to recognize that it's binary data, decode the binary data (in JavaScript it's then just a buffer, which can become a string), and then, if I know the data content type is application/json, I can parse that JSON if I'd like to and turn it into an object? And is that legitimate? Is that purely an SDK question? And would it be contrary to the specification to actually receive an event that has a data_base64 and a content type of application/json? Jamie, your hand's up. Yes, I was trying to understand what Lance was talking about in the chat yesterday. So I think my thought is that an SDK would need to support that. But having said that, I don't think an SDK should ever produce events like that, because that doesn't seem to be in the spirit of the spec. Right. So whether that's spec guidance, I don't know, but I think for completeness, it should accept stuff that looks like that. That would be my opinion. I stumbled on this literally because a bunch of the original tests in the SDK, the JavaScript SDK, were written with binary data that was actually just JSON that had been converted to a binary buffer. Scott, I see your hand's up. Oh, sorry. Go ahead, Clemens. I forgot you were trying to speak earlier, and then we'll get to Scott. Sorry, I'm not good with the hand raising. Yeah, so that's a legitimate way to encode anything that is not encoded in the same encoding. So I mean, JSON needs to be in UTF-8, et cetera, but it's representable in all kinds of different character sets. So you could have a situation where the outer and the inner character set don't match.
And the same is true if you want to carry XML data. While that's text, if it is encoded in a different way, with a different character set, you would also carry that as binary. So I think the combination that you mentioned in the beginning, like, it's binary, but it's a text format, so I need to go in and first run that through a character set decoder. Did anybody else lose Clemens? Did we just lose him? Yeah, Clemens, Clemens, just so you know, you cut out for about 15 seconds. Something is wrong with my network. So let me say that again. If you find that you have binary, and you have an encoding for the payload, or a data content type, that indicates text, then you run that through the character decoder, presumably UTF-8 unless the content type says something else, and then you deal with the text. So that's a legitimate representation, just as data is. Okay, Klaus, your hand's up. Yes, probably not about application/json, but in general, for other types, an SDK might not always be able to determine if it's binary or text. So if it gets a message in binary mode and has to determine then, for forwarding it in structured format, it sometimes may have to guess or apply some kind of heuristics or something. And I discussed this with Ellen, I think, last year around that time. And so there might be cases when you have to be ready to receive events where the payload is either in binary or in text form, depending on the heuristics the SDK has applied. Scott, your hand's up. This feels similar to the question we had maybe six months ago around, how do you do a wrapped binary thing with a JSON event inside of it? Is it anywhere close to that? Or are we talking about, like, actual binary data inside the payload? I think it's interesting, because I think in this case, it is text data, it's just base64-encoded, right, Lance? Yeah, I see. But your use case is interesting, Scott.
I hadn't thought about that. No, we did think about it. We had a discussion. But I think the correlation is interesting. I hadn't thought about it. In this case, I think the SDK would reject it. It's not a valid cloud event, and it's not encoded in the correct way. What makes it invalid? It doesn't follow the normal parsing rules. Why is that? No, it doesn't. I think it's fine, because data_base64 does not contain JSON. It contains binary. And then, from a client, the client is just giving you a byte array and says, here's the content type. It really leaves it to the application what to do with it. Yeah, it seems like, I think Jim might have described it best. He said it's legal but really funky, in my words. But I think, Lance, I'm not sure you got a direct answer to your question about what an SDK should do with it. Let's pretend that the event comes over in Avro. Then it's clear. As soon as you have a proto or an Avro event, there is no question what the solution to this is. Because if it's binary, then the content will all... Clemens, you cut out there again for about five seconds. Okay, so if the event is Avro or proto, and the payload is binary, and it says content type is application/json, you know what to do. Right. And like in the spec, it says, if the data is binary, there must be a data_base64 property. Then in the next sentence, if I remember correctly, it says if there's a data property and a data content type that is of a type that the SDK knows how to parse, it should be parsed. So as soon as I convert from base64 into, quote, binary, I then have a data field and a data content type field that I know what to do with. It seems legit to me, but it was just a weird case that I came across when I was kind of going through the JavaScript SDK tests. So no one's hand is up, so I'll raise mine. It seems to me that an SDK that receives this, at a bare minimum, should treat the data of the event as binary.
The fact that it has a data content type of application/json is interesting, but does not change the fact that this thing came across as binary data. And therefore, if the SDK has the notion of a binary blob just being passed on as a binary blob, then it should do that, as the bare minimum. Whether it then tries to be smart and say, oh, because the data content type is application/json, I'm going to try to parse this as JSON and then pass it on as a text thing or as a JSON object, I think that's a nicety that it may choose to do if it can, but if that for some reason fails, I think it needs to drop back to being straight binary. But then I think you also need to be very clear with your readers or your users that you're going to be doing this magic under the covers for them, and almost offer them the choice of not doing the magic. But I don't know. Jim? Another side issue here. I believe for us to claim compliance with this spec as a service provider, we have to support JSON format. We have that as, I think, one of the sort of ground rules. So I guess it's very non-idiomatic for me to expect us to be able to process JSON payloads where the JSON is not in the data, it's been funkily put into some base64 scheme for some reason. I guess it just sort of unravels the string a little bit from an event handling perspective. And what does being JSON format compliant actually mean? I guess in that case, it means I understand every single weird way that an event might be represented. So Lance, let me ask you this. I'm wondering whether, I'll rephrase this: we keep talking about this being sort of an odd case, but I'm actually starting to wonder whether it's not. Because today, if you use the data attribute and you put JSON in there and the content type is application/json, everything's fine.
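The "binary at a bare minimum, JSON parsing only as an opt-in nicety" handling described just above could be sketched like this. This is a hypothetical helper, not any real SDK's API; `extract_data` and `try_parse_json` are invented names for illustration.

```python
import base64
import json

def extract_data(event, try_parse_json=False):
    """Return the payload of a CloudEvent given as a plain dict.

    If 'data_base64' is present the payload is binary: decode it and
    hand back bytes. Only if the caller opts in, and the declared
    content type is JSON, do we attempt to parse, falling back to the
    raw bytes if parsing fails (the "magic" must never be lossy).
    """
    if "data_base64" in event:
        raw = base64.b64decode(event["data_base64"])
        if try_parse_json and event.get("datacontenttype") == "application/json":
            try:
                return json.loads(raw)
            except ValueError:
                return raw  # magic failed: drop back to plain binary
        return raw
    return event.get("data")
```

The default keeps the publisher's "this is binary" intent intact; the caller has to ask explicitly for the JSON interpretation.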
But when the user gets that data, it's going to be a JSON object, but there's really no guarantee that that JSON is formatted the exact same way that the user, or that the client, sent it. And by that, I mean spaces and everything, right? So what if the use case here is someone wants to guarantee that the exact formatting of their JSON payload, space for space, new line for new line, is exactly what the receiver's going to get? Therefore, they don't want any middleware knowing that it's JSON to do any weird funky formatting on it, and they want to make sure it gets passed through as is, byte for byte. Therefore, they're going to pass it as binary. Is that a valid use case for this scenario? Yeah. And in that case, I think I should not do what my inclination was, which is to decode the binary, parse it as application/json, and then have basically an object as the data. Yeah. So in that case, the thing you get back is a string of the base64-encoded data. But what we're really talking about is the payload as a JSON string, and the cloud event wrapper knows nothing more, and it's up to the consumers to understand how to turn that back into whatever it's supposed to be. I would actually say in that case, you give back a buffer. The SDK wouldn't try and turn it back into a string. You're really saying, this is a byte array, and here's the byte array, and then here's the content type that was given for it. Yeah. You can do that if it's in the data_base64 field. Well, that was the only way you... See, I've always thought, from an SDK perspective, if it's in the data field, that's a JSON value. So why not just give back a JSON value? Well, the JSON value in that case would be the string of the base64-encoded binary blob. Well, I mean, again, I wouldn't consider it a string. I would just consider it a JSON value. It's an object, yeah. But it's invalid as part of that data field. It has to be a valid JSON object.
It can't be bytes, because it has to be able to go between structured and binary format. Yeah. I mean, it has to be a JSON value. So it could be a string. It could be the word true. It doesn't have to be an object. Yeah, that's right. That's right. I'll observe that there are overwhelmingly server plumbers in here, and we try to be really smart. There's an intent expressed by the publisher of that event. And the publisher says, this is binary. And then they use the data content type as the hint for what that binary contains. And that hint is for the ultimate receiver of that event. And I don't think the middleware has any business futzing with that. But would this be preserved in all cases? Say that's converted to HTTP binary and then back to structured. Yeah, that's why we did the base64. We did data underscore base64 before, because we have no other way of distinguishing true binary in JSON from strings. And that's why we had this. It's an annotation. Yeah, sure. But in this specific case, if you have this JSON as data_base64 and you send it on in binary mode, is this expression of sending it as binary still preserved? Yeah, absolutely. Because we're converting this into binary in our kind of in-memory model. That's why we have types. And then if you're sending this on over HTTP as an HTTP binary type, then you're just mapping the binary into HTTP binary. And then you also back out as such. As HTTP binary, it would get content type application/json, and the base64-decoded value would be in the body. So. That's correct, yeah. Yes. But then, if you would then convert it back to structured, this wouldn't be binary anymore. It would be just in the data attribute, usually. Are you talking about a different scenario? Well, that's an interesting scenario. Because when it comes in in binary mode and it wants to go out in structured mode, you've got two options then.
You can either just jam it into the binary field, or you can look at it and go, oh, it's JSON, therefore I can drop it in the data attribute directly. I think that's this translation use case. But then we can lean back on the JSON... You cut out again. Hey, Clemens, you cut out again for about five seconds. Yeah, I'm having network issues that are weird. Has Microsoft had any issues this week? Sorry, that was a dig there. Yeah, for that, however, we can lean on the JSON spec and say, well, JSON is exactly what's in the spec, and the preserve-character-by-character thing with the right indenting is not part of JSON. That is, it doesn't preserve the new lines, et cetera, by default. You can reformat it, it's still the same document. And I also feel like the scenario Klaus is talking about there is almost out of scope for us, in the sense that it's really up to that piece of code that's doing the conversion from binary to structured to make that decision on its own, and whether it made the right choice or not isn't for us to decide. Yeah, well, my point was that the choice is lost over this chain. But that's up to that person that wrote that bridge, right? We can't control what kind of logic is in a bridge. All we can control is something like Lance's question, which is, do I treat this binary blob as a JSON string? And I've convinced myself that no, they asked for the data to be passed along as binary, therefore you need to pass it along as binary to the receiver. That doesn't exist, right? There is no binary blob if it's in the data field. No, no, I'm talking about Lance's scenario, which is it's in the, I'm sorry, data_base64, whatever that thing's called. Incoming message with data_base64 set. I think I'm leaning towards that too, Doug, and I need to go back and change my PR. Like, well, someone mentioned earlier the intent. I think it was you, Clemens.
Like, what is the person or the system that's creating an event? They're creating an event with a certain intent. And if that intent is that the data is binary, it doesn't matter what the content type is. If they express it as binary, we should leave it as binary. I think that's the safest, if nothing else. Yeah. In the Golang SDK, if you had a structured message with a data_base64 and you converted that and tried to send it out as a binary message, it would use the contents of that buffer and write it out as the payload of the body. The contents of the binary data. Yeah, because the base64-encoded data inside of data_base64 would get turned into bytes in a memory buffer. That buffer would be flushed out as the payload of the body of the binary message. And that sounds right. Now, the receiver could turn that into a structured message. And if it happens to be JSON, it doesn't know that the intent of the original sender was that the data that is base64-encoded, actually, I want that to stay binary. It would convert that to, likely, data with a JSON object. Because we don't have a signal to say, actually, this OG event was a binary data_base64 thing. Because on the other side, after it goes binary, it's gonna look at the data content type and encoding and try its best. Ryan, your hand's up. Yeah, this might be a bit of a rabbit hole or a bit too pedantic, but I was trying to think of an example. Avro is an interesting example, because Avro actually has two representations. It has a JSON representation, and it also has a binary representation. So I'm wondering if that's an interesting use case to look at, where you might have receivers, or producers, that for whatever reason can't operate on the binary representation versus the structured JSON representation. Anybody want to comment on that? Okay, Jim, your hand is up. I think this is a good discussion, and I'm wondering: is this an SDK issue, or is this a general best practice issue?
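The round trip described above, where the original "keep this binary" intent is lost once the event passes through binary mode, can be seen in a toy model. This is a deliberately simplified sketch of the behavior discussed, not the actual Go SDK code; the function names and the minimal header set are my own.

```python
import base64
import json

def structured_to_binary(event):
    """Structured JSON event -> (headers, body) binary-mode message.
    data_base64 is decoded and flushed out as the raw message body."""
    headers = {"ce-id": event["id"],
               "content-type": event.get("datacontenttype", "")}
    body = base64.b64decode(event["data_base64"])
    return headers, body

def binary_to_structured(headers, body):
    """Binary-mode message -> structured event. The receiver has no
    signal that the original sender used data_base64, so if the body
    parses as the declared JSON content type it lands in 'data',
    and the binary intent is gone."""
    event = {"id": headers["ce-id"],
             "datacontenttype": headers["content-type"]}
    if headers["content-type"] == "application/json":
        try:
            event["data"] = json.loads(body)
            return event
        except ValueError:
            pass
    event["data_base64"] = base64.b64encode(body).decode()
    return event
```

Running a data_base64 event through both functions shows the payload coming back in `data` as a parsed object, which is exactly the lossy hop the discussion is about.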
Because not everybody's gonna use SDKs, yeah, rightly or wrongly. So do we need to sort of codify these sorts of cases somewhere? My opinion, I think this would be excellent information for the primer once Lance decides what the right answer is. Well, I mean, it sounds like we sort of agreed that if the client says send it as binary, then you send it as binary. If a receiver gets binary, then it's presented up to the application as binary. Yeah, those SDKs that are facing the client or the application code are not gonna try and monkey around with representations to that extent, yeah. And then when you have these translation or transformation scenarios of structured to unstructured or whatever, structured to binary or other, sorry. That's where these sorts of principles, I think, come into play. Okay, I'm not seeing anyone else's hand up. So to wrap this up, Lance, would you be willing to take an action item to write a paragraph or so for the primer? Sure, yeah. That'd be excellent, thank you so much. All right, so let's go down here. Perfect. Any other questions or comments about that topic? That was a good one, Lance, I liked it. Got us thinking. Yeah, thanks for all the good commentary. Yep, okay. Okay, this one, this one at first, Grant, you're on, right? Yeah, so Grant, when you first opened this one, I thought this was gonna be a trivial thing that I could treat as a typo. Grant, why don't you introduce this one? Cause I'm not convinced which way to go on this one. So I'll let you talk to it. Yeah. So following the base64 confusion and dialogue, which I think was great, I was trying to understand it as well, and just reading that paragraph, let me pull it up. It's on the screen if you can't see it. Yeah, cool, yeah. So it's under the handling of data, and so it reads, if the implementation determines that the type of data is binary, the representation must be stored in this data_base64 key. Otherwise, if it's data, then it must be in data.
So I thought it made a little bit more sense, especially in the first case, where data is not in code quotes, because I think it's trying to represent data in terms of just the concept of a CloudEvent data field, which might be in base64. And so, yeah, I made the PR to remove the quotes, because we also use the code quotes right after, in the member name. That's pretty much the summary. Okay, yeah, I gotta be honest with you. I don't know why this one, I need to bring this up because I think you're probably right, but I wanted to hear from other people who are more English-major types, like I'm gonna pick on Scott because you've done this stuff in the past. So Klaus, your hand's up. Yeah, so I think that's actually just a leftover, because data was an attribute earlier. And I think in the last weeks before we did the 1.0 release, I rewrote some of this to make it clearer. Maybe I just, it's probably really just a leftover. And I agree that this change makes it a bit more clear. Okay. Anybody read this and disagree with removing the backticks around the word data in those two spots? Okay, thank you all. Like I said, I don't know why this one got me concerned, but I wanna make sure I didn't just blindly accept it. Okay, just double check. You opened this, when did you open this, 18 hours ago? Say what? Yeah, I'll wait till Friday to make sure no one else brings up any issues with it, since technically it was opened up after Tuesday. But if no one mentions anything by Friday, I'll go and merge it. Okay, so thank you for noticing that. Now, I know Slinky is not on the call, but Clemens, I think there was some back and forth on this issue. Do you wanna summarize where things are with this one, since you've at least been part of it? Yes, I actually made a comment today in the PR. I imagine I did some work. There we go, there we go. That was this, oh, he already made the change, amazing.
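The rule being quoted, binary data goes into the `data_base64` member as a base64 string, everything else into `data`, can be illustrated with a small sketch. The helper name is hypothetical; only the member-choice rule comes from the JSON format text:

```python
import base64
import json

def attach_data(event: dict, data) -> dict:
    """Illustrative serializer step: if the data is binary (bytes),
    store it base64-encoded under data_base64; otherwise store it
    under data as-is."""
    if isinstance(data, (bytes, bytearray)):
        event["data_base64"] = base64.b64encode(bytes(data)).decode("ascii")
    else:
        event["data"] = data
    return event
```

So `attach_data({}, b"\xff\x00")` produces a `data_base64` member, while `attach_data({}, {"a": 1})` produces a plain `data` member.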
So I said, I think last time we discussed this, I said this section seems a little iffy, and I still think it's iffy to have different sub-protocols for different encodings. But then I looked at the W3C interface, that's that link that I have in that comment. If you would click that once, if you'd do me the favor. I'm sorry, this one? Yeah. Okay, sorry, I was distracted by something, okay? So you can only, the WebSocket interface as defined in the browser, where I find the WebSocket interface most useful, you can't give any of the fancy other headers, and you can't say, here's the extension, and that's what I would usually use. You can really only give a protocol. So they have the WebSocket interface, it's in the browser, it's dumbing this down quite a bit. And so that's really our only option to go and make wishes. And so you would, in this interface, you would say, I'm willing to use the protocol cloudevents.json, or yes, cloudevents.json, cloudevents.avro. And then the server will negotiate whatever it can. And so that's ultimately what the constraints are for the change. And so I agree with Slinky that using the sub-protocol is right. Okay. And then, but for, oh, okay, can you go to, because he's updated this five hours ago. Which section should I go to? We're, it's fairly far down, right there. Yep. So frame type, text and binary. So that's what I was looking for. So now I would be happy with the spec as it is. That was my only, the only objection that I have. The rest is basically just the necessary things you have to go and say to, you know, make the binding work. It's really just, ultimately, you know, put structured events onto WebSocket frames. And the rest is all just explaining where it is. Okay. Thomas, you made a comment here. Yeah. Did you want, did you want to talk to your comments at all? I know Slinky's not on the call, but did you want to talk to it? Yeah, I'm happy to.
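The negotiation Clemens describes, the client offers a list of sub-protocols and the server picks one it supports, can be sketched as pure logic. The subprotocol names come from the draft under discussion; the helper itself is an illustrative sketch of how WebSocket subprotocol selection generally works, not code from the binding:

```python
def negotiate_subprotocol(offered, supported):
    """Server-side selection: return the first client-offered
    subprotocol the server supports, or None if there is no overlap
    (the handshake then proceeds without a subprotocol)."""
    for proto in offered:
        if proto in supported:
            return proto
    return None
```

In the browser you would express the offer as `new WebSocket(url, ["cloudevents.json", "cloudevents.avro"])`, which is exactly the "only option to make wishes" point above.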
So when I read through it, actually today, sorry about that, I was wondering, first of all, what do you mean by all implementations? So was it meant to be the client and the server side, or the intermediary, or whatever is meant by this? But I think this is a phrase which comes from the other protocol bindings as well. And then the second thing was, why does the JSON event format necessarily have to be supported? Or why do we explicitly say this needs to be supported? But maybe we can also say something about this. Clemens, do you have any opinion on that one? No. Okie dokie. Oh, wait, hang on. So, because we, I think we mandate that JSON must be supported anyways. In the main spec? Yeah, yeah. For CloudEvents, you have to support JSON. Okay. Well, if that is somewhere, then I'm fine with this. And yeah, I was a bit confused about the all implementations, but probably it's really meant to be the client and the server, and then I'm fine with it. I think we have this exact same clause in the main spec, we can go and look it up. Yeah, I was a bit confused. Everybody needs to support JSON. I was a bit confused because here we're talking about WebSockets, which might be a little bit of a different level than just the normal HTTP, where you would expect text, but here we can choose to switch to binary. And then, but yeah, I'm not so concerned about this. And we're really mostly talking about frameworks, and we're not, so that might be the unclear piece. Okay. So I don't think you have to use JSON in your application. No, I understand that every implementation needs to implement it. So it's actually ready. And from an interoperability point of view, fully in. And then the second comment was really about the JSON batch format. So there it's written, the WebSocket message contains a CloudEvent. And then I was thinking, hmm, the JSON batch format, which is defined in the JSON format, is this supported as well?
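Thomas's batch question, a text frame might carry either one CloudEvent object or a JSON batch (an array of events), is easy to handle tolerantly on the receiving side. A minimal sketch, assuming a hypothetical receiver that accepts both forms (the binding text would still need to say which forms are actually allowed):

```python
import json

def events_in_frame(text_frame: str):
    """Parse one WebSocket text frame as either a single CloudEvent
    object or a JSON batch (array of events); return a list either
    way so downstream code has one shape to deal with."""
    parsed = json.loads(text_frame)
    if isinstance(parsed, list):
        return parsed       # JSON batch format: already a list of events
    return [parsed]         # single event: wrap it
```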
And probably this formulation needs to be adapted a little bit. So it could either contain a CloudEvent or an array of CloudEvents. Okay. It sounds like that's more of a clarification kind of thing, right? It's a minor thing. Okay. So I guess for you, Thomas, and you, Clemens, if these two questions that you opened up, Thomas, are addressed, it sounds like both of you guys are okay with this thing moving forward? Sure. Clemens, you're okay with it moving forward? Yeah. Do you still have reservations? I wasn't 100% clear on whether you had reservations or not. I'm fine with it. Okay, cool. All right. Anybody else on the call have any questions or concerns that we need to relay over to Slinky? Okay. Cool. Making forward progress, that's good. All right. Tell you what, since we're running out of time, let's skip this one that I was hoping we'd be able to get to. Grant, this issue that you typed in here, is this an SDK question or a spec question? Just trying to figure out which phone call we should discuss it on. That's not a spec, it's SDK. Okay, so we're gonna save that for the next call. Yeah. Okay. Okay, in that case, since we do have a whopping five minutes or so, who opened up this one? John opened up this one. So, okay, I'll let you guys read this right here. So, to me, I don't actually think we need to do anything in the spec in this space, because the spec, in pretty much all the key places, says, if the value here is present, especially for the optional ones, oh, I'm sorry, no, no, for instance, right. For optional attributes, in almost every single case we say, if it's present, it has to be things like a non-empty string, which implies it can't be null. And all the required attributes are defined to be values that can never be the null value. For example, timestamp can't be null, right? It has to conform to some RFC or something like that.
So, I'm not sure this is actually an issue, but I wanted to bring this up because I think Clemens and Klaus, you guys, and I, ooh, Lance too, you guys all had comments in this space. So, I wanted to get a brief discussion on this to see what everybody's thinking. Does anybody wanna go first? I do, I do. Okay, go Scott. So, there are some bad actors in the ecosystem that do send down payloads that have JSON null values. Clemens. So, we're gonna go after Event Grid some time. When I was young, when I was young, yeah. Well, Clemens, we don't wanna go that far back in time, okay, but go ahead. So, see, when you have strongly typed schemas, that is, records, like in a database or in a programming language, the way you say there is no value in this thing is null. So, that's how I look at it. Like, you create a database table that contains a cloud event. The only way you can say that field has no value is by putting null into it. Well, we fast forward 40 years. Only 40, okay, go ahead. No, sorry. Like, I don't know how to handle nulls in properties, because the cloud event spec says it must be a thing. It must have a value. And it's not the string null that I'm getting. It's the JSON null, which doesn't translate. Like, that translates to a nil pointer in Go. Yeah, but since, if the value is absent, you can treat the indicator null as, you're pulling the value off the wire, you will see that in the JSON there is a null value, which means you can go and ignore that field completely. You don't have to map it into anything in your Go memory representation. The Go marshallers don't know how to do that. They'll do that for pointer types. But if you're trying to marshal something that's a string and it gets a JSON value of null, it just blows up. So, Clemens, is that what you're flagging as defective? Jim, your hand was up first, ahead of my own. I just, I'm just interested in an example here, because "null" is a string. So it is a valid value.
It is not semantically valid, but it is a string. No, there's a special JSON null, n-u-l-l. Right, but we're talking in what context, what attribute value are we talking about here? Well, in the case that I've run into it, Test Grid sends, sorry, Event Grid sends certain extensions that it expects but doesn't have set, as some null values, and my SDK had a little trouble with that one. So, go ahead, Jim. Sorry, I'm just pushing this. So as an HTTP header, would it be the string of nil or null? I think it was structured. In the structured mode. If it came in as a header, that would actually work, because it'd be the string "null" again, and I could just treat it as the string "null", but it was coming in as a JSON null. Okay, well, that's invalid, isn't it? Because it's not a string. That should fail to parse, I would have thought. That's exactly it. Well, according to the spec. Well, okay, wait, wait, my hand's up. So let me ask the question to Clemens, since we're picking on you here. What does this sentence mean to you? If present, it must be a non-empty string. To me, an attribute with a value of null or nil, whichever your favorite language is, is present. I don't think it's present. Really? Null is a way to say there is no value here, which means it is not there. That's my read of the utility of null, it's to say there is literally nothing here, which means you can ignore it, but it's still there for structural reasons. Okay, let me get one more on this. Ryan, go ahead. Go ahead, I saw it. No, go ahead and finish, Clemens, it's okay. So if you have a strongly typed object, or if you have a database table, right, you can't make the property go away in the strongly typed object, and you can't make the column go away in the database table. But if you want to say nobody said anything here, which means the field is absent, you use null.
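The whole debate turns on three distinct wire states: an attribute that is absent, one that is present with the JSON value null, and one that is present with a real value. A tolerant receiver (this is an illustrative sketch of the "treat null as absent" position, not any SDK's actual behavior; the attribute name is hypothetical) can distinguish them like this:

```python
import json

def extension_state(event_json: str, name: str) -> str:
    """Classify an attribute in an incoming structured-mode event as
    'absent', 'null' (present but JSON null), or 'set'. A tolerant
    implementation can then treat 'null' like 'absent'; a strict one
    can reject it as violating 'if present, must be non-empty'."""
    event = json.loads(event_json)
    if name not in event:
        return "absent"
    if event[name] is None:
        return "null"   # the contested case: present, but no value
    return "set"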
We have now a special case in some, and the same would be true if we were expressing our events strongly typed in Proto or in Avro, any of those models which prefer strong typing. We made all the choices here because we want to have the flexibility, but otherwise null just says it's not there. Okay, Ryan, you're gonna get the last word on this one. Yeah, I think the more restrictive we are in our client implementations, the more assumptions we're making about how all of the upstream producers work. And I think just for robustness' sake, I would prefer to make them handle these cases as gracefully as possible, unless there's a really good reason that we can quantify, something like, this is going to create more garbage. I'm not convinced of that. But I'd just prefer to err on the side of handling these cases gracefully than being draconian. Okay, and with that, we're gonna have to call it a day on that one. Obviously it's not over yet, we'll have to discuss it, because either way, I think we may need to put something into the spec or primer to talk about this particular case, whether it's to reinforce what some people believe is already in the spec, or to loosen things up to allow for null to be interpreted as empty. I think something may need to be said. So it's a good thing that someone brought it up. So let me go ahead and do the roll call and then we can move over to the SDK call. So let's see, I heard Grant. Okay, so Ken has vanished. Matthew, are you there? Yes, are you on the call? Matthew? Okay, I heard. He typed in chat that he was here. Oh, excellent, thank you. Appreciate that. Manuel, are you there? Yes, I'm here. Excellent, Daniel? I am here. Excellent, and Yuja, I'm not sure how to pronounce that. I apologize if I'm butchering it. Yeah, that is perfect. Yeah, this is the first time I'm joining. Cool, well, welcome. Which company are you with, by the way? Sorry?
Which company are you with, if you wanna be associated with a company? Yeah, yeah, I'm working for Freedom Mortgage. Freedom Mortgage, okay, cool. I'll figure that out later, thank you very much. And there was a phone number, but they're gone, never mind. Okay, did I miss anybody for roll call? No, Doug, I think you need to moderate the next presidential debate though, because you're pretty good. Lord, let's not get into politics. It was interesting though, I'll give them that. All right, so if you- You'd handle things better, that's all I'm saying. I don't know about that. You guys are much nicer, put it that way. Okay, so if you're not interested in the SDK call, feel free to drop, have a good rest of your day, and then, actually I should ask while we're switching over, where's my SDK thing? Here we go. So do we have any other topics to talk about on the SDK call? Because if not, we can end the call really quickly. Anybody have anything? I can move over the topic. Oh, gosh darn it, I forgot. Sorry, Grant, I completely forgot about you, I apologize, here we go. Did it take too long? Yep. Okay, why don't you- We have one governance issue. What was that, Clemens? We have one governance issue, probably. Do we? Oh, that's right, yeah, your issue. Okay, we'll talk about that later. All right, Grant, why don't you go and introduce your question. Okay, yeah, so just for context, I'm working on creating lots of samples for using CloudEvents with a new API. And in our documentation, we have a lot of copy-and-pasteable samples in different programming languages, so folks can get started. And yeah, some of these samples use the SDKs, some don't, for various different reasons. And I notice with the SDKs themselves, some of the repos like JavaScript, Ruby, Python have full samples right at the start that you can just copy and paste and get started, especially if you don't know about CloudEvents or it's not the focus of the product, but some SDKs don't.
So I was wondering, like, do we want to consistently have samples in our readmes? Should I point developers to our readmes, or is that really not the best place? So yeah, sort of trying to get thoughts, and then maybe we can add PRs to have the samples if we wanna do that. Anybody wanna comment on that? Is this request attempting to bypass the PR and issue process, Grant? Because you've raised issues in the SDKs and you're not getting traction, and this feels like you're trying to go around that process. No, I mean, I'm wondering if, like, other people, other companies have links to, so like I raised an issue with the Go SDK, and I think I created a PR of, like, there's not a sample you can just copy and paste like the other SDKs. So I was wondering, is this something that's truly not useful for other people? I'm not trying to bypass the process, of course. I'm a little confused. I'm very confused by the question. This sounds to me like it's a question of what's expected in our readmes and what can, Daniel, you're cutting out there, unless it's just me. Daniel? Okay. Daniel, you there? Okay, hopefully Daniel will be able to come back. I guess I'm confused. Can Google expect that the readmes will consistently list X, Y, and Z so that Google's own documentation doesn't have to do that? I think that's what I'm hearing. Okay, so Daniel, for about 30 seconds there, you went silent. You may wanna repeat it again, sorry. Hang on just a moment. Okay. Yeah, I think that, like, the last bit was a sort of good summary. Like, well, where do we want these samples to live? So, like, Google's gonna have our own documentation of, like, how to get started with a Hello World basic minimal sample, or can we have some of these Hello World samples be in the SDKs themselves? So again, but I think that's where I'm a little confused. Are people trying to do PRs to add samples to the SDKs and they're getting pushed back?
I find it hard to believe that SDKs would reject samples. That's why I'm a little baffled. Oh, Grant is making issues to ask to produce those in the readme. Sorry, can you hear me now? Is this better? Yeah. I guess what I was saying is, what I'm hearing is that we're asking whether there's a specific set of things that we can expect from all of the readmes, so that when somebody, like I said, Google, asks or directs developers to our readmes and says, here, use CloudEvents for these reasons, it can expect that the readmes will contain the here's-how-to-use-CloudEvents examples and lists of A, B, and C, so that Google's own documentation doesn't have to repeat that. It can expect that the samples are there. So is there some kind of consistent list of things that we can expect? So that's what I'm hearing from Grant. Is that correct? So it's more like, do we have a standard for our readmes, rather than, yes, we can request specific things in individual pull requests, but if we don't have an overall expectation, then individual pull requests can go out of date, they can be inconsistent. Can we coordinate all of this, I guess? Okay, so actually I hear somebody, I don't have the page for this handy, but let's take a look at this, yeah. So yeah, I created a GitHub issue and then provided some sample code down below of what would be nice, but I guess it depends on what we want. I mean, do we want? But I guess it should assume some proficiency in, like, the programming language, or, like, getting started. If you scroll down a bit, like, more tangentially, I was thinking this type of snippet in the readme, that would solve it for Go. There's a whole directory full of many, many, many examples that are full, that compile. I guess with the samples directory, there's no instructions on how to use those samples.
Yeah, it needs more documentation, but I don't think that this particular copy-and-paste is helpful, because either you're new to Go and you don't understand it, and you're not gonna get it from this because you need to go mod init and all this other stuff, or you already know Go and you don't need something that looks like this. So, Grant, are you suggesting that, while we have, picking on the Go SDK, while we have samples in the Go SDK and, as Scott said, maybe there's some additional documentation that needs to be created, are you saying that while those are nice, it would be better to have full snippets of code in the actual main readme itself? I mean, maybe for the Go SDK, we point to the samples HTTP folder, and there's no readmes or getting-started guides in the samples folder. So maybe we just point there. It doesn't necessarily have to be in the readme, but. I mean, I can imagine having a consistent way for new developers to get started in five minutes with the SDK to be something that we sort of expect for every language. Okay, because I can't. No, no, no. We have so much documentation. We have a whole website that's dedicated to the Go SDK for cloud events. Go to the, Doug, can you navigate to the main readme? Come on, there you go. And then scroll down to the going further. And then click on, look at the complete documentation. Bam, click that guy. Okay, I guess I didn't. And then if you go back to the readme, there's dig into the godoc. Cool, that's cool. And then check out the samples page. So I don't know what more you need. We've tried really hard to make it easy for people to get started. I guess that's where my confusion kind of comes from too. What is, because, Grant, I'm trying to understand what's missing, either from this, or if it's that you've come across other SDKs where they're not as thorough as what Scott is presenting here.
Because I find it hard to believe that if you open up an issue, I'm sorry, if you open up a PR to add additional documentation someplace, I can't imagine one of the SDKs would actually say no, unless the argument is, well, it's not necessarily appropriate to put all that into a readme as opposed to a dedicated samples directory like Scott was pointing to. I could see an argument there, right? You don't want the readme to get too big or something, right? That makes some sense. I'm just trying to figure out what the next actionable thing here is that you're looking for. Yeah, I guess I didn't see the github.io page, the going further. I mean, in terms of the action I was proposing, which is right now in the GitHub issue, not in a PR, it was just having that sample. But I guess we have the sample and the going further. Yeah, plus having this text is full of footguns for Go, because you need to understand how to get it up and going. You need to have modules, or you need to have it locally, or you need to turn modules off. And I don't want to explain all that. So what's the next step here, Grant? I mean, I was mostly looking for a discussion. I think we're going to have separate tutorials and stuff; it depends on each language. Yeah, I mean, what's interesting though is, I mean, there is one thing I remember, for sure, whether it was you or Daniel, said something along the lines of, should there be some consistency across the SDKs, and in general, you know, consistency is nice. Sure, whether we need to be formal about it and say the SDK.markdown document says every SDK must have X, Y, Z in this particular order or this particular format, we could certainly explore that as an option if that's what people want. But I'm not sure whether that's reducing the freedom that each SDK team wants, to do the right thing for their own project.
Now, if we come across one SDK that is slacking, and they have poor documentation, poor examples, and the thing is completely unusable except by the people that wrote it, then I think that's a separate issue, and we should bring that up and say, guys, this is not being maintained properly, something needs to happen here. But whether we need to be consistent across the SDKs, that's something worthy of a discussion. I just don't know how people feel about that. I mean, do people feel like we need to have that level of consistency across the SDKs relative to documentation and samples? I don't think they need to be the same. Okay. Grant, were you gonna say something in there as well? Yeah, I mean, they don't need to be the same. Like, every language is very different. I guess, from a user perspective, if we link all that, like, I guess when we're preparing docs, we were just linking to the SDK readmes for getting started. And I don't think that would provide a uniform experience right now. But I mean, the more I think about this, I don't think there needs to be, probably, any changes. Okay, because my suggestion would be two things. One is, if, as you said, you guys wanna point to the SDK readmes as the place to go for documentation so you guys don't have to duplicate it inside Google, that makes perfect sense to me. So if one particular SDK's readme, or samples or whatever we wanna call it, isn't good enough for a novice to come in, then I think that's a problem in general, and we should get PRs opened against those SDKs to beef up their documentation and samples. I think that's a given, right? Because every SDK needs to be good enough to be used by a novice user. I think once the SDKs are at an appropriate level where they're all good enough, then I think it's fair to come back and say, hey, I've been looking at all the SDKs, and they all kind of had the same information, but they presented it in a very different way.
And this isn't a language difference, this is a stylistic difference, or some have samples embedded in the readme, some have a samples directory, or there's just, and it makes it kind of hard for a user to bounce between them, because they're structured so differently that it makes it confusing for people, right? And at that point, I think it's fair to come back and say, I think we should have a consistent approach, and here's my proposal to be consistent. What do you guys think about being consistent on all the SDKs? But I think it'd be fair to have that discussion after we do the first step, which is make sure each SDK at least has an appropriate level of documentation in their readme, right? So if I were you, I'd focus on that first, so that you guys don't have to duplicate it in Google, then come back and say, look, the inconsistency is really annoying from a user perspective, because that's a concrete thing that we can deal with. Does that make any sense? Yeah, I mean... Because I'm not hearing a whole lot of excitement about, out of the gate, mandating consistency across SDKs. And I think if you wanted to get there, we may be able to get there at some point in the future, but I think we first need to take the first step of, forget about consistency, do we even have the right documentation, right? Even if it's inconsistent? Well, we made a choice that every SDK has some minimum level of stuff, but it's idiomatic in the language and in how those modules are presented. If everything had to start from, okay, let me teach you about Java, if every library had to do that, the ecosystem would be insane. Yeah, and I don't think, hopefully no one wants that, but at least if one SDK went really farther than all the others in terms of educating people, like for example, let's say they did that insane thing and said, hey, I'm gonna teach you Java, right? Then at least we can look at that SDK and say, man, those guys are nuts that they did that.
There's no way in heck we're gonna do that for Go. Or we could look at that and say, hey, you know what? We're getting tons of praise from Java. We should do that for all of them, right? But at least then we can have a discussion and compare the SDKs, because they're inconsistent, and say, you know what, let's make them all consistent, and what they did over there is really, really good. And we have a concrete example of something to look at. Right now, I don't feel like we're at that point to even have that discussion. I'd rather focus on the immediate problem of, is the documentation in any particular SDK good enough? And if not, let's get someone to open up PRs to fix it. And so Grant, that's why I'm coming back to you and saying, look, if you found, as you're doing your work, that some particular SDK's documentation is not good enough, open some PRs against it to get that fixed, and not necessarily worry about your higher-order question of consistency yet. We can work on that later if that becomes an issue, once you complete your exercise of all the SDK documentation being good enough for that SDK in isolation. Does that make any sense at all? I mean, yeah, I guess for the, not to pick on the Go SDK, but is it reasonable? Do other folks want to have, we're not expecting anybody to teach Go fundamentals, but just to have one tutorial or one page that completely sets up an application and starts a server? I don't think there is one. Make a tutorial as a separate markdown and we can link it from the readme. I'd be happy to accept it. Okay, I guess some of the other languages have the tutorial-ish thing inside the main readme, but. But do they have this, like, 20 examples of how to use the SDK? Scott, I'm not picking on the SDK itself. I'm just, I'm really just trying to get this use case. If you want traction, here's my advice: PRs over issues gets work done.
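For a sense of what the "five-minute, copy-paste" sample being debated might look like, here is a dependency-free sketch that builds a structured-mode CloudEvent with only the standard library. The event type and source values are hypothetical placeholders; a real SDK sample would use the SDK's own event types instead of a raw dict:

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal structured-mode CloudEvent, assembled by hand for illustration.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "type": "com.example.sample.sent",      # hypothetical event type
    "source": "https://example.com/demo",   # hypothetical source URI
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"hello": "world"},
}

# The structured-mode body, ready to POST with content type
# application/cloudevents+json.
body = json.dumps(event)
```

Whether such a snippet belongs in the readme or in a linked tutorial page is exactly the open question in the discussion above.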
Yes, I know. I have the specific code that I'd add in a PR in the GitHub issue. But I think there was, like, my proposal was to add that, although, as he said, I don't think we want to. Well, I don't know. I would be fine adding that specific sample and the command-line steps right below. Except that command-line step isn't correct, because now you also need to tell them that they need to put that into a certain file called hello. Yeah. Yeah. You need to init your project and write it in a file. Do it inside the checkout of the Go SDK. There's just, like, a ton of gotchas that somebody that is trying to test out Go for the first time is going to hit, and that someone that's used Go for more than two tries would never even think about anymore. And you need to have Go from the start. So I'm not using, I use Go very rarely. And yeah, I'm not sure this would help me all that much, because I would rather have a snippet that explains how that works. And then I would, if I didn't have a sample to look for, and I would always look for the samples, then I would go and just create a baseline Go app somewhere else and then go and try to put that in. But I, yeah, I would always go for the samples and see what the samples are. I would look at this to be informed, but then would always jump to the samples. So Grant, maybe you should do a PR to show what you're thinking of in terms of how to change the Golang SDK, whether it's just a large change of the readme, or whether it's a tutorial directory that really hand-holds the person from nothing to a working Go program, and you point to that from the readme. You know, that's your call, but maybe a PR would help solidify the discussion and make it less abstract. Yes, sounds good. Okay. I think. Okay. Yeah. Cool. Thank you, sir. Yep. All right, Clemens. Let's talk governance. Yeah.
So I finally found someone who can help with the C# SDK, who is unfortunately sick and couldn't join today. Josh Love, who works in the Azure SDK team, has volunteered to help out with the C# SDK because Jon Skeet left. So we fixed this organizationally. Now it comes to the governance question. I would like to give up my committer status and give that to Josh, because it's basically just a swap within the company, but I'm not sure that works with the current governance rules. And the question is, are we Apache or are we not? I would prefer if we had a rule that allows us to swap places within the company, with a vacation rule, et cetera, rather than binding this to individual people. And of course, if three people from Google or three people from Red Hat are working on a particular SDK, because of the level of engagement, because they fulfill the criteria, that's perfectly fine. But then it should be possible for any of the three of them to go on vacation and nominate someone else who can go and do the work, or if they get sick or whatever. So that there are still three people from Google able to work on this. That's the goal that I have here. Okay. My tactical goal is for Josh to be able to work as a committer, basically immediately taking over from me. Okay. Now, I know Slinky can't be on the call, but I know you and he did have a conversation in Slack, and I apologize, I didn't get a chance to read it. Could you channel your best Slinky to see what that conversation was like between the two of you? There was, yeah. So he wanted things to be a little more like Apache. And I think he mostly misunderstood my intent. I didn't want to make it like one company, one developer; I think that's what he understood. And he said, you know, I've had a hard time finding traction with others, and now if we have multiple people from Red Hat, then that's what it's gonna be.
And that's, I think, very reasonable, and I didn't want to preclude this. What I don't like about the Apache model, and this is what happened in Apache projects, is that some VC-funded company shows up and rotates their entire staff through the project. And now, even though only four people are effectively working on it, they have a super majority of 50 people with committer status who will basically just dominate the project and the votes. And so that's something that I want to avoid. Hmm, interesting. How can you avoid that last scenario? That's interesting. In that every committer slot needs to be earned, like here. Yeah, but once you earn it, you basically have it, right? So I mean, maybe I'm misinterpreting. I thought you said they'll go through all the developers in rotation, and they'll spend just enough time to get maintainer status and then they vanish. Yes. Right, I don't know how you avoid that. Well, I'm not sure how I can avoid that, but I would like to avoid it. Okay. So let me ask you this. In your particular case, what was the guy's name, Josh, right? Josh Love. What would be, and I'm not necessarily pushing back on what you're suggesting, I'm just playing devil's advocate here. What would be wrong with saying, okay, Josh, I need you to take over. Go ahead and start submitting a whole bunch of PRs as necessary to get the job done. And you come in as, basically, one of the only maintainers in there, I guess John does too, but between you and John, you'll review and approve the PRs, and in short order he'll become a maintainer because he'll have earned it. The whole point is that I'm too busy. Yeah, okay. I thought you'd say something like that, okay. So the point is that if I could always be reviewing PRs in a timely fashion, then we wouldn't be doing that switch. We're literally doing that switch because I need to get this off my desk, like, now. Yeah. Okay.
Anybody else wanna comment on this? Cause it sounds like Clemens is basically asking for a change to the governance model to allow for this type of situation. No one wants to comment? I'll pick on somebody if I have to. In this particular case, does Clemens have any backup in the SDK? There's Jon Skeet, I think is the guy's name. Jon Skeet from Google. See, what's weird to me about this, Clemens, is, I think it almost depends on the SDK, right? I think if there's an SDK that has lots of different people and lots of different companies active in there, I could see a lot of tension with your proposal, because typically it's not company-based activity. But your SDK is different, because for the most part it's been Microsoft. So it makes perfect sense that you swap out one Microsoft person for another, right? And that's what I'm struggling with. Yeah, I have a similar problem, but I'm also seeing a danger in general. Like there's some brokenness that I observe in governance elsewhere, and that's not a problem that we can solve here today, certainly not. Where projects accumulate maintainers, which then also translates into voting rights on contentious issues. Where if you can staff the project successfully and you can have 80 people on it, just by voting people through, everybody then retains maintainer status, and all of a sudden they have a super majority where they can cancel out particularly smaller entities. And the smaller entities might quite well be us, because amongst us middleware people, right, there are no 200 developers in Microsoft who are caring about these kinds of eventing things at the infrastructure level. It's probably 20 or 30, right? So we don't even have a VC-funded startup which is making eventing or messaging their only thing with their 2,000 people, right? And so even though we're the trillion-dollar company, we run far tighter ships on those things.
So there's a natural ill distribution, and then individuals are in an even worse situation, because they try to contribute but then get steamrolled by companies which are just rotating their entire staff through it. And so my question here is perpetual across everything: how can you avoid that sort of abuse? Because it keeps happening. Yeah, I am kind of amused by this, because the idea of a company like Microsoft being swamped by a little startup is just ironic. It is ironic, but it happens. I know, I know, it's just funny. It doesn't only happen to us, right? It happens to Red Hat, it happens to Google, it happens to Amazon, it kind of happens to everybody. We're like, whoa. Yeah, no, I totally agree with you. It's just amusing to me because in our past standards days, go ahead, in our past standards days, it was always the exact opposite, right? It was always IBM, Microsoft, and the like that would dominate those groups, and now we're hearing about little startups doing it to us. It's just amusing, but I also agree with you. I've always thought that there should be some rule for maintaining your maintainership, right? Like, earning a certain number of PRs means you get to be nominated to be a maintainer. Well, in order to keep that maintainership, it seems like you should have a constant stream of PRs going as well. You can't just vanish and hold on to it for five years. That doesn't seem right either, to me. Yeah, and I think there's some TTL on your... But the TTL, I mean, this gets complex, right? Because sometimes software just sits there and it's done. True. And then how do you measure this? Right. Okay, so let's get a little more concrete here in terms of the problem.
So it seems to me we may need to take the situation into account, and I don't know how we're gonna do it in terms of changes to the governance doc, but it does seem like the situation you brought up is valid. And unless someone on the call, or in the group, I should say, vehemently disagrees and wants to say, Clemens, you're full of it, if Josh wants to be a maintainer he needs to do his time and do PRs, unless someone's gonna say that, then I think we need to figure out some way to accommodate this type of situation. And there's two things about that that I think we need to think about. One is changes to the governance doc, and the other is how to deal with the situation immediately. Because as you said, you need to get this off your plate now, right? And you honestly may not be able to wait for a governance doc change. So let me start with the first question, and I need people to be brutally honest here. Is Clemens' request invalid? And I'm gonna pick on the current maintainers that I know are on the call. For example, Scott, Lance, I don't know who else has maintainer rights. Daniel, I think you might as well, right? Is his request just insane and we gotta say no? I think he's earned it. The new-maintainers section is our guidance, but there's also an escalation path, and for an SDK like C#, where it might be a little less easy to find C# enthusiasts, and the bus factor is now one, it does make sense to promote somebody out of a trusted circle. We actually have an escalation process in here? Maybe not, I'll have to take a closer look. Okay, okay, so thank you, Scott. You don't think it's insane, at least in this particular case. Lance, any thoughts? I tend to agree. I mean, I don't think it's insane. There needs to be some way to deal with a situation like this, but I am curious about the escalation path that you mentioned, Scott, that it's in that document. I could have sworn we had something, but I can't seem to find it.
I thought it was like maybe if there's not any... Well, there's this: if the thing is not maintained. If it's not meeting the criteria, we have to go over to a new maintainer. Yeah, I think it's somewhere basically in here, but I'm not even sure that's quite the same scenario, because we're not saying that the project isn't being maintained. It's just... Clemens, you just need to walk away from the project for a month or two. Abandon it, and then it will be not maintained. That's one way to do it. Oh, man. So, okay. One last person I think might be a maintainer. Daniel, you're a maintainer, aren't you? Yes, I am. I'm the only maintainer of Ruby right now. I think this makes complete sense to me. In my case, I am a maintainer because Google, as a company and in its products, has an interest in CloudEvents, and so it's my job to make sure that we have a reasonable SDK. And so from a company's perspective, it's the company that has the interest, not the individual. And so one engineer could replace another engineer in that role and have the same role from the project's perspective. So that seems to make a lot of sense to me. Yeah. Okay, so let's do this, since we ran out of time and I'm getting really hungry for lunch. Let's do this. Just to make sure... Okay, so I would say that's right. So technically, we are not following the rules here, and I'm hearing that people are okay with that, because this is not only a special case, it makes sense, whatever we want to call it. So let me do this. Let me post a comment in the SDK Slack channel explaining the situation, saying at least for this particular case it makes sense, and that if I don't hear any objection by, say, end of day Friday, we're gonna go ahead and make Josh a maintainer in the C# SDK. However, we also need to then look at changing the governance doc, because I don't like the idea of the governance doc not being accurate, not matching reality, okay?
And so I'd like to then have us work on an update to the governance doc to allow for these types of situations. And it could be as simple as, hey, we realized there are gonna be some weird cases that we need to address, here's an example, and we're gonna address them this way. And maybe it's as simple as just a vote among the other maintainers to say, yes, this is a special case. Kind of like what we just did, right? But at least then there's a process that we can follow when something like this does come up again, and we don't have to reinvent the wheel each time. Does that sound fair, Clemens, in particular, since you need to bail on this quickly? Anybody else have a comment on that or disagree with that process? Okay, I'll take the action item, as I said, to at least get the quick switchover happening through the Slack channel. And then we'll talk about how to change the governance doc later. All right, cool. Let's go back over here. Where is SDK? Okay, anything else you guys wanna talk about on today's call? Anybody else? All right, cool, we are done. Thank you, guys, and we'll talk again next week. What's for lunch? I have no clue. Food at this point, I'll take anything. All right, thank you guys. We'll talk again next week. Have a good one. Thanks, Doug. All right.