Hi, Manuel. Hi, how you doing? Great. How are you? Pretty good. So those office hours, do you still need people? I would love to have volunteers, yes. I'm gonna sign up for one of the office hours. So we recently got notice of our times and I looked it up to make an entry in our meeting minutes. It seems to be an RSVP reservation process with the Linux Foundation: you need an account, you RSVP, you get it accepted and then, yeah, it's like a shopping basket. So I'm a bit slow now. Yeah, so I don't like the process of office hours, but it seems more like an organized one-hour meeting. So are you going to treat it more like fly-by random questions answered? I could probably attend, unless there is a really good talk going on in parallel. Yeah, I'm not aware of the sign-up thing that you're referring to, because to be honest, aside from asking for those specific times, I haven't really done anything with it. So hold on, let me see. Well, tell you what: when is it? It's not till May. Tell you what, let me do some investigation, but are you willing to sign up for one of these two slots, or do you still need to figure out what's going on that day for you? I haven't gone through my scheduling of talks yet, but I think I can make room for one of those. Okay, so yeah, let me do some investigation and see what the process is like to actually sign up, because to be honest, I thought last time it was more just sort of a random stop-by kind of thing where some of us agreed to be there. Yeah, yeah. So let me do some investigation and find out. Let's see. David. Good morning, afternoon. Hey. Yo, Tommy. Yo. And Matt. Hello. Oh, I completely misspelled that. There we go. That was still way off. Geez. That was neat. Hold on. If I misspell it, it might try to correct it. Yeah, but what's interesting is for a split second there, it showed me an alternative with the right spelling. Oh, I can't make it. I can't do it anymore. Let me try it again. Hold on.
There it is. So if I hover over it, yeah, see, it knows. That's amazing. I've noticed it's been adding some interesting features like that recently in terms of spell checks or autocorrect and stuff like that. I've noticed every now and then they sneak changes in. It's kind of neat. Sorry, I'm easily distracted. Where are we? Ginger. Hey, Doug. Hello. Jesse. Good morning. Good morning. And Klaus. Hello. I know I've seen it before, but who's Linux basic? Hello. Hello. You want to tell us your real name maybe in the chat and I'll add it to the attendee list. So I have problems with my audio. Sorry, say that again. I have problems with my audio. A moment, please. That's fine. You can also just type it into the chat. That works as well. And let's see, Eric. Hello. Hello. And Mr. John. Hello. Hello. And Lucas. Hello. Slinky. Hello. And Scott. Morning. Hello. Thank you very much for that. Lance. Hello. Christoph. How you doing? Good. How are you? Good. Hey, Doug, Christian here. Oh, there you go. I saw someone sneak in there. Okay. Hey, Christian. Thanks. All right. Another 30 seconds or so, then we'll get started. Lucas. The other Lucas. Are you there? Homequest? Lucas. Are you there? Excellent. Okay. Good. All right. It's three after, so let me go ahead and get started. 18 people. All right. Let's see. Okay. Ooh, I just realized I completed this PR. All right. Community time. Anything from the community people want to bring up for the agenda? Nope. Okay. Cool. Moving forward. We had no SDK call last week because there were no topics. I do want to have an interop call this week, even if it's short. So please, if you're doing interop stuff, please hang on the line after this call ends. We need to have some talks. Office hours. Still looking for volunteers. Manuel indicated that there's a process involved in actually signing up for that. I was going to do a little bit of investigation to see what that involves.
So thank you for the links there. Whoa. Thank you, Scott. So anyway, you don't need to do anything there other than, if you do know you can sign up for one of those particular days, it'd be great if you could stick your name here or just let me know, and I will put your name there. We do need some volunteers. And again, if you're interested, you can see Remy's slides and video that he submitted for our stuff. Let's see. Tim: I don't see him on the call and he hasn't reached out to me, so no update there. So before we get into the bulk of the show, let me go ahead and accept these changes from Scott. So Scott, maybe you can talk about the process you wanted to suggest we might use for tracking changes. Sure. Do you want to share something or just talk to it? We could go to those links if you want. So day to day I work on Knative, but we pulled in the CNCF Kubernetes process for generating release notes. We happen to use a GitHub Action to go and extract that between two hashes in the repo. Oh yeah, I see you're preloading. The thing that's most important. Okay, so stop here. So, to remember to do this, we make a little comment when you're composing the PR, and then you see the little backtick-backtick-backtick release-note block. Not that one. Yes. So this is the magic piece that the script knows to extract: a little code block. And it basically is going to assemble those into organized areas. So it's as simple as: if there's something that's interesting or important for a user of the release to know about, you write that text inside the release-note block. And then if you go to the tab one over, this script from Kubernetes runs and extracts this. Now, there are some pieces that it assumes, like the special labels that Kubernetes uses to go and organize the output a little better. Those are optional. We can ignore those. We don't have to use labels.
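For reference, the block Scott is pointing at is the Kubernetes-style release-note fence inside the PR description. A sketch of what a PR body might contain (the note text itself is invented for illustration):

````markdown
Describe your change for reviewers here, as usual.

```release-note
Clarified how the example `widget` attribute is serialized. (illustrative text)
```
````

Anything inside the ```release-note fence is what the tool extracts; the rest of the PR body is ignored.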
But we could use this script to go and slurp up all those release notes that are in PRs between two hashes. It's hard because you have to run it locally, which is why, if you go to the next tab over, Knative has adopted this GitHub Action that we could go and copy. It runs between two hashes and it does some default lookups that may or may not work for CloudEvents. But it's a manual run. You give it some hashes and it outputs an artifact, a markdown artifact. And maybe, sorry, I could find a result here, one second. So the output is fairly messy and it requires a little bit of editing because it's optimized for... Is there a screen you're bringing up? I'm getting you a link, one second. Oh, okay. Cool. So where is that in the document again? So here we go. So as an example, this is a run from Knative eventing. And if you scroll down a little bit, there's an actual markdown file you can download and edit. If you expand the release notes, it kind of shows you the script running and there's output there, too. Not that important, but basically we use this as a starting point to write release notes for releases when we go and actually do releases. Knative does them every six weeks. And so some of the action is optimized to kind of look up the two hashes that we should look at, and then it extracts the code block using the Kubernetes script. And that's basically it. There's not a lot of magic. There's a lot more magic you can use if you use the labels that Kubernetes uses. They're also Prow-based, so there are slash-kind labels that get applied, but those are optional again. So just to be clear, it sounds like the biggest thing that we're looking at here is this type of mechanism versus the prefix thingy in the commit. I think that's the biggest change between the two different proposals. Is that fair? Yeah.
My only hesitation with the, what is it called, conventional commits, is that it's fairly magical and you have to remember or kind of understand how the parser is going to work. And this is literally just: anything that's in this release-notes block gets exported by the tool. And then you're free to go and edit the release by hand to make it look pretty. Okay. Just out of curiosity, does the bot pick it up only in the first comment of the PR or any comment in the PR? It looks for... there should be only one release-note block. But the difference here is that this is on the pull request comment, not the commit. Right. But it is just the first comment in the PR, I assume, right? No, it doesn't look... well, you could choose to do that, but this is a tool that interacts with GitHub, not commits. No, I understood. What I mean is, if I put this section in, like, the third comment in a PR, will it still notice it? I think if there's only one of them, yes, because, I mean, if we squash it, then there will only be one, but I'd have to check. It may be smart enough to extract more than one, but I don't know. Usually it's in the body of the PR, not the commit. No, I think, yeah, maybe I wasn't clear. I was saying comment, not commit. So I was like, what, the PR itself? No, it must be in the main body of the commit. The thing that actually goes into the Git history. Right. Okay. Hang on. Cool. Sorry. Before, we were saying it was in the PR, not in the commit. Now you're saying it needs to be in the commit. Well, it is what's in the Git history. I understood Doug's question as: does it need to be in sort of the main PR comment, or could it be in the conversation, as it were? Right. It has to be in the original body of the... if you go and look at any PR, like in any repo... Right. The initial comment. Yeah. And is it okay for it to be... presumably it can be supplied later or edited.
It doesn't have to be in the main PR comment when the PR was first created. It could be... yeah, we can change it as the PR evolves slightly, and effectively it's something to review as well as the commit. That's exactly right. You can edit the body of your commit message, the summary part. You know, if you go and you make a PR, it's the little write box under the title, the first one. Yeah. GitHub doesn't seem to label that; it says it's a comment, but it's the first comment within the PR. All right, cool. Okay. Any other questions for Scott? I mean, is everything clear in terms of the way this works? It sounds like there are lots of options available to us, but at the bare minimum, it's adding this to the PR comment and then running a bot that extracts it. I mean, the bot stuff, the action, is optional, right? You can run the tool locally. It's just, you have to run it correctly and you have to make sure that you have all of the history pulled down locally, which is the niceness of not doing it locally. Yeah. Yeah. I'm kind of assuming we don't need to talk too much about that, in the sense that that's sort of my problem: I'm probably going to be the one running the release notes. I'm just trying to figure out, for the average PR creator, what do they need to do? And it sounds like the difference is this section versus a prefix on their commit. I'm intrigued as to how it would do it locally, because what you have locally is the Git commit history, not the PR history. The PRs only exist on GitHub. So I don't... I think it uses the GitHub API. So you have to have a GitHub token to go and run the script, because it goes and looks at the Git history and then picks out the PRs that are associated with the commits in your local history. All makes sense now. Thank you. Yeah. Yeah. Okay. Any questions on this particular mechanism? Okay.
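As a sketch of the local run being described: the Kubernetes release-notes tool is driven by the GitHub API, so it needs a token, and it collects PRs in a commit range. Flag names vary by tool version and the org/repo here are hypothetical, so treat this as illustrative rather than a copy-paste recipe:

```
# Illustrative invocation of the Kubernetes release-notes tool.
# Check `release-notes --help` for the exact flags in your version.
export GITHUB_TOKEN=<your-token>        # tool reads PR bodies via the GitHub API
release-notes \
  --org cloudevents --repo spec \       # repository to scan (hypothetical target)
  --start-sha <previous-release-sha> \  # commit range to collect PRs from
  --end-sha <new-release-sha> \
  --output release-notes.md             # messy markdown you then edit by hand
```

The output markdown is the "starting point" Scott mentions: it still needs hand-editing before publication.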
Does anybody have any questions about the mechanism that's currently checked into our repo, meaning the conventional-commits thing, where you have to put the prefix on the PR commit message itself? There we go. Here's an example from Slinky for release notes. So here's how the release note appears, and so the bot will pick it up. All right. So I guess the question before the group is, does anybody have a preference between those two mechanisms? I quite like the fact that the release note can be discussed on the PR without it having to be part of the commit message. That feels quite nice. On the other hand, it does make it slightly harder if you've only got the commit history. Yeah. Does the changelog link to each PR? Scott? So if you go back to maybe the last tab that has the result. This one? Yeah. And then click on the release notes, and it might download something... or actually, sorry, expand release notes. This little green checkbox. You can click that. Oh, there we go. And then expand display notes. Display notes. Oh, there it is. And then, so this is what it makes. So it's the commit and then this context: the author and some other stuff. So that's what these things are turning into. And the enhancement and API change categories, those come from the labels. If there is no label on the PR, it gets uncategorized. But it sounds to me, because I think the main point of Lance's question was sort of the linking mechanism between the commits versus the PR... and I think what you're saying, Scott, is that regardless of how it happens, it does go to the PR to get the information. Whether it starts out and only looks at the PRs within the range of the SHAs, or whether it goes from the commits in the range of the SHAs and finds the PRs, either way, the PR comment is the source of truth for the data that's going to get extracted. Right? That's right. That's right. Does that answer your question, Lance? Yeah.
I also like using labels for the different categories, like API change, bug or regression, because it is a finite set of stuff. One of the difficulties I have with the conventional commits is, I don't remember, is it supposed to be docs or doc? Or, you know, is it supposed to be chore or source? You know, I just don't always remember what the tags are supposed to be. So the labels, that's a nice improvement, I think. I'm glad to hear you say that, because I know you guys use the conventional commits in your JavaScript repo. And so I like getting your input, because that's somebody who actually uses the other mechanism. That's good. Yeah. I mean, I really like it. And I like some of the other stuff that it enables. But yeah, it's not perfect. Yeah. Okay. Anybody else want to chime in? I see Slinky said he prefers release notes. Anybody else want to speak up? Okay. I'll raise my hand. I actually could not ever get the conventional stuff to work until yesterday. And then I finally figured out what I was doing wrong. And I have to admit, there was some nicety to just adding a little label in front of my commit message. That was kind of neat. But at the same time, I think I'm leaning a little more towards this, because of what John was saying, which is it's nice to be able to separate out the commit from this text, especially if the text needs to get updated later on after the PR has already been merged and stuff like that. Having it in the PR itself gives you that freedom to wordsmith it later and tweak it. Or even, you know, right before you actually generate the release notes, right? At the last minute, you realize, oh, we renamed something, so really we should update the release notes everywhere to use the right terminology. I like that level of freedom. But I don't know. Okay. So John says you before the release notes. Yeah, we're both sitting in the release notes. We could too. That's a good point, John. Okay.
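The conventional-commit prefixes being compared sit on the commit message subject line itself. A few invented examples of the style (prefix names per the Conventional Commits convention; the descriptions are illustrative):

```
fix: handle empty datacontenttype in the examples
docs: clarify versioning guidance in the primer
chore: bump CI tooling versions
```

The type prefix before the colon is what a parser keys on, which is exactly the "remembering the right tag" burden Lance describes.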
Does anybody want to voice an opinion strongly one way or the other? I think that actually refers to this. We already got people expressing a preference for release notes. Does anybody want to advocate for keeping what we have today, which is the conventional tagging thing? I don't think Grant Terman is on the call, who has previously been, I wouldn't say a particularly fierce advocate, but has spoken about it before. So it may be worth making sure we have his input. Yes, I was definitely not going to do a finalized vote on the call here. Rather, if we choose to head towards the release-notes thing that Scott is presenting here, I was going to send out a note and let people who could not make the call get a chance to voice their opinion. And obviously, Grant would be one of those folks. The one caveat I will make is that the release-note tool is made for Kubernetes. So it's not a copy-and-paste of that result into our release notes. It's a good starting point. We'll do some minor editing; as you can see, for instance, the documentation link is to k8s.io, things like that. We'll have to change that stuff because it's part of the script, but the logic that the script does wouldn't have to change. Okay. So the script that actually does this is something that we could basically check into our repo. No, it's a Go tool that runs. It's not a script. But we have access to the source code and we can create our own copy of it. If we needed to, yeah. I would recommend not, and just doing the legwork of making modifications to the resulting document. Okay. Well, at least we have the options. That's good. Okay. So back to the question. I've heard lots of people saying they like this mechanism. Does anybody want to speak in favor of keeping what we have today? Okay. So I'll tell you what: any objection then to making that our current tentative agreement, contingent upon no one in the community screaming at us for choosing this option in particular?
I'll double check and see if Grant's okay with it. I'm not sure why it's not working. Okay. Not hearing any objection. We'll go with tentative approval. Cool. Thank you, Scott, for that presentation. All right. Before we jump into PRs and stuff, is there any work item that I forgot to add to our list, PR or not, that we should talk about before we get to the PRs? All right. In that case, let's jump into it. All right. I know you made some minor changes in here based upon my comments, but I don't think it was anything substantive. Is there anything else you'd like to mention, John, on this one before we ask for final comments or questions? As discussed, this is currently headed to one zero one, but we probably want to move it to master so that it can be part of the one zero two branch when that is cut, I think, but we can discuss that when we've got the meat of it done, as it were. None of the changes that I've been making have changed the intended semantics at all. They've just been clarifications. Okay. I know this has been out there, I think, now for about three weeks, so it definitely has had time to gel. Anybody have any questions or comments for John? Okay. Any objection to approving? Okay. One quick question then. Clemens, welcome back from vacation or wherever you were. Did you get a chance to look at this? Because, if I remember correctly, you were one of the original main authors of the HTTP protocol spec. Did you get a chance to look at this, and are you okay with it? So John, John made the change. Yes. I shall trust John, shall I? Good answer. I would be happier if you didn't, just because this is all quite sensitive stuff. If we get it wrong, we're kind of screwed. So I would be very happy if, Clemens, you could look at it in the next, you know, couple of days. And if we can get a tentative agreement, contingent on Clemens being okay with it, then it could be merged in a few days. That's absolutely fine.
I promise I will look at it between now and tomorrow, and we will know tomorrow. That would be fantastic. That makes me happier than blanket trust, which I never feel like I quite deserve. Okay. Cool. All right. Do we have tentative approval from the group then? Okay. Not hearing an objection. Can I say one thing? Sure. Of course. It honors me that you guys treat me like that. Thank you. There you go. Make it even better. Yes. Yeah, I'm just as flattered as John was about me giving him trust, you know. Yes. Yes. John will quickly learn that's a mistake, but yes. All right. Cool. Thank you, John, for all the work on that. I really appreciate it. So just before we go into the next one: if there is approval, which branch do we want it in? Oh, yes. Thank you for raising that. So to refresh everybody's memory, I want to say maybe a month or month and a half ago, we agreed that rather than rolling out brand new release version numbers every time we find a typo, we were just going to merge those typo fixes directly into the latest release, which as of right now is one zero one. However, we agreed that anything bigger than a typo should warrant a new patch release number. So I think, according to those rules, this one should be targeted towards master, and then we need to decide when we're going to create a one zero two release. Please, anybody speak up if you think I'm remembering incorrectly in terms of the process we agreed to. Okay. So let me turn that around just to make sure. Does anybody believe the change we just approved is in the category of a typo that should just be blindly merged into one zero one? I don't think it'd be fair of us to call it a typo. It's pretty big. I think the next one, given that it's to do with the primer, so it's, let's say, informative rather than normative, might be treated differently.
And maybe we need to update the governance rules to decide whether the primer gets special treatment, given that it's informative. Okay, we'll have that discussion in a sec. Hello. There we go. All right. Anything? Let me hide comments. John, anything on this one worthy of mentioning? I think all changes recently were relatively minor as well. So the most significant change in the last week, after what I think were Eric's excellent comments, was restructuring it a bit so that, aside from anything else, the paragraph that's in the center now, that event producers are encouraged to consider versioning from the outset and document it, is now much earlier than it was, which makes me happy. And then type and dataschema are called out into separate subheadings and each dealt with thoroughly, rather than doing both of them lightly and then both of them in more detail. And I agree with Eric that it's now clearer. Okay. And the change down here is just changing URLs. That's good. Okay. Any questions for John or concerns? Okay. Any objection to approving then? Okay. Not hearing any. So now we can go back to John's second question: currently it's targeted for 101. Since these changes are just to the primer, if you exclude the URL changes in the spec, should we push these changes directly to 101 or have them wait for a 102? Anybody have an opinion on that? I would vote for 101 myself. Okay. I don't have a strong opinion, but I was leaning actually a little more towards the other way. But honestly, I don't care that much because, as you said, it is just the primer. The only reason I would say that is because this, to me, doesn't say typo. But as you said, it is just the primer and it is clarification. It's good clarification, but it's still bigger than a typo to me. But like I said, I don't care enough to argue. I agree. I'm happy to take an action. I need to look at the governance doc anyway.
And I'm happy to take an action to draft a PR for us to consider, ironing out a few things, like: if it's only informative rather than normative, then it sort of counts as "keep it in the same branch". Yeah. Yeah. Actually, so if you're going to PR that change to the governance doc, then I'm okay with going to 101, because that's where we're going to be and I'm okay with that. So you convinced me. Anybody else want to chime in on that? It raises the question next time of: does the governance change itself have to go into 102? Oh, man. Actually, what's funny is, it's funny you say that, because the governance change that I pointed you to, that's only in master. It's not in 101. Right. Which is why you probably didn't see it. Indeed. Yes. So that is an excellent question. So yeah, we should probably resolve that at the same time as well. All right. So anyway, back to the original question. Since this has been approved, does anybody object to it going into 101 directly? Oh, Scott, your hand's up. So what we do in the Golang SDK is we have a release-1.0 branch, say, and then we make patch releases off of that that are tagged. So it kind of avoids this. We would have a version one branch that you cut patch releases off of. It's a little different because it's not just text, it's code, so people have to import it. So maybe we've got a footgun here because we called it a 1.0.1 branch, versus a 1.0 branch that gets patches and maybe gets tagged with releases. So let me ask you about that, because this is more of a, I guess, a Git question. I don't think it's a GitHub question; I think it's a Git question. When you did that, did you create a 1.0 branch or tag... or no, actually, I'm sorry, you said you just create a 1.0 branch. So did you ever create a 1.0.0? Yeah. So you can look in the Golang SDK.
So what we do is we make a release-1.0 branch, because we want to be able to service bug fixes on individual minor versions, but we don't want to have to figure out where it was branched from and then make a new fork and all this other stuff. So the simpler process that we use is: you make a branch that's labeled with the major.minor, but not the patch. Right. From that branch, we label the patch version, so v1.0.0. If we make bug fixes, they go into the branch, the major.minor branch, and then the new release is cut from the major.minor branch again. Right. But I think I remember why I ran into issues here. I think the default branch... I'm sorry, the default thing that's shown here cannot be a tag, can it? Doesn't it have to be a branch? It doesn't need to be, because you have the 1.0 branch, and the individual releases are tags. So basically, you're always pointing people to the latest version of the major.minor of this spec. But you can have released versions using tags, so you can go back in history. Got it. And I think the reason I ran into issues, and this is probably my own mistake, is I created a 1.0.0 branch and a 1.0.0 tag, which is technically legal. It's just really confusing in terms of which one you're actually referencing when you issue a command. And you're saying basically you avoided that because you never created a 1.0.0 tag... you only created a 1.0 branch, and your first tag after that was 1.0.1. Yeah. Our convention is to use release-major.minor with no v, to avoid any actions that may run on branches or tags that have the v prefix. Okay. But that's just what we do. Now, I'm glad you mentioned it, because I need to go off and think about that; I'm not happy with the way we have it today. So let me go off and see if we can tweak things the way you just described and see whether I run into any issues. So thank you for that.
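A minimal, self-contained sketch of the branch-plus-tags scheme Scott describes, played out in a throwaway repository (all names illustrative; requires git 2.28+ for `init -b`):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "initial spec"
# Long-lived major.minor branch; no per-patch branches, no v prefix on the branch name.
git checkout -q -b release-1.0
git tag v1.0.0                       # releases are tags on the branch
# A fix lands on the mainline first...
git checkout -q main
git commit -q --allow-empty -m "fix: typo in spec"
fix=$(git rev-parse HEAD)
# ...then gets cherry-picked onto the release branch, and a new patch is tagged.
git checkout -q release-1.0
git cherry-pick --allow-empty "$fix"
git tag v1.0.1
git tag --list
```

The branch always points at the newest state of that major.minor line, while the tags preserve each released patch for history.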
But to the overall question, going back to the higher-order bit here: anybody object to merging this into the current 1.0.1 branch? Okay. So John, I think you actually still need the other PR to get it into master. Otherwise, we lose these changes the next time we cut a 1.0.2, right? Yes. So is it simpler just to go onto master? I'm very confused now. It feels to me like maybe our default branch should be... yeah, I would support going to major.minor as a branch, rather than a branch for every patch release. But I'm happy to make as many PRs as we want for the moment to get this in. Yeah, the final thing I'll say: the process that we would use would be, you make a PR to go into the mainline branch. So in this case, I think it's called master. Make the PR for master. And then you make another PR that cherry-picks that commit into the release branch, the major.minor branch. Either way, it's two PRs as of today anyway. It would be two PRs, yeah; there's no way around that. And hopefully, if the change isn't conflicting with the cherry-pick, then it just works out. It's a little bit more work for you to make the second PR targeting the major.minor branch if there are conflicts. But I think the process should always be: all big changes land in the head first, and then they get cherry-picked into the release branches that they apply to. So that would also go for the governance docs that we were talking about. So, makes sense, John? I think so. So for right now, I would suggest that you, Doug, don't merge anything at the moment. I will create a version of this PR for the master branch, probably as a new PR; I seem to remember that retargeting things in GitHub may be tricky, I'm not sure. I will wait for that to be merged, then pull everything, then cherry-pick to create another PR for the one zero one branch, and then we can work out what to do in the future. Yep.
And yes, just to echo Scott, I would be very happy if we renamed from master to main as well. But that's the same thing. Yeah, that's on our to-do list. It's just that we were waiting for GitHub to have the right tooling in place. And now that they have it, we can do that. Cool. All right. So what I said down here was: it's been approved, and hold on, I meant to say, John will do a PR for both master and one zero one, and then I'll wait until that's done before I do any merging. I will create it for master. If you could then merge that, I will pull and then cherry-pick. It's possible that I could do the cherry-pick off my copy, but it seems easier if it's against one that's been merged into master already. Whatever makes you happy, John. Yeah. Okay. All righty, cool. All right. Are we done with John's? I believe so. All right, let's go to Jim then. Jim, where are we on this one? I can't remember. Well, I just wanted to check, actually: if John accepts this, does that mean Clemens accepts it as well? I just want to understand what the chain of trust is. Anyway, I don't know if you heard it, but right as you were coming online, Clemens made some groan or something in the background... or was that a wow, Clemens? I can't tell what that sound was. I thought the XML was like, wow. Yeah, that's what I thought I heard. I thought I heard a wow. Yes. It was... you know, there was an open issue and I did open the original PR on April the first, but, you know, you can read into that whatever you will. Is it really? It is. Oh my god, you're right. That's hilarious. I didn't notice. Oh, sorry. I was a little bit too subtle there. And actually, you'll see in that PR my attempt, because I thought the conventional commit stuff applied at the PR level as well, so I'd actually tried to make that work anyway. Yeah, so John and Slinky, and I think somebody else, had left some comments on this. There are a couple of things that I need to tidy.
Some of the commentary in this sounds a bit redundant, I know, given we're talking about XML, but I had taken a very terse approach to this in an attempt to keep the payload sizes down to a certain extent. It's just a model I've used in the past. If people don't like that, I'm more than happy to change it. I don't think that should prevent anything from moving forward. But I think, and John, you know, feel free to jump in, that might be a discussion more for this group, rather than trying to just go backwards and forwards on it in the PR structure. Absolutely. Yes. So there were a couple of nits which I've addressed. One of them, I think, which I've got my head around, is my mental failing on XML schemas. And that's probably because it's just so long since I've done one. So there are maybe some clarifications to do in the schema itself. But aside from that, I think, structurally, it should be in a reviewable state. I mean, John, did you want to comment at all? Only to say my knowledge of XML Schema is far inferior to yours. I still don't quite know what the xs:any is going to allow. I guess what I'd probably do is take the schema and try it against a bunch of XML. So I think the only bit of the schema that I'm uncertain about is the XML element that includes that xs:any. I don't quite know what that will do. Can you have just text? Does it have to be an element that can then do anything? And I don't know what that ends up as in an SDK either. I think I understand the overall aim: to be a little bit like the data property within JSON that lets it be an arbitrary JSON token. I think the subtle difference in XML is in terms of what that can be. I think we would have to make it only an element; there's no sort of one token type, I don't think. I think you're right.
That was probably my mental lapse, because it is meant to be any element, not any random construct. Right. I think you need to have — I apparently remember more about XML Schema than I really want to — but so the choice is great. And I think, within the choice, you're making it either a bin or a txt element, or an any, which means you don't need an element that's named xml; you put the any right there, because it's already an exclusive choice by way of being within the choice. I think that's how I remember it. And then on all the other complex types, just for future extensibility, because we might be adding something: if you don't add the any and anyAttribute definitions there, you can't expand the type anymore. So on the CloudEvent complex type, for instance, there needs to be an any at the end of it so that we can go and add stuff later. I have to go and look up how that actually works — we all have to go and look up how that actually works again. Wouldn't that make stuff valid that contains utter garbage? I would have expected that when we want to add stuff in the future, we modify the schema to say, okay, now this is allowed, and it wouldn't be valid against the old schema and it would be valid against the new schema. I'd say that's okay, rather than we'll just allow anything and it might take on some meaning later on. On this matter, I had exactly that stance in my early years, and then got schooled by Sam Ruby that that's really terrible and I shouldn't be doing those things. So I'd have to go and reread where I was coming from mentally, but I now have the Sam Ruby perspective of: be very permissive in what you allow there, because that's going to make you happier in the long run. John, you're going to make us revisit our entire extension mechanism, aren't you? Because that's what extensions are for, right? Yes. And we've got that flexibility for extensions already.
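The xs:choice construct and the open-content extensibility being discussed might look roughly like this sketch; all element and type names here are illustrative placeholders, not the actual CloudEvents XML format being drafted in the PR:

```xml
<!-- Hypothetical sketch only: names are placeholders, not the real format. -->
<xs:complexType name="CloudEvent">
  <xs:sequence>
    <!-- ...context attribute elements would go here... -->
    <xs:element name="data" minOccurs="0">
      <xs:complexType>
        <!-- the choice makes bin / txt / any-element mutually exclusive,
             so no separate wrapper element named "xml" is required -->
        <xs:choice>
          <xs:element name="bin" type="xs:base64Binary"/>
          <xs:element name="txt" type="xs:string"/>
          <xs:any namespace="##other" processContents="lax"/>
        </xs:choice>
      </xs:complexType>
    </xs:element>
    <!-- open content at the end so the type can be extended later -->
    <xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"
            processContents="lax"/>
  </xs:sequence>
  <!-- permit future attributes as well -->
  <xs:anyAttribute processContents="lax"/>
</xs:complexType>
```

Restricting the wildcards to `namespace="##other"` is what keeps the wildcard from being ambiguous against the schema's own element names.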
Well, almost any XML document is a valid cloud event then; it just has to have a few little bits. I don't know, I tend to be against "be permissive with what you accept", just to push back, because I suspect that's a rabbit hole. On the xml element in the cloud event data, I think we do want the xml element itself, because otherwise, if someone wanted to represent an arbitrary XML element that happened to be called txt, we don't want that to be confused with a text element. Yeah, that was the intent. It was meant to be an enclosing element for anything else you'd want to put inside it. Isn't that a namespace problem? Say that again, sorry, Thomas. Isn't that a namespace problem? Well, namespaces, you'll notice, I've veered away from completely in this, because that's a different ball of wax altogether. You've activated synapses that I never wanted to activate again. There is a clause on any that it must be from a different namespace, which means you can go and avoid that clash. Yeah. To be honest, this schema itself is not in a namespace at the moment. It probably should be. Yeah, it needs to be. So that's a change for me. It was worth coming on the call this week. Yes. So I personally wish that XML namespaces had never been invented, because that would have kept XML simpler. But since they have been, I think we need to use them. Yeah, that makes sense. Yeah, so I think the bigger discussion for the group is really how terse do we want to be? And I know some of you may work for cloud providers that like to charge people as data goes in and out of their environment. But for me, it's always been an efficiency play. And as I said last week, I rail against human readability sometimes, because people say these things need to be humanly understandable, but it's computers that are going to be interpreting them, not people.
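Putting the schema into a namespace, as agreed above, would just mean adding a targetNamespace to the schema root; the URI below is a placeholder, not anything the group decided on:

```xml
<!-- Illustrative only: the namespace URI is a made-up placeholder. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:cloudevents-xml"
           xmlns:ce="urn:example:cloudevents-xml"
           elementFormDefault="qualified">
  <!-- type and element declarations go here; once the format's own
       elements live in this namespace, an xs:any with namespace="##other"
       admits foreign elements without clashing with them -->
</xs:schema>
```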
So, you know, whether I use one letter or 25 doesn't make any difference at the end of the day, except from an efficiency perspective. So that's my position. But I'll go with the crowd if that's what you want. Yeah. Sorry. And I notice you've highlighted that other one there, Doug. I should remove that. That was my poor attempt, because originally I had specification version called out explicitly as an attribute — or an element, rather. And then I mentally went, oh well, there have got to be three more. But obviously that's nonsense, because it could be any three. So I should just make that a zero, basically. And then it becomes that. That's why I highlighted it, because I thought that was your kind of indirect, sneaky way of saying, hey, there are some required ones; I just can't figure out in XML how to mark which ones are required. But we know there have to be at least three, right? Well, yeah, without listing them all explicitly, there's no real way to do it. So I'll just make that a zero, and then it'll just be a usage model to make sure that the right ones are there. Okay. Okay. Slinky, your hand's up. Yeah. So for me, I tend to agree with the performance and efficiency discussion. But, for example, the attribute names should not be abbreviated. So, like, specver, in my opinion, should be specversion. I'd prefer that we cut some bytes somewhere else, but keeping the consistency of the attribute names, in my opinion, is important. So where they're called out explicitly, use the common names. Yeah, okay. All right. Because I think the specification version is the only one that would fall into that category. Yeah, specversion. Yes. Yeah. Yes. John, your hand's up. Yeah, potentially bonkers idea. Yeah. Could we have two schemas: one the readable version, one abbreviated even further? So call the data elements bd — sorry, bt — and x, and declare that they are absolutely equivalent.
Something has to be one or the other, but you could have the element being ce instead of cloudevent and really compress it down as much as you want to, and mandate that anything implementing this must treat them the same way. And you specify whether you want the verbose or the concise version when you format things. So you're asking for two XML schemas? Two XML schemas that we would keep absolutely in sync. And, you know, as I'm saying this, it sounds kind of crazy, but equally I can see it having quite significant potential benefits in terms of: right, I'm doing a debugging session, I'll turn it into the slightly more readable form; okay, now I'm back in production, I'll go into concise mode. So, just putting my SDK writer hat on now, I'm trying to understand what that means for an SDK. Slinky might want to comment on this as well. I mean, essentially that would mean that, in the Java SDK, I'd write two format providers that produce independent formats but consume both formats. Is that what you're saying? Possibly. This is an idea I've only just had, so in terms of the number of holes in it, it may be more fishing net than watertight. Slinky, your hand's up. It sounds incredibly complex — more complex than it should be. My question is simple: is this something that people tend to do when using XML tooling, or is this something that nobody does? Because if it's idiomatic to do that with XML, then that's fine. I don't think I've worked with any XML documents that are abbreviated to this extent, and I don't think I would want to. Right. And since I can't raise my hand, let me do it here. I'm leaning more towards what Slinky's saying. Well, I can definitely appreciate, Jim, your desire for compactness. As somebody who almost never uses tooling — including back in the days when we were doing SOAP stuff, I never used the WSDL-to-whatever generators, I hand-coded everything.
The idea of asking me to code up something that can spit out and accept multiple types of XML for the exact same document just sounds like my head's going to explode, and I'm going to get pissed off very, very quickly. Likewise, when the cloud event attributes appear someplace, I think we have to use the names that are in the spec — if for no other reason than that abbreviating them is going to confuse people. Because what if they define an extension called specver? Right. Yeah. Okay. Yes. So that one I accept. Maybe if we look at the actual markdown document and see what the produced examples actually look like — actually, that one needs editing; it won't take that long. Like this. Yeah. So that's what it would end up looking like. This seems to be out of sync; I thought I'd pushed changes that synchronized this a bit more. Why wouldn't the elements be id and time and type? Right. So this is what I've struggled with. I could add those. We would still end up with a bag, I think, to hold all of the extensions. And also, I guess, I started down the road of creating all the different element types to represent all the value types that we pass around, and I just refactored it into one flat thing so I didn't have to look at it and go: oh, what sort of element is this? Is it meant to be a string? Is it meant to be a number? Is it meant to be a timestamp? But you can do that. So in the schema, you would effectively constrain the elements? Yes. Yeah, you absolutely can. But then when you get to extensions, you come back to this model again. Because now it's very much like what I ended up doing with Protobuf, which I'm still not entirely sure I was happy with. You've got two different models: a model for an attribute that is defined by the spec, so you know what type it's meant to be, and then these other attributes, which are extensions, which are untyped.
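Constraining the well-known attributes as typed elements, as suggested above, could look something like this sketch; the exact names, types, and optionality here are assumptions for illustration:

```xml
<!-- Sketch: typed declarations for the spec-defined attributes only;
     extension attributes would still need a separate mechanism. -->
<xs:sequence>
  <xs:element name="specversion" type="xs:string"/>
  <xs:element name="id" type="xs:string"/>
  <xs:element name="source" type="xs:anyURI"/>
  <xs:element name="type" type="xs:string"/>
  <xs:element name="time" type="xs:dateTime" minOccurs="0"/>
</xs:sequence>
```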
So, being a big fan of not losing type information, you end up having to decorate your extensions to say: oh, by the way, this is what type it is. Yeah, but XML has xsi:type for this. So there's the XML Schema Instance namespace and the schema instance spec, and the XSI namespace gives you a type system that you can apply to any particular element. The default is that it's text, and then you can constrain it further. So you make an element called foo, and then you have an attribute there, which is xsi:type, and in that attribute you declare the XML Schema simple type that that field shall have. Right. Yes. And I guess, mentally, I think that collides with my efficiency thing. Yeah. Yeah, I get it. It's exactly equivalent to what you have in JSON, right? In JSON, you have types implied by the way you format a string. And with XML, you have an explicit type system that you invoke, effectively, via an attribute, to declare what that text is supposed to be. It runs long, but that's XML, right? Yep. Yeah, I get it. Does it differentiate between URI and URI-reference attribute types? Yes. Cool. So I'm not sure I followed where you guys landed on that, but I'm wondering why it doesn't look like this. That's essentially what they're saying, but — so, for instance, the id element would be decorated with an attribute that says xsi:type equals xs:string, or something like that. So only extensions would have to do that. Yes, anything that's not defined would have to do that. Yes. And so there's an implied type: they're all implied to be strings, but if they're different, then you have to go and declare it. And that's the same with JSON, right? Everything is a string unless you make it otherwise. Yeah. So, I mean, I can go with that — it's just a small tweak — if people think that's a good direction. It looks more natural.
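The xsi:type mechanism described above lives on the instance document, not the schema. A minimal fragment, with a made-up extension element name, might look like:

```xml
<!-- Hypothetical extension element; "myextension" is an invented name.
     xsi:type asserts the XML Schema simple type of the element's text. -->
<myextension xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:type="xs:int">42</myextension>
```

Without the xsi:type attribute, a consumer would have to treat the value as plain text, which is why the discussion frames it as the way to keep type information for untyped extensions.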
It does definitely look more natural from a document perspective. Yeah. It's definitely more structurally following the CloudEvents model. From an aesthetic perspective, it looks a little odd having CloudEvent being Pascal-cased and then id being all lowercase. It's kind of ludicrous, but yes, I suspect that would feel more natural when writing documents. Would everyone be happy enough with that? Yeah. That's fine. Well, I like consistency, and the fact that we lowercase everything anyway for our attributes — that seems consistent to adhere to, right? Yeah. Okay. I know, Jim, you said it's all machine-readable anyway and screw the user, but I actually like the idea of a user being able to look at the output from JSON and then XML and be able to say: oh, I can naturally see the mapping back and forth between the two. I think, if anything, it's only going to make life easier for someone who actually wants to code this up. Just like that. No, fine. But I would still be looking to put all the extensions in a bag. Oh, you're going to kill me. So you want a bag around this, dot, dot, dot. Yeah. Well, I mean, otherwise, what's a parser meant to do? How does it know what it's looking at? I guess what we're really saying — and maybe it's what Clemens was saying — is that anything after data, maybe, you treat as an extension. And if it doesn't match the model you have for an extension, then it's illegal. I would be reticent to differentiate between built-in attributes, built-in optional attributes, and extension attributes, for the same reason we've got elsewhere: that they might be adopted later on. So for the ordering, I would prefer to see all attributes come before data and include type information where appropriate. Do we reserve data, as in thou shalt not use an extension attribute called data? Because I know it used to be an attribute and isn't now. Oh, I see what you mean. Yeah. What we do in JSON, we can just as well do here, right?
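Under the direction the group seems to be converging on — lowercase element names matching the spec attribute names, all attributes before data, xsi:type where a value isn't plain text — an instance might look roughly like this; the root element name, namespace URI, and extension name are placeholders:

```xml
<!-- Illustrative instance only; not the finalized format. -->
<event xmlns="urn:example:cloudevents-xml"
       xmlns:xs="http://www.w3.org/2001/XMLSchema"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <specversion>1.0</specversion>
  <id>A234-1234-1234</id>
  <source>/mycontext</source>
  <type>com.example.someevent</type>
  <time xsi:type="xs:dateTime">2021-04-01T12:00:00Z</time>
  <!-- a hypothetical extension attribute, typed because it is not text -->
  <sequence xsi:type="xs:int">42</sequence>
  <!-- data last; binary payload flagged via base64Binary as discussed -->
  <data xsi:type="xs:base64Binary">Q2xvdWRFdmVudHM=</data>
</event>
```

This mirrors the JSON format's shape closely, which is the "natural mapping" benefit raised in the discussion.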
I don't think there's a need to go and do things much differently than we do in JSON. In JSON, we just throw it onto the object. Exactly. I don't like the idea of treating anything as special, because otherwise all we're doing is revisiting the entire bag extension discussion all over again. So in the JSON model, you have a separate tag for binary data. So what you're saying is: have a tag for binary data and have another tag for data, which is either an XML document or text. We have a type. So the difference between JSON and XML is that in XML we actually have a type system here for base64 data — base64Binary. So, xsi:type. Oh, you would use xsi:type for that. Okay. Yeah. So in XML Schema there is — I have to go read this stuff again, but yeah — there is a base64 type declaration. Yeah, there is, because I've used that already. Okay. Let me have a crack at reworking that. Okay. Okay. Technically, we're out of time. Good discussion. Somebody suddenly showing up with XML at this point was not what I was expecting, let's say. Yes. Okay. CBOR next, Clemens? So please comment on the PR itself directly, and then we'll see how much work Jim can get done for next week. Sorry, Doug, I would just say maybe hold off on the comments until I've had a chance to iterate on it. Okay. Sounds good. Okay. Did I miss anybody for the attendee list? I don't think so. I think I just — Simon took off before I got a chance to ask him. Okay. In that case, if you are not doing discovery, you are free to go and have a good rest of your day. Otherwise, please stick around if you're doing the discovery interop stuff. Bye, everybody. Okay. We've lost everybody we're going to lose. So where are we with respect to actually doing testing here? To be honest, I'm getting a little worried. I'm not going to imply people are slackers or anything; I know everybody's really, really busy, so people haven't had time to do it.
For example, I know Scott wants to do stuff, and I pinged him about it yesterday, but he said he's really, really busy on the Knative 1.0 release, so that's keeping him way too busy to work on this. But does anybody else want to chime in on where they are relative to coding up their stuff, so we can think about starting to test anytime soon? So I can say that I don't have an endpoint published in this document yet, but I have an implementation that is wrapping, effectively, a complete Azure resource group. And you can discover all the events that are being raised from those Azure resources, and you can go and subscribe to all of them. So I have an effectively complete shim with a Discovery API and a Subscriptions API around the native eventing capabilities of Event Grid. That's something that has been sitting around for a while now, and I just haven't been able to go and publish the endpoint, because the last bit of work is that I need to go and create a resource group, and I need to create a thing that actually does stuff — I need to go and create some blobs and so on, to make the thing active so it actually raises events. So I need to write an app for the shim. I haven't done that yet. I promise that I will do that at the beginning of next week, so I will publish the endpoint and you'll be able to go and interact with it. The thing I'm worried about, frankly, is access control. I'm not sure yet how to deal with that. Do we have... I don't know how much we can constrain those things. Yeah, we haven't talked about that, but the other thing I was going to ask you is: it sounds really cool from a random implementation perspective, but what does it mean in terms of testing? Because one of the things that we talked about in the past, from a testing perspective, is getting a little bit of consistency in terms of what people can expect in the endpoint itself. So will you be able to have discovery endpoint services that match some of the stuff we've talked about in here?
I am exposing, effectively, a real thing that has this Discovery API and the Subscription API. It's effectively proving how that may look in the product. Right. As long as the services you expose are not fixed to be real Azure services, and you can put in there something like a ping service or something like that, then I think we're good. You will be able to hook up a custom topic, and that custom topic can then go and raise whatever events. Yes. Okay, cool. So as I said, I have all the basic code. I have some stuff to finish, but I've done a substantial amount of work, and so I promise you that I will have that available for consumption by this time next week. How about that? Okay, cool. Okay, that's great. Yeah, because I need to do some more work on mine, but I think mine is pretty far along, and I would love to have somebody else to hook up to and start chatting with. So that'd be good. We can maybe sync up next week. Yeah, cool. Okay, anybody else on the call want to chime in? I don't think we have any of those folks, unfortunately. Okay. Okay. Okay. Is there anything else we need to discuss then? I'm hoping, Clemens, to be honest with you, that if you and I start having our implementations talk to one another, that puts pressure on everybody else to try to find time. So that'd be good. We can do that next week. We're going to pressure them all on compliance. That's right. I could make some political statements, but I won't. Okay. Okay. Okay. Anything else people want to bring up relative to the discovery interop at all? Otherwise, we will adjourn really early. Yeah. All right, cool. Say it again. Which means I get to go to the next meeting, which is already going. Yeah. Okay. Cool. In that case, have a good day, everybody. Talk to you again next time. Bye. Okay, bye, everybody.