about that probably shouldn't be applicable anymore based upon other changes to the spec. So yeah, I'm sure there's plenty of stuff. Yep. All right. Three after; let's go ahead and get started. Let's see, updates. All right, community time. Anything from the community you want to bring up? Nada. Okay. SDK. Scott or Clemens, I know we have a call scheduled for after this one, but is there anything you want to mention on this call? Or anybody else from the SDK work? No. All right. It's in progress on the conformance invoker part of the test, but that's about it. Okay, cool. If you have any questions about that, feel free to join the SDK call right after this one.

Incubator status. We're still scheduled for September 17th. The proposal PowerPoint deck is still here; I have not uploaded it into the agenda yet. I was going to do that later this week. I believe we're technically ready to go. We do have three end users, but we can always use more; the chart looks kind of small with just the three. So if you do have any you want to mention, please let me know. Other than that, I do believe we are ready to go. So fingers crossed.

I did create an outline doc for the two sessions at KubeCon North America. It's pretty much what we agreed to before, so please feel free to look at that and edit it as you see fit in terms of adding topics. And of course, if you do want to talk to one particular section, stick your name there. Don't hesitate to put your name next to something that already has someone else's name on it; we can arm wrestle later over who's actually going to do it. Just having a list of people to choose from would be nice, so feel free to add your name as you start thinking about these things. We still have time to work that out; it's not till November. I think that's it in terms of administrative stuff. Anything else before we start talking about PRs?

All right, cool. In that case, Clemens, which PR would you like to talk to? The one you just opened, or the one that's been there since Monday or Tuesday? So there are two that are related. And I think after the discussion we had on the PR, I was convinced, which occasionally happens, that we should pull that entire concern into the JSON encoding. And to make it not as weird as it would be if we were introducing an extra attribute just for JSON, I chose to change the attribute. So I would like to talk to 492.

What I did here is I effectively removed data encoding completely, contrary to what we said before; I removed that entire concept. And in the mapping, so this is in the JSON format, what I've changed is basically this: if the data is a string, then the member must be called data, and the extra rules for clean JSON mapping also apply to that field. If it's binary, then the member name in the JSON object must be data_base64. That's how we distinguish between the two. If it's data, it's either inline JSON or it's a string, because those two cases are indistinguishable. And if it's base64-encoded binary, then the name changes.

How I got there was the comment that Evan made on the prior PR, 491. He said, well, if you would do this for other text formats where we have the same problem, then you would probably use an annotation in YAML and you would use an attribute in XML, which speaks to me.
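For illustration, here is a minimal sketch of the two mutually exclusive shapes the mapping above describes, assuming the member names from PR 492. The event types, payloads, and TypeScript framing are invented for this example, not taken from the spec:

```ts
// Inline JSON (or string) data uses the `data` member:
const textEvent = {
  specversion: "1.0",
  type: "com.example.order.created", // hypothetical event type
  source: "/orders",
  id: "A234-1234-1234",
  datacontenttype: "application/json",
  data: { orderId: 42, status: "created" },
};

// Binary data is base64-encoded and uses the `data_base64` member instead;
// the presence of that member name is itself the signal that the payload
// is base64-encoded binary:
const binaryEvent = {
  specversion: "1.0",
  type: "com.example.image.uploaded", // hypothetical event type
  source: "/images",
  id: "B234-1234-1234",
  datacontenttype: "application/octet-stream",
  data_base64: "9Z+7nw==", // base64 of some raw bytes
};
```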
And so I'm like, okay, if we make something that's specific to JSON, which in effect this is, then it should be somewhat similar in function to an attribute in XML, rather than introducing a whole new attribute inside the event that sits in parallel but describes data only in the JSON case. That seems a little strange to me. If it's not a general thing but really only applies to JSON, then the JSON encoding should be the thing adding the extras. So that's the solution here. If you scroll a little bit further down, that's how this would look. Basically, yes, like that. Instead of data, if it's base64-encoded, it's simply called data_base64. And by implication it can be either the one or the other. And then in our main spec, the entire concept just goes away. So datacontentencoding is gone, and the related comments as well. It purely becomes something that lives inside the JSON event format, and there it's, if you will, a flag on the data field, expressed by naming the data field accordingly when it contains base64-encoded data.

So, question. If I get a JSON CloudEvent, how do I know which of the two to look in, the data or the data_base64? They're mutually exclusive. Sorry? They are mutually exclusive; you can only have one of them. Okay. Yes. But sorry, I think the question still stands. How should I know which one to expect to be there? Is there a mapping from a content type or something like that? Well, if you get a data_base64, that implies that your data is binary, and it's base64-encoded. And then you still have to reconcile that with whatever the content type is, right? So if your content type is text/xml; charset=utf-8, then you have to take that base64, turn it into binary, run the UTF-8 decoder over it, and then decode the XML.

Anybody else have a question or comment? I'm generally favorable on this one. In the specific case of JSON, which is where we live all the time, it's better if you don't have fields whose type can sort of randomly vary. So this would probably facilitate the task of mapping this into POJOs or other equivalents. So, sounds good.

I have a question; I raised my hand. I think you may need to add a little bit of text in here to make it explicitly clear that you can't have both data and data_base64, because I don't think it says that yet. Somebody may put both in there, and I want to make it clear that having both in there is an invalid CloudEvent, right? Okay. Yeah. But that's just a wording change. My other concern, and I guess it's related to what Tim was saying, is this: obviously I haven't looked at this till just now, and obviously I haven't coded it up, but from a coding perspective I'm actually concerned about the exact opposite of what Tim said. I wonder whether it's actually harder for people to know where to look, especially if you're writing a generic CloudEvents processor, because now you have to look for both. And I understand some of the concerns about having a sort of modifying property that rides alongside things; a rough sketch of how a consumer would handle this follows.
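A rough, non-normative sketch of the consumer-side logic discussed above: reject events that carry both members, then reconcile data_base64 with the content type. The function and helper names are invented, and it assumes Node.js for the Buffer base64 decoding:

```ts
interface JsonCloudEvent {
  datacontenttype?: string;
  data?: unknown;
  data_base64?: string;
}

function extractData(event: JsonCloudEvent): unknown {
  // The two members are mutually exclusive; having both is an invalid CloudEvent.
  if (event.data !== undefined && event.data_base64 !== undefined) {
    throw new Error("invalid CloudEvent: both data and data_base64 present");
  }
  if (event.data_base64 !== undefined) {
    // data_base64 always means binary data; base64-decode it first...
    const bytes = Buffer.from(event.data_base64, "base64");
    // ...then reconcile with datacontenttype. For example,
    // text/xml; charset=utf-8 means the bytes are UTF-8 text
    // that the caller still has to parse as XML.
    const charset = /charset=([\w-]+)/i.exec(event.datacontenttype ?? "")?.[1];
    if (charset) {
      return bytes.toString(charset as BufferEncoding);
    }
    return bytes; // opaque binary, hand back as-is
  }
  return event.data; // inline JSON value or plain string
}
```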
But I don't think that's that weird of a thing, because you kind of have that already with things like HTTP headers: your content type tells you how to interpret the HTTP body, right? And that can vary based upon the value. We don't have two different types of HTTP bodies, one for binary, one for readable text, that kind of stuff, right? So I'm a little nervous about that. I don't think it's necessarily a showstopper, but I have to admit it does make me a little nervous to have a changing attribute name based upon its type.

So the alternative to this is 491, which is the rename we talked about: basically renaming datacontentencoding to dataencoding, simplifying that. And that basically removes all the references to the prior naming. I cleaned it up even more than we discussed: I removed all the references that could confuse people with, what is that called, the Content-Transfer-Encoding of SMTP, which I was referencing because of base64. There was content encoding and transfer encoding, and people had all kinds of ideas. So I just removed all that, point to the standalone base64 encoding RFC instead, and then clarify that reference in the JSON spec. So this is what we have right now in the spec, but made a little bit tighter and related only to the data field, without there being a chance of people getting confused about the various transport fields.

So I'd like to ask some folks who are actually implementing this stuff, and I guess in particular the SDK guys: what are your thoughts on the two PRs? Don't make me name names, because I will. Okay, Scott, I'm going to pick on you first. Yeah, I already have to do kind of a funny dance to understand whether the data payload is the encoded version or the unencoded version. So having two different places that the base64-encoded data is supposed to live actually makes it a little easier for me to implement this. Okay, that's good to know. Thank you.

One question I do have, though. The SDK will probably help you decode the data into the binary version of whatever it is, and I find it useful to cache that. So I've been sticking it back in the data, and I have a flag internally in the in-memory object that says this is currently in the state of either encoded or decoded. And then if that event goes back out, the codecs know that they should turn it back into base64, based not on a content encoding type but on that internal state, which doesn't get sent. So is there a similar mechanism in this? If I have both data and base64 data, which one do I use? Well, you need to know whether your data is binary; I think that's the decision. The encoding is what you do when you flush it out to the wire.

Look, Scott, let me ask you a question about that. Do you plan on exposing two different attributes to the user of the SDK, or just one? I have to read this a little more carefully, but I've been exposing helper methods, so the user gets to ask: please give me either the raw version, or a version that's been perhaps unmarshaled into some other structure. So, Clemens, what's your thought on this? How would you expose this to an SDK user? In the SDK, I would make the difference between the two go away. This is just a wire thing. So if data_base64 shows up, I base64-decode it, stuff that into a byte array, and the byte array becomes the value of the data property.
And then if data shows up, it becomes an object graph. In the simplest case it's a string; otherwise it contains JSON data, a JSON graph. So it's either a set of JSON objects, complex or not, or it's binary data. But you can tell effectively by asking what data is and doing type inference, et cetera. So that makes it easy.

So I want to make sure I understand the flip side. The user has a CloudEvent object that they're stuffing bits of metadata into, and they stick something inside of data. You would dynamically check whether that data object has binary stuff in it or just straight text, and decide whether to use data versus data_base64 based upon that? If you set a byte array on data and you render as JSON, I will stick that in data_base64 and make it base64. So you walk up to the CloudEvent object, you go to the data property, and you assign a byte array to it. Right. Then under the covers, as that gets flushed out to the wire, that data gets base64-encoded and put into the data_base64 field. And if I read that event back in through the JSON decoder, the decoder takes a look at the document, sees data_base64, runs the base64 decoder over it, gets a byte array, and that's the byte array that I assign back to the data property. That's all I do. Nobody sees that detail difference. If you go into the raw collection I have inside of the CloudEvent object, which holds all the attributes in the raw, that raw thing will have a property called data that contains the actual serialized data.

So that assumes your data attribute is somewhat dynamic, because you can handle any different type. And what if I give you a string that has characters in it that aren't valid JSON? Would you then automatically convert it to base64 as well? If you give me data as a string, I would first follow the JSON escaping rules for that string. I mean, the JSON encoder does that for me. You can put whatever you want in it, and there are rules that the JSON encoder will apply to make sure you can represent your string. That's true. Okay, I forgot about that. Never mind. So this is literally just a flag on data for transport purposes that indicates that the content is base64-encoded. Right. And we're deleting the envelope flag for base64 encoding, right? Correct. That is this change. Effectively, the prior PR, 491, is where I trimmed down that envelope flag to make sure it refers to data only and people don't get confused. And this step here is doing away with that altogether and just making an indicator, if you will, a suffix on the member name in JSON, making it effectively a JSON-only concern. Right.

And this is what, can you go into 491, into the comments? You mean Evan's, right? Yeah, exactly. So Evan rightly says that for a hypothetical XML encoding, you would probably then use an encoding equals base64 attribute on the data element. And there are actually in XML, as Tim will tell you, 400 ways to declare that the data inside of there is base64.
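Here is a sketch of the round-trip Clemens walks through above, under the assumption that the in-memory event has a single data property and the split exists only on the wire. The class and function names are invented for illustration, and it again assumes Node.js for Buffer:

```ts
class InMemoryEvent {
  constructor(
    public attributes: Record<string, unknown>,
    public data?: unknown,
  ) {}
}

function toJson(event: InMemoryEvent): string {
  const wire: Record<string, unknown> = { ...event.attributes };
  if (event.data instanceof Uint8Array) {
    // A byte array assigned to `data` gets flushed out as data_base64.
    wire.data_base64 = Buffer.from(event.data).toString("base64");
  } else if (event.data !== undefined) {
    // Strings and object graphs go out as plain `data`; the JSON
    // encoder's escaping rules take care of awkward characters.
    wire.data = event.data;
  }
  return JSON.stringify(wire);
}

function fromJson(json: string): InMemoryEvent {
  const { data, data_base64, ...attributes } = JSON.parse(json);
  if (data_base64 !== undefined) {
    // Reading it back: decode and assign the byte array to `data`,
    // so SDK users never see the wire-level difference.
    const bytes = new Uint8Array(Buffer.from(data_base64, "base64"));
    return new InMemoryEvent(attributes, bytes);
  }
  return new InMemoryEvent(attributes, data);
}
```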
And so that strikes me as the right way to do this for XML. And then YAML also has a way to declare this, as an annotation. So it seems like if it's not a problem for those two text formats, then we should also solve the problem just as locally for JSON. So that was my opener.

See, what's interesting is, when I was talking to Evan about this yesterday, my assumption was that if we ever did come up with an XML or YAML encoding, we would say that the datacontentencoding flag appears as an attribute in XML, or you encode data this way in YAML. We wouldn't necessarily have to force another top-level property. Yeah. And that's why the top-level property is gone, but I needed to put it somewhere. It's just interesting because with your latest proposal, what you're basically doing is making it a serialization-specific concern. The nice thing about defining it in our spec was that you at least had consistency at an abstract level; it just might appear differently at the serialization level. And now we're saying, nope, we're not even going to define it at the abstract level; it's completely a serialization problem. I was going down that path, commenting on Evan's concern. And one of the arguments is obviously that there was a time when everybody thought XML would be the last text-based format anybody would ever use, and everything became XML and ran by XML rules. And now everybody seems to be a big fan of JSON, and there will be a next thing. So having something that is more abstract and helps with the base64 case in general might be useful. But first, I don't see that coming. Really? Yes. And second, this is not so bad. Okay. So yeah, I convinced myself that the abstraction is not necessary.

Okay, so let me ask this. Clemens, since you bothered putting together the second PR that has the data_base64 attribute, you obviously prefer that one. Scott, I'm assuming your LGTM in the comments, or in the chat, implies you prefer the second one. Anybody else on the call want to speak in favor of the original PR, the one that just did the name change for the most part? Okay, I'm not hearing anything. Does that imply that everybody on the call would prefer the second PR? Okay. Since the PR was just opened today, it would be unfair for us to actually approve it today. However, last chance. I still think we need maybe a little bit smarter text that says you can't have both in there, but I think that's a minor typographical thing. Are there other things relative to this PR that people would like to discuss? Or is everybody pretty much, yeah, this is the way to go?

Isn't there something related to dataref? Like the claim check data? Does that go in, or is that still pending? Claim check. You mean the deferred retrieval stuff? Yeah. I think that's an extension. Yeah, hold on. I think we may not understand you. So what if we made base64-encoded JSON an extension, like dataref? Oh, there it is. I can't see, so I'm sorry. Say it again, Scott, you're proposing possibly what? I'm proposing we make data_base64 follow the same pattern as this dataref extension for JSON. Well, then you're ripping a lot of things apart, because we have datacontenttype, which describes data, and now you're saying all binary data goes into an extension.
So then the question is, what does datacontenttype not refer to? I don't think that works. I think datacontenttype goes away. Yeah, but then you still can't describe, if you have a string, what that string is. You can't signal that. Not datacontenttype, I meant datacontentencoding. I guess I'm confused, Scott, as to why we would make this an extension when sending binary data seems like a fairly core use case. Yeah, I think so. That's the part I'm trying to wrap my head around. Scott, why do you think it would be an extension, or a better fit as an extension? Because they look like the same thing to me, or at least the same technique. I think I may need you to elaborate on that. Dataref is a reference to something that's outside of the payload. That is very different; this is still the data field. The underscore base64 is effectively just an attribute on the data field that says the content inside this data field is base64. It looks like a different field, but conceptually it's the same thing as having an attribute on an XML element.

So do any of the other serializations need to be changed at all to support binary? No, because they all know binary. JSON is the only one. JSON, even though it's popular, is the weakest possible serialization format anybody can imagine. It doesn't do binary. It also doesn't do dates. Interesting. Okay. That's why it's popular. Yes, because it gives people work. Okay. For what it's worth, if we remove this ability for binary mode to send base64-encoded binary data, then it removes a lot of testing that I have to do out of the SDK. So that's nice. Cutting features always removes testing. Yeah.

Okay. So I think what I'm hearing is: close Clemens' first PR, consider the second PR, and wait till next week to formally approve it or vote on it. Is that what I'm hearing from the group? Already on the way to closing the PR. Oh, okay. Cool. Am I correct that these close too, or what can we do with these? Do you remember, Clemens? Correct. I guess not the tricky one. I guess these two go away completely. Number two. Okay. In that case, can I ask, are there any other topics related to Clemens' PRs that you want to bring up?

Okay. In that case, let's talk about Evan's PR, clarifications, kind of where we left off on this one. He has some link errors. However, Clemens, I thought you took an action item to mention that this one would be impacted by Tim's PR that we adopted last week. Well, we had this whole discussion around the datacontenttype stuff, but all of that no longer applies. This is the datacontentencoding; that's wrong now. Yeah. So this stuff, yeah. Okay, there's a lot of changes that need to go here if we adopt number two. That is affected. And then there was, I think, some stuff with the character encoding. Yeah, I think he did some of this stuff right here. Yeah. Because Tim clarified what the principle there is, and I think this is also in conflict. Okay. So I wonder how much of this changes; let's scroll down once more. Yeah. Clearly, my guess is that we have meanwhile addressed most of the things that are in this PR. Okay. I don't think it's necessarily useful for us to walk through it right here. Could you comment on this?
I'll talk to Evan after this call and say it looks like we're headed towards your second PR today, so that's going to impact, obviously, the datacontentencoding. But can you make a comment on why this section might need to change? Yes. Okay. Yeah. Okay. Cool. So I think this actually still applies and needs updates for Clemens' number two. Okay. Cool. Anything else? Sorry about that. Not a problem. I know you're busy. That's fine. Thank you.

Now, Kristoff, yours. I apologize, I only briefly skimmed it since it was so new, but it seemed like yours was mostly syntactical fixes. Is that true? Yeah, that's a good way to describe it. So we removed map from our type system, but we still had it in all the event formats, so I removed it there. And we added the URI, so I also added that to all the formats. And then I did one more thing in the spec: I inserted another header called type system. We had this before and we removed it, but all the event formats basically say please refer to our type system, and, for example, the JSON one even links back, and those are relative links that don't really work anymore. So I think it makes sense to have a header there. That's just a minor thing; I can also remove it and then fix the link in the JSON format. Yep, that's it.

Okay. Since this is strictly syntactical, does anybody have any concern? My only concern is I'm wondering if that heading should be a level three instead of a four, but that's a minor thing. Yeah, I wasn't sure about it either, but we have the protocol stuff; I can also make it three. Yeah, just because it's under this one, right? So that's why I think it should be three. Yeah, makes sense. Okay. But did you say that the relative hrefs in the other specs are broken and the link checker did not catch that? Well, I think the links still work; they just point you at the document itself, at the top of it, and not into a subpart of it. Ah, got it. So I don't know if the link checker does catch that. Do you want to update those links so that they actually point to this section instead? They do now. Oh, I'm sorry. With the same title. Oh, got it. Okay. In that case, any objections to making this a three and then adopting the PR? It seems strictly editorial. Okay. Cool.

So, whoops, wrong one. Okay. This one's just never going to go away. Hold on a sec. I think this one needs to be updated based upon all the stuff we talked about today relative to Clemens' PR. Let me just double-check. Yeah, he needs to do some updates, so we can't do it. Okay, so let's skip that one.

All right, so let's talk about timelines. My hope was that we'd resolve PRs today. Obviously we're not there yet, so this may be pushed out by a week. But I was wondering if maybe next week we could approve, let's say, Clemens' PR, and maybe Evan's PR about the tricky use cases stuff, and this clarification in the binding, which I don't really think changes anything normatively. If we can get all those PRs in there, say by Monday or so, so everyone will have a chance to review them, what do people think about voting on next week's call to approve that as 0.9, which is technically release candidate one for version 1.0, and then giving us two weeks to resolve any outstanding issues that people find as they review the docs? Start a vote on September 19th, and do an offline vote, since it's obviously a very big decision.
Close it on September 26th and call it done. And then we can make an announcement at KubeCon. That obviously gives us all of October, if for some reason we can't meet these deadlines, to push it out, but I'd rather start off being aggressive. What do people think? Let's do it. Any other comments? I have a super minor comment. We wanted to call it, let me check the roadmap, 1.0 minus RC and not 0.9. Otherwise, I fully support this. 0.9 is very weird because we'd be jumping from 0.1 to 0.2 to 0.4 to 0.9 to 1. I'm very much in favor of 1.0 dash RC1. There you go. Okay, that's a minor change to me. Any other comments? It's too easy. Okay. Cool.

In that case, looking at other issues. Kristoff, I think you said this one obviously would go away if we approve Clemens' PR number two, right? Yeah. Okay. Webhook: Clemens, I think you and I still need to talk about that. I don't think that changes our core spec, so we don't need to worry too much. And the SDK, that's going to be talked about in the SDK call. Let me just double-check here. In terms of issues for version one, Clemens, your PR addresses this one completely, right? Okay, I guess we already talked about that; in your mind it's true, so I'm going to assume it is. So I think your PR addresses all three of these. That one. That one. Okay.

This is the big one, then. A long time ago, Thomas from Google asked, basically, what's the criteria for going 1.0? And as of right now, this may be old, but I believe we have at least these implementations out there. And I think the list has actually grown; if you look at the incubator proposal for our work, I list quite a few more things in there, so this list is actually quite short in comparison. For example, I know Red Hat has some products, Oracle has one, and I think there are a couple of others out there. So my question for the group is: does the current set of implementations of the spec satisfy people's conceptual definition of exit criteria? Or do we need to add things to the list? Should I assume silence means people are okay with it?

Is there the possibility of demonstrating interoperability or something like that? I mean, the fact that all these implementations exist is a fine thing, but do they actually talk to each other? We do have demos that we've done in the past. To be honest, I'm not 100% sure how much they show interoperability, though they do show everybody can receive CloudEvents and process them correctly. Do you have some sort of demo in mind, Tim? Because we could look to put together something for KubeCon. Granted, it's a short timeline, but we could try. I could probably volunteer to serve as a large-scale source of CloudEvents in the not-too-distant future. That's a tease right there that I like. Go ahead, someone was going to say something. No, that's great. That makes me happy. But it would really be 1.0 only, right? I mean, a lot of these other things are not 1.0, so I worry a little. Yeah, I would assume. Go ahead, Thomas. Yeah, so on our side, we've effectively been waiting for 1.0 to lock so that we can start updating our stuff. So from the product side, there will be action this year, but I don't think we'll rush it, because now we're getting serious about the implementation stuff, and that's part of it. So, Tim, when you say that you'll be able to generate CloudEvents, is there a particular product you have in mind that's going to be generating them, so that we know what type of events are being sent?
Let's just leave it at: I think we would probably be in a position to generate large numbers in a way that would be easy to flow to any destination that wants to look at them. Yeah, the reason I'm asking for a little more specifics, and if you can't, that's fine, is that if we did want to put together some sort of demo, we'd have to know the shape of the events coming in so people would know what to do with them, as opposed to just, yes, I got it, I can parse it, right? So, okay, if you can't say anything, that's fine. Well, I mean, yes, go look at the things that flow through EventBridge, which is an existing product. So the assumption is that, assuming CloudEvents stabilizes, and, this is not a promise, okay, I'm not making a promise, assuming things go the way I'd like, we would probably be able to provide all of those, of which there are millions per second, in CloudEvents format. And there are a couple of hundred different payloads.

Okay, so I feel like there are two questions still lingering. One is, do we need more to meet exit criteria status? And two, do people want to look at a possibly new demo for KubeCon? I think more valuable than a demo for KubeCon would be more work on the conformance test framework. Yeah, I actually agree with that. Okay, and how can we move that along? I know, Scott, you mentioned someone at Google possibly helping you, but what other things are you thinking of in terms of asking for help? I mean, if people have more experience writing conformance test frameworks than I do, please come help me, because I'm making choices that you may or may not agree with, so I need help. Okay, so obviously if you're interested in that, please join the SDK call right after this one.

So let's go back around to the first question I asked: exit criteria. Do people think we need to add more beyond the current set of implementations that we know about, and possibly the conformance test suite that Scott's mentioning? I'm going to assume silence means people are okay with the current state of things, in which case I'm tempted to close this issue. Does anybody disagree? Okay, let me ask it more formally. Is there any objection to closing this issue, with the assumption that our current state of implementations and testing is sufficient for exit criteria? All right, not hearing any objection, I'm going to do it. Hold on here. Okay, I'll fix it; it's Thomas's issue to close.

Okay, in that case, are there any other topics people would like to bring up? Should we define a process for 1.1, or for what happens if we discover an issue with 1.0 after it's officially launched? We can start that discussion, sure. Let me just do one quick thing though. Ginger, are you there? Ginger? What about Klaus? Yes, I'm here. Okay, Javier? Yes. Okay, and John M, is that the same John Mitchell? Okay, Ginger, are you there yet? Yeah, this was John. Oh, that's what I thought. Okay, cool. All right, so Ginger, if you come back, just ping me. I'm here, I'm sorry. I had to jump into another call; it came right when you asked me. Okay, not a problem.

Okay, so let's talk about the other issue that we just brought up: how to handle changes going forward. Can I make a remark or two on that? Yeah, please. We've discovered that once you actually start shipping these things at scale, huge inertia develops almost instantly.
And aside from adding new fields, it is insanely difficult, verging on impossible, to change anything. So, you know, please do not relax our vigilance in the hopes that, oh, if there's something wrong, we can fix it in 1.1, because the chances are substantial that we can't. I agree. Yeah, and that's why I think the two- or three-week review period that we're going to have here is critical. Yes, but I believe the overall question is still a good one. And honestly, from my perspective, I don't think we can necessarily lock down too many things until we actually start seeing issues. Because if all the issues are syntactical in nature, obviously we can do those as long as they don't change anything normatively. If someone were to find something significant enough that we thought we couldn't change it without breaking backwards compatibility, then we're just going to need to decide: okay, do we ignore the issue, or do we go for version two? I don't think we can make that decision until we see how big of an issue it really is. So I'm inclined to say we can't answer too much until we get there, until we see the issue itself. I'd like to note that there has never been a JSON version two. And there is an XML version 1.1, which is universally ignored. So, yeah.

So what do people think? Is there some sort of formal process or something we need to define now? I mean, we already agreed that we're doing semver; that kind of limits our process a little. Glad that was you that asked the question, right? Yep, that was me. Yeah, was there something specific you were hoping to define? Nope, just wanted to start a discussion around it, basically, to make sure we know what we're getting into after we release 1.0. And it sounds good to me: we get a version out, we're going to actually get some usage, and we're going to see what this develops into. Yep. The only other comment I can make is, do we want a longer, you know, preview period, where this is adopted and supported by multiple tools and then we see? Is there any point in that? I don't really see it. So I don't know. That's why I wanted to open the issue, to see if other people think this needs to be discussed now or not.

Sorry, I was on mute. I was assuming that on September 19th we would start the vote. And if people have concerns, large or small, about starting the vote because the spec isn't ready, I expect them to raise those by then. And like I said, we still have a whole month before KubeCon to work through any of those issues. But this becomes a very important step for the entire working group, right? Now's the time to look at the spec with a fine-tooth comb, because, as Tim said, making changes later is going to be darn near impossible. So it's on everybody to do a good, thorough review.

Okay. In that case, we can end a little early, unless there's any other topic people want to bring up. Okay. In that case, I think we're done. Oh, I did want to mention one thing. Chris Aniszczyk pinged me earlier in the week, asking about our version 1.0 status, basically hinting that they would love to make some PR noise around this in the not-too-distant future. So there are obviously people, within at least the CNCF, aside from our little group here, that are very anxious for this thing to go forward. So I thought that was a very nice little sign that it's not just us on this call who are interested.
There are other people who definitely want to see this thing move forward, so I thought I'd mention that. And the CNCF is eager to make noise around this, which is great. All right. In that case, unless anybody has anything else, I believe we're done, and we'll resume in about 10 minutes for the SDK call. All right. Thank you, guys. Okay. Bye, guys.