All right, we'll catch back up with those guys later. Everybody just stepped away for a second. It's three after, so why don't we go ahead and get started? Oh, OK, got Tim. Those pesky mute buttons. All right, let's see. Where are we? All right, community time. Anything from the community that people want to bring up? I don't see anybody new, so I'm guessing no. OK, moving forward. A couple of key updates since we didn't have a call last week. Just a slight update for KubeCon China. Kathy is not going to be able to make it, so I will be presenting both sessions, unless somebody else happens to be there who wants to join in. But I haven't heard from anybody, so I'm assuming that's not true. And the slides are not quite yet available, but they are due June 10th, so I'm hoping to make that, or do that, this weekend. Then you guys can take a look at them and make sure it didn't go off the rails. All right, so TOC call this week. The topic of what three independent end users meant was brought up. And the agreement I got from the TOC was that it is users of the products that implement our spec. It is not just three implementations of the spec itself, which is kind of, I think, what a lot of us expected anyway, but it's good to have confirmation. And I asked if they are actually going to use the honor system — is it going to be enough to say, "I have customers," or do they actually need names? And they said, ideally, they want names as proof. If for some reason the names are confidential, then we can work out some mechanism to get them that information offline, so it's not public. But they do want confirmation that it is three end users of products that implement the spec. Oh, and the other question was, somebody asked if there's a version requirement that we have to reach before we go to Incubator. And the answer was no.
The governance documentation doesn't say anything about version numbers for any of the levels, including being a graduated project. I think there's an implication that you should at least be 1.0, but the governance doc doesn't actually say it. But for Incubator, we're okay being beta, or whatever you want to call us, okay? So, anyway, any questions on that? What does "user" mean? Does that imply production? It just says they're using it in production. Okay. I'm sure we have people who use it in production. Yeah, I'm pretty sure we can meet that criterion. So, what I was gonna do was wait until after we get 0.3 out the door and then come back around and ask the group if we want to try to go forward with being an Incubator project. I figured we should wait at least until 0.3 since that's our next milestone, okay? Doug, the announcement you sent about Adobe using it — is this actually a production thing? Yeah, I assumed it was. That's what I was hoping. That's why I was so excited about your notes, yes. I mean, we've been shipping eventing with it and I'm sure that people are using it. So, it's just a question of what names we give them. Exactly, yep. All right, cool. So, like I said, I think we can revisit going to Incubator status after we get 0.3 out the door. I think it makes sense to wait. And with that, before we jump into PRs, are there any other high-level topics people would like to bring up? All right, in that case, Clemens, I believe you are up. I was... I'm trying to remember — did you make any significant changes? I did not. No, this one. Yeah, I guess I have one question for you. This "must be settable" aspect. Yep. That to me, as I was reading it, kind of implied we were dictating what SDKs have to do. And I was wondering whether you would consider that to be out of scope for the CloudEvents spec itself?
Well, what I mean by settable is that you should be able to... So, effectively, what this type system definition that I have now implies is that there's always a conversion. There is a canonical way to express a value of that type, and we have a canonical string representation for every single one of them. But you should use the best native representation in SDKs, but also in protocol mappings. So, the time value should be a timestamp in AMQP. So, setting means, broadly, how do you map from a field over here to a field over there. It's not necessarily being prescriptive about how your API should look or how your stack should look. But effectively you should always be ready to set whatever that value is in your native stack with a string type, and then be ready to do that conversion and do the check at that point. And you should always have a way to go and convert that value that you have in your native type system back into that canonical string. Okay, I think that helps me. So, that's what I mean by that. What I'm not restricting in that text — and I think I have a comment about this — is that it's perfectly okay for you to go from a Go SDK, which has a timestamp, to AMQP, which has a timestamp, and never use the string type at all. That's fine. It's just that when you have a mismatch, when you can't map, then you should be able to go to a string, and from that string to the other native type. So, for instance, the example that I brought up in the comments was the Unix epoch, which is widely used to represent timestamps. And of course, the C# DateTime has an epoch that starts at a different date. So, one safe way to go between those two is to map to RFC 3339 and then map back, and then you don't have any issues with that. Okay, that helps. Thank you. Let me just check and see if there are other comments left in the issue itself or the PR. I don't think there are any unaddressed. Take a look at the comments.
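Clemens's Unix-epoch example can be sketched roughly like this — a hypothetical pair of helpers (not from any SDK) that round-trip a native timestamp through the canonical RFC 3339 string form:

```python
from datetime import datetime, timezone

RFC3339 = "%Y-%m-%dT%H:%M:%SZ"

def epoch_to_canonical(seconds):
    """Render a Unix-epoch timestamp in the canonical RFC 3339 string form."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).strftime(RFC3339)

def canonical_to_epoch(text):
    """Parse the canonical RFC 3339 string back into a Unix-epoch timestamp."""
    return datetime.strptime(text, RFC3339).replace(tzinfo=timezone.utc).timestamp()

# Native type -> canonical string -> other native type, with no epoch mismatch:
canonical = epoch_to_canonical(1_559_347_200)  # "2019-06-01T00:00:00Z"
assert canonical_to_epoch(canonical) == 1_559_347_200
```

The point is the same one made on the call: two type systems with different native timestamp representations can always meet in the middle at the canonical string.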
So, just for people on the call who haven't read this, the objection was that there are quotes around strings in headers, specifically in HTTP. And what this does — in a very wordy way — is do away with simply using JSON as the type system, and instead introduce a type system that has the effect of the strings being exactly the same as they would be in JSON, if JSON were used for the datetime and a string and a number, et cetera. And actually, I'm deep-linking into the JSON number definition, and I take the integer part of that number definition for what it means to be a number. The overall effect is that while all the JSON rules stay intact in terms of how you formulate a message, you can literally use a JSON encoder to go and turn your strings into the right format and back, and turn your types into the right format and back. The effect of that is that I make the quotes go away. That's basically it. Just for clarity's sake, for people who may not have read it, do you want to explain how we deal with extensions then? Yeah, so you would define the extensions with the proper types — the types you'd expect, say an integer for your sequence number thing. So you'll define extensions with the proper type. How extensions — how attributes generally — travel on the wire is not so important if you have a way for your type abstraction to turn it back into the right type.
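The "quotes go away" effect can be sketched like so — a hypothetical helper (not the spec's normative wording) that uses a stock JSON encoder for every type but strips the surrounding quotes from strings before the value goes into a header:

```python
import json

def attr_to_header_value(value):
    """Serialize an attribute for an HTTP header using JSON literal forms.

    Numbers, booleans, etc. keep their JSON spelling; strings drop the
    surrounding quotes, which is the effect the PR is after.
    """
    encoded = json.dumps(value)
    if isinstance(value, str):
        return encoded[1:-1]  # strip the outer quotes, keep JSON escaping
    return encoded

# attr_to_header_value(10)    -> "10"  (same as JSON)
# attr_to_header_value("abc") -> "abc" (quotes removed)
```

So the header serialization and the JSON serialization of an attribute value agree character for character, except that strings are no longer wrapped in quotes.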
So what you would do is — I expect that if you care about an extension, and if you care about the semantics of that extension, you will have an expectation about that type. Which means that data arrives to you either in the right type, natively mapped from, let's say, a message format that has a type system — like the integer maps from an AMQP integer to a Go integer — or it arrives as a string. And at that point you pick up the string, but since your extension expects an integer, it will go and do a conversion from that string into the integer. And the rules that we set here — where we say, this is the wire type description — basically mandate that that string is convertible into the integer. So effectively the conversion into the native type system of the programming language happens at the edges, but we're not caring so much about it being the right type on the wire. Right, and just for completeness' sake, I want to explain what happens for unknown extensions at the receiving side. So let's say you have an intermediary, and the intermediary doesn't know the extension, right? The message shows up — the intermediary gets an event via HTTP and sends the event onwards over AMQP. And then you have a receiver, and the receiver now wants to go and evaluate the extension. The way that works is: a field comes in that is supposed to be a date, but it comes in as a string over HTTP. You copy that — because you don't know what it is — as a string into the AMQP message. Even though it ought to be a date, it travels as a string. You send that to the consumer. The consumer now goes and walks up to the AMQP field, expects it to be a timestamp, but finds it to be a string, and it should now be able to blindly apply the conversion rule for datetime — from string to datetime — and that should work, because we have effectively specified for that extension that it ought to be a date.
Someone — the publisher — has put it in there with that intent, and it has been converted on the way because it went over HTTP on the way. But even though it shows up in AMQP as a string, the consumer should be able to go and convert it then. All right, cool, thank you. All right. And so, let me say one more sentence. For AMQP itself, we're actually gonna take that exact same mechanism and push it down into the AMQP spec, so that in the case of AMQP, the stack is already gonna do that conversion. So if you're gonna ask for a field to be a datetime, but the wire type is a string, we're gonna go and do exactly the same conversion. So I'm taking that same mechanism and putting it into two places now. Okay. All right, thank you for the summary, Clemens. Any questions or comments from people on the call? Nothing at all? Okay, in that case, I'll ask the question. Is there any objection to adopting this pull request? All right, a little too easy there. All right, it is approved. Thank you so much, Clemens, for your hard work on putting this one out there. Thank you for accepting this, and that's a PR that I'm a little proud of, I have to say, because it's a bit of a trick, but I think it will work well. Yep, I would agree. Yeah, good job, good job on that. Yeah, those quotes were annoying. So thank you. All right, so just for clarity's sake, or for completeness, Scott, I believe you think we can close out this pull request. Is that true? That's right. And then I know — I don't think — I think this was from Alan, right? No, it's Adam. Adam, sorry, Adam. Yeah, but you think it's all addressed too. Okay, excellent, cool, okay. 396, so we can close. Is there anybody on the call who disagrees with being able to close that PR, and 396 as well? Oops, geez, I can't type. All right, I don't hear any objections. Let me just make sure I have the numbers. All right, 396, yep, okay. Cool, excellent, we'll do that.
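The receiver-edge behavior described above might look something like this sketch (the names are purely illustrative, not any SDK's API): an unknown extension travels as a string, and the consumer that knows the extension's declared type applies the canonical conversion rule when it reads the attribute.

```python
from datetime import datetime, timezone

def read_extension(message_attrs, name, expected):
    """Receiver-edge conversion: if the wire carried a string but the
    consumer expects a richer type, apply the canonical conversion rule."""
    value = message_attrs[name]
    if expected is int and isinstance(value, str):
        return int(value)  # canonical string -> integer
    if expected is datetime and isinstance(value, str):
        return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    if isinstance(value, expected):
        return value       # already natively typed, nothing to do
    raise TypeError(f"{name}: cannot convert {type(value).__name__}")

# An intermediary copied the unknown extensions through as strings...
attrs = {"sequence": "42", "expirytime": "2019-06-01T00:00:00Z"}
# ...and the consumer, which knows the declared types, converts at the edge.
assert read_extension(attrs, "sequence", int) == 42
```

Intermediaries never need this function; they just copy what they got. Only the endpoints that actually understand the extension pay for the conversion.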
All right, Clemens, size constraints. That has been a long road. Yes, it has. You want to bring us up to speed on where you are? Let me hide the comments just for a second. Yeah, exactly. Go ahead and summarize. Yeah, so the last changes that I made were today. I took two sets of comments — hang on, let me just go, one second. So Christoph and Eric made suggestions, and I took some of those. So first of all, I called these size limits instead of size constraints. I find "guarantees" a little strong, and "constraints" maybe doesn't say it either, so I called them "limits," because I also speak about limits in the second line here, in the first paragraph. And then Christoph found the constraint on the publisher redundant, and I agree with that. So I took that out — that was like line 447 down there. There used to also be a rule for publishers that I've now removed. And there was an objection in the discussion from Eric, saying that the "at least" rule here for intermediaries was inclusive and was saying you have to go and forward events of every size. In terms of what I think this means normatively: if you support 64K, you are compliant, you are conformant. But I don't know how to express it better — to say, you know, you have to go and support 64K, but it's okay to support more. And then for receivers, or for consumers — that's the next line — it says you should accept events of a size of at least 64K. And that's a practicality, because there are maybe some devices which are interested in using CloudEvents which will have a problem with those sizes. So I'm making this a little bit more lenient, but effectively, producers should be able to publish events up to 64K safely, which means they will get into and through intermediaries. And then it's still up to each particular consumer whether they want to go and take those events. But the middleware will not go on strike if you publish events of 64K.
And then it's up to a consumer whether it wants to go and deal with events that happen to be larger. And that's the effect of all this. As a producer, if you send an event of 64K, the intermediaries will not stop you. Only the last mile may complain, because they only have four kilobytes of memory and they can't deal with it. But that's the last mile's problem. Okay, Eric, did you want to speak up about your concerns about the wording? Well, I guess most simply, I think we agree that the first statement there above the comments covers events of 64 kilobytes. My reading is that it also covers events that are larger, and there's a MUST on the forwarding of those events. So it was just a wiggle on wording, and I think it's easy to correct. Well, what you seem to be reading is "intermediaries must forward events of a size of 64K or greater," which is not what this is saying. Well, that was my perception of what it was saying. If I'm incorrect, I'll be happy to be corrected on my English, because it's not my native tongue, but effectively I want this sentence to set a lower bound. Okay, yeah, yeah, or rather, right. My understanding is that to get the intent that I think you mean, you would say it should be of a size of at most 64 kilobytes. So the requirement would be for all events up to 64 kilobytes, and then there would be an addition of, kind of, you can do what you want with anything over that. If we change the end of the sentence to "of a size of 64K or less," would that clarify it? That would work fine. Is that still consistent with what you wanted, Clemens? Although you'll want to capitalize the "be." Yeah, we can do that, that's okay. Just for you, Eric. Yes. May I just ask a question here? This is Mehmet. I really don't know what the size of an event should normally be — whether 64 kilobytes should really be the maximum or minimum.
Can you guys give any examples of what is really happening in the actual world? Forget about the spec for a second — in real-life systems, what are the sizes of the events we usually come across? For what we have in Azure on Event Grid — which is all the events that are being sent when you create a blob, or a VM has deployed, or there's a queue available — most of those events are like 1K, 2K, 3K at most. I see. And those sizes are mostly due to very long URLs that are in them. So I thought — I mean, my general thinking is 64 kilobytes is really more than enough. But based on what you said, maybe that's true. I just wanted to see whether I'm thinking correctly or not. I apologize, I didn't see who raised their hand first, but on my list, Tim, your hand is up first. Tim, have you got off mute yet? There we go. On AWS's central event bus, our limit is 256K currently, and we do have customers who run into it. One flavor of events is API calls, and some API calls can take immense lists of arguments to describe, and so on. No matter what number you pick, it will be an irritant for some people. We may like to think that events should be small, but sometimes they're just not. So. Okay, so just to point out, though, nothing in here, I think, prevents anybody from sending very large messages. This is just trying to guarantee some base level of interop — I believe that's the intent, right? Right, Clemens? Yeah, and I think — so Christoph came up with this, and I think the goal was to have a baseline of what would be supported by everybody, and that was the intent of it. Like, an event can be 10 megabytes if all the infrastructure supports it. Right, okay. Christoph, I'm gonna pick on you, even though your hand went down. Is there something you wanna say? I posted a link in the chat — that's the original issue that has a couple of different protocols or products and their size limits.
It's a bit all over the place, but I think 64K is basically the lowest, so we're safe on that. We don't exclude any technology. And that's kind of the point where I would feel "size guarantee" would be a better name, because it tells us better that this 64K is not a lower limit, but just the lowest guarantee that we can give. You can still go above it, but there's no guarantee for you. But that's just me. Okay, Tim, your hand is still up. Did you wanna say anything else? Oh, okay, cool, thank you. Okay, Mehmet, does that answer your question? I think so. What I was basically trying to get at is, do we really need to set a maximum limit on it? But I see — you guys are saying that if somebody wants to have a larger event size, they can still do that. And also the practical numbers are much lower than 64K. So I'm really okay with it. Okay, cool, thank you. Okay, then circling back around to Eric's original question, or concern: are people okay with this slight wording change to line 447, to end it with "of a size of 64 kilobytes or less," just for clarity's sake? Thank you, Christoph. Anybody else wanna speak up? Clemens, I assume you're okay with that, right? I already have it staged. Just need to go push it. Okay. Okay. Anybody else on this particular wording change here? Any comments? All right, not hearing any concerns. In that case, what about the PR in general? Let me hide all comments for a sec. What about the PR in general? Any questions or concerns? Okay, let me ask the question then. Is there any objection to adopting this pull request with the suggested wording change that we have down here? Going once? All right, cool. Thank you guys very much. Another tough one behind us. Thank you guys very much for your patience on that. All right. And I'm pushing it already, so. Excellent, thank you. Oh, let me make a comment there. Approved with wording change. Cool. All right, next one on the list.
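As a rough sketch of what the agreed wording means for a producer — the constant and function here are hypothetical, not from the spec, and whether "64K" means 64×1024 bytes or 64,000 bytes is an assumption of this sketch:

```python
MAX_SAFE_EVENT_BYTES = 64 * 1024  # the 64K baseline discussed in the PR

def is_safely_publishable(serialized_event: bytes) -> bool:
    """Events of 64K or less must be accepted and forwarded by conformant
    intermediaries, so anything at or under the limit is safe to publish.
    Larger events may still work, but only if every hop supports them."""
    return len(serialized_event) <= MAX_SAFE_EVENT_BYTES
```

In other words, the limit is a floor on what the middleware must carry, not a ceiling on what a producer may send.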
This one was mine. This is just a reminder. This is just for the primer, so it's completely non-normative. But it basically gives some guidance around how you should be producing CloudEvents. In particular, it focuses a lot on producers that are not part of the event source directly — so they're acting on behalf of the event source — and how they should populate those fields and stuff like that. I think it also touches a little bit on intermediaries, on what they should or should not do as the message goes through them. And at a high level, I believe I basically say that for the most part, intermediaries really should not touch the message, much like HTTP proxies. It's okay for them to add additional properties, but generally they don't touch things unless there's a very good reason for them to touch a certain property. Because generally these are things that are supposed to be passed through, because the receiver of the message — or the receiver of the event, in this case — really shouldn't know or care that it's going through an intermediary, for the most part. They may add extra stuff just to let people know that there was a proxy in some place, but that doesn't materially change the semantics of the message. Anyway, it's been out there for a couple of weeks now. No new comments. Any questions on this? Okay, going once. Okay, any objection to approving it then? Okay, cool. You guys are awfully quiet today. Okay, wow, that's the end of the agenda in terms of issues that are ready. Let me think if there's anything in here we could talk about. Oh, okay, so here's one. Actually, let me ask: is there a topic anybody else would like to bring up? Otherwise I'm gonna go to this Kafka one. Okay, so I'm not gonna actually talk a whole lot about this, other than to make sure you guys are aware that, based upon the — what was it called?
The partition key PR that went in, I think a couple of weeks ago — that unblocked the Kafka PR, because the authors of this PR thought that was a blocking thing for them. So — I think it was Neil — he made a whole bunch of edits to bring it up to speed, rebased, and took into account that new property. But anyway, he went through and made a whole bunch of edits. It's out there for you guys to review. I don't think it's necessarily wise for us to walk through it here on the call, but does anybody on the call have any comments or questions about this, if they've had a chance to review it yet? Okay. What I'd like to do then is ask everybody to look this over during the next week or so, and hopefully we might be able to get this one approved next week, assuming there aren't any major concerns with it. God, this one's been out there for a very long time. So I know the guys who came up with this would be very happy to get this one in there. It's so old. All right. In that case, I'm trying to think if there's anything else worthy of discussion here. Let me pick on Klaus for a sec. Let's see — yeah, Klaus, you're still there. Is this issue something that you're still working on? Well, there was this other PR we merged a few weeks ago that was meant as a preparation for it. In the meantime, I was busy with KubeCon preparations and everything. Now I can start thinking about this one again. Okay, cool, thank you. I just wanted to make sure I wasn't supposed to close it. Okay, okay. In that case, I guess the only thing we could possibly discuss today is 0.3. What's interesting is, I looked at the governance doc and I don't believe we technically require a week for approval of the spec. However, votes in general do require a week. And I feel like approving a version of the spec is kind of a big deal, and so I honestly don't want anybody to feel like we rushed things past them.
So what I'd like to do is this: once we go through the process of actually merging these approved PRs into master, I'd like to send out a note officially kicking off the one-week — whether you want to call it a review cycle or a formal vote is up to you guys. But what I'd like to do is give people a week to look it over, and assuming no one finds anything too egregious in the spec that can't be fixed later on through wordsmithing, I'd like to close the vote at next week's call and get 0.3 out the door at the beginning of next week's call. Does that sound okay to everybody? It sounds okay, Doug. Well, I have a question. This is Roberto. I'm kind of struggling a little bit to understand how I'm supposed — or everybody's supposed — to keep up with the spec version changes. So when we did the GDPR events using CloudEvents, we used 0.2 because that was the one that was there. But as the spec evolves over the next months, I'm not exactly sure what we're supposed to do to keep up. I mean, our format will still be compliant with 0.3, but should we actually modify our events to say 0.3 at this point? And then when it becomes 0.4 and 0.5 — and obviously when 1.0 ships, we should definitely move to 1.0 — but I wonder if we need to do anything at this point, or is it okay to just keep emitting 0.2? I'm a little lost. Yeah, so I have an opinion, but I'd like to pick on someone else first — someone who has this in production as well — to see what they do. In particular, since you guys were the first out there, let me pick on you: what's been your guys' strategy relative to the version numbering scheme? Okay — I'm not sure I've been paying sufficient attention. Did you bump your version numbers in your implementation of CloudEvents as we changed the version number in the spec? In the SDK, yes. In the product, we have waited out 0.2 because the product schedule didn't align, and we're gonna pick up 0.3 in the coming quarter.
So we're basically on 0.1. Okay, but it sounds like you'd like to upgrade as the spec changes. Yeah, we're gonna upgrade as soon as this locks, and then in the coming quarter we're gonna do an update. The way all this works in our product is we have a schema mapper, effectively, that goes from one schema to the next. So this is just an update to the schema mapper for us. All right, okay. Christoph, your hand's up. Yeah, what I do is that customers basically register and don't only say "I want CloudEvents" but have to specify which version of CloudEvents they want. So right now I support 0.1, yeah. And then, because we also integrate with Event Grid, I'll wait until Event Grid also supports 0.3, and then people can basically choose which version they wanna get, for some time. And then after some time we will remove the old versions, once all people have migrated off of the old version. Yeah, let me just add one thing, because now I've paid a little bit more attention. So what we did, effectively: as you subscribe to a topic, you can choose what CloudEvents version you want delivered. And that's the thing we're gonna change. So effectively, when you walk up to Event Grid and you subscribe, you can say "I want the native Event Grid format," "I want CloudEvents 0.1," or you will then be able to wish for 0.3 or 1.0, and then we're gonna map the event appropriately. So it's a subscribe-gesture thing, where you can go and wish for the version that you want to get. And I assume everybody would default to the latest version if they did not specify it, right? Yeah — so currently, by default, we still use our native format, since we're not locked yet, but we eventually want to get to a place where we offer CloudEvents 1.0 as the native format. Roberto, does that answer your question? Yes, I know, but I mean, like everybody says, it requires a product change to actually make it.
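A subscribe-time version mapper like the one Clemens describes might be sketched as follows — purely hypothetical code; the only spec-grounded detail it relies on is that CloudEvents 0.1 named its version attribute `cloudEventsVersion`, while later versions use `specversion`:

```python
def deliver(event: dict, requested_version: str) -> dict:
    """Broker-side sketch: store events in one internal form, rewrite the
    version attribute to whatever the subscriber asked for on delivery."""
    mapped = dict(event)                      # never mutate the stored event
    mapped["specversion"] = requested_version
    if requested_version == "0.1":
        # 0.1 spelled the attribute differently, so rename it on the way out
        mapped["cloudEventsVersion"] = mapped.pop("specversion")
    return mapped
```

A real mapper would also rename the other attributes that changed between versions (0.1's `eventType` became `type`, `eventID` became `id`, and so on); this sketch only shows the version field to illustrate the subscribe-gesture idea.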
I mean, unless you make it a choice at the event emitter level — which version of the schema you want to comply with — in our case it would be a product change to make the changes at the spec level. So I need to figure out how we're gonna do this. At some point, what we're planning to do at Adobe is to offer all our events at the subscriber level, so you can choose whether you want to get the native format or the CloudEvents format. So I'm still figuring out how we're gonna implement this. Yeah, that's always one of the risks that people run when you support a spec before it reaches 1.0, yeah. Yeah, yeah, so I just wanted to hear guidance on what the rest of the team is doing. Yeah, and in fact, let me pick on somebody else here. Tim, you actually opened up an issue in the serverless repo, as opposed to the CloudEvents repo, but it is a CloudEvents issue. You actually asked a very similar question about AWS's possible support for this and whether you guys should wait, or something like that. Did you want to sort of summarize your concerns? Because I think it's a little... Yeah, I just realized a couple of days ago that I put it in the wrong place, sorry about that. Would it be a good idea if I migrated it over to one of our issues? Yeah, since there aren't any real comments on there yet, I think moving it over at this point would be good. Okay, so I'll take an action item to do that. Okay. So actually, some of the things people just said have perhaps changed my thinking. Because, you know, we would like to support CloudEvents. We're going to be announcing a bunch of event-related stuff later on this year, and it would be really great if we could also say "and we support CloudEvents." But as an old standards geek, I get cold chills at the prospect of promising to support something that isn't finished yet. So honestly, I'm not 100% sure what the best thing that AWS could do is.
Now, I guess one thing we could do is just stay with the native event format that we have and say, oh, and we'll also give you CloudEvents if you specify what version you want. I'll be honest, I don't like that. The reason I don't is that we have huge numbers of people processing millions of events, and every one of them has the field names hardwired into their code. And they really don't want to think about version numbers and different version numbers. And I don't really want to think about, you know, maintaining support for an arbitrary number of old version numbers going forward. So I guess the most important thing I would like to ask is: what's the path for CloudEvents to be finished? Maybe this is something that everybody else knows and I just missed it. But at what point does it become safe for people to start hardwiring attribute names into their code? Yeah, I think that's a good question. And I know — I'm sorry, go ahead. I would want it to be now. Yes — a different way to ask the question is, when do we call it 1.0, actually? Right. So let me just go back over here for a sec. This is a great topic, since we have 20 minutes to discuss it. Obviously, if we go through all the issues and we don't think any of them are worthy for 1.0, that's one criterion, right? But let's see. So technically, I believe we would have done our 0.3, because that's what we're going to be voting on. If you look at what we have for 0.4, we have additional serializations and protocols. I don't believe that's going to change the core spec itself, which means that these things can technically happen after 1.0, in my opinion. Process issues, again, don't change the core spec. So in my mind, I guess 0.5 — okay, clarifications, not semantic issues. So let me go back over here for a sec. We have issues like the one Thomas had, and non-goals, routing.
I have to go back and double-check, but I'm pretty sure there are at least one or two issues out there that might fall into the category of clarifications, or things we run into — like the PR we just closed today about size limits, right? Actual usage of the spec itself, and whether we need to make some changes based upon real-world usage. I think we have some issues around that. So if we can get those behind us, in my opinion, I don't think there's any reason why we can't jump to 1.0 very, very quickly. But obviously, it's up to you guys whether you think some of the real-world experience people have — like at Knative or Adobe and Microsoft — whether that's sufficient experience under our belt to say, yes, we're ready to consider 1.0. And the other thing to keep in mind is we did agree that at some point, when we reach the equivalent of 0.9, we were going to let the spec bake for a little bit of time, to allow people the time to get some real-world experience before we tag it officially 1.0. Now, it's possible that Knative's usage and the other guys' usage counts as that. It's up to you guys to decide that or not. So my opinion is 0.3 is really more like a 0.9, and we just need to decide whether we've crossed all the t's and dotted all the i's. But what does anybody else think? I agree. Yeah, that sounds good. So do you have any type of timeline in your head, Doug, about when this thing could actually happen? I don't want to just make up a number without actually looking at the issue backlog first. So let me do this. Let me take the action item — and actually, I'd like everybody to do this as well, just to make sure my analysis is not incorrect. I'm going to go back and look at all the open issues and pull requests, and pull out the ones that I think we have to resolve before 1.0.
And I think once we have that list, then on next week's call, we should be able to look at it and say, OK, given this list, let's shoot for a target date of x number of weeks and see if we can push for that for 1.0. And maybe it's a small number. Maybe it's a large number. I don't want to pull a number out of thin air without having done the analysis of what's currently known out there in terms of issues. Yeah, fair enough. Yeah. Does that sound fair to everybody? Have a discussion next week about requirements for 1.0? Yep, let's do that. OK. OK. Cool. Thank you, guys. Maybe on the collecting-feedback phase — what some other products do is they call it a release candidate, and then they have a time period where people test it out. And then if there's no big issue, that kind of gets adopted. Yep. Maybe that's a process we could think about as well. Yep. Yep, I like that idea. OK. Cool. Anything else on that particular topic around versioning and stuff like that? OK. Any other topics that people would like to bring up? All right, cool. In that case, I believe we are done. Oh, actually, I'm sorry. The usual bureaucracy. Varun, are you there? Varun? Hi. I think — hey, sorry. I was struggling to get off mute. Not a problem. Mehmet, I heard. Ken Owens, are you there? Ken? Ken? OK. What's going on with my audio? I'm not sure. Oh, there we go. Hey, Ken. Been a while. Glad to be here. I'm here. And Vladimir, are you there? Hi there. I'm here. Yep. And I'm going to butcher this name. Anacolio? Sorry, it's Alex Nikolaou. And yes, I'm here. I would not have gotten that. OK. Is this your first time here? I can't remember. I apologize. It is my first time being able to join. It's been an interesting discussion. Thanks. Yeah, do me a favor if you're OK with it. There's a link in the chat to the doc that I'm looking at. Can you just fix up your name and put your company affiliation — if you want to be affiliated with a company — just for the attendance tracker? Sure, I'm from Google.
Well, then never mind that. Just add your last name when you get a chance. No problem. All right, cool. Anybody else I missed for attendance? Yeah, I'm here. Erika? Oh, Erika. Cool. Thank you. OK. Thank you. Yep. Anybody else? All right, cool. In that case, thank you guys very much. A very productive call. And we'll talk again next week. Thank you. Bye, everybody. Thanks, Doug. Yep. Thank you, guys.