Hey Manuel. Hey, how are you doing? Good, how are you? Yeah, great. Unfortunately, I couldn't join the call yesterday, but Klaus told me all about it. You talked about the workflow call? Yeah. Yeah, it was only on for a short time, so I don't know how it went as I left. Yeah, well, there were not too many outcomes. A little bit of voting took place. He pressed for a few votes and stuff, but I think the first priority now is to get the talk. Yeah. And I think that's settled now. Hey, is that John? Good morning. Good morning. Hey Tommy. And Eric. Good morning. Real quick, welcome to the main line. I was looking for the link to the document that people have been using to collaborate on the current effort. I had a hard time finding it. Oh, it's right here, if you scroll down. It's this right here. Okay. Oh, yep. Now I see it. Yep. Nice and clearly. Thanks. Anytime. I like easy questions. Morning, Scott. Morning. You sound tired. No, I'm good. This is my seventh hour already, because my first phone call was at 6am. So I don't think you have a reason to whine about being tired. Oh no, no. It's my seventh hour too. I have a two-year-old. Okay. Let me try to beat what one-upped you then. My dog is getting old, and not only is she going deaf, but she can't sleep through the night anymore. She's been waking us up in the middle of the night at least once or twice to go to the bathroom. So he has a two-year-old. No question. No, no, don't trump me with an old dog. No, actually, I do think the two-year-old is probably harder, because the dog is easy. I just open the door, let her out, let her back in five minutes later, and then go back to bed. But you never know what's going to happen with a two-year-old. Yeah. It involves, you know, poop. Well, mine does too, but at least I don't have to clean it. Okay. I think we've got enough topics here. Hey, Kathy, are you there? Yes, I'm here. Hello.
And I think I heard Clemens laugh in there. Yes, he did. I just said good evening in German. Yes. Okay. And that's Klaus. Thank you. I feel like I'm missing somebody. Oh, there we go. Colin. Hello. I guess while we're waiting, we could talk about horror stories with the bowel movements and stuff. Oh, no. Nope. Hey, Jim. Hey, Doug. Doug, you want to see a blast from the past? Oh gosh, sure. Okay, I'm going to turn my video on. Uh-huh. And do you remember this thing? Oh, wait a minute. I can't see that, it's way too small. Hold on, see if I can make you bigger. What is that? Oh, now I see Heinz. Heinz's video has taken over. There must be an IoT demo. Oh. So I made this little demo board for KubeCon, like 2018. And this was the physical representation of Open Service Broker service lights. Oh my gosh, I forgot about that. I used OSB to control my home automation. And that was my whole demo board. I figured it was service broker related, I just could not remember it. That is so funny. I thought blast from the past for you was like 1998. Hey, hey, hey. It was in the context of, you know, Dougie crash over here. I don't go back to the SOAP days. Ooh. Should I start mentioning OS/2? Come on. Yes, please do. Hey, yes. But the Microsoft version. The Microsoft version. Well, that was Xenix, wasn't it? Oh no, sorry, that was the Linux version. No, the Microsoft version went up to 1.3 and then IBM forked and did their own thing, unsuccessfully. It wasn't that long ago, I want to say maybe five or six years ago, I swear I saw an ATM that had like a blue screen on it, but it was the OS/2 logo. Wow. Yeah. Yeah, I've seen that. Yeah. Yeah, the hardware failures do that. So Thomas, are you there? I'm on the line, yes. Yeah, excellent. Cool. Got you. Okay, I think I got everybody. Okay, we've got a small group, but it's three after, so why don't we go ahead and start? Oh, wait a minute.
Hold on, let me ping Mike so that he knows we're starting, because he's supposed to be talking at some point. All right, let's get started here. All right, community time. Anything people want to bring up that's not on the agenda from the community? All right, moving forward then. We're almost done. We have almost everything settled, so let's just talk about a couple of things here. Just a reminder: if you are planning on being there and going to join the face-to-face, put your name here just so we know who's going to be there. We do actually have a time: Thursday at 10:30. They basically said they could have the room anytime after 10:30, but looking at the schedule, between lunch being later and the serverless session being later, and figuring that after that people were going to start leaving since it's the last day, I thought 10:30 would be the best time to do it. That way we can end in time for lunch. So that's the current time. We don't have a location yet, but we should get that in the not-too-distant future. Okay. A question: with Mobile World Congress having just been canceled, are there any rumors flying around that KubeCon may fall victim to the coronavirus? I have not heard anything. Has anybody else? Yeah, there was an article on The New Stack, and it's still a go. I guess we'll know more as we get closer. Okay, so there are rumors. All right. Last thing: I have a question for the booth sign-up. I created a little table here for people to add their names. Since I only have two slots here, if you need more, either add another column or just hit return and put your name in there. I was assuming we'd be lucky if we could get two people. The only thing I wanted to point out is that the CloudEvents session is here and the serverless session is here. If you did want to attend those, you can do that.
You can do that at the booth at those two particular times. So sign up when you can. All right, anything else from KubeCon planning that people can think of that I'm forgetting? All right, moving forward then. SDK: we had a call last week, so there was no call this week. Does anybody from the SDK team have anything they want to update the group on? Okay, not hearing any discussion. So far the two groups of people who have an existing SDK are interested in doing some sort of merging, but there are no discussions on how to actually make that happen. Once they actually get that settled, we can bring the formal proposal forward and you guys can say yay or nay to it. And I suspect it will be a yay, but I want them to actually settle the merging discussion first before we talk about bringing it in. Kathy, anything you want to update the group on relative to the workflow stuff? Okay. Yeah, so we had the first workflow meeting yesterday, and the team decided we're going to have monthly meetings. So the time may change to the first Monday of every month. And then, because there are some PRs there which we are going to discuss next Monday, we're going to have one extra meeting next Monday. What we will discuss is whether the event trigger could be a single event or a combination of multiple events. And yeah, we're going to discuss that. So if you're interested, you're welcome to join; we'd like more people to join and get the input from everyone. All right. Cool. Thank you. Any questions for Kathy? Uh, yeah. First, thank you that we can now have it every first Monday of the month, because I couldn't make it yesterday. So thanks for arranging that.
And, um, yeah, for the agenda next Monday, we also want to see Tommy's demo, so I just wanted to add that. I don't know if there's... yeah, but there is an agenda document. I think you have the link on the workflow subgroup. Yeah, there's the link right there. Oh, no, the Google Doc. Yeah. Isn't that the Google Doc for the... oh, that's the spec. You're right. The agenda doc, I don't have that. I'll try to get that. Yeah, I created an agenda on the document there. Anyone can add any topic you would like to discuss in the meeting. Oh, and one more question, just to get up to speed. There was an original design discussion mentioned that happened around June or July 2018. I looked it up in the log of the serverless working group meeting minutes. I requested access; if you could grant it, that would be nice, so I can see what was originally discussed. Which doc do you refer to? The old one, this agenda that we're looking at here, but it's a very old Google Drive document. I will try to fix that. If you could grant access. I'll try to open that up. I don't know why it's not, but I'll fix that during the call. Okay, thanks. Sorry, just a quick question: where are the details for those meetings, the dial-ins for that? I will add that to the agenda doc right here when I can find the link. So later, while Mike and Clemens are rambling on about the new spec, I'll try to find that. Okay. Yep. All right, moving forward. Let's go ahead and jump into it then. Mike, since you are on now, where would you like to focus? So I dropped the link in near the top there. I put together a candidate — please do not consider this final by any means — OpenAPI spec for what discovery might look like, with a sample response. I can put this right here, I assume, right? Yeah. Okay. Let me open that for people. There you go.
So because of that, I put a couple more comments and edits into the main doc as I thought through this and thought about what it should look like. In particular, I think it's interesting to think about the cardinality of the return: whether you should get back a source and all of the types that it produces, or whether the key for getting some of that information should essentially be source and type. The thing I worry about is that one of the things discovery is supposed to do is tell you how to create a subscription. And as a producer, I've got different event types that I produce, and the configuration is different for each one of those. Does that bit of discovery get too hard to manage? I'm not taking a stand on either side; it's something I think people should think about. The other thing is filters: I'm starting to question how deep we need to go on filters in the discovery part, in terms of what we should publish there. Do we have an opinion on this? Yeah, go ahead. In the later section, we talked about filters, and Ryan wrote that up. If we scroll down to the subscription section — further down, further down — there's an area where we talk about filters. There are two levels of filters, effectively, because first of all, in discovery, there needs to be a mechanism, which you were just describing, to query, and that's a filter of sorts. Then we need to effectively have a way to filter: once you have found a subscription API endpoint, you need to be able to go and tell that endpoint which kinds of events you need. We should always assume that a single subscription endpoint can potentially have overlapping sets of events and event types, which means it can spew out many different ones.
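The cardinality question above — keyed by source with all its types, or keyed by (source, type) so each pair can carry its own subscription configuration — can be sketched as two candidate response shapes. This is a hypothetical illustration only; the field names below are assumptions, not taken from the candidate OpenAPI spec.

```python
# Hypothetical sketch only: all field names are illustrative assumptions.

# Option 1: keyed by source, each source listing every type it produces.
by_source = [
    {
        "source": "/storage/bucket1",
        "types": ["com.example.object.created", "com.example.object.deleted"],
        "subscriptionconfig": {"endpoint": "https://example.com/subscribe"},
    }
]

# Option 2: keyed by (source, type), so each entry carries per-type
# subscription configuration, as discussed above.
by_source_and_type = [
    {"source": "/storage/bucket1", "type": "com.example.object.created",
     "subscriptionconfig": {"endpoint": "https://example.com/subscribe"}},
    {"source": "/storage/bucket1", "type": "com.example.object.deleted",
     "subscriptionconfig": {"endpoint": "https://example.com/subscribe"}},
]

# The second shape yields one entry per (source, type) pair.
assert len(by_source) == 1 and len(by_source_and_type) == 2
```

The trade-off Mike raises is visible in the shapes: option 2 is easy to key and configure per type, but a producer with many types multiplies the number of discovery entries to manage.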
First of all, the discovery API, in some way, needs to reflect the fact that a single subscription manager can potentially yield many different kinds of events, also from different kinds of sources. Then, for the filtering model, we need to have in the discovery API effectively metadata that says which kinds of filters are supported, because we think we're going to have multiple different dialects of filters, including pluggable ones. By what we're currently thinking, we're only going to define a fixed definition of exactly one that everybody must support — and Ryan is going to define this in more detail, but this is just a cursory idea — where for source, and for type, and for subject, you have a prefix filter, a suffix filter, and a complete-match filter for any of those three fields, and those fields are always ANDed. So that's the simplest filter dialect that we think of. And if there are multiple, more complex filter dialects — like one that allows you to go and do a match against all the properties, one that supports an OR, or a SQL-like filter, etc. — then you would effectively have in the discovery API, or sorry, in the discovery metadata, as you register that, something that basically tells you which kinds of filters you can use against that subscription manager. That kind of meshes a little bit with what I'm thinking in terms of, should there be a baseline expectation that all subscription providers have a certain baseline filter that they provide? I'd be a little bit cautious about doing both prefix and suffix matches. I worry about the scalability of suffix matches with some of the transports that I'm aware of.
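The baseline dialect Clemens describes — prefix, suffix, and complete match on source, type, and subject, always ANDed — can be sketched in a few lines. This is a minimal illustration, not the spec's data model; the condition names ("exact", "prefix", "suffix") are assumptions.

```python
# A minimal sketch of the basic filter dialect described above, assuming
# conditions are given per attribute as "exact", "prefix", or "suffix"
# (names illustrative, not from the spec).

def matches(event, filters):
    """Return True if the event satisfies every condition; the dialect
    ANDs all conditions across source, type, and subject."""
    for attr, conds in filters.items():
        value = event.get(attr, "")
        if "exact" in conds and value != conds["exact"]:
            return False
        if "prefix" in conds and not value.startswith(conds["prefix"]):
            return False
        if "suffix" in conds and not value.endswith(conds["suffix"]):
            return False
    return True

event = {"source": "/sensors/room1",
         "type": "com.example.temperature",
         "subject": "room1"}

assert matches(event, {"source": {"prefix": "/sensors/"},
                       "type": {"exact": "com.example.temperature"}})
assert not matches(event, {"source": {"prefix": "/sensors/"},
                           "subject": {"suffix": "room2"}})
```

The second assertion shows the AND semantics: one failing condition rejects the event even though the other matches.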
So that's effectively a query into the CloudEvent that comes by. And from implementation experience, this is exactly what we have as our Event Grid normal filters, and then we have advanced filters, which also allow you to take custom properties into account, which is a little bit more complicated — and we're actually charging more for those advanced filters. So the simplest one is literally just those three fields, and the equivalents right now in Event Grid are prefix, suffix, and exact match, which is fairly efficient to implement. As soon as it gets more complicated, you get into an advanced case that is more compute-intensive, and if you start using that feature, then you also need to pay a little bit more. Like you said, it's more CPU-intensive. With that interplay then — I termed it domain-specific languages in the discovery section — one of the things I struggled to represent when I was thinking through this is how to represent what that subscription-specific filter language is in a machine-readable discovery doc, because I think the information there is really for humans to interpret and not for machines. So if anybody has thoughts on how to represent that in discovery. So the prior art I've been referring to on our call is how AMQP does this. AMQP has a so-called archetype of filter in its type system that can be applied to a so-called source definition, which means you walk up to an AMQP server and say, I want to have data from this particular source, and here's a filter for it. But AMQP itself does not specify what filters might be. It's basically just a blank canvas in the core AMQP spec. And then there are complementing specs: we have a filter expression spec, and there are three specs defined in the Apache ActiveMQ project that define these filter types.
And the filters are identified then basically just by a type ID. In one case the type ID is org dot apache something, colon amqp colon JMS filter, or JMS message selector. And in the other cases they're called org dot oasis, blah, blah. So they have effectively just human-readable names, which are identifiers, and if you are asking for a filter, you effectively just give its type, and that's how it's identified. Does that help, Mike? Yep, thanks. Okay. Were there other areas of your portion of the spec that you wanted to highlight? Who are you talking to? Mike, not you, Clemens. I don't think so. As I said, I put a couple of questions on there; people can comment on my comments. If you want to look at the candidate OpenAPI spec and give me feedback, I'm happy to adjust it. One thing I did cross out was the OAuth scopes. I commented, asking last week if we should take that out. Again, that's another thing I'm not sure how to represent in a generic manner, because how you do authentication is going to be somewhat specific. So there are two sides to that. There's making a call to the cloud subscription endpoint, and then any authentication information that the subscription provider — the producer — might need to communicate with you if they're putting stuff on a topic that you own. The other thing in there is letting individual service providers use their already existing authentication means. So if Google were to provide this using Google IAM to control which subscriptions you can and cannot create — not putting that in the spec, I think, is a wise decision. So one of the things: I think we need to agree on some level of token flow, but there is effectively a triplet of authentication contexts, which all kind of follow on from each other. And we have to think through what that means, because first you start with discovery, in a flow where you don't know anything.
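The AMQP precedent above — filters identified only by a human-readable type ID, with the server advertising which IDs it supports — suggests one way discovery could stay machine-readable without specifying every dialect. The sketch below is hypothetical; the registry shape and the `example.org:...` identifier are made up for illustration, not real AMQP or CloudEvents names.

```python
# Hypothetical sketch: as in AMQP, a filter dialect is identified only by a
# human-readable type ID, and the subscription manager advertises which IDs
# it supports. All identifiers below are invented for illustration.

DIALECTS = {}

def register_dialect(type_id, evaluator):
    """Make a filter dialect available under its string identifier."""
    DIALECTS[type_id] = evaluator

def apply_filter(type_id, expression, event):
    """Look the dialect up by type ID and evaluate its expression."""
    if type_id not in DIALECTS:
        raise ValueError("unsupported filter dialect: " + type_id)
    return DIALECTS[type_id](expression, event)

# A pluggable dialect registered under its (made-up) type ID.
register_dialect("example.org:prefix-filter:string",
                 lambda expr, ev: ev.get("type", "").startswith(expr))

assert apply_filter("example.org:prefix-filter:string",
                    "com.example.", {"type": "com.example.created"})
```

Discovery metadata would then only need to list the supported type IDs; what each expression means stays defined in the dialect's own spec, which is exactly the blank-canvas split the core AMQP spec makes.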
You just know that you want to subscribe to a particular kind of event from a particular kind of source, and you don't know where that endpoint is. So first you need to walk up to the discovery catalog, which is somewhere near you, which is in your reach because you need to be able to see it. For that you need an authentication token. That catalog now points you to a different service, which is elsewhere, for which you also need an authorization token, but it's not necessarily clear how you get that. And then once you are at the subscription endpoint, you need to pass that subscription endpoint some form of authentication token or refresh token that the subscription manager can then use to go and push events over there. So there are three scopes, and I think we need to make them all gel with each other in some way. And we also need a mechanism that allows an interested potential subscriber to discover something and then establish a relationship with the subscription manager without necessarily having a token in hand. So that's going to be a little tricky. Yes, Heinz, your hand is up. Heinz? Sorry, I had the world-famous mute button enabled again. Just a quick comment that it's good to look at the model of OpenAPI, but I would recommend looking at AsyncAPI, which is an offshoot: where OpenAPI is directed purely at a REST request-reply model, AsyncAPI tries to address asynchronous events as opposed to synchronous request-reply. I think it might be a little bit more eye-opening as to how you can represent some of these things, what they could capture. So definitely, if we're looking at purely event-driven, OpenAPI is important, but you should also look at AsyncAPI as well. Okay, Jim, your hand's up next. Just a couple of points then.
So just to follow up on Heinz's comment: I think AsyncAPI was more around the definition of an endpoint that consumes or produces an event, rather than formalizing a subscription discovery and creation model, which I think is probably what we're talking about here. But just to add on to Clemens's comment: I can see huge cases in our environment where, if you're unauthenticated when you do a discovery call, you might get a different response than you would if you were authenticated, because we expose different capabilities to different partners or third parties. So I just want to throw out there that, certainly from our standpoint, this wouldn't be an open book. You couldn't just look in and say, well, what sort of events do you produce? Because that's always in the context of what your privileges are. Sorry, I can't remember who it was that produced that OpenAPI spec. You can define the scopes, can't you, without actually defining how they're produced? So I just wanted to know why you took those out. Mike, do you want to talk to that one? Sorry, I took which parts out? I thought you said you removed the concept of scopes from the OpenAPI spec. My understanding is that you could still define that this operation needs a scope, but you don't necessarily need to define how it comes into being. I think when we talk about OAuth scopes, we're starting to get into the realm of defining what the authentication tokens look like, which we certainly could do. We could also leave that fairly unspecified: it is an OAuth scope to be determined by the provider.
So I'm thinking about, if you're using a vanilla OIDC protocol, that you have a JWT with a specific audience in it, for example; or if you're using something like, again, Google's IAM protocol, we could specify it as a certain permission. That's really, again, thinking about things that are probably for human consumption versus machine consumption and trying to strike the right balance, because some of the things here are around useful tooling, and some of the things are going to be useful to humans. Sure. For interoperability, we will definitely need first a full REST API. And then I really think that discovery might be something that is a human-browsable thing, but discovery also very frequently is something that just drives dynamic systems in a very automated way, which means machines need to understand it as well, and need to dynamically understand it. And once you do this, then we're at the level where all of this needs to interop, which means we need to at least agree on a minimal mechanism for how to flow credentials, and I would at least define a binding of this model to OAuth2, because that's the most common one that we have. But there's an interaction: there has to be a discovery REST API, there will certainly be a REST subscription API, and then, coming back to Heinz's point, the AsyncAPI path actually comes to bear when we talk about the delivery endpoints, because the delivery endpoints are clearly async. So there's an interaction here between these two synchronous APIs, the discovery and the subscription APIs, and the async path, which is the delivery path, and I think we need to have definitions for all of them, potentially.
If we're starting to define this with metadata, as we're starting to do here, then having a definition that is also covered with AsyncAPI — which would then effectively be an AsyncAPI formalization of the CloudEvents transport bindings — would also be helpful. So Mike, just to draw your attention to it, Heinz pasted a link in the chat to a good AsyncAPI versus OpenAPI comparison. I'm sure you saw that. There's also a good article on CloudEvents and AsyncAPI as well, which they're already talking about. The AsyncAPI guys are not really sure how to represent a CloudEvent; they propose several different ways, and I found it really confusing. We can help them. I've been talking with them, because I was interested in OpenAPI or AsyncAPI for Knative and CloudEvents, but the model is more geared towards a pub/sub representation versus a producer or a consumer that would like to advertise what they want to do. AsyncAPI is a great way to visualize the topology of your entire application and all the connections and queues. But if you just want to know what this one thing produces, I found it also confusing. Okay. Mike, thank you for putting this together; that helps me a lot. Just curious, anybody else have any questions or comments for Mike's section? Okay, not hearing any. Let me put you on the spot here, Mike — and this is a warning for Clemens: I'm going to do the same thing to you. I feel like right now the stage we're at is that people are adding lots of questions and commentary on the side, and as a result, the actual text of the spec itself is moving kind of slowly. Would it be useful for us to pick a deadline for a very first rough draft? Even if everybody hates it, at least it's something on paper that people can then concretely say yes or no, I like or dislike that. Just as a forcing-function kind of thing. Meaning, like, get a first draft that is like a markdown on GitHub?
Well, it doesn't have to be a markdown; we can still use the Google Doc, at least for right now. But basically what I'm looking for is to get rid of all the commentary we have throughout the entire section and write it more as a formal spec: here, this is what the response should look like. Basically, copy and paste this into the doc and say, this is a sample of what it's going to look like, plus a section that defines all the various fields — are they strings or integers, are they arrays, that kind of stuff — and just put a stake in the ground and say, this is your view, or the group's view if you guys are still talking, of what you think this part of the spec should look like. Rather than people just randomly putting comments out there — hey, I think we should talk about this, we should talk about that — just something a little more concrete for us to noodle over. And as I said, even if people hate it, at least it's a stake in the ground. Yeah. So I can do that within two weeks. I'm on vacation next week, personally. Vacation? Oh, man. Yeah, actually, I was thinking about a two-week deadline, since I didn't want to spring it on you with only one week to go. No, I need to go somewhere warm next week. Okay, now you've got to name it. Where are you going? Oh, Disneyland. Cool. Okay. So you're okay with a first pass at a rough draft in two weeks? Yep. Yeah, I mean, I don't hear anybody saying that there are major overhauls needed, so I'm fine to throw it out there, and then maybe somebody will object. Yeah. To be honest, I think it's more just a matter of taking all these ideas that people have put there, either in comments or in this text that I've highlighted here, and putting it into a more spec-ish form. That way it looks more real, as opposed to just brainstorming. Okay. Cool. All right.
Before we move on to the next section, last chance: any questions or comments for Mike? All right, cool. Clemens, you're up. Yes. So we actually already covered the meat of the discussion, or the one coherent piece of it, which was the filter section, where we spoke about potential dialects and then about the constraints we wanted to put ourselves under for an initial simple filter. Otherwise, we've been walking through a bunch of the comments that were made on the side and kind of took notes for homework. Heinz had also sent three miles of text regarding push versus pull. As we were talking about push versus pull, I think we came to the conclusion that, in spite of common industry and documentation usage, the sense in which we've been using those terms here is confusing enough for what we're trying to describe that we're going to use some alternatives — to make Heinz happy, but also to make everybody happy. So we're going to find some alternative wording for the pull and push terms, but the intended meaning will hopefully not change with that. Those discussions took up most of the call. One of the things we have also decided, because of timing issues, is that we're going to skip next week's call, which means there will be no updates from us. Okay. Jim, I apologize — is that hand new or old? No, that's noops. Sorry about that. Any questions for Clemens or the rest of that sub-team? So let me just say what the roadmap here is, because that will be useful. I think where we are is that we will fill out the remaining transport-dependent properties. We have a NATS section at the bottom, and we have been thinking that someone from Synadia could probably go and take a look at that. Also a little bit further down, there's a section where we have defined effectively the specific transport properties, and a little bit further down even, the protocol settings.
We would all call that just wonderful — please do that. That would be super helpful. And then I think what we will do next, once we have filled these things in, is that Ryan will come up with an initial data model for what that filter should look like, the basic filter. And I think then fairly quickly we'll land at what a REST API might look like, from the outline. And then we'll also take a look at how we can go and realize that REST API, or an RPC API of some sort, over the various transports — where we need it and where we don't. So I think we're making good progress, in spite of taking an off week next week. Any questions or comments for Clemens and the team? So just to clarify then: do we keep commenting on the doc, or should I just wait until something more... Sorry, Clemens. Please keep commenting on the doc. I think there will soon come a time when we're going to make this a little more formal and break this out into documents, but for now, comments on the doc are fine. Okay. I know I made one that Ryan commented on, and I fully understood what he was driving at there. So one of the situations we face — and this is more of a business-event issue, maybe, than low-level IoT-style events — is where our third parties turn around and want to essentially re-receive or re-pull events that have happened in the past. And Ryan had a very good comment that we didn't really want any implication of that level of complexity in this spec. But I think, for me, it means I've been looking for some way for companies like ours to extend the subscription endpoint a little bit and to add those extra, more nuanced capabilities. And if there's a generic way to do that, that would be very interesting to me. So there are two things that you might be asking.
One is, if you want a scroll-up-and-down-the-event-stream feature, then that might be something that is covered by the pull-style API. We didn't put Kafka here, but we probably should — that's an omission I thought about this week. So with Kafka, or with AMQP and a source filter, you can obviously go and use an infrastructure that supports this, like Kafka with Strimzi, or Azure Event Hubs. You could obviously have an event stream. You have discovered this event stream using the discovery API, and the discovery API basically tells you: hey, this is the Kafka source. Then you would walk up to the Kafka source, attach to it, and you can scroll forward. So that's one option. The other option would be one that is probably covered by the filtering model. One of the things we're discussing is: we're going to make this basic filter, and then you can potentially go and subclass that filter with your own extensions. You can say, I want to have the events of this sort, and then you have a special filter property which says, but I want to have events from this time to this time. And then you create a new push subscription into an existing subscription manager, with the assumption that the subscription manager holds a backlog. As you're providing that filter, the subscription manager will then go and send you the events for that particular period. And then there might be a further option to retire that subscription as soon as those events have been delivered. Is that something that you might think of there? That's an interesting twist. Yeah, I mean, I think our partners tend to think more simplistically than that. So for instance, we expose a capability where you can, as a partner, go in and see all the events that have been sent to you.
Yeah, you can ask for that set to be replayed. And then maybe that becomes intermingled with the other events that are coming your way. So some of that's triggered manually; I'd rather prefer that was done systematically. But yeah, I get your point. It does imply storage on the back end, which I can fully see that probably a lot of people on this call wouldn't want to do. But for me, it's more: can we leave gaps in the spec, or mechanisms, where we can extend these subscription endpoints without necessarily breaking the spirit of what's already there, if it can't be achieved through the sort of mechanisms that you described. Yeah, so that's what we thought with the filter pluggability. Because I think your scenario can be achieved through it: you have to communicate to the server, from a consumer perspective, that you want to have a certain set of events again, which amounts to, in my mind, a new subscription of sorts. Yeah, it's a historic subscription of a buffer of a stream. Yeah. And the gesture is the same, right? If you go and start doing a subscription, in quotes, with Kafka, you specify an offset and start reading from that particular offset. And if you only want to have a slice of events from the past, well, you pick a different offset and you start reading from that offset, and then you stop at some point. So even with a pull model, it's the same thing. And in the push world, the default way of doing a subscription is to get all the events that have not yet been delivered on that particular channel, to avoid the word topic.
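The "same gesture" point, that catching up and replaying a historic slice are both just reading forward from a chosen offset in a retained log, can be sketched in a few lines of plain Python. This is a toy model of what a broker like Kafka does server-side; the class and method names are illustrative.

```python
# Toy retained event log where the offset is simply the list index.
# Illustrates the offset gesture discussed above, not any real broker API.
class RetainedLog:
    def __init__(self):
        self._events = []

    def append(self, event):
        """Retain an event and return the offset assigned to it."""
        self._events.append(event)
        return len(self._events) - 1

    def read(self, start, end=None):
        """Read forward from `start`; an exclusive `end` bounds a replay slice.

        The same call serves both cases: read(committed_offset) catches up
        on everything not yet consumed, read(5, 8) replays a historic slice.
        """
        stop = len(self._events) if end is None else end
        return self._events[start:stop]
```

A push subscription manager with a backlog performs the same read on the consumer's behalf and delivers the results to the sink, which is why the replay request reduces to a filter on a new subscription rather than a separate API.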
And then if you have something that retains history, you have to tell that push subscription manager to go and reach into that history and get you those events. And that is best expressed through that filter. So that's how we thought of the filter, and this is why Ryan put this in: the dialect is effectively expressed in this, right? We're going to define a dialect of filter, but then the notion is you can go and build your own filters. And of course, you can also build on the default dialect and then go and specify further options. And I think of this effectively like the extensions we have in the core spec: we might even have multiple well-defined filter dialects that you can then go and use. So that kind of history-retrieval filter might be one of those options. Oh, thank you. Right. Any other questions or comments? Okay, so Clemens, I'll put the same question to you. I actually feel like your section is closer to where we want to be in terms of the first rough draft, so thank you for that. But do you think you could push for, say, two weeks for something that you would claim is a first full rough draft? Two weeks is a little aggressive. Three? Okay. Just because we really can't get it together for next week. Okay, that's fine. I just want to make sure we don't let it linger too long. So Mike will be two and then you guys will be three. Okay. Any other questions or comments related to the spec in general? Okay. If not, thank you guys for that. And though I see Mike had to drop, in that case, let's talk about some PRs. Actually, let's talk about this one first, only because Klaus is on the call. So Klaus, I think you may have changed the formatting down to 80 columns, but did you make any other changes on this one? No. Okay. And I think the comment there was just that you proposed to give it a headline of its own.
Oh, yeah, I would be open to that. But apart from that, there was no other comment asking for a change. Okay. Any questions on this one? I mean, we've been discussing it on Slack, so you had, I think, some doubts. Well, I think my questions actually are not directly related to this; they're more general questions, and I don't want to go into that right now. Okay. So this has been out there for at least a week now. Anybody have any questions or comments on this? Okay, any objection to approving it then? Okay, just a quick question for you, Heinz. I'm sorry, not Heinz, Klaus. Do you want to add that section header just so this stands out, or not? It's up to you. I can do that. I don't really have a strong opinion about it, but I had the feeling that it's a very special case and that it doesn't need that much attention. That's why I just put it at the end of that section about creating events. But okay, if you want to highlight it, that's fine. I'm okay with that. So okay, I'll just go ahead and merge it then after the call. Okay, thank you. Okay, there's this one. Now, this one, Jim, you had a question on. Yeah, sorry, I also really don't have a strong opinion. I think it was just that the structure of it was different to the way we do it, which was what sort of led me to some of these questions. That was basically it. I mean, it could be completely valid, but it's just not structured the way we personally would do it, so I wasn't quite sure whether it was valid or not. So you want to change... wait, if he changes this line... I apologize, I know zippo about the schema stuff. It's just, yeah, that's just referring to the version of the JSON Schema definition. Yeah, so draft seven versus draft four versus whatever. Right, I'm just wondering, though, if he makes that one-line change, do you think it changes anything else in the PR? No, that particular line doesn't. Okay.
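The draft-seven-versus-draft-four change being discussed is declared through the `$schema` keyword; the one-line change amounts to swapping the meta-schema URI. The two URIs below are the official meta-schema identifiers; the rest of the schema body is an illustrative assumption.

```python
# "$schema" declares which JSON Schema draft a schema is written against.
# Swapping this URI is the one-line change discussed in the PR.
DRAFT_04 = "http://json-schema.org/draft-04/schema#"
DRAFT_07 = "http://json-schema.org/draft-07/schema#"

schema = {
    "$schema": DRAFT_07,  # was DRAFT_04 before the suggested change
    "type": "object",
    "properties": {"id": {"type": "string"}},
}
```

As noted later in the call, whether a validator accepts the schema can depend on which drafts the parser in use actually supports, so the declared draft and the tooling version need to agree.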
Yeah, I think it's just more correct doing it that way, because that schema is evolving; it's still not a standard, it's still in draft form, unfortunately. The other one, the structuring one, was more to do with the way the definitions blocks are laid out. Typically the way we do it is following that other link. Hold on. Yeah, so that's an example from the spec itself, and if you scroll... up a bit, up a bit, stop, there you go. Okay, so this is the way we would typically do it if we were using this definitions construct: you put definitions at the top, and then the properties of the object you're defining come further down, which means you then address it using that sort of syntax, with hash slash definitions. That's the way we do it, and we typically do it like that because that's the way the spec writers, or the schema designers, sort of advised us to do it. Going back to the PR, it may be that that still works... no, it actually doesn't, because you see there the dollar ref for definitions. I just want to make sure that somebody's run that through a code-gen tool and it does actually work, because the structuring just looks slightly wrong to me; I think the hash means you go to the top of the tree and then come down from there. So since definitions is not at the top, it's actually part of the properties block, I don't think the ref resolution works correctly. It doesn't. But isn't the definitions at the top? It's just lower down in the block. The definitions is there; it's part of properties. Properties is the root in this... Oh, interesting. Okay, so it's easier to see if you view the whole file rather than the diff. Okay, properties is indented two. Yeah, see, definitions is inside properties, it's not outside of properties. Because I see definitions indented two here, and properties indented two here.
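The placement question Jim is raising comes down to how a `$ref` like `#/definitions/name` is resolved: the `#` means the document root, and each slash-separated token walks one level down from there. A minimal resolver makes that concrete; the schema content here is illustrative, not the schema from the PR.

```python
# Minimal resolver for same-document JSON Schema references. It shows why
# "#/definitions/name" only works when "definitions" sits at the document
# root: the pointer walks down from the root, token by token.
def resolve_ref(document, ref):
    assert ref.startswith("#/"), "only local refs in this sketch"
    node = document
    for token in ref[2:].split("/"):
        node = node[token]  # raises KeyError if the path does not exist
    return node

schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "definitions": {"name": {"type": "string"}},
    "type": "object",
    "properties": {"firstName": {"$ref": "#/definitions/name"}},
}
```

With this layout, `resolve_ref(schema, "#/definitions/name")` returns `{"type": "string"}`; if `definitions` were nested inside `properties` instead, the same reference would fail to resolve, which is exactly the concern about whether the PR's structure survives a code-gen tool.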
Where's the closing bracket for properties then? I would assume it's this one. I read this completely wrong. And then, well, if I'm right, then this maps to here. So I think it is at the top level; it's just lower down in the doc. I'm just having an early-morning brain fart then. I'll retract that; I'd still like the schema-reference change, though. Okay. So I'll reach out to Tim Moore, I think that's his name, asking about this one then. Yeah. Okay. Does anybody else have a question about this one? About which version of the schema to use, Heinz? Just to make a comment: we have a product that's generating AsyncAPI, and I was testing with the schema. Up until about two weeks ago, when we updated to a newer version of the JSON parser stuff, it would fail, but it does now pass with the new version. So depending on your version of the schema parser, it will fail or pass, but the newer one seemed to be operating correctly. But it would be a little cleaner, to your point; in fact, I like the suggestion of putting all the definitions at the top and then the properties underneath, because it is a little, well, not a little, for me initially it was very confusing, jumping up and down, as opposed to just sequentially being able to run through it. But that's more of a readability thing, as opposed to an error or anything like that. Yeah. So maybe, Jim, you should modify this comment to say, rather than it living at the same level, it lives at the top, to get the same ordering that this example shows. I will fall on my sword and accept that criticism. Yes. Okay. So I will poke him later today, after you update the comments, to see if he has any opinions on this one. Thank you, Jim. Anybody else have any comments or questions on this one? Or does everything else basically look okay? You guys think that the JSON schema itself in here is basically okay?
Okay. Not hearing any objection, so we'll just assume it's these minor things that Jim is bringing up. Okay. Cool. But I think that's it in terms of what we can talk about. All right, almost at the end of the call. Anything else for the agenda that people want to bring up? Not hearing any. Last roll call then. Grant, are you still around? No, we lost Grant. Vlad, are you there? I'm here. Hi. Excellent. Hi. Christian. Christian, you still there? What about Falco? Yes, I'm here. Excellent. And last chance for Christian. Oh, there he comes. Hey there. Hey there. I switched mics and couldn't find the window, so yes, I'm here. All right. Excellent. And Grant isn't there. And I don't know this person, Venice; we can get their last name. All right. Anybody else I missed for the agenda or the attendee list? All right. Cool. In that case, we get a whole seven minutes, I'm sorry, six minutes, back in your day. All right. Thanks, everybody. We'll talk to you next week. Have a good one. You too. Bye.