 Hello. Hello. How are you guys doing? Great. How are you? Pretty good, pretty good. A little sick but not with the coronavirus. Well, that's good. By what I can tell so far. Okay. It is interesting. My wife and I I think are coming down with something as well, but obviously I don't think it's the virus. It's just funny coincidence. My daughter was sick last week before I left. And so I think I brought some German school germs with me. Yeah. Yeah, I got my wife works with Karen where he went. It was it was either China or South America, but he came back with something really nasty. And she ended up giving it to me. And I've never been wiped out quite as bad as that. And it just, you know, sort of reminds you of sort of the danger of hanging around with people who travel a lot. You never know what you're going to pick up. All right, Heinz. Hello. Hello. Mr. Mitchell. Good morning. Hello. And Vlad. Good morning, everybody. Hello. Tommy. One day you'll have to say something off on the phone. Okay. Hi, BRK. Yeah, okay. That is Lucas, I believe though, right? Just find double check. Yeah, that's right. One of these days I'll remember. Hey, Kathy. Kathy, are you there? Oh, yes, I'm here. Sorry. I figured, okay. Hey, ginger. Colin, you guys there? Hey. Okay. I heard Colin. I think ginger is hiding. Just using different air pods this morning. Ah. Is it the new ones from Apple? Yes, it's very exciting. Yeah, I was. I was really excited. I hate the idea of spending that much money, but the idea of getting noise canceling ones just excited me so much. Yeah. And these are, you know, they have little adjustable earbuds. So they actually stay in your ears, which is nice. Have you tried them out on an airplane yet? I'm curious to know if they actually dead in the sound of the airplane itself. I haven't yet. Derek did. And he said they work really well. Excellent. That's what I really wanted them for. All right. Scotty there. Hello. Close. Yes, I'm here. Have you? Hello. 
And Lionel. Yes. Hello. Mr. Mark. Hello. Hello. And Ryan. Yes. Hello. Hello. Thomas. Are you there? Hello. Hello. And Manuel, are you there? Manuel. Hi. Yep. Gotcha. Thank you. All right. Give another minute and a half or so. Okay. Who's coming in his K native zoom? That's my comic. Shoot. I need to fix my K my zoom, I guess. That's okay. Just funny. Okay. Are I PNR? Are you there? Yes. Is this the first time on the call? No, it was in December as well. That's an idea. Okay. So you should be in the list. Okay. Good. Okay. He's with us. Oh, one of you guys. So, okay. Okay. All right. Three after one. I'm going to get started. No way. As we're talking about community time. Anything from the community. People want to bring up. Okay. In that case, let's go on to you planning or coup kind of you. So a couple of things here first, make sure if you are going, I'd love to get your name in there just so we can get it account for the face to face meeting. I guess if you're not going to go to the face to face meeting, we'll technically need to know, but it was still a nice, maybe I can rope some people in for the kiosk. So we're going to be doing. For the serverless session. I believe they're still working on the exact list of speakers. And with the abstracts going to look like, but we did at least get something submitted. So they're on a schedule. Just work on the details there. For the cloud events session. Clemens volunteered to cover the, the new spec itself. And I'll just do a quick overview of the CE stuff. In terms of the deep dive, since no one else volunteered here, I'm going to assume that Scott, you're okay with being the lone. Single speaker since probably relatively quick, but you have lots of volunteers here for helping the lab. That's not okay to you. Sounds great. Okay. Cool. And I did get an answer on the kiosk or one out of two answers. 
So really wanted to know whether we were limited to just ampm for the same for all days, or whether we can alternate during days. They said we can alternate if we wanted to, even though in the past, it was typically either just morning or just afternoon. One thing I did not get a clear answer on yet is what does this last option mean? Whether it's just random, whatever we want to show up and how does that work? Luckily, we don't have to actually answer to the February 14th. So you have a little bit of time to figure that out, but I at least want to give you guys that answer there. Okay. And I'm still waiting for the room for the face to face. Okay. Anybody else have any questions or comments about. Could kind of you planning. Okay. In that case, I'm assuming the owners of the particular sessions will. Take charge and actually work on their presentations and then share it with a group when they're ready. As you do start creating your drafts, if you can put a link to the draft presentation somewhere in the doc here, so you can take a look at it when you get a chance. And I appreciate that. Okay. Moving forward. Okay. So, um, I think we actually did have a call last week's Scott or anybody else. Can you remember? Is there anything worth mentioning? We talked a little bit about. The go lang SDK going one oh, and I'm about to cut that right now. That is cool. Okay. Anything else for mentioning? Or anybody else have a question? Okay. Moving on then. Um, after I sent out my notes on Monday or Tuesday, can I win about this potential rest SDK? Um, somebody else, I believe they're in Europe. Did ping me saying that they had some interns who put together this little SDK, um, but he's not quite sure whether those folks actually want to continue working on it. Cause they were just interns, went back to school and stuff. 
So I'm trying to get a little more information from them in terms of whether they want to promote that as a possible rest SDK, maybe merge with these guys up here. Um, so I don't want to say, I don't want to ask for from a voter anything yet on this call about this one. Cause I want to find out about the possibility of merge in the efforts, but I do want to draw your attention to it. So if you guys take a look at it, see if it seems like a worthy start. Um, it seemed like an okay thing to me, relatively on the smallest side. Like I'm not sure it necessarily had a whole bunch of different protocols, but as long as they're willing to work on that and add more, I didn't see an issue with it. Um, but again, I don't want to say have a vote this week, maybe next week. Any questions on that? Okay. Now to this one, I did reach out to the owner of the Ruby SDK and he hasn't made any changes since 2018. And he said, unfortunately, he's moved on to other bigger and better things. So he's not going to have a chance to update that. So I would recommend that unless anybody knows somebody who wants to actually manage that, I would suggest that we actually archive that. I wasn't going to propose that we delete the repository, but rather just change the name or something to indicate that it's not being worked on. And it's archived and it may be deleted in the future. Um, but I didn't want to say lose the code that may have been done so far. Instead of archiving it, perhaps add something to the read me that says that it needs an owner or that it's not being worked on right now. As a first step. Okay. That's fine. Anybody else want to comment on that? Yeah, I think, I mean, now that we know that we need an owner, I think all of us should start looking around, uh, for people we know that might be interested. I certainly have some, some, some people on go ask. Okay, cool. Okay. 
So I'll open the PR to at least make it clear on the current status and then maybe on next week's call, we can revisit and see if anybody found a volunteer. Okay. Okay. Cool. Um, Kathy, is there anything from the service workflow group you'd like to update us on? Okay. So we're going to have the first meeting. Um, February 12th. Um, from 10 a.m. to 11 a.m. Pacific time. So if anyone is interested in joining the workflow, expect discussion, either some direction or functional scope or the, any technical points. Yeah, you're welcome to join. We'll call in number. It's the same as the cloud events. We can call in number. Right. And what's the time of the call? Uh, it's 10 a.m. starting at 10 a.m. Pacific time. And is it weekly or bi-weekly? It's weekly. So we're, yeah. So we're now, now. Yeah. This is first meeting. We can also discuss, you know, how we would like to, um, move forward with the meeting. It's either weekly or bi-weekly. I see it's up for weekly. Okay. Yeah, we can discuss that too. Okay. Cool. Any questions for Kathy? All right. In that case, let's move on to the cloud subscription doc. All right. Since Mike, your section is first, would you like to bring us up to date on where we are and talk about some of the changes that might have gone in? Sure. The, uh, a couple of talking on Friday, um, the, the biggest changes that came in is, uh, a little bit more. Annunciation of some use cases in particular. We were, um, discussing this, the second paragraph there about, um, how you might aggregate, uh, discovery, uh, or, um, be sort of a middle word for discovery. 
So the, uh, trying to think about the difference there, um, I might provide like a useful collection of, Hey, here are some cloud events that you can discover, but you have to go back to the original producer to actually create the subscription versus like, I'm doing this, um, I'm actually providing a middleware provider where, uh, you would come back to me for discovery and I might go make that subscription upstream, but I'm actually aggregating the events, uh, internally. So I think we, I think we probably need to settle on better terms in middleware and aggregator because they can be, uh, kind of confusing. Um, but we, we spent a good bit of time talking about that particular, that particular use case and making sure it was supported. Okay. Is there anything you'd like to call out or have a discussion on in this call? Can't think of anything. Scott, you're also there. Kathy. That's, that's pretty much it. Yeah, that's pretty much it. I think what we are, we were discussing is, um, how we, is it a producer that will, um, so if we say it, where's the boundary for that API, um, this discovery API. So the, I think we, we decided to on the producer boundaries, the producer will provide that. Then there's a question is if there's a middleware, um, either it's, um, gateway or whatever, right? Is that a producer or that just a transparent entity which will pass the, all the discovery APIs to the producer. Well, we define that as a producer, something like that. Okay. Thank you. Are there any questions for that team? I will have a point of discussion, but I'm on next. Okay. Any, any questions for this section? Okay. In that case, I guess moving on to do, Clemens, you are up. Yes. So, um, I had amended the, uh, uh, the documents with, uh, some, um, explanation of what the kinds of subscriptions are that we, um, ought to cover. And, um, there's effectively. Two models. And we spoke about this, uh, on our call, effectively two models. 
There's a pole subscription where you walk up to some, typically some middleware, but not necessarily middleware. Um, and you use, uh, a gesture typically in the application protocol to, um, start soliciting events from there, uh, which is pole. Uh, and the other model, um, is, um, where you effectively configure some sort of pops up engine or, uh, the producer itself. To deliver events. To a target. Which might be directed at the subscriber itself or might be directed, might be directed elsewhere. Whether subscriber effectively acts on, on behalf of that target. Um, so that's the push model. Um, and ultimately, um, it's a, you know, whether the, um, and so the pole model, sorry. Is. Typically done using some level of pops up protocol. Um, and so I've covered, um, the cases here explicitly explained the, the methods that exist in MQT, MQP and NATS, which is, which are the, the true pops up protocols that we, that we support, which allow you for some level of, of filtering on the event stream. Or on events. And, um, there is no while, while this is practiced also with HTTP. Um, you know, to be able to walk up to a event store of some sort and pull events out. There's no, there's no, there's no way to do that. Um, I would argue standardized mechanism to do this. So I've, what I've done here, what the goal was of this section is to basically prescribe, describe what exists in the existing, um, protocols. With the goal that if you are, um, using an MQT broker, um, and using MQT, MQT broker with cloud events, you know, you know, you know, you need extra magic to manage subscriptions, right? It should be within the scope of the specification. It should be compliant. That to do, you know, pops up cloud events pops up in the sense of this specification, just by using MQT, if you're in that world. 
Um, because, and this is something that I think is important, uh, for compliance in enterprise scenarios where people are just doing, you know, checklist compliance, um, uh, verifications where, you know, they're now asking you, are you supporting cloud events, uh, subscriptions? And, uh, you should be able to say, to say, yes, if you do it with the MQT protocol or with the MQP protocol, with the NAS protocol without having to do any unnatural acts. So that's why, um, I have this in here. And then we've, um, moved on to, um, some of the push subscriptions, which is effective then, then configuring the, the middleware or the, um, the producer. And there is an interaction that obviously needs to exist with, uh, discovery mechanism. Um, because, uh, we think of the, the, at least that's where we, where we were headed. I think, uh, the others can correct me. Um, but, um, I think that's where we're headed. Um, the description API and the discovery API is really clearly distinct. Um, and the discovery API should yield the end point where you can go and subscribe. And should then also yield what the protocol is by which you can, can subscribe. Um, and then you, um, effectively establish a try a, uh, a subscription, um, in the, um, discovery and discovery metadata. So, and then that yields that the discovery, the discovery system should either allow for, you know, a centralized consolidated view of all the event sources that exist within the scope of the system. Um, or it should, if that is so that you have a, um, if you can literally subscribe at the producer, um, that you can also have a local view of all the things that the producer gives you so that your first interaction might just be to talk to the discovery end point of that producer. And then that gives you the necessary network information to, um, um, then, um, you know, establish your subscription. Now moving on a little bit further down to the, the, um, information model that's already there in the documents. 
Right. This is kind of what we stopped. One of the things that we've, we talked about is that we need to go and specialize that for various transports and at a minimum for all the transports that we have as part of cloud in the core of cloud events. And it also made that extensible. Because we have at least from a security perspective and let's start there. We have two contexts. We have the context of, um, being able to subscribe. There's arguably there's three, right? There's a discovery end points and then there's a subscription end point. And there is a delivery endpoint. If you do push, push, um, delivery. And these may all three be different. Um, one thing that we found summarily, um, insufficient is to have literal credit credentials here. Um, and, uh, while that is common, um, and it's nice that it's already mentioned tokens. It should be a mechanism that probably ties into, um, in a fairly explicit way into, um, a, um, uh, into an authorization framework. Um, so I have to go and figure out how to, how to do that because we will have to figure out how these, these, um, you know, different security contexts that we have here are interacting. And then also how stuff works like, uh, renewal, because if you're setting up a subscription and that subscription is long lived over, you know, months or years. Um, it's hard for me to imagine that for interoperable solution, we get around doing something like OAuth where you get a renewal token and then you have to present that renewal token occasionally to a, um, an STS which an access circuit breaker. So we, that's the OAuth too is the model that I think has the broadest consensus around the industry. And it's hard for me to, to imagine that we can get around, um, using a mech, a mech mechanism like this, um, across protocol boundaries here. 
Um, and so we're going to look at the information, that information model for specifically around push for the various transports, you know, what the information is that we need and also what does the, the transfer information, the transport information that we need for instance, for MQTT, you may want to, um, um, configure whether you want to deliver the event at with cause zero or cause one or cause two, which are various level of delivery assurance in an MQTT. Um, and these are all things that I think for the particular protocols that we have, we'll have to go and specify that. So that's kind of the state of, of where we were from the last, um, uh, discussion and I have volunteered to start to do a draft of the, the, the specialized data models, um, for the various transports. All right. Cool. Any questions for Clemens or anybody else in the work group? Uh, yes. A quick one. Uh, I, I didn't read the document, sorry, but, uh, in case of a broker based, um, subsystem, is that, would that be transparent? So would it consist of two subscriptions, one being the broker asking the, the, the origin of the event to submit events to the broker and the second subscription of a consumer to ask the broker to deliver events. So, um, we have, we have, uh, um, this, this transitive scenario with tables for the initial rounds. Uh, we talked about this initially. Um, I think we'll have to get to it, but it adds, um, quite a bit of, of, uh, um, complication. I think the way we get around this, um, is, uh, by, um, so let, let me back out. There is a clear need in some scenarios where you have such scale at the producer side that you want to avoid raising events that you know are not being consumed. And then there is such scale at the consumer side that the producer can't possibly handle that load. 
So in those scenarios of which we have in Azure a few, um, for instance, in, in case of Azure storage raising events, um, we literally go and notify Azure storage from event grid, whether there are any of any subscriptions and only then they turn on the event feed for that particular context. And when there, when all the subscriptions are gone, we tell them to go and turn off that events event feed. So that's a, that is, is something that I know is required. Um, I would, I would like to focus on getting the simple case managed first and then see what the upstream, um, model needs to be to effectively communicate that, um, that subscription up to the source because the mechanism is not necessarily trivial because the producer's relationship to nowhere is usually one where they simply send and you don't necessarily have a back path to them for how we can tell them, um, uh, how to, uh, you don't necessarily have the subscription API on that producer. Exactly. Yeah. The broker. Yeah. So, and I, since, since I'm incapable of magic, um, I, I, ideally the, the, the subscription API is completely symmetric in the way that, um, effectively a producer has its own and this is, there's interaction with the discovery mechanism and I think that's, that's what then ends up, ends up in the, the producer basically says, here's the bunch here, here are the events that I can produce and makes those available, makes that catalog available to the middleware in the middleware basically creates a consolidated catalog. And in that consolidated catalog, it might go and replace the, the subscription end point, end points with its own. And then as actual subscriptions arrive, it starts ref counting on, on those, um, on those events. And we'll then, um, you know, if there is a, is an endpoint that it can call to go and trigger subscriptions for events on that producer, it will do so. 
Um, and otherwise if there's no such subscription, that basically indicates that, um, the producer will produce at all times and it will basically just deliver through that middleware. So there's a mechanism we can go and figure out, um, to communicate that fact and either make it active because you need to have a back path or to make it passive where the producer always produces into the middleware and then basically delegates subscription to it. So I think we can go and find a relatively elegant way for that. Okay. Hands up. Uh, I'm just wondering, uh, it sounds like would all work if you assumed everything was a fully qualified, you know, subscription to like a, uh, a delimited topic. However, most producers have it like a canonical representation where each field is significant. But every time you publish those fields will vary. You know, case in point might be a simple stock market, uh, publisher from a gateway. Yeah. Where there's like 10,000 trading instruments and that would be one field. So if you went to subscribe, how would it know the 10,000 subscriptions? How do you apply wildcards? Where would they be applied? What are your thoughts on, uh, solving that problem? So we briefly touched on this, um, uh, but not too in on the previous call we had in terms of filters. And you'll see that there's a filter field. Um, but filtering is something we'll have to go and support. It's just not yet clear to me. To what level we can achieve, um, agreements on filters. And we'll certainly try, um, on a filters model just, just for, but one thing that I shared with, with our group is, uh, the, the filter spec that we have an MQP. And the choice that we made an MQP initially was to define the existence of filters, but not be particular about those filters and, and not define syntax. And it has taken us now, uh, about seven years to actually get to a point where we, where we have a filter spec that, um, uh, we, um, agreed on. 
Um, also because we were initially all mostly all relying on the GMS precedent and the GMS filter set, which now is no longer, no longer sufficient. So, um, yeah, there's some precedent and there's also in terms of filters, depending on what the product is and the protocol is that you, that you're working with, um, they are already built in and they're already kind of implied by what, what people practice. So I, yes. So I believe that there must be a filtering mechanism, that there must be a way to have. Differentiation by subject differentiation, probably even by custom. Um, but what the filter language is, what the filter mechanism is, is something that we'll have to go and, and, um, discuss in the big group once we get to that point. So that's something that I really want to get to. Okay. I'm sorry. I think your hands are next. Yes. So, um, I think, yeah, the filters are important for fields like subject and so on, but there are also fields that typically more go into the, um, topics like the source and even. The ways we have defined source might also vary. So, um, I'm not sure if in all cases it will be possible to, um, have a fixed list of sources of a producer. There are some fields like, I don't know if the source is something like a workflow ID or something. Um, that might vary a lot. So, so I wonder if, if really also in discovery, a fixed list of sources will be the right thing or if you will have something like templates for source your eyes. Yeah. I think you will have to for subjects. We will certainly have to for sources. You might all, depending on what the scale of those are, um, of the systems are, you may make sense. I'm not sure whether it makes, whether it's useful to have a hundred thousand, uh, different, uh, event individual event catalog registrations for what is effectively the same event, but by different instances of the system. Yeah. That's true. 
So we have so far very fixed list of sources, but still from how we defined it in the standard, um, it also allows to have this a lot more fine grained. I think. Okay. Um, Heinz, is your hand older now? No. No. Okay. Go for it. Uh, yeah, it's just, uh, on the, uh, my original question, actually my biggest concern was more on the, um, subscription discovery where, uh, again, if I have, you know, to use the, uh, stock market type scenario, if I have one field that has 10,000 different potential enumerations in that field, um, I mean, have you got suggestions on how would you do that discovery, which then would lead to, I would need to know some way to, uh, either potentially filter, but more importantly, if I have it as a fully qualified name, how do I know that fully qualified name? If I have 10,000 potential fully qualified names, we're only one field changes. Yeah. Um, great question. I think that's something that the discovery, the discovery partners to solve. Sorry for punting that, but, uh, that sounds like a discovery problem. Mike, did you want to comment on that or just leave it there? Do we lose Mike? Yeah, there is. I mean, I don't know if I have any immediate response. Okay. Okay. Um, Ryan, you want to go next? Yeah. Um, first I'll, I'll echo, um, the idea that a source, um, at least at Twilio, we think of a source as, um, an instance of some kind of object or, or something that emits events. Um, uh, and that is definitely not static. For example, at Twilio, um, you might have an instance of a phone call that has a specific identifier that is short lived, um, just for the lifetime of that call. And, um, you might have consumers that want to subscribe events for particular calls or particular messages, et cetera. So, um, I'll echo, um, just that, that thought, um, the, the second, uh, comment I wanted to make is, um, another thing we talked about on the call was, um, you know, what goes into, we have, uh, this config map. 
Um, so, uh, I would prefer to be opinionated about what goes into there. Um, and I think the, um, sort of verbal agreement, um, that we had was that should be, um, potentially just transport specific configuration, whereas, uh, other, um, other configuration or other, uh, uh, other things that you need to put into the subscription should probably be promoted to the top level. So for example, I did comment on the filters. I know you, you pulled that out. Thanks for doing that. Um, so I think I would like to have just a framework for what goes into the config. Um, and, and what goes into, you know, a top level, um, item. Okay. Vila, your hands up next. So, well, I was wanting to ask about the filter stuff, but it seems like there's other questions about conflict maps. So I don't know if we want to go and deal with those first. And then I can ask. No, go for it. Okay. Yeah. I was kind of curious if you think that there might be a, uh, or if there's even any value in, in tackling this and maybe, um, two phases where one of them would be where you can go ahead and filter based on common cloud events attributes. And then, um, I mean, just kind of going back to the, we've been trying to do this for seven years and maybe this is not the year, but is there a way to go and at least start putting in some, uh, standard filters that we might be able to do and then, um, learn from that experience. So what we've done, what we've done on, on event grid, for instance, is, um, instead of having a complex filtering filtering model, um, we, um, have initially affected the three filter conditions. That's our simple filter model, which we've done effectively, which is a prefix and suffix fill. So there's a, there's a full match and prefix and suffix, uh, uh, condition on type subject and source. So that's the, the, um, the minimal, the minimal set that we have. 
So you can effectively, if you're looking for, um, you know, a blob created events from a, from storage and you only want to have the events for JPEG files, right? You make a, um, you make a filter for the type blob created. Uh, the source is, uh, then effectively matching the container, what you, where you want to have that from. Um, and then the, um, uh, the subject is a suffix filter on dot JP, JPJ. And that's, and that's kind of, and that's the, the, the simple condition. And then we added also a, um, a direct match table for attributes that can set on that, on the, on the event, um, which is effectively just a, uh, you know, key value pairs with the expected values. And if they match, match directly, then, um, that's also matching. So we can start with something that's, that's that simple and see how that's, whether that's sufficient. Yeah. Cause it just seems like we can say we can do, we can agree on anything. And I think that there's some things that we kind of all agree on that would add some value to the users today that we could properly start and those seem like reasonable things to look at. So just wanted to throw that out there. I'll, I'll be happy, I'll be happy to be surprised if, if we can agree on that will be great. Okay. Um, I'll go and then Heinz will jump over to you. Um, and I'm not actually looking for you guys to do more work, but it does dawn on me that in this, in the cloud events spec, we're working on that. We did end up producing a document that talked about, you know, what are the current events that are produced by the various products out there today, just as a informational kind of a thing. I'm wondering as you guys are producing or doing your analysis to figure out what we want to put in this spec. 
If you're gathering information about what's already out there today and that text isn't necessarily proper for a proper spec per se, but it may be information for the primer that we're going to produce after this or something like that. If you guys can just jot that down to some documents in place, even if the bottom of this one, just so that people can see what's out there today and they can sort of compare and contrast from what we're doing versus what's out there today. Um, I think that might be interesting if you guys for both sections of the spec could, could do that. Like I'm not necessarily looking for extra work, but if you happen to be doing it anyway, you might have to just toss it in the docs. We have it as reference. And with that, I'll hand it over to Heinz. Sorry, just a quick question again is it sounds like the filtering is starting to address almost like a content routing or something that might be like a virtual service and service mesh. However, most, if you're targeting the messaging systems, all the filtering is based only on the topic or Q name. So I'm just wondering how are you going to blend those two together? And if the intention is like a pre-filter where once I received the message, maybe based on a topic or the event based on a topic that I would decide if I want to see it or not, those are usually frowned upon because if you are going to an async push model, the last thing I want to do is not have the broker do the filtering, but then push everything of the client and expect the client to do that filtering, which is really the role of the broker. So I'm wondering how are you going to address where you're kind of going towards content routing, but it doesn't really apply for messaging type scenarios. So I would, I disagree with that because the, so there are some brokers which are relatively simple. So NATS has a subject based model. 
MKGT has a, literally has this topic, the topic model, which is based on the original MQ, a topic model, but every most, most modern, most modern enterprise brokers actually have a fairly sophisticated way of doing filters. So if you look at active MQ, if you look at, you know, MQ, the modern version of topics in MQ, if you look at Tipco, if you look at Service Portal, we have, all of them have a SQL based filter language that can go and effectively inspect any aspect of the message and then run based on those. And we have in... Actually, that's not correct. It only does inspection on headers. They use selector functions, which are also kill performance and are usually for all those protocols not recommended because they were originally designed for addressing that didn't support a hierarchical address. So that was the way they could do some more. So I don't think that's true because we certainly, we certainly recommend that you use them and our customers are using them a lot across, thousands of customers. And what we see customers that are coming with lift and shift workloads from other brokers into the Azure cloud, mostly everybody is using message selectors for all kinds of scenarios. So they are very, very common. And yes, they only operate on the metadata of the message, but that is exactly in tune with the spirit of what we did in cloud events because we chose literally to ignore data, which is the body of the message and focus on the metadata. Trust me, it does tremendously affect performance. Having worked for 13 years. It's actually, so I know that it affects performance, right? But it's also a fact that they're intensively used in all those broker products. I mean, I run one. So I mean, we can tell what that costs and customers know what that costs, but we have enough firepower for customers to use them and they do. So I don't think we necessarily want to rattle on filters here because I think this is going to be one of those topics. 
It's just going to take a long time. So is it fair to hold this? Okay. In that case, let me jump over to Mike then, because I think his hand is up next. Yeah, I just wanted to point out that we need to align the filter piece between both the subscriptions and the discovery. So I put some filter-specific stuff in discovery, which is different than what's up here. In particular, what I said there was: I think it's interesting to think about providing some CloudEvents attribute-specific filtering, as these are meant to transit CloudEvents, but also acknowledging that certain producers are going to have different domain-specific languages that need to be there in order to power the subscription at all. So that's what I wrote in the discovery section. So it sounds like there's some disagreement and maybe some more discussion that needs to happen on filtering. Yes. And that's why I said initially, filtering will be contentious. Yeah. But at least we can blame Vlad, because he's the one that brought it up on today's call. So, any other topics for discussion relative to this part of the spec? Any other questions for Clemens, or does anybody from the rest of the working group members have any comment? Okay. In that case, let's jump over to the CloudEvents stuff. Two things here, I said, maybe three. First off, I did notice that... Actually, let me show you. Hold on a minute. The readme was missing two... Actually, I can see it here. The Avro spec and the Kafka spec were missing an entry under the v1.0 column. So let me show you what it looks like. On the main one right now, that's blank and that's blank. And I can't for the life of me remember if I just messed up or whether we actually chose not to point those to 1.0 for some reason. Does anybody remember? Because if not, I'm going to assume it was just a typo or a mistake on my part and we really need to add them. Okay. Not hearing anybody thinking that we did that on purpose. Any objection then to adding that? Okay. Cool.
Thank you, guys. I just wanted to make sure I wasn't forgetting something. Now, here: issue 545 had a bunch of different topics in there, but one of the things they asked about was the size limits, in particular the 64K stuff. They're wondering whether that's a hard limit or just a recommendation, that kind of thing. And Clemens, I think that mainly came from you, and my recollection was that it's more of a recommendation, and that it's the minimum we expect people to support. And while we don't necessarily come right out and say you can't go past that, it is a kind of strong hint to keep things relatively small, right? Yeah. And so I thought, okay, that might be useful to add to the primer. So I added some text for the primer. I'm not going to suggest that we approve it here, because I just put it there yesterday, and there's a rule that says everything has to be there for two days. But please, when you get a chance, take a look at it, and obviously in particular Clemens, if you can wordsmith it as appropriate. Since you wrote the original text, I'd appreciate you guys just looking at this stuff. Yeah. And Kristoff should also chime in, because I remember there was an epic fight. That's a good point. Yeah, I'll ping him as well. That's a good one. Thank you. Okay. Any questions about that, though? Not necessarily about the text itself, but just in general. Okay. In that case, one topic that might be worthy of discussion is this one. So Scott, I'm going to lean on you here for a sec, since you're definitely involved in this discussion. Do you want to summarize what the concern is with this issue? Yeah, I think it boils down to this: in the spec we allow for flattening of a JSON object into the structured JSON encoding, as in the third box on the screen.
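The 64K figure discussed above is being treated as a floor that all parties are expected to support, not a hard cap. A producer that wants to stay within that recommendation might check the serialized size before sending; this is a sketch of that idea, and the constant and function names here are made up for illustration, not taken from the spec or any SDK.

```python
import json

# 64 KiB: per the discussion, the minimum event size that consumers and
# intermediaries are expected to support; a recommendation, not a hard limit.
SIZE_RECOMMENDATION = 64 * 1024

def fits_recommendation(event: dict) -> bool:
    """Check whether a JSON-serialized event stays within the recommended size."""
    serialized = json.dumps(event).encode("utf-8")
    return len(serialized) <= SIZE_RECOMMENDATION

small_event = {
    "specversion": "1.0",
    "type": "com.example.ping",
    "id": "1",
    "source": "/demo",
    "data": {"msg": "hello"},
}
print(fits_recommendation(small_event))  # → True
```

Because it is only a recommendation, a producer exceeding the threshold is not spec-violating; it is simply relying on behavior not all intermediaries are required to support.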
And it's actually really difficult for consumers of the event to understand how they should process the data argument if it's optimized for JSON in this way. The spec kind of goes through it like: well, if it's there, then you have to look at whether it's encoded as a string, because it might be a JSON string. And if you're trying to marshal that into a structure, then the string might be a JSON-encoded structure that you can turn into bytes and then into an unmarshaled version of your structure. Versus the third example, where that first-pass string escaping has already happened. So there's this inspection case where you have to do some data reflection to actually understand how to unmarshal this thing correctly. So, my hand's up, let me ask a quick question. Scott, in the second example, this is the one that really confused me, when you say that this can be interpreted as sort of encoded JSON. My understanding, from reading the spec this morning anyway, was that from the CloudEvents perspective, this is a string. The fact that it looks like JSON is irrelevant to the spec. Right. The application, when it receives it, can do some inspection and say: hey, look at this, I notice curly braces, I'm going to decide this is JSON and do something else with it. But in the interaction from CloudEvents to the consumer, this is a string; it is not JSON. So I wanted to know. That's right. But how you have to interpret it is based on your application; the spec doesn't really see a difference between the two. Doesn't it? Because the spec says this is a string, and down here it's JSON. I can flip the encoding. I can take a canonical version, and two encoders could produce either of those, and they'd both be valid for the specification. They'd be spec-compliant results. I've got to think about that.
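Scott's concern is that the same canonical payload can legally show up in more than one shape in the JSON structured format, so a consumer has to inspect `data` before unmarshaling it. A rough illustration of that ambiguity, with hypothetical attribute values; the `extract` helper is an assumption about how a consumer might cope, not anything the spec prescribes:

```python
import json

payload = {"temperature": 21.5}

# Shape 1: `data` carries a real JSON object.
event_a = {
    "specversion": "1.0",
    "id": "a",
    "source": "/demo",
    "type": "com.example.reading",
    "datacontenttype": "application/json",
    "data": payload,
}

# Shape 2: the same payload, but pre-escaped into a JSON string.
event_b = dict(event_a, id="b", data=json.dumps(payload))

def extract(event):
    """Consumer-side inspection: if `data` is a string, try a second-pass
    JSON decode -- exactly the 'data reflection' step the issue complains about."""
    d = event["data"]
    if isinstance(d, str):
        d = json.loads(d)
    return d

print(extract(event_a) == extract(event_b))  # → True
```

Both events would round-trip to the same structure, which is the crux of the debate: two compliant encoders can emit different shapes, and the consumer cannot tell from the envelope alone which one it received.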
Clemens, could you go ahead? You're next. The spec, the JSON format, says: for any other type (before that is the binary path), the implementation MUST translate the data value into a JSON value and use the member data to store it inside the JSON object. So effectively there's an implication that it is a JSON value, which includes JSON objects. Which means in the JSON case, when the data content type declares that this is JSON, then it's expected in the JSON format, which is the only case where that can happen, that the data content, if it's not data_base64, meaning it's not binary, is always a JSON value, which means that case is a string. If that's what the language says in the spec, it probably changed, because I remember reading it as a MAY. Look at 3.1, third paragraph. Hold on a minute, let me bring it up. Come on. Oh, yeah, you're right. Sorry, wrong spec. JSON format, 3.1. Yeah, it's this stuff right here, I believe, right? This is effectively "for any other type," where it's not binary. So that was changed in October, I think, after a discussion between Ellen and myself. Mm-hmm. Yeah, I'm pretty sure that MUST used to be a MAY. Was that before we had data_base64? I think that's true. Look at the blame. We won't go there yet. Although I do like the idea of blaming Klaus, so maybe we can just do that. Okay, I just got back into the car. That's why I picked on you; I noticed you. Okay, so what's the blame anyways? I thought I was happy with this. I am happy with this. Yeah, we just need to figure out how to respond to the person who opened up this issue. It's great that you would assume the same behavior for the JSON encoding, but the JSON encoding is different, and it's especially called out that way. I already responded that it needs to be a JSON value. And in the case of JSON, it is already a value. Yeah, no escaping needed.
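The decision path Clemens is quoting from the JSON format, section 3.1, can be sketched as a decoder: if `data_base64` is present the payload is binary, otherwise `data` is already a JSON value, used directly for JSON content and found as a string for text content. This is a paraphrase of the rule as discussed on the call, not a reference implementation:

```python
import base64

def decode_payload(event: dict):
    """Rough sketch of the JSON-format rule for recovering an event's payload."""
    if "data_base64" in event:
        # Binary path: the member holds Base64-encoded bytes.
        return base64.b64decode(event["data_base64"])
    # Non-binary path: `data` is already a JSON value (object, array,
    # string, number, ...), so it can be used as-is. For text/* content
    # that value will simply be a string.
    return event.get("data")

binary_event = {
    "specversion": "1.0",
    "data_base64": base64.b64encode(b"\x00\x01").decode(),
}
json_event = {
    "specversion": "1.0",
    "datacontenttype": "application/json",
    "data": {"ok": True},
}
text_event = {
    "specversion": "1.0",
    "datacontenttype": "text/xml",
    "data": "<doc/>",
}

print(decode_payload(binary_event))  # → b'\x00\x01'
print(decode_payload(json_event))    # → {'ok': True}
```

This matches the resolution reached later in the call: for JSON content the value is interpretable directly, while for text content the consumer finds a string, because there's no other way to carry it.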
Okay, well, I need to go back, or we need to go back, and double-check Kevin's last response to see whether he's okay or not. And if not, we need to respond back to him to make him happy. So the thing is, even for that case: when it is not binary, right, so it's not base64 encoded, then it is always a JSON value. And then if the data content type says text/xml, then you're expecting text and you'll find the text in the string, because there's no other way. But if it says JSON, well, then you have a JSON value that you can interpret directly. Yeah, that's how it's written now. That's not how it was written nine months ago or whatever. Well, okay. Well, that's years ago. Eons. Okay. So at least it seems like everybody on the call here is realigned, which is good. So we just need to figure out how to respond back to, what's his name, Kevin. Yeah. I have a standard template: "I'm wrong." I think we all have those. Okay. I think that was it in terms of issues that I wanted to bring up. Actually, no, there's one more. One of the really, really old issues, which may have been opened up by Sarah ages ago, was: how do we add or remove, what's the word, admins? You know, the folks who manage the phone calls and manage all the gorp for us. Because everybody's covered in terms of voting rights, but relative to the administrative tasks, I think there are three of us: me, Mark, and Ken. And we probably need some documentation in the governance doc that says how you add or remove people from there. Every now and then I think about this, and I was going to write a PR, but I didn't get around to it. But let me ask you guys whether you're okay with the general direction of what I want to write up. My basic thought process is: if you want to remove one of the admins, then all it requires is a greater-than-50% vote from the voting members.
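The threshold proposed here, for removing (and, as the next point, adding) admins, reads as a strict majority of all voting members, not just of votes cast. A hypothetical helper, assuming that reading; the function name and interpretation are mine, not from any governance doc:

```python
def admin_change_passes(votes_in_favor: int, total_voting_members: int) -> bool:
    """Strict majority of ALL voting members: strictly more than half must vote yes."""
    return votes_in_favor > total_voting_members / 2

print(admin_change_passes(5, 9))   # → True  (5 > 4.5)
print(admin_change_passes(5, 10))  # → False (5 is not strictly more than 5)
```

The second case shows why "greater than 50%" matters: with an even membership, exactly half is not enough.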
If someone wants to be added as an admin, again, a greater-than-50% vote of the voting members. So basically that's how you can add and remove administrative folks. Anybody have any comment on that? Does it seem too low, too high, completely the wrong direction? You are not allowed to leave, but otherwise that's okay. We'll have a discussion about that. Okay. That's a good point, though. I need to add text that says how someone gets out of this role if they want to. You obviously can't force them to stay, but I should add text to that effect. So thank you, Clemens, for the joke, but I do need to put something in there for that. Like a good bus factor. Moving on. Any other comments or questions about that proposed direction? Okay. Not hearing anything, I'll write up the PR so we can try to close out that longstanding issue. Okay. Any other topics for discussion today? All right. Did I miss anybody on the call? I think I got everybody, but just so I can make sure your name's there, if not, let me know. Give me a couple of seconds to double-check. All right. In that case, we are adjourned. We'll talk again next week. Thank you. Bye. Bye. Thank you. Bye. Bye.