Hello Simon. Hello Doug. So happy — [inaudible]. Sorry to hear about Dan, by the way. I didn't know. Yeah, to my surprise, I didn't realize he was sick. I guess that explains why he was rolling off stuff; I didn't know. Yeah, I didn't know whether he stepped down as director — or executive director — because of that, or whether he was looking for something else to work on, or whatever. It is sad. Well, I guess age is relative — I'm getting up there — but he didn't seem that old. No, he didn't, actually. Are we covering pull request 722 today? Which one is 722? That's the one on schema registry replication. Yeah, I was going to have Clemens at least introduce it. I don't think we can approve it today, because I think it's a bigger change than people know about, and they'll need more than one week to review. Okay, I just had a question on one of the comments that you made regarding the epoch. I've been gone for a few weeks, so I wasn't sure if that was still open. I'll ask when you guys cover that. Okay, yeah, sounds good. Matt, are you there? Oh — it's that Matt. I'm going to have to... okay, thank you. Hi Eric. How's it going? It's a slow start, but not bad. Yeah, I hear you. Hey Tommy. So I gotta ask — I'm on a different external monitor today. Does the screen share look any different than what it normally looks like? Maybe crisper? Personally I think it does, but that's just my comment. I know people commented maybe three or four sessions ago that it was hard to read. I personally think it's fine. Okay, I was curious, because I'm still on my wife's monitor today. I always thought it was a better resolution, but I never knew whether the monitor actually influenced what you shared — it would seem weird to me that it would influence it that much, but maybe it does. So, first, you need to upgrade your monitor; and second, I don't know if the monitor makes a difference.
I have a 4K that people complain about — I have to change it every time I share. Well, yeah, that's a different issue; that is true. I know the resolution will change depending on the monitor, that's for sure, but I didn't know whether the actual crispness would change as well. I guess they could be related, so never mind. All right, hey Kristoff. Hey, how are you? Good, how about you? Yep — and [inaudible]. Hey Doug. So please let me know — and I mentioned this last week — the guys working outside were further away; now they're within 10 feet of me, so let me know if I need to go on mute more often. But if you guys don't hear the banging, I'm going to be blown away. Hey Timur. Hi Doug. Hey Brian. [inaudible] Hey Brian. Hello Doug. Hi Manuel. [inaudible] Hey, hey — and somebody else went flying by. Oh, Lou, are you there? Lou Dang? Lou or Anish, are you there? Hey Doug. Oh, I'm here. Yeah, I got you. Sorry, I was [inaudible]. Yep, not a problem. Just give it another minute or two and I'll get started. Morning Mark. Happy CloudEvents Day. Yes, it's a sign that the week's almost over — it's so exciting. Okay, I am not even going to attempt this name: if your name is Z-B-Y-N-E-K, are you there? Yes, I'm here. Hello. Excellent, hello — and I can't remember if this is your first call or not; you'd think with a name like that I'd remember. If it is your first time, can you put it in the chat? I've been here before, maybe a month ago. Oh, okay, so I probably have your company name associated with you then. Yeah, I'm from Red Hat. Okay, perfect, thank you. Somebody else jumped in there. Oh, Klaus. Yeah, hi Doug. Hello Christian. I got you. Hello. We'll wait till three after and then I'll jump into it. Actually, let me — oh, hey Lance — ping Clemens, because he opened an issue and I want to make sure we talk about it. So let me ping him. All right, let's see, it's three after. Anybody I missed? I think I got everybody. Okay, cool. All right, let's jump right into it.
Okay. Skip the AIs. Anything from the community people would like to bring up that's not on the agenda? Okay. Just a reminder: we have two office hours for KubeCon. We'll be looking for volunteers — please reach out to me if you're interested or can join the session. Okay, this week we will have a discovery/interop call after this call, as soon as this one ends. We do have some topics to discuss there, so if you're working on an implementation, please try to join if you can. In particular, I'm curious to know whether people are running into the same spec issues that I'm running into; I don't want to do PRs if I'm the only one seeing them. Timur, anything from the workflow subgroup? Yeah, we're currently working on a release — doing all the branches and tags and everything for both the specification and the SDK — trying to get that done before KubeCon. It's going to be version 0.5, that's what we decided, and from there I'm going to work on a 1.0 release, hopefully within the next — I don't know how many — months. All right, cool, any questions? All right, cool, thank you very much. All right, before we jump into the PRs and stuff, are there any topics people thought I should add to the agenda that I skipped? Okay, in that case: this one's from Jim, and he's not on the call. This is, I think, just a syntactical thing — just adding it to the list of protocol bindings. I guess I don't know why I hesitated, but does anybody see any reason why this should not be added to the list of protocol bindings? Although, wait a minute — this is in WebSockets. Is this a WebSocket thing, or should this do something at the README level? Does anybody know the difference? Oh, this is to use the protobuf event format with the WebSocket protocol binding. So you're saying this is okay the way it is? Yeah, it just needs to be formatted, but it's fine. Okay, anybody disagree? Can you show the description again? Yep. All right, cool, thank you.
All right — oh, Clemens, you're on, good; I was hoping you'd join so you could talk to this one. I was also hoping I would. Now, this one I think people will need more time to review, so maybe you just quickly intro it for people. Yeah, that's what I wanted. So what this does is it adds a replication model to the schema registry, which is something that arose in our world, since we have already implemented this, and so I wanted to share it. We're going to read this from the bottom to the top. Okay, hold on a second — here you go. Not quite that low. Okay. All right, so here there are two events that I'm adding: there's a delete event and there's an update event. That's the first time, I think, we define CloudEvents in a CloudEvents spec, unless there are events already in the discovery spec. So I'm basically having two events for state changes. The update is really an upsert, but I just want to keep that simple. The source is always the base URI of the schema registry that's raising the event; that's also the way you subscribe, so when we go and register a schema registry in discovery, then the subscription from this subscription endpoint would basically also be that.
The event type is then, you know, the update or the delete — they each have a title, and in the second one, the delete, that should say delete. Then the subject is the object that has been created or updated. We have this path hierarchy, if you remember — schemagroups, the name of the schema group, then schemas, the name of the schema — so that path is what would be used here as the subject, so you know what object this is about. And then the time of change. There would be no body, and the reason for that is that those objects, particularly the schema documents, might be quite large and might exceed our assumed maximum size of 64 KB per event. So the assumption here — and that's aligned with guidance that we give customers for these kinds of notifications — is that you fetch the document. The use case for these events is replication: think of an event broker that is connecting to another event broker in a replication model. You have an event broker that you're publishing to in one application, and then you have another application that also has an event broker, and you're replicating events between those two. Now assume that the publisher to the first event broker can only see its event broker, and the consumer from the other event broker can only see its event broker, which means they might not have access to the schemas — which means, along the replication path for the events, we also need to be able to replicate the schemas. So what this will do is allow the event brokers to subscribe to schema changes amongst each other, so that they can effectively replicate the schemas between them. And then I'm describing in this spec the rules. When you establish such a relationship, what you would do first is that the subscriber, which
is called the target, would first walk up to the source registry, traverse the object graph, and basically grab whatever schemas are available within the schema group or the entire registry. Then, whenever there's a state change, it will be notified and grab the delta as the events indicate. With this we're now introducing — now scroll up all the way to the top of the changes — yep — okay, I'll go to the very, very top — here we go, yeah — I'm introducing the notion of an authority. The authority, right here. Yes. Okay, and that's the key piece that requires a bit of explanation. So the registry that we have right now is one that works nicely locally: you have one schema registry that is effectively for one event broker, and that works for anything that's local to it. It has no notion of foreign or local things — you store schemas in it and you get schemas from it. If we're creating these sorts of federations — if we have these event brokers where we're now doing effectively these forwarding routes — we need to have a clear notion of what the home is for particular schemas and who owns them. So there's the simple case that I just described — simple in the sense of a straightforward use case — where you have broker A and broker B, and those two need to talk to each other, so you need a disambiguator between those two on who owns which schemas. And then there's a more sophisticated case where you might have a central schema registry, which means you are in, you know, a big bank or a big healthcare provider, and someone has the job of being the god of all schemas — those people exist. They preside over a central schema registry; no developer can ever check in a schema unless it's been cleared by the schema god, and so they
are maintaining, you know, the grand central registry for everybody. So they would have an authority, which would be schemas.contoso.com, and they manage it — that's what that authority would be. In that case, all the schemas would be replicated from that central registry out into registries that are affiliated with the eventing system, and elsewhere. And you would replicate into other registries because, of course, you don't want your eventing infrastructure — which might be doing, you know, tens of thousands or millions of messages a second — dependent on some central registry tool that might not scale up to that point. You really want your registries in an infrastructure that can cache them, probably hold them in memory, and hand them out as quickly as possible, without necessarily having a dependency on that big central store. So replication is something we'll want. So we have cases where, in fact, we have a publisher authority, as I call it — or producer authority — where the producer has some schema that's embedded in their code, and as they're publishing the first time, they will publish the schema into the registry, and that schema will flow along the event replication path and be propagated into other registries so that the consumer ultimately gets it. In that case the authority is really the producer URI that represents that producer. And you have the central-authority case, where the authority is, you know, the central registry URI that represents the central registry. Think of that as kind of the root URI for namespaces, if you think about it in XML terms or in JSON Schema terms — that is what the authority is. That authority is really directly equivalent to the authority as you think about it in a URI. That's the idea, and so the authority becomes a central concept here for
disambiguation, where you can now merge schemas from many authorities into one schema registry. The distinction between those is that the schemas from foreign authorities are read-only: you can only watch them and use them, but you can't edit them. You can only edit — append to and change — the schemas that are under your own authority. That's the idea, so that we try to avoid replication conflicts and try to honor, effectively, where those schemas came from. So much as an introduction; otherwise, I think it would make sense for you all to go and read this. We believe that's a model that allows, you know, multiple domains to arrive at a common pool of schemas and share them in these kinds of integration scenarios, which we will certainly have a lot of with CloudEvents. Ultimately I don't think we're going to have the grand central schema registry in the sky; mostly, every eventing service — every eventing infrastructure — will have its own registry, if only for reasons of decoupling. And all the schemas that land in a registry through replication from different authorities are effectively sitting there as a cache, and because we have the principle here of making schemas immutable, the cache management is very easy: you can only add, and things effectively never expire, because a version is stable. So much for the monologue. Yeah — are there other places you want me to scroll to? No, I think the rest is all something that people need to just go and read. Okay. Just a quick question for you on the authority stuff: does the URI that you use here have to relate to a schema registry, or can the URI that's being used here be completely unrelated to any schema registry, so it's just a string at that point? It's like, uh, XML names — this is the same
principle; it's a name. Ultimately — one of the biggest problems we have on the internet, ha — one of the unfortunate things is that we don't have a URI scheme which is useful for just "hey, this is an identifier," because urn is unfortunately weird. I'm just using http here, but it's really meant to be an identifier, so this https://schemas.corp.example.com is really just a name. Okay — the reason I was asking was whether a schema registry would know whether it is the authority for something based on a string-compare kind of thing, and I think you're saying no; if they happen to match, that's coincidence. I have not made this explicit, or haven't really made a hard rule around it. A registry might certainly have a notion of whether it is the authority, but I don't think the authority URI necessarily corresponds to its network endpoint. So schemas.contoso.com is more the URI for the owner — the identifier for the schema god of Contoso — than the endpoint at which the registry that the schema god manages lives. Okay, so it's fair then, I think, to say that an implementation of the specification cannot determine whether it is the owner of a particular schema based on this property alone; there's some other information someplace else that would tell it that. Yeah, that would have to exist. Ultimately I think a registry needs to have a notion of what is coming from the outside and what is local. Okay, cool, thank you. So I'm writing there — if you scroll up to authority, look at the actual attribute — right up, or down — down, there we go, yes. Here: for schemas imported from other registries and applications, the
attribute is required to be not empty; values that are empty or absent upon import must be explicitly set to the implied default value, and the implied default value is the base URI of the API endpoint. So that's kind of what I'm saying: if you haven't set this, then it's this network address. Okay, thank you. All right, anybody have any questions? David, did you have one that you wanted to ask? I did, but I'm changing my thoughts now; I'll go back and reread based on Clemens talking. Okay, cool. Were you trying to push back on something? No, I was reading some comments in terms of uniqueness and trying to identify changes; I need to go back and think about it again. I'm going to go back and reread it, then I'll come back. I don't think we're meeting next week — we're meeting the week after, is that right? Are we? Yeah, we're meeting every week. Are we not meeting next week? I wasn't sure; for some reason I thought that was off the schedule. Maybe it was just my calendar. Maybe you're thinking of KubeCon, but that's in two weeks, I think. Okay, my dates are off then. I'll make my comments in the PR for next week, and I'll go back and reread this. Thanks. Comments — yeah, I appreciate every single comment, because this is obviously a big change. A couple of comments: Klaus had made a comment in the chat that he would find that authority model potentially also useful for discovery. Yes — I mean, it's quite similar, I would say. Okay, yeah — so ideally we end up with the same concept for these replication cases, because ultimately I think this is the same story that we need for both. Okay, any other questions or comments? All right, not hearing any — thank you, Clemens. Please, everybody, go and review that; let's see if we make some progress next week. Next — I believe the person who opened this
one is Remy. Yeah, Remy. Hopefully this is relatively easy. So this one's kind of directed at you: for the subscription API, the biggest change here is — let me do this instead — basically, what he did was: you had the ID as a query parameter on all these operations, and he wanted it to be part of the path instead, and I think there are other people who would agree with that change. Did you have a strong feeling one way or the other? I think it was the get on the collection — the query parameter was part of the very first request, and that's something which was very difficult for us to implement. Interesting. Let's see — I see you're looking at part of that; it's right here. Is this it? It's not here. I think it would be wise to open the subscription API spec, not the PR. Okay, hold on. What's the thing you deleted? That's because I didn't know — at the bottom, I guess; that's the category. Yeah, querying for a list of subscriptions — there you go. Oh yeah, I think this was the one which was really, really tricky; I mean, I think nobody implemented this. No, it's at two-four-two — the retrieving a subscription. Okay, the parameter is an ID. Yeah, this is the one that bothered me. But it's not necessarily this definition, because this is pretty abstract; it was the fact that it was a query string here instead of a path parameter. That must have been a bad day. Yeah — because, where was it — here, this is the part that was the issue. Oh, okay, I get it. Oh yeah, you know what I did here: I think I've just been a little cheap here in terms of the number of operations, because what you now have is: a plain GET on subscriptions gives you multiple, and then you do subscriptions slash ID and that gives you the particular subscription, and I think I just had
those operations collapsed into one. I'm fine with that change. Okay. Symmetric to discovery? Yeah, yeah, I think that's fine — and I'm not necessarily always self-consistent, so this is a welcome correction, and it also makes sense. I think where that comes from is: I may have looked at some prior art here, and that's where I came from, but I agree that that's cleaner. Okay — Anish, your hand is up. Yeah, I was just wondering, because in order to be consistent — in the discovery spec, weren't we also trying to find ways to query discovery collections? You mean some sort of filtering mechanism? Yeah. I don't think we made it very far down that path, but yes, that is definitely on the to-do list. Something like OData semantics was mentioned as an option, and it's on my to-do list to evaluate, but then I got sidetracked by other things. Okay. So I think the biggest change here was just the query parameter moving into the path on the GETs, and then I think he just expanded some things. Clemens, did you want to take time to review this, or do you want to let it in and we can fix it through other PRs later? I have no objections — but I'm not the only person voting. No, I know, but you're the main driver of this one, so I want to get your take on it first. That is the truth; I have no pushback. Okay, anybody else have an opinion? This one, I think, has been out there for at least a week or so, so it's not like it's brand new — yeah, nine days ago. Does anybody have any concerns? Would anybody prefer to wait at least another week before we merge it? Okay, not hearing an objection. Personally, I would prefer to get it in — not because I think it's the best thing we can do, but because that change of moving the ID into the path is something that I know everybody working on the
interop stuff wants, so it'd be nice if the specs were consistent with what we're actually implementing. So, last chance: any objection to approving? All right, cool, thank you. Next. All right, revisiting this one we started talking about last week. Yeah, we talked about that already. So, a couple of things I changed since last week's call. I did not see — wow, ignore the typo in there — I did not see a way for a subscriber to know what filter dialects were actually supported. The spec only talks about the basic one, but it does imply that you can define other ones, so I thought it'd be nice if the discovery API told us what dialects were actually supported; so I wanted to add a field there. Now, Scott, I think on the interop call that we had on Monday, suggested that rather than repeating the word "subscription" everywhere here, we just say "dialects" and "config" and stuff like that, which we could definitely do — I don't have a strong feeling about that — but I'd rather do that as a separate PR where we rename multiple things at the same time. For right now I just called it — ignore the typo — subscriptiondialects, with a list of dialects in there, and it is optional. Does that make sense to people? And here's a non-typo sample. Okay, next. Oh, I just define it down here. Okay, this one is a big one. So, based on last week's call: we talked about how, Clemens, you were saying no, this really wasn't supposed to be filters, it was supposed to be a singleton — but then through Slack you said you were wrong, it actually was supposed to be. Okay, so I think we can skip that one. On this one, the config: did you give this any more thought? And to refresh people's memory — hold on a minute, my phone's ringing — this is supposed to be a map of additional configuration values for the subscription itself that are not directly related to the transport, because we have a mechanism for specifying transport things; rather, this is a
set of configuration knobs about the subscription itself. The example I kept giving was: if you had a ping service, how often do you want the ping service to send a ping, basically. But there were other examples mentioned as well. Was that a question for me? Yeah — just because I think you were going to go off and think about this one some more. Yeah, and I think that does make sense, because of the configuration, effectively, of the subscription reader, I would say. In my head there's a source, and then there's a subscription thing that kind of attaches to the source, and then there's, you know, the way you push those messages out. That's based on some prior art, from OPC UA in particular, where you can subscribe to an object that you need to actively walk up to and read from — where you need to take samples — and that would fit in here. So that's what I was also talking about last time, and I think for that kind of scenario those settings make sense. Okay, anybody have any questions or comments on that? Okay, I had one. The value — I believe I wrote this up so that it could be any of the data types specified in CloudEvents, basically, right? String, integer, time, whatever. As I was coding this up — I can kind of do that; that's not a big deal, at least I think I can. I'm just wondering, though, whether that makes life harder for people, and whether we should just mandate that it's a map of string to string. This might be a question more for the folks who are actually implementing these things, because when you deserialize, you have no idea what to deserialize it as, and it may not always be very easy to get type information passed into your stuff. Anybody have any thoughts on that at all? Scott, you came off mute — did you want to say something? Yeah — is this in regards to flattening
the filter dialect and conditions? Right — no, this is separate from filters; this is a brand-new map of additional configuration knobs outside of filters. Got it, okay. I mean, we can apply it to filters as well, because I think we might have the same question there. I don't have a strong opinion on the config; I think if it's meant to be opaque and passed through to the subscription broker or manager, then the middleware doesn't seem to need to care. Okay — Anish. I think we were discussing having a schema for the subscription config, right? So that it doesn't matter what sort of type you want to use within the configuration; it just needs to have a valid schema for unmarshaling. Okay, thank you. And Klaus. So, who is the target of that config? Is it the event broker, or someone in the middle, or is it the producer? Because in the case of the ping source, it's more parameterizing the producer, and that seems a bit strange to me. Well, to me this field is actually used on both ends. I think it's used by the subscriber, so they know what values they can put in there — and they need to know, for example, whether everything is a string versus an integer — and then, yes, I agree, the event producer, or something on the subscription-manager side of the house, will use this to determine how to possibly configure either the subscription or the producer or something behind the scenes, because to me that's an implementation detail. It's just — I mean, the subscription may contain filters that apply to a lot of producers, and it's kind of weird to me that I could also pass configuration values to producers here. I guess — so you're pushing back on the idea of whether there are configuration knobs at all on subscriptions? No, I'm looking for an example where this configuration is aimed at the infrastructure, or some additional information that's really needed here. Oh — whether the infrastructure needs it? Yeah. Oh, that I don't know, to be honest. When I was coding this up, to be honest, my event
producer is the subscription manager, so it's all in one; I don't have separate middleware. Right — but, granted, it's just a demo thing, and I wasn't sure if other people were going to run into the same problem, where I need to somehow know whether a particular field is an integer versus a string and parse it differently. I was trying to avoid the situation of hard-coding: "oh, I know a field is called interval, therefore it must be an integer." I want to avoid hard-coding stuff if I can, at least for part of the processing. But if everybody else is fine with saying, no, let it be whatever type they want and the schema is going to tell you, then I'll just deal with it. Anish. I just wanted to bring up the point that we probably need to outline the difference between the subscription config and the protocol-specific settings, because I still don't see a clear picture of the difference between the two. There are very messaging-system-specific settings which need to be propagated by the subscriber, right? And ultimately they end up inside the event producer. So where do we draw the line — is that a valid question? I don't know. Yeah, I think it is a valid question. Do you want to open an issue to make sure we come back and add some text around that? And obviously the result could be that we kill off one of them — if we can't have a clear delineation between the two, then maybe that's justification for killing one of them, right? I don't know, but it's a good discussion to have. Yes, cool, let's do it next week. Yeah, I'll raise an issue. Okay, cool, thank you. Okay, any other questions on config? Okay, keep going then. Next was filters. Okay, this one — this is the big one that I want to talk to you about, Clemens. Okay, let me actually hide the comments, make it easier to see. So, in this particular — okay, in my proposal, what
I'm suggesting is: filters is an array of conditions, and inside each condition you can specify not just the type of the filter — meaning the dialect — but also the property and value. So in this case we're doing a basic one: we're just going to check the prefix of the type property, and it has to match com.example. Okay. What I changed here was: dialect was not inside each condition before, which means you can now have an array — and these are all ANDed together — where the operands of the AND can actually use different dialects. And I think Klaus raised a concern around that: whether that's adding additional complication, and whether we should only have one dialect per filter grouping, which implies one dialect per subscription. I was pushing back a little on that: to me, once a subscription manager can support more than one dialect, I don't think it's a huge burden to say, well, you can then use any of them. As long as it knows how to evaluate each one individually, it can AND them all together, because they're almost — not recursive, but independent — processors, right? Each individual condition should be independent of the other conditions, and you just AND them all together. So I didn't see why we would need to restrict it to one dialect per filter. I agree with that; we have prior art on that one. In AMQP we have a new filter spec that's just about done, and we have composition filters there — AND and OR — so it's kind of like here, where we have an implied AND, and anything that fulfills the filter archetype can be used there. So you can mix and match between simple property matches and SQL. So I agree. Okay. Now, in fairness, Klaus — and I do want you to speak up here — I was thinking about your concern, and it did make me wonder whether there would be implementations that would try to take these filters and map them into some sort of SQL query thing or something
It would be really challenging for them to mix and match dialects that way, trying to encode all those ANDs into one gigantic SQL query. I didn't know whether that's a realistic thing to worry about, or whether it's interesting but nobody would actually ever do it. Klaus, do you want to chime in? First of all, I wrote my comment because I did some research on how we originally meant this, since I think last week there was some confusion about whether it's one filter or multiple. I just wanted to be sure that our original intention, one filter with multiple conditions, was made clear; if we now decide to change this, then okay. For me it's just a feeling that mixing dialects is complexity. I don't know what other filter dialects would look like at all right now, but we can also decide once we have more of them. Okay, Scott, your hand's up. My one concern with putting dialect inside the objects in the filters array is that it becomes much more difficult to unmarshal those inner objects, because you have to do a first pass where you unmarshal the object just to find the dialect, and then you have to do it again to figure out what the other properties actually are. If we moved it out, though, we'd be limiting ourselves to basically one dialect per subscription, right? No, you could have another object, so next to dialect it would be nested; right, another array. You could do special things: you could have "and", and that's an array, or "or", and that's an array, or something. Are other people running into that concern, where they want these three things to be in a nested object so it can be parsed separately? I would prefer OR to be a separate subscription. I'm not sure I follow what he's talking about. That's an interesting topic, though. I think what he's talking about, in particular for Go, is that what Scott wants is to be able to look at the dialect and say: okay, these other fields are part of the basic dialect, therefore I'm going to pass it to my basic-dialect unmarshaler, whereas if the dialect is, quote, "complex", he could pass this entire sub-object to a different JSON unmarshaler and just dump it all in there; otherwise he has to do it twice. Well, actually, Scott, even if it was a sub-object you'd almost have to do it twice anyway, wouldn't you? No, I can walk the object graph: I can look at it as it's coming out of the tree, and if the top-level thing tells me the dialect of the sub-leaf, I can unmarshal that sub-leaf. If I don't know the dialect beforehand, I have to inspect it and then re-unmarshal it. Okay, I didn't realize you were doing it basically token by token; gotcha. So are we going to change the spec so that you can be more comfortable coding it? It's an interesting question. I think the information model with the dialect in here seems sound; I believe the reasoning. If we were talking about an object with 400 fields I would understand the concern, but really we're talking about something relatively small, and even with other dialects it's not going to explode into something enormously complicated unless you're dealing with a system with several thousand filters. I understand, but at this level I'm not sure that's something that will cause me concern in terms of perf, whether the dialect is outside or inside, or whether we make this a nested grouping. I would prefer clarity in the information model over optimizing it for a particular implementation concern.
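For what it's worth, the two-pass concern can be illustrated in Python (the class and dialect names are made up for illustration; in a statically typed language like Go, the second pass would be a real re-unmarshal into a concrete struct rather than a dict dispatch):

```python
import json

# Hypothetical concrete type for the "basic" dialect's condition.
class BasicCondition:
    def __init__(self, type, property, value, dialect="basic"):
        self.type, self.property, self.value = type, property, value

def decode_condition(obj):
    # Pass 1: peek at the generic object just to find the dialect.
    dialect = obj["dialect"]
    # Pass 2: decode the same object into the dialect's concrete type.
    if dialect == "basic":
        return BasicCondition(**obj)
    raise ValueError(f"unknown dialect {dialect!r}")

raw = ('{"dialect": "basic", "type": "prefix",'
       ' "property": "type", "value": "com.example"}')
cond = decode_condition(json.loads(raw))
print(cond.value)  # com.example
```

With the nested alternative being debated, the outer key would name the dialect up front, so a decoder could pick the concrete type before touching the inner object at all.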
Slinky, your hand's up. If it makes you feel any better, Clemens: if you look at the Pod spec in Kubernetes, at Volumes for example, they did what Scott is describing because of limitations in the code, and a lot of the Kubernetes APIs are stuck with this problem, but then they only need to be as fast as etcd is. If you guys really think you need this... personally, I'm torn. I'm not going to make anybody's life harder; it's just that I find this relatively straightforward. I agree with you technically, Slinky, and that's why I'm torn here too, because there are so many things about Go... I love Go, don't get me wrong, but there are certain things around its JSON parsing I would really love to fix, like the fact that it doesn't know how to handle unknown properties. It just seems to me, and maybe it's a bad thing to say because you don't want to completely ignore the implementation side of things, but looking at the spec, what I see here is exactly what I would expect. I would not expect the nesting, because the nesting provides no added value from an understanding-the-data-model perspective, as you said. Now, if we had written this differently and said, for example, that instead of separate entries it was a list where the key was "basic" instead of "dialect", and then an object with these things under it, I could buy that; but it gets super clunky, because all of a sudden you have a "basic" object grouping all the basic things, then a "sql" object grouping all the SQL things, and then all of that gets ANDed together. That is weird. I'm not saying it's right; I'm just saying I could buy a nesting if you went down that path. From a straight spec perspective it's harder for me to buy a nesting here, but I don't want to completely discount the fact that it may make the implementation harder for some people. I'm not sure I'm going to completely give in to it, but I understand it. I don't think I'm convinced that dialect should be mixed inside the filter array; personally I think dialect should be moved up to the top level, where it describes how you interpret the filters. So let's say you had a regular-expression dialect and this basic one: you're basically suggesting that within one subscription you could not do both? Exactly, that's exactly what I'm saying. But why? Because usually you're going to pick an engine that you're going to subscribe with, and I think it would be interesting to let that subscription broker say: actually, I only do regex. Well, in discovery every single service gets to say which dialects it supports; everything has to support basic. But I would certainly want to do a prefix match on the subject and then maybe some complex parsing of some other metadata field. Or you do that in the complex parsing too. Yeah, but Scott, that would require every other dialect to support basic. Yeah, or you torture everybody and make everything work in regex. I can't remember how to do a prefix match in regex; sorry, I'd have to look it up. That's why you don't have any dollar bills. Something with a star, I think.

So let me ask you this, Scott, since we're running a little short on time: is this something you feel strongly enough about to hold up the PR, or is it something we can re-examine later? I mean, it's not technically impossible; this is exactly how the GitHub API works: you get the payload, you have to inspect it, and then you can re-unmarshal it. It's just cumbersome, and now instead of doing that once for the payload, you're doing it once per item in the filters array. Okay. I don't want to ignore the issue, and I would like to revisit it, but I do also want to make forward progress. Would you be willing to hold your nose for a while, to see what kind of pains other people go through when they implement it? Yeah, that's fine. Okay. Any other questions or comments? Ignoring the weird indentation down here, this is what the config would look like, just so you know: here's a string one and here's an integer one. Doug's trying to start the tabs-versus-spaces war. I swear it was a mistake, but I do have a very strong opinion on this; I'll save it for later. Okay, I think this is just textual gorp matching what we did there. Yeah, this is just syntactical: I lowercased these, because I think that's what you actually meant to use in the actual thing itself, and people might get confused about whether these are meant to be used inside the JSON or not, so I just lowercased them; syntactic-sugar cleanup. Oh, this one, Clemens: I was assuming that when we do the string compare for all of these, for the basic dialect, it will be case sensitive, including taking into account leading and trailing whitespace. Do you agree with that? Yes. Does anybody else disagree? Okay, cool. Just some examples; here's another one. And the reason why is that as soon as we say case-insensitive, we're back in the case-folding nightmare, and we don't want to go there. Okay, I don't know what I did here; I think I just did some wordsmithing.
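To make the agreed comparison semantics concrete: for the basic dialect, matching is plain string equality, case sensitive, with leading and trailing whitespace significant. A tiny sketch (the function name is mine):

```python
def exact_match(attribute_value: str, filter_value: str) -> bool:
    # Deliberately NO .lower()/.casefold() and NO .strip():
    # "Com.Example" does not match "com.example",
    # and " com.example" does not match "com.example".
    return attribute_value == filter_value

print(exact_match("com.example", "com.example"))   # True
print(exact_match("Com.Example", "com.example"))   # False
print(exact_match(" com.example", "com.example"))  # False
```

Skipping case folding is exactly the point made above: once you allow case-insensitive matching you inherit Unicode case-folding headaches.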
Now, I'm also missing an "e" in "543 will be defined with the subscription". Okay, thank you; hold on, I'll get that. And then down here, it's not really spec-y, but I just wanted to show what things would look like, especially when we start putting the ID in the path versus inside a query parameter. Just some samples, because I like samples. And that was basically it. Any other questions? Oh, and I'll fix this: there's an extra "the" here that I keep forgetting to fix. Any other questions or comments on this one? One general question: did we at some point decide to stick to the kind of property naming we used for context attributes, without hyphens, without camel case, just the way we have it here? In APIs it's more common to have something like camelCase notation. Which properties in particular? Subscription URL, subscription config; they're just one word, without any upper or lower case distinction. The reason we did that for the context attributes is that we were putting them in HTTP headers. Yes, sure, for those it's clear; I'm just wondering if we need to do it here. Ah, we probably don't. So you're saying, for example, this could have a capital S in it? Yep. Some of my colleagues actually asked me this, and I was wondering why we did it. We had hard technical reasons to keep the context attributes all lowercase, but here we can make things a little more normal again and use camel casing. That's a good point. Klaus, do you want to open an issue, or even a PR, for that? I'm not hearing anybody object, so it sounds like a good idea. Oh, Clemens, I forgot somebody: Mattias in the chat mentioned that right now the schema doc shows it as "subscription", not "subscriptions". I actually added the "s" here too, just as a coincidence. Are you okay with this being plural? Yeah, okay, cool.

Any other comments, questions, or concerns on this? Do you want more time to think about it? Any objection, then, to approving? Okay. Now we have five minutes left. Anish, you said you wanted to quickly talk about this one, right? Oh yeah, this one. I completely forgot the discussion we had on that issue, so I ran out of context. I thought at that moment we decided that we probably don't want to deal with anything related to security or the data-integrity part, and that's what I was trying to sum up. But I do see some important points being brought up by Eric, so I'm not sure whether I should address them right now, or whether we should wait until we tackle the security and data-integrity issues in the future. Can one of you, either you or Eric, quickly describe the specific issues you're wondering whether to tackle now versus later? Is Eric on the call? Yeah, Eric is there. Eric, do you want to chime in? Sure. They're not that important, but I was taking slight issue with the notion of declaring that those matters, integrity, confidentiality, and whatnot, are not core to the spec. I think they're very important and we should consider them such; it's simply that we are not providing any solution for them yet. The second piece is that I think we should say that extensions may exist for solving these problems, and that over time we expect them to accumulate into the spec as unofficial extensions, as I've described in my comment. So do we want to accommodate these things right now? Because I thought, from the issues, and Clemens could probably also pitch in, we said that we probably don't want to deal with the security business as of now; that was one of the reasons I was behind the statement that it's not the core intention at the moment. Yeah, but by "deal with" I think the intention is that we don't want to specify a concrete solution; we certainly don't want to get in the way of one. I'm really just fine with it going in as is. Okay, so hold on a minute; let's refresh everybody's memory. Without Eric's comments, this is the current text. I'll give you folks a second to read it, even though we're, as always, out of time. And remember, this is for the primer, not the spec. Anybody want to speak in favor or against? Slinky: the part from "we leave it up to the implementer of the spec" to the end, do we really need that part? I think the last sentence, "every implementer has a different principle for enhancing their security model", really... does anybody have an opinion on that? Do we lose anything if we drop that sentence? Yeah, but I would still like to emphasize the point that we can introduce this as an extension field, so that later on, if we decide to address these issues, we can make those extensions official down the line. That's what I think Eric suggested too. At least it gives a hint toward how you would address these properties inside your payload, so I still think it's kind of important. So let me turn the question around... oh, I'm sorry, we're out of time. I don't want to rush this one. I know it's just the primer, not normative text, but I don't want to rush it either way, so why don't we tee this one up for next week and take the comments to GitHub itself. So, quickly, did I miss anybody for the roll call? I think I got everybody. Okay, in that case, please stay on the line if you can, or if you're interested in the interop work that we're doing, and we'll talk to everybody again next week. Okay, thank you, everybody. We'll start the next call in about a minute or so. All right, Doug, I'm back. I thought
you were going to vanish. Well, it got canceled today. Excellent. I do have to vanish, though. Oh, okay. Well, in that case, Clemens, before you go: I'm not going to ask you to answer now, but take a look at this section down here. I started adding some questions earlier today as I was doing the implementation, and I'd like to get your opinion on them, just on whether we should change the spec to deal with these things. You can read them later; nothing critical. Okay, bye. Wait, Clemens, also do that same event thing for the lifecycle of the discovery spec; send me a note on Slack. I really gotta go. Okay, bye. Scott, I thought you had an AI, an action item, to do that, didn't you? I had an AI to implement it, but Clemens went and one-upped me with an actual specification for it. Oh, see, I thought you were going to actually add it to the spec. I shall, once it works. I was very excited when I saw his stuff in there, because I agree with you: I think we should start doing that for the discovery spec. Okay, let's get started. I'm not sure how you guys want to tackle this. Okay, so we addressed this one, and it was a yes.

So I had a question for you: should there be something in the subscription so the sink knows which subscription each event is related to? Actually, I guess it's more this question: if I get an event and I want to stop getting those darn events, there's nothing in the CloudEvent itself that tells me which subscription it's related to, not even an ID. Is that just an implementation detail? Or should the subscription API specification define a CloudEvents extension, optional to use, so that everybody could use the same property? I think it'd be good to define an optional extension for this. That's kind of what I was suggesting with the signature stuff: making an extension that describes how you can add a signature to the CloudEvent that's coming from a particular producer; that same PR or extension could also describe why you're getting these events. You lost me a little; I think it's your use of the word "signature" that's throwing me. Well, take the GitHub events, for example: when you subscribe, you can pass GitHub a secret that GitHub will include as part of the signature-generation process, and then you can take the message, use your known secret, and figure out for which secret this particular event was destined. That's interesting, because in that model you're assuming each subscription to GitHub has a different secret. They should. Well, you may not, though; I may have five different subscriptions but just use the same token, or secret, or whatever it's called, across all of them, because it's mine. Yeah, that's okay, because that's effectively a single subscription with multiple registered events. But say you have different consumers at different auth scopes: you want to know, when the webhook hits the distributor, which of those consumers is intended to get this event, because you don't want to fan it out to everybody who doesn't have the scope for it. Refresh my memory: there's nothing in the GitHub message itself that tells you which secret was used, right? There is; let me look it up. Oh, is there? Okay, I don't remember that. It's going to take a minute; you can move on. Okay. Manuel, your hand's up. Yeah, so regarding the GitHub signature: the secret is used to sign the payload, so you wouldn't find the secret itself in the message. But isn't what we're looking for here the same as a correlation token? And that's application specific, because the subscription protocol has a way of managing the subscriptions, and when the events are all multiplexed on the same channel, this is something the subscription protocol, or the user of it, chooses to do, or to keep separate. If I wanted them separate, I could use separate transport channels for my subscriptions, separate connections. Well, our concrete usage of this in Knative: we have a single webhook that's registered with GitHub, but we have the ability to have multiple consumers, or multiple, we call them sources, register that they would like to get webhooks for a certain GitHub payload. When we're creating that subscription we point it back at the same ingress point, but we use different secrets when we're doing the subscription handshake with GitHub; then on the webhook side, at the ingress to the cluster, we look at the webhook signature to figure out which user created the actual GitHub subscription, so that we can route it to the correct triggers. But the signature doesn't actually contain the secret, right? The point is that you can generate the signature with your secret, and you can figure out which secret generated the signature. Yeah, but if you have 1,000 secrets that could have been used, you have to go through all 1,000 until one matches, right? If that's the limitation you have, then you should route it to a different consumer; for small numbers, you can calculate which secret it was intended for. Okay, but I think you're touching on a slightly different problem than what I'm focused on. Let's stay in the Knative world: say the same user, in the same namespace, sets up two different GitHub event sources, both pointing at the exact same GitHub repo, so in essence two different webhooks. Now the sink in that namespace gets, basically, two streams of events, and that sink wants to stop one stream. How does it know which stream of events to kill? There's no unique identifier in the event coming back.
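The GitHub-style mechanism being described is an HMAC signature computed over the raw payload with a per-subscription secret. A sketch of why identifying the subscription means trying each candidate secret, which is the N-secrets cost just mentioned (the subscription IDs are illustrative; the "sha256=" prefix mirrors GitHub's X-Hub-Signature-256 convention):

```python
import hashlib
import hmac

def sign(payload: bytes, secret: bytes):
    # HMAC-SHA256 over the raw request body, GitHub-style.
    return "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

def find_subscription(payload: bytes, signature: str, secrets: dict):
    # Try each known subscription's secret until one reproduces the
    # signature; constant-time compare to avoid timing leaks.
    for sub_id, secret in secrets.items():
        if hmac.compare_digest(sign(payload, secret), signature):
            return sub_id
    return None

secrets = {"sub-a": b"secret-a", "sub-b": b"secret-b"}
body = b'{"type": "com.example.push"}'
sig = sign(body, b"secret-b")
print(find_subscription(body, sig, secrets))  # sub-b
```

With a handful of subscriptions this loop is cheap; with a thousand it is exactly the scan being objected to above, which is what motivates putting an identifier somewhere explicit instead.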
To know which subscription, which event source, to delete, basically. That's what I was looking for: some sort of subscription ID or something as part of the CloudEvent. So, when I was discussing the signature stuff with Alex Collins... sorry. When I was discussing this, I think the actual distinction should be made in the URL that you put into GitHub: if you want, you can have the same URL target but append query parameters to make a difference, and the same could be used for authentication. When you want the HTTP callback binding for CloudEvents used from GitHub with authorization, and I think this was also Clemens' point, you have to use the query parameter from the OAuth specification. So likewise, if you have two subscriptions, or two callbacks registered in GitHub, the URLs being used should be different, in my opinion. I like that idea, and I'm mad at myself that I didn't think of it, because it's something we've done in the past; it just completely eluded me. Do people agree that that's the way to solve it, rather than introducing a brand-new field someplace? Just say: give us a unique identifier someplace in the URL, probably a query parameter. I don't think you'd have to require it, though. No, no, but if someone walks up and asks the exact same question I did, our standard answer would be something like this, and maybe we can talk about that in the primer; but I agree we don't need anything formal in the spec to support it. Yeah, I think that's right: it's an implementation choice of the implementer, and it's nothing the spec has to say how to do. Great. Because, best case, the source is the thing that's sending you that subscription; worst case, you're dealing with some sort of middleware.
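The URL-based answer suggested here can be sketched as: bake a per-subscription identifier into the callback URL you hand to the event source, then read it back at the ingress. The parameter name `subid` is made up for illustration:

```python
from urllib.parse import parse_qs, urlencode, urlparse

def make_sink_url(base: str, subscription_id: str):
    # Register a distinct callback URL per subscription by appending an
    # identifier; the host and path can stay the same for all of them.
    sep = "&" if urlparse(base).query else "?"
    return f"{base}{sep}{urlencode({'subid': subscription_id})}"

def subscription_from_request(url: str):
    # On the ingress side, recover which subscription a delivery belongs to.
    qs = parse_qs(urlparse(url).query)
    return qs.get("subid", [None])[0]

url = make_sink_url("https://sink.example.com/events", "1234")
print(url)                             # https://sink.example.com/events?subid=1234
print(subscription_from_request(url))  # 1234
```

Nothing in the event itself changes; the routing information lives entirely in the callback URL, which is why this stays an implementation choice rather than a spec change.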
Then the source is not the actual source of where the event is coming from, or of the subscription, and it gets confusing. Right; I think you have to be aware of what your topology is. Okay. My other question here: I'm trying to remember why I even wrote it. What made me think of this was, I was basically wondering whether the discovery endpoint entry for a service needs to include the source value, so that when the event gets received by the sink, it knows what source value to expect. I don't know why I even thought about that, so unless someone chimes in to say "hey, that sounds like a good idea," I'll skip it for now and try to remember why the heck I thought of it. I'll hold off on that one. Next: should we support a NOT operator on filters? It seems to me that this could be a really popular thing to do; in other words, I want all events that do not have a type prefix of "com". I think if you do this, you kind of have to start looking at what the workflow group is doing with all the operands, like NOT and AND and OR and maybe more. Excellent. Anish, did you want to chime in here? Yeah, Scott took the words out of my head: if we're going to do NOT, we have to also explore the other ones. Well, just to make sure I understand: when you say the workflow spec, I'm not talking about doing NOT or OR or AND between events; this is all within one event. Yeah, I'm talking about between the properties. But would the workflow spec actually get into that? They have this concept of making an object that has all of these operands that you might want to build filters from. Okay, I'll take a look at that. I mean, these are really complex filter operations, so there will be cases down the line where you would want an aggregated filter criterion, not just one filter: a combination of two filters in an AND, combined with another filter in an OR. Because the filters object is an array, you would probably want to also support operators down the line for sure, but it would definitely get messy. Yeah. Oh, what about this: what if you had a dialect that was the opposite? We have "basic", which does exact, prefix, and suffix; you could also make a filter dialect called "block" that has exact, prefix, and suffix as well, but is the exact opposite of what the basic filter does. Yes, you're right; that would solve the problem and not complicate the spec. I'll take a look at that. So, we made the default filter language JSONPath, because we think of all the workflow data being passed around as JSON-like structures. Could you help me out: why "block", where does the name come from? It's the "woke" term. And you shouldn't do JSONPath parsing here, because you really shouldn't be filtering on the payload; these filters are intended to apply to the envelope only, not the data. How about regex? Well, I think someone's going to want to do a regex one at some point. Yes, sure, but again, only on the envelope properties. What about something more specific for source? I mean, it's a URI, and simple prefix or suffix is not always sufficient if you want to filter URIs; maybe you want to filter on the authority section or just the path, something like that. Would that be a regex thing? I don't know. Well, we're just getting started on the topic; other dialects are possible. Yeah, I think you're hinting at maybe a fuzzy or glob-type dialect where you can put in a star or something. I can't get past the word "block"; it throws me for a loop too, and I'm not sure why that's the "woke" one, but okay. You know, we're trying to be inclusive. Well, I wasn't picking a gendered word; if anything, I would have picked "not".
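The hypothetical "block" dialect floated here is just the basic dialect with the result inverted, which keeps every condition an independent predicate and avoids adding operators to the spec (names assumed, not spec text):

```python
def eval_basic(cond: dict, event: dict) -> bool:
    # Basic dialect: exact, prefix, or suffix match on one attribute.
    value = event.get(cond["property"], "")
    op = cond["type"]
    if op == "prefix":
        return value.startswith(cond["value"])
    if op == "suffix":
        return value.endswith(cond["value"])
    return value == cond["value"]  # exact

def eval_block(cond: dict, event: dict) -> bool:
    # "block" is the exact opposite of "basic": a match means drop the event.
    return not eval_basic(cond, event)

cond = {"type": "prefix", "property": "type", "value": "com."}
event = {"type": "com.example.created"}
print(eval_basic(cond, event))  # True
print(eval_block(cond, event))  # False
```

A "block prefix com." condition ANDed into a filter array then expresses "everything that does not start with com." without introducing a general NOT operator.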
How did "block" pop into your head? That's the new word for allow lists and block lists. Interesting. Okay, this question I think we already answered on the call: people are okay with keeping the config types generic instead of just string-to-string. Okay, so those are the questions I had for you; thank you for helping me with those. Does anybody else have any questions or topics they want to bring up? I did add my discovery endpoint out there; it has one service inside of it. I've not had time to go implement the new specs that got landed today. Yeah, I'll try to merge those today. I think mine actually does implement the filter stuff using the PR we approved today, and it seems to work, anyway in my quick test. I never got anywhere past the dummy data that I have, so I need to go make up some services that you can subscribe to, to get silly events. Okay. Yeah, I just have the ping service, and that's it; I don't have anything else. Okay. Are there other things people want to talk about here? I mean, I tried to figure out what the next steps are for us. I don't know if we do, like, an open call to interact with stuff: we make a blog post and say, here are all these event sources, and one section is an open call to anybody that's exploring this to come interact with these things. Sure. I feel like we probably need to be a little further along with our implementations before we do that, though. Oh, of course, but yeah, I like the idea; sounds good to me. Scott, did we make the discovery endpoint public, so that we can register for a subscription API out of it? I have not done anything, but I will add a dashboard so you can do that. It'll be slightly temporal, like scale-to-zero once you stop interacting with it, but that's okay. Yep. I was able to mostly wrap up my implementation. So we do have a call on Monday, right? I guess the same interop call? I can't remember; say it again? Do we have an interop call on Monday as well? Did we agree to have one every Monday? I don't know; we can. I'm okay with that. Can you guys do noon Eastern on Mondays on a regular basis, nine a.m. Pacific time? Yeah. Okay, tell you what, let's do that. I may need to move something, but I will try to adjust. I'll send out a note, hold on... okay, I'll send out a note and try to get it onto a calendar someplace to make Scott happy. Okay, anything else, or is it just a matter of going back and coding and hitting each other's endpoints to see what happens? Yeah, I wanted to ask about the subscription filtering, because I think in the original text, someplace, it says that filters like rate limiting and so on all go into subscription parameters, and those are subscription-protocol specific. But I wondered if anybody knew a filter dialect that would allow rate limitation. Would it be a filter, or a subscription protocol that does rate limitation? I know some database stuff allows this, some query languages, but not the kind of event subscription that I usually work with. So would some sort of rate-limiting parameter actually be a filter, or would it be something more like a config thing? Yeah, something like "in every one-minute window I want to receive only one message, one event". Yes. I'm not sure I would say that's a filter; I would make that a config thing. It's in protocol settings. I just think that's a very specific feature of the particular subscriber you're connecting to. Some APIs have rate limits on outbound connections; GitHub, for one, I don't think will send you a bazillion webhooks even if you ask. I was checking WebSockets, and I think there is an IETF draft on rate-limit headers in HTTP, but I don't see that going anywhere. So it's in section 3.2.1, so thank you; the options object. Yeah, that's
where it is currently, the protocol settings. Interesting. I mean, that's a very dispatcher-specific attribute, right? So this is something which, again, I agree, goes into protocol settings or subscription config, one of the two, so that the dispatcher is basically aware of what sort of throughput it needs to consider when dispatching those events to the subscriber. Right. Yeah, I don't think we have any examples that take that parameter, but I think it was just more of a "by the way, this is where this kind of setting would go if you needed to implement it as a subscription broker." Yeah. The interesting idea I had was: if you took the timestamp and boxed it into intervals of one minute or whatever, and then said that you only want the first, a singular, occurrence in each of those, that sort of thing could be expressed as a filter, if there were a dialect for it. But okay, if there is no dialect, no language, then protocol settings it is. I am kind of curious about this use case, though, because I could definitely understand it being a transport-level concern, but I'm also wondering whether there are other use cases where it's not a transport-level thing but is actually on the event-producer side. Maybe you're asking the producer to use this interval to query the system, get the current state, and send that out as an event, and maybe you want it every minute versus every hour; that way it's not a transport-level thing, because it's not controlled by some middleware sending it out. Is that a valid use case? Yeah, if you set the sampling, that would also work, but it's rather that different consumers could have different requirements for this, so I see it as being a subscription thing. Yeah, but if it is something that controls the sampling at the producer side, then it wouldn't be a protocol setting, right? It would be something more like a config setting, I would think.
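The "first occurrence per time interval" idea just described can be sketched as stateful sampling, which also shows why it fits awkwardly in a filter dialect: filter conditions are stateless predicates over a single event, while this needs memory of the previous window (the window size and class name here are mine):

```python
class WindowSampler:
    """Pass at most one event per fixed time window (stateful, unlike a filter)."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.last_bucket = None

    def allow(self, timestamp: float) -> bool:
        # Box the timestamp into an interval, as described above.
        bucket = int(timestamp // self.window)
        if bucket != self.last_bucket:
            self.last_bucket = bucket
            return True  # first event seen in this window
        return False     # drop further events in the same window

sampler = WindowSampler(60.0)
print(sampler.allow(0.0))   # True  (first event in window 0)
print(sampler.allow(30.0))  # False (same window, dropped)
print(sampler.allow(61.0))  # True  (new window)
```

Whether this state lives in the producer, a middleware broker, or the dispatcher is exactly the config-versus-protocol-settings question being debated.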
But I think the point is that it's not necessarily specific to the events that are flowing through that broker; it's about the subscription that you've asked that broker to do for you. So regardless of what the event shape is and what you want to filter, you could put additional requirements on the broker for how to handle this particular subscription, and that's where something like QPS or rate limiting would go. Yeah, but that's rate limiting at the transport level, right? Not on the producer side of things, because the producer, I assume, in that case is still going to be producing however many events it's producing. Right, but the broker could choose to do a queuing buffer for you. Say I have some wacky subscription broker where the protocol setting says I only want one request a second, but I have a bursty producer on the other side: the broker could queue and only push one event to me at a time, because that's a feature of that broker, not a requirement of the subscription. Right.

No, I agree. I guess what I'm saying is, I'm wondering whether we need to make sure we can support both scenarios: one where you are strictly modifying transport-level semantics, whether that's done by the subscription manager or by some middleware like you're talking about, Scott; but then there's also the use case of the producer itself needing to say "I'm only going to send an event once every five minutes, because that's what the person asked for."

I have a new way to talk about this. One way to think about it: it's probably okay with the spec if you add this rate limit as a filter, but in that case, being a subscription filter, it will drop events that occur during the part of the window in which you don't want events. So if you only want one event every minute and you have a producer that's producing once a second, you're going to drop 59 events and only get one.
But if there's a protocol-setting rate limit of one per minute and you still have that same producer, you're going to get a queue that backs up, because the broker is going to try to deliver every event. So it depends on what you intend that rate to mean. Right, and that's what I'm trying to get to: I think there are use cases for both. It makes perfect sense to say I'm going to control things at the transport level, and in some cases that will mean, yes, things are going to get buffered, but I'm still going to get every event. And then there's something at the producer's level that means I want to actually control the number of events, not just the number I receive, but the number that are actually sent. Sure.

Yeah, and I'm trying to figure out, and I thought that's what he was trying to get to: if they did want to control the producer side of the house, is that a filter or is it a config thing? My mind was originally saying it sounds like a config thing, but the way you just described it to Scott, I could imagine someone describing it as a filter. I think you have to understand which entity you're talking to. Yeah, it's probably config and a protocol setting for the broker-based stuff.

All right, so to circle back around to your original question: is there something you think we need to do right now from a spec perspective, or is this something that falls into the "guidance" bucket, where we just need to put something into the primer so people know how to code these things up? Initially it was just personal interest, whether you guys knew anything about how to do rate limiting over the transports we have.

We support extensions in protocol settings, don't we? I assume so, yes. That's true: there's a minimum set for each protocol, but there's nothing that says a particular broker can't accept more properties; there's nothing defined there, right?
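The contrast drawn in this exchange — a filter drops excess events, while a protocol-setting rate limit queues them — can be sketched with a minimal dispatcher that delivers every buffered event, just never faster than one per interval. This is an invented sketch, not anything defined by the spec:

```python
from collections import deque


class RateLimitedDispatcher:
    """Deliver every queued event, at most one per interval.

    Unlike a filter, nothing is dropped: a bursty producer simply
    causes the queue to back up, as described on the call.
    """

    def __init__(self, interval_seconds: float):
        self.interval = interval_seconds
        self.queue = deque()       # buffered, undelivered events
        self._next_slot = 0.0      # earliest time the next delivery may happen

    def enqueue(self, event) -> None:
        self.queue.append(event)

    def dispatch(self, now: float):
        """Return the event delivered at time `now`, or None if
        the queue is empty or we are still inside the rate window."""
        if self.queue and now >= self._next_slot:
            self._next_slot = now + self.interval
            return self.queue.popleft()
        return None
```

With a 60-second interval and a burst of three events at t = 0, deliveries happen at t = 0, 60, and 120: all three events arrive, just spaced out — the opposite trade-off from the drop-59-of-60 filter behavior described above.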
Yeah, and I think we can treat it as an extension and let the subscription manager send that information to whoever tries to implement that flag.

Is anyone planning to implement any other protocol besides HTTP? I am planning on NATS, but I'm really, really unsuccessful so far. I might do Kafka if I can get around to it. The only way I was able to do it with NATS was using an HTTP broker, a kind of webhook dispatcher, so basically a cycle, and that in a Kubernetes context too, which is why my implementation is delayed. What about you, Scott, what were you going to do? I was only looking at HTTP, because that's the contract it has in Knative, but it occurs to me that a good test might be exposing other protocols too. I would be surprised if Clemens didn't come in with something like MQTT, but he would need someone else to talk to to make it more interesting. Yeah, exactly. I mean, it's not that hard; I think NATS wouldn't be that hard either. It depends on the subscription, on what the sink is: if the sink isn't listening on HTTP, then it becomes tricky, but if the sink is HTTP, it becomes easier, because the subscriber is HTTP and you can find any number of middlewares to dispatch to it over HTTP.

Well, actually, I think maybe we were thinking about this wrong. To implement this today, I think you would have to have a filter function send events through NATS to the subscriber, right? I think every subscription would turn into maybe its own subject inside the NATS server, and you'd use the NATS server as a transport. If you just try to send events directly in, you need to hook in somewhere to actually implement the filtering; I think you need to do that on the other side of the broker. Yeah, with NATS I was doing everything inside the subscription manager so far and then dispatching over HTTP to the sink, which was really lame.
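The "one subject per subscription" idea floated here can be sketched with an in-memory stand-in for the NATS server: the subscription manager applies the filter on its side, then publishes matching events to a subject derived from the subscription id, and each subscriber listens only on its own subject. The class names, the `subscriptions.<id>` subject scheme, and the topology are all invented for illustration:

```python
from collections import defaultdict


class InMemoryBroker:
    """Stand-in for a NATS server: maps a subject to delivered events."""

    def __init__(self):
        self.subjects = defaultdict(list)

    def publish(self, subject: str, event) -> None:
        self.subjects[subject].append(event)


class SubscriptionManager:
    """Filters on the manager side, then fans out matching events
    to a dedicated per-subscription subject on the broker."""

    def __init__(self, broker: InMemoryBroker):
        self.broker = broker
        self.subscriptions = {}  # subscription id -> filter predicate

    def subscribe(self, sub_id: str, predicate) -> str:
        self.subscriptions[sub_id] = predicate
        # Each subscription gets its own subject; the subscriber
        # listens only on this, using the broker purely as transport.
        return f"subscriptions.{sub_id}"

    def on_event(self, event: dict) -> None:
        for sub_id, predicate in self.subscriptions.items():
            if predicate(event):
                self.broker.publish(f"subscriptions.{sub_id}", event)
```

A real NATS-backed version would replace `InMemoryBroker.publish` with a client publish call, but the point of the sketch is the routing: filtering happens before the broker, so the broker itself stays filter-agnostic.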
Yeah, but I'm still thinking about what other ways there could be besides that.

Let's see, 10:30, time to go. Somebody's bell is going crazy. Oh, I live close to a church. Okay, anything else you guys want to talk about? In that case, I guess we'll talk again on Monday; hopefully we'll have more implementations. I'm going to try to start playing with your endpoint soon, Scott. Okay, well, ping me. It's kind of demo code, so the subscriptions aren't going to work; the only one that works is the discovery endpoint, and the code might be super old, so I maybe need to spend this afternoon updating it. It actually does something; you can hit the endpoints, but don't expect the subscriptions to work right now. Okay, fair enough. And let me know if you get a chance to hit mine and how it goes. Oh, I will. Sounds like a threat.

Oh wait, sorry, you have a discovery service, but there's no path here, no /services? Oh, I'm sorry, yeah, it assumes that you'll stick /services on the end there if you want to get the services. Okay, I assumed you had added /discovery or /subscriptions. Well, actually, I do have... is it /services or /discovery? I think it's /services, isn't it? Well, I put the discovery in handlers. Oh, maybe I'm wrong. And then I think it's /subscriptions. I think those are my URLs, but if you actually hit this endpoint, you'll get routed to the right one. I do think, though, that according to the spec you have to put /services on the end of this one. Yeah, okay. Cool, I'll take a look. All right, anything else? See you on Monday. Okay, bye everybody, have a good one. Bye-bye.