Good morning, Doug. Good morning. Good morning. How's it going? Getting started. Yeah. Hey, Remy. Hey. Hey, Eric. Morning. And Matt. Matt, are you there? All right. But Timur. Hey, Doug, I'm here. Hey, Vlad, it's been a while. Hey, bud. Yeah, sorry, I retired slash took a sabbatical, and somebody did a tiny profile on me a couple days ago and they're like, "and he's involved in the serverless working group," and I was like, oh no, I forgot about all the fun we're having over here. I did keep reading the emails and the updates, but I needed a break. Sorry. Understood. Yo, Tommy. Matthew wrote in the chat that he is here. Oh, okay. Cool. Thank you. So how are you all doing? How is life? Doing good. I'm good. Just busy. It's getting cold. So where are you, David? Are you in California? Yeah, San Francisco area. Okay. Granted, it's warmer here than other places, but it seemed like it went from 80 degrees to about, you know, 40 degrees in a week, so I'm still adjusting. Yeah, it's weird. It was getting colder here in North Carolina for a while, and then the last week or so it's been like in the 70s most days; it's been really, really nice. I'm jealous. Yeah. Although today, depending on whether you like storms or not, we're getting a really, really good storm out here. Tons of rain and stuff. Or is that from someplace else? I don't know, to be honest. All I know is it's very wet. So it probably is from the storm, but who knows? All right, Clemens. Has it been a week again? I know, it goes fast, doesn't it? Wow, it does. We leave two weeks between each phone call just so we can have time to get other stuff done and then come back and do our real work here. Hey, Ginger. Christian. Hello. And Klaus. Hey, Doug. I feel like I missed somebody. Oh, Anish. Yeah, I always think it's funny watching people scramble, like, a couple of hours before this phone call to start doing updates to their PRs and stuff like that. It's just funny. I know the feeling.
I don't think I know what you're getting at. Not just you — Clemens was doing it too — but I'm definitely guilty of that myself. It's just, I have these grand hopes that, yeah, I'll do it, you know, later in the afternoon one day, or God forbid even on the weekend, and then other things come up, like not working. Hey, Slinky. Hey. Everybody ready for KubeCon next week? Is everybody going, or at least attending, I should say? Virtually, yes. However much I need to. Yeah, I know, it's funny. I don't actually like doing the recordings in advance, but I have to admit it really does make life less stressful to know that part of it is sort of out of the way and you just have to sort of listen to yourself ramble for 20 minutes or so. For once we've been smart. Yeah. It feels kind of anticlimactic in the U.S., considering corona and the election. That is true, yes. Okay, let's see. Fabian, are you there? Oh, Fabian. Yeah, hello. Somebody else is flying by. Kristoff, are you there? Yes, hello. And Hamid, are you there? I'm here. Excellent. And Simon, are you there? Yes. And Lou. All right, one more minute until I get started. Lance, Mr. Lance, are you there? Yes, confirmed. Confirmed — so official. We actually have a relatively light agenda today, so if you guys have anything you want to talk about for the SDK call — I know there's at least one item that was added recently — go ahead and add some more if you want. Maybe we'll talk about the discovery stuff too if we have time. All right, three after; let's see, did I get everybody? Yep, okay, everybody circle back around later. All right, community time. Anything from the community you want to bring up that's not on the agenda? All right, moving forward. So, KubeCon next week. I believe, last time I checked, none of the serverless stuff overlapped with our call next week. However, if people want to attend KubeCon and there are, like, sessions during this time, we can obviously cancel next week.
So, a question for the folks, or for everybody on the call: should we cancel next week or not? Should I interpret silence as "keep it"? I would say keep it, but it's just me. Okay. Well, and if you cancel next week, then it'll be two weeks, because then it's Thanksgiving in the U.S. Oh, is it Thanksgiving right after that? You're right, it is. Okay — not like I participate in this call a whole lot, but I'm just reminding you. Oh no, this is good. Although that is an invitation for me to pick on you, so there you go. I'm just a figurehead. I'm the only girl, so that's why — you keep us diverse. Thank you very much. Okay, so, takeaway: why don't we keep it on for next week, and if for some reason we only get five people showing up because everybody's busy with KubeCon, then we'll not do anything official in terms of voting or anything; we'll just talk about other stuff. So we'll base it upon how many people we get. Okay. Okay, we'll read about that later. All right, cool. Okay, so, office hours. Thank you to Clemens, Scott, and Klaus for agreeing to do office hours. I know that not everybody agreed to do both times, but I didn't notice anything on the form that I filled out to say who's going to do which time. So you may get an invite for both sessions, but that's fine — if you only show up for one, that's okay. We do have some people who signed up for both. So just show up for whichever one you agreed to. Anything else related to KubeCon that people think we need to talk about? I don't think there is. I think we're all set up. But anybody think of anything? All right. In that case, for the Discovery interop — we didn't have the SDK call, we skipped that one, but for the Discovery interop, I'm trying to remember what we talked about last week. I think most people are still just trying to find time to actually do the coding. I know Remy has his endpoint up, so maybe we can pick on him to do a demo later, when I get to that section of the agenda.
Is there anything, or any topics, you want to bring up with the broader group? Okay. In that case, just a reminder again: we'll have the SDK call right after this one. Timur, anything related to Workflow you want to bring up? Yeah, hi, everybody. Yesterday we released the 0.5 version. It was a big release — about a year's worth of work — so that was a big thing. We wanted to release it before KubeCon, and we also released the Java and the Go SDKs and the VS Code plugins. So we did a big-bang thing. It's very exciting. Congratulations. I wanted to ask you guys: I was looking at our website, and a lot more than 50% of traffic comes from cloudevents.io. Thank you for putting those links there. And I was wondering if it's possible — maybe we could put just some text, anything, anywhere, where you could say, hey, Serverless Workflow released a new version. That would bring a lot more views. It doesn't have to be done; I'm just kind of trying here. I don't personally have a big problem with it. I mean, it's not directly related to CloudEvents, but because we don't actually put things up very often on our webpage in terms of announcements, I think we can phrase it as: hey, the Serverless Workflow spec was released, and, hey, by the way, they use CloudEvents, so here's a perfect example of it. So there is definitely a tie-in there. So, yeah, anybody have any objection to heading down that path? Okay. If you want to work with me offline, or if you just want to go for it and open up a PR against the web repo, we can work on it later. Thank you very much. Okay, any questions for Timur? All right, cool. In that case, any PRs or issues people want to add to the list before we jump into them? Okay, in that case, Clemens, I believe you're up first. I didn't notice a push from you — or was there a push from you? I had promised to make changes, but I had no time to actually make the changes, so it's still with the promises.
Okay, are there any topics that you'd like to bring up for discussion? We can scroll through the comments if you want to. Well, it's up to you. Is there anything worth discussing that we did not discuss last time? No — I mean, there's one meaty comment by, I forgot who, which is questioning the entire existence of the URI. Oh, where is it? I know I saw that go flying by, but here — we got this one, right? Yeah, exactly. Who does that translate into? Oh, sorry. Yeah, I created this comment, and the idea behind it was — I don't know — trying to simplify things a bit. I just noticed there was a new API operation added, and there's some logic described for the consumers to fetch the schema — some logic to fetch the schema with the new operation, or using the dataschema attribute. And I just wanted to propose some ideas, some options, to try to simplify that and just use one API operation. Let me try to explain. So that's the one thing I probably want to explain on the call, if possible. The rationale behind this — because this is written in formal language, some extra explanation, I hope, helps. The scenario I'm trying to chase here is one that is effectively peer-to-peer replication along a flow path of messages, where you have multiple — and think of the following scenario to make this clearer. Say you have several Kafka clusters. I'll just use Kafka as an example because we have a binding for it, but you can think of any message broker. And really I'm not thinking about the brokers; I'm really talking about the topics.
And you have multiple of these in various regions in the world, and there you do local processing and local events, and now you want to consolidate all those events into a single location because you want to analyze the global view. Which means — say you have three locations, and you have local Kafka clusters in there, and you're pushing the data in and doing local analytics, but then you also have one global Kafka cluster that you replicate the messages — all the messages, all the events — into, for global analytics. What you will do then, along that flow path, as you're setting up that replication — the replication for the data — you will also set up replication for the schemas. So now you have three different areas, effectively, of your application, which might also differ because of local differences, etc., which are effectively three different domains, if you will, of authority, where you might also have different publishers. And if you go and consolidate those into a single location, into a single schema registry, then you obviously need to have a way to disambiguate those schemas. But the events that you publish in the original topics — they will obviously have to have a unique identifier for that schema, which is ideally unique in the world.
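To make the consolidation problem concrete, here is a minimal, hypothetical Python sketch of the scenario being described (the registry URIs, the `consolidate` helper, and the schema contents are all invented for illustration; this is not part of any CloudEvents spec or SDK). The point it demonstrates is that merging regional registries into one global view is only collision-free if each schema version carries a globally-unique identifier, e.g. a URI prefixed with the region's own authority:

```python
# Hypothetical sketch: three regional schema registries merged into one
# global lookup table, keyed by globally-unique schema-version URIs.

def consolidate(regional_registries):
    """Merge regional registries into a single global registry.

    Each registry maps a schema-version URI to a schema document. Because
    every region prefixes its URIs with its own authority, replication can
    never silently overwrite someone else's schema.
    """
    global_registry = {}
    for registry in regional_registries.values():
        for schema_uri, schema_doc in registry.items():
            existing = global_registry.get(schema_uri)
            if existing is not None and existing != schema_doc:
                raise ValueError(f"URI collision for {schema_uri}")
            global_registry[schema_uri] = schema_doc
    return global_registry

# Two regions happen to define a schema with the same local name, but the
# authority prefix keeps the identifiers globally unique (URIs invented).
regions = {
    "eu": {"https://registry.eu.example.com/s/1":
           {"name": "Order", "fields": ["id", "total"]}},
    "us": {"https://registry.us.example.com/s/1":
           {"name": "Order", "fields": ["id", "total", "state"]}},
}
global_view = consolidate(regions)

# A consumer at the end of the flow path resolves the event's dataschema
# URI directly, without knowing which schema group it was replicated into.
event = {"specversion": "1.0",
         "dataschema": "https://registry.eu.example.com/s/1"}
schema = global_view[event["dataschema"]]
```

The single-dict lookup at the end is the "shortcut" idea discussed below: the consumer needs only the URI from the event, not the group/schema/version hierarchy.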
So that's what this schema URI is meant to be: it's a URI that identifies that schema document and the schema version — so the schema document equals the schema version — and the goal of that schema version URI is that you have a unique identifier that you can go and put into the dataschema metadata element of your CloudEvent. And then that lookup function is really meant so that you can walk up to whatever your local schema registry is — the one that is configured for your consumer — and say: maybe at this end of the chain they have this event, and that event has this URI here as an identifier for the schema, and then you can go and grab, into your local cache, effectively, of schemas, and obtain that schema directly, without having to know what schema group it belongs in — because it might have been replicated into a particular schema group for reasons of access control — and without necessarily knowing what the history of that schema is, because none of those things are interesting at that point. What's really interesting is that you get hold of that document so that you can go and deserialize your data. So what I'm trying to do is create, effectively, a shortcut, if you will. There is this well-organized way of schema group and schema and schema version for how you manage those schemas and how you can go and organize them, but then we kind of need to have a shortcut, if you will, into that structure, to grab quickly that one schema URI, and then also we need to have a way to make those URIs effectively unique so that we can go and replicate them across these flow paths. I hope that makes it a little bit clearer what I'm trying to do here. So this is one of those things where I'm using a URI as a global identifier, where arguably this HTTP prefix is confusing, but I need to use a URI scheme that is well-defined, and unfortunately we don't have one that we can use that is not locked
down to a particular wire protocol. So that's why I'm using those. So yes, exactly — you need a resource identifier, as Scott just noted — and that's exactly what I'm using that for. It's literally the ID of that schema that is global, and that also has a scoping function, so that within your local schema registry, if you don't ever participate in any of these complex replication scenarios, your world should be simple; but as soon as you participate in one of those scenarios, then you should be able to just pick a name, which is your authority, and then participate in all those replication schemes. That's what the goal is. Does that help in any way, what I just said? Yeah, yeah. Okay, I understood. Actually, it makes more sense to me now. I'm thinking, after what you said, that what I proposed is actually like splitting that URI into the concepts it defines and using the existing API we have available — maybe modify the get-schema-version API operation a bit, and, as I said, split the URI of the dataschema into the concepts and use the existing API. But yeah, probably, as you said, it's a shortcut — the main proposal from you is a shortcut. And maybe one of the most important things, in my opinion, that you say there is: you may have a replication scenario where you don't replicate the full schema group, but you still have that concrete schema replicated. So yeah, because of that, you use the shortcut — the URI — to look it up, to fetch it. Yeah, that's one reason. And then there's another reason, and this is why I'm also kind of enumerating several options. One of the things you obviously do with schemas — and this is kind of the protobuf and the Avro use case — is that you want to save space. So, I mean, there are reasons for it: I want to have structured data, and I want to validate all the structured data — that's one motivation; the other motivation is simply that you like the fact that protobuf runs
very short. Then what we should try to avoid is having dataschema URIs which are three miles long. And so while those schema URIs that are three miles long are great, because they're wonderfully legible, they're probably not the ideal thing to include with every single event. So I'm also trying to create an avenue here where you can have a terse URI that is just a unique identifier. That's why I'm mandating, for this schema version identifier, effectively something that is unique and can be really terse. Like, if you're running a registry, I'm imagining, because of the way we've created this, that you can just have a counter as your identifier seed, right? And then you can go and prefix that with a URI prefix, and then you can end up with, you know, a URI that's probably 10 characters long and still have this structure that I have here. So I'm also trying to enable, effectively, this URL-shortener scenario here as well. Okay, I see that. Okay, thanks for the explanation. Okay, anything else on the PR, Clemens, you want to bring up, worth mentioning? No — Jim wants us to move on, I noticed. Okay, in that case, why don't we do this — because Klaus did want to talk about this issue right here. So if you guys don't mind, I'll switch the order slightly, because I think Jim wants to talk about this one as well. So, Klaus, why don't you — yes. So, it was also brought up by some of my colleagues and me: there was this change about the null values in the JSON format, and strictly speaking it's not totally compatible if you had an SDK that was reacting with an error on null values before, as it wasn't clearly specified, and now it would be against the specification to raise an error if there was a null value in the JSON format. But that's just one example. It's also that we now have more sub-specifications in this CloudEvents repository, which will have different
versions than the main CloudEvents spec, and I just wonder how we move on with this single repository, and also the different branches we have in there. I don't have a good solution for this yet; it's just, I think it has become more complex and diverse over time. It was a good start to have just a single repository and just a single master branch — by the way, we didn't rename it yet, right? Yeah. So that's why I'm asking here for ideas and raised this issue. Okay. So I feel like there are two different issues here. One is whether 713 was actually a breaking change or not — and let's defer that one for a second. Okay. I think the more interesting question at this point is: how are we going to handle, as you said, different specs being at different versions? And you kind of implied maybe that means we should have different repos for each, and I know when we talked about that in the past, people said we're not quite ready yet for a separate repo for each spec, and that's why we kept everything in one repo. Would this problem be solved if we just make it perfectly clear that each spec can have its own version number, and just because something is related to CloudEvents 1.0 does not mean that that spec itself automatically gets 1.0? So, for example, for the spec that Jim was working on — the protobuf one — I personally think it was a mistake to label it 1.0, because I don't think it's had time to gel and prove itself worthy of 1.0 yet. So personally I would have preferred that it be, like, a 0.5 version — but a 0.5 version of the protobuf spec for CloudEvents 1.0. Does that make any sense? Yes. So that's the second issue. It's true, it's more or less those two. Yeah. So, Jim, what do you think? I'd like to get your take on something like that, since your spec was the latest one to run through this potential issue. It's interesting, yeah, because I think when I was doing that spec, in my mind I was thinking, oh, this is just another representation
of 1.0. I didn't perceive that we would sort of have what were potentially breaking changes in a major version. So I guess the question really is: those clarifications or other changes that were made around nulls — did we consider those to be breaking changes? Because I think if we did, then there's an argument that we went wrong somewhere. If the concern is more that by merging the protobuf changes we sort of locked it in — is that the concern? Now it can't really evolve unless the spec changes. No, I think — at least from my point of view — the concern I had with the protobuf spec was that it's a brand-new spec, and, granted, I'm sure you guys did wonderful work, but I'm not sure we necessarily have proof that it's been tested thoroughly. Right, okay. And so that made me uncomfortable, that we labeled that particular spec a 1.0 — because what if tomorrow we find a major change we have to make? Right? Does that mean we have to introduce a 2.0, because it's a breaking change for that one spec? Right. So I would prefer for that spec to be, like, a 0.5 for at least a couple of months, for people to play with it, and then promote it to 1.0 — and basically let each spec have its own version. Yeah, no, I get it. Is there a danger we run into? I'd really like to understand how the versioning and everything would work, because you're saying the formats would all be independently versioned, and the transport bindings would all be independently versioned as well. That is, you know, the downside to this proposal, yes. Yeah, I understand the problem. I can see a nightmare ahead, that's all. Yeah. So what I had first in mind was Discovery and Subscription, because they are really, I think, still learning a lot, and it's definitely not ready to make a 1.0 for those. Yeah, I think the other specs, like Discovery and Subscription — I think those might be a little bit easier to argue that they can have separate version numbers,
because they're less linked. But to have protobuf be at 2.0 while CloudEvents is at 1.0 — that might look really weird. But, I mean, you've got the same issue with SDKs. Yeah — your SDKs are going to evolve in some way, and those evolve independently of the spec. Yeah, they support a particular version of the spec. I guess now the problem becomes: if you version the transports and the formats independently, you've got another level of complexity in the SDK versioning, because now those authors have to say, well, I know version 5 of my SDK supports this version of the CloudEvents binding specs and these versions of the CloudEvents event formats. And you've got another level of complexity there. Not saying it's insurmountable, but I think it may become difficult for the population outside this group to understand what's going on. Right. It's also confusing with the branches. So we have a 1.0 branch, and if you now want to do some research — what happened, for example, between 0.1 and 0.2 for the Discovery spec — what branch do you have to look at to get that delta, for example? Anybody have any wild proposal? Personally, I'm inclined to try to find a way to keep the version numbers in sync, at least for the CloudEvents-related specifications, if we can. So I think adding a new format, like I did — you could conceive that to be a minor version change. Yeah, from a CloudEvents spec perspective it's just a small add-on. And I think your point is: when do you say, okay, you know, this is now formally part of the spec? I don't know if we went through that with Avro. Yeah. But, as you do — and Clemens has always threatened to do CBOR, or try to persuade somebody else to do it, I should say, or even XML — maybe we need to decide how we want to proceed in the future, and then that will sort of determine how we reverse-engineer all that stuff, the versioning scheme. Well, let me ask you this: at least for the protobuf spec, would we have avoided this issue if we didn't label it 1.0
immediately, and we said: okay, we think it's done, but because it needs time to be tested and gel, we're going to call it 0.9 and wait six months or whatever — you know, pick some period of time — and then, if there are no issues found with it, we can raise it to 1.0? And that's a fair point. I mean, again, did we do that with the other formats? Probably not. You know, I couldn't put my hand on my heart — well, me personally — and say we have enough experience to say all the protobuf stuff works. And I'm not sure what the mechanism is to garner that sort of feedback, you know, to elevate this stuff from a suggestion to a specification. Right. Okay, so what do people want to do? I feel like now there are three different issues in front of us. One is what specifically to do about the protobuf spec — and the reason I say that is because I'm wondering whether we made a mistake by calling it 1.0, and we should reverse that mistake and move it back to 0.9 or something, just because it needed time to be tested and we were premature making it 1.0. And I'm looking at you, Jim, to help make that decision, because, based on what you just said, I'm not sure you feel confident that it actually should be 1.0 right now. Okay. Would you feel okay if we downgraded it, or do you already feel like, no, it really is 1.0-worthy? Well, I mean, maybe this group — you know, if there are people in the wider CloudEvents area that have picked up that spec and can speak to whether it's working for them... I don't know if any of the SDK teams have tried to support it in their formats. So, I mean, I think it would be good to get a bit of feedback. The interesting thing is, you know, your protobuf people would generally say, well, we always try to make stuff backwards compatible when we change — so what does that mean? Yeah. So I would have to think about that a bit more. I'm not sure what the implication is. Maybe that's the bottom line.
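The SDK-versioning complexity raised above can be made concrete with a small, entirely hypothetical sketch: if each format and protocol binding versioned independently, every SDK release would effectively have to publish and maintain a support matrix like the one below (all SDK names, components, and version numbers here are invented for illustration, not taken from any real CloudEvents SDK):

```python
# Hypothetical support matrix an SDK release might have to document if
# the core spec, event formats, and protocol bindings all versioned
# independently. Every entry here is invented for illustration.
SDK_SUPPORT_MATRIX = {
    "sdk-java 5.x": {
        "core-spec": ["1.0"],
        "json-format": ["1.0"],
        "protobuf-format": ["0.9-rc"],   # e.g. still in a release-candidate state
        "http-binding": ["1.0"],
        "kafka-binding": ["1.0"],
    },
}

def supports(sdk_release, component, version):
    """Check whether a given SDK release claims support for a component version."""
    return version in SDK_SUPPORT_MATRIX.get(sdk_release, {}).get(component, [])
```

Keeping all the sub-spec version numbers in lock-step with the core spec, as proposed on the call, collapses this matrix to a single number per SDK release.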
Yeah. This is, I think, in a slightly better position than, you know, the unfortunate one we got into with JSON, where we needed to go and essentially change the schema definition. Yeah. So to me the implication here is actually very minor, in the sense that I don't think it changes what people can do. I think what it does is it gives us the freedom to acknowledge that we made a mistake, and it gives us the freedom to change it — because once we give it the official 1.0 label, we can't change anything in a backwards-incompatible way. Right, okay, right. So let me pick on Slinky for a second. Slinky, I have this vague recollection that you may have actually tested the protobuf spec — am I remembering that correctly, or did you do that? Nope. No, you didn't? Okay, never mind then, I apologize. And I apologize, Doug, I do need to drop now. Okay, thank you. Thank you, Jim. Okay, thanks, guys. Okay, Lance, your hand's up. It seems like we're dealing with — excuse me — a couple of different things. Like, one — and this has already been said, I'm just going to restate it — one, we have to think about how we synchronize version numbers across different specs, or aspects of the spec. And two is sort of the version numbers having some implicit meaning, like 1.0 meaning that it's ready. And I wonder if there's another thing that we can add. So there's the main spec, which I think should iterate over the major version numbers, and the sub-specs, like the protocol specifications, could follow that version number — the main spec version number — but have some annotation, like whether it's experimental, or deprecated, or, you know, solid — I don't know, "solid" is not the right word — but, you know, some sort of annotation to the number that would help people understand that, okay, maybe protobuf hasn't been tested enough, but it still marches in sync with the main version number. So, what's interesting is, in the past we used to do things like have version-number-dash-WIP to imply
that it's not quite ready yet — or, you know, release-candidate kind of things. Maybe that's what we need to do with protobuf: to, as you said, make it really clear this is for 1.0 but it's not quite ready yet. Whether we call it "work in progress" — which may not be official enough — maybe it's "release candidate" or something like that, and let that gel for a while. Is that something in line with what you're thinking? Yeah, exactly. Okay. Yeah, I like that, because it's almost the same as calling it 0.9, but it gives it a little more formality, and, as you said, it links it with a specific version of the main spec — because if we did have, for example, a 2.0 and a 1.0 of CloudEvents, you need to know which one it's for. So I like that idea. Right. I mean, I really do think that keeping the version numbers for all of the different pieces in sync is probably pretty good. Okay. From an SDK point of view, it would be a nightmare to have to try to keep that all straight if the version numbers were all disparate. Just taking some notes here. Okay, what do other people think about that idea for dealing with one of the three problems — meaning the problem of a spec that is linked to a particular version of CloudEvents, but which needs to go through a testing period before we can actually fully claim that it's, like, 1.0-ready — and we're talking about giving it some sort of postfix or suffix, like "work in progress" or "release candidate," something along those lines; we can figure out the exact word or acronym later. What do people think? I guess that's similar to what Simon was kind of saying in the chat. Hold on a minute, I'm trying to read Simon's stuff. Simon, you want to vocalize what you're saying? Yeah — some connection problems, he told us. Okay, yeah, so that's why he's typing into the chat. Okay. Okay, so let's — so that's sort of the second issue that we want to talk about. If we're okay with the general idea
of some sort of postfix for the name, let's now talk about one of the other issues that you, Klaus, brought up, which is that you think we've introduced a breaking change. Now, my understanding for 713 was that we were going to try and — I want to say "lie," but — squint a little and claim that we just made a mistake, which is why we were okay not bumping that to be a major version — because technically, if it is a breaking change, then we should have called it 2.0 — but I thought we were going to, like I said, squint a little. Do other people remember it differently? In particular, Klaus, do you remember it differently? I wasn't sure what we discussed before about this; that's why I opened up that issue. Okay. Does anybody else remember it differently? Let me go back. It's just — technically we have created that 1.0 branch, but we never have cherry-picked or merged anything we did to master into that branch. That is true. I think we may have done some very minor things in the past, but you're right, in general we have not. So the question, to me, is: should we be looking at doing a 1.1? Because I don't personally think we should cherry-pick anything onto the 1.0 branch, ever — unless it's, you know, a blazingly obvious typo that needs to be fixed because everybody's going to get confused. But other sorts of clarifications, I would think, should go into a 1.1, or a 1.01, something like that, and that should warrant a new branch — because, to me, once you create a release, that thing should be basically set in stone. Agreed. So, Klaus, on 713 — yep — let's assume — and we can revisit this decision — but let's assume for a minute that it was a non-breaking change. Do you think that we've gone long enough? Because I think it's been about a year since we went to 1.0 — pretty much a year, no? Yes. Yeah. Do you think we should do a 1.0-something, maybe? What I'm worried about is wire backwards compatibility with the existing code. So the
question for me is: if we change the version numbers, do we actually have to change the spec version? If that is so — if we have mostly inconsequential changes, from a wire-compatibility perspective, for the core spec — then I would prefer that we find a different way to version those things, and call them, kind of, errata or whatever. I mean, HTTP/1.1 has gone through, you know, even completely new RFCs while keeping its on-wire version number stable, and that's something that I would certainly prefer. Like, the specversion change here that we're showing would break a lot of code, and if there's actually not something that really requires that, then we should stay away from it. So then, for me, there's the question of what "version" means with regards to the on-wire version versus the spec versions — so, literally, the document versions, right? And that's actually why I changed what I was saying before: originally I started talking about 1.1, and then I changed it to 1.01, because I had this vague recollection that we were only going to have major-minor version numbers in this string. And so what we could do is say that, going forward — unless we introduce a breaking change, which would bump us up to 2.0 — as long as they're non-breaking changes, we're only going to change the patch number. So from now on it's always going to be 1.0-point-something, yes, and we're only going to include the first two digits in the version string. That's what I can live with, yeah. Yeah, it's a good point. I mean, just for those edge cases, to make that big step now is probably a bit too much. Yeah. So, okay, then let's go back to 713 — was this a breaking change? To have one specific example: we have some implementation of this REST API accepting CloudEvents, and so far it actually reacted with an error
message when you had a null value in the JSON format, and with this change that wouldn't be spec-compliant anymore, right?

Okay. So the question then becomes: are we just fixing a mistake, and are we going to force people to support the fix? Any comments?

Can you repeat the last question?

If we treat PR 713 as just fixing a mistake — even though we understand it could technically break existing implementations — are we willing to tell people to just suck it up and say: look, we made a mistake, you need to support null?

I think that's how we should see it — a mistake. I mean, it's a huge amount of work to make this distinction inside the SDKs. Honestly, from the SDK perspective, we can just write in the next release's changelog that we fixed this issue the way the spec states.

And you could support both, right? For a while you could support both: in JSON Schema you have the oneOf construct you can define, to say, for a while, support both — kind of like in Java, where you have a @Deprecated annotation.

Yeah, but what I'm saying is that supporting both is not trivial. It's not hard — it can be done — but it's not trivial.

Oh, I'm just saying from the JSON Schema perspective.

Oh, okay. You can define both and clearly state that one of them will be deprecated at some point.

I'm not sure that helps us with respect to interoperability, though, right? Because if you say you can do either one, you run the risk of half the world not supporting it, and therefore they're not interoperable. Whereas if we make a concrete statement that says: look, we've made a mistake, you're going to have to change your code — at least then people understand that by doing so they will be guaranteed interoperability, right?

I think it's fine to make that kind of statement, honestly — saying we were wrong and we fixed it. I
don't see anything wrong with that, honestly. On the other hand, I think there is a lot of overhead for everybody in bumping a new version only for that.

Klaus, what's your opinion, since you brought this up?

Yeah, so I already agreed that changing the spec version here would really not be adequate, as it's causing a lot of work. So maybe, if we regard it as a bug, we do a bug-fix release — a 1.0.1 — and then we wouldn't change the spec version, as you suggested.

Okay. What if we did this in addition to that: what if we also send out an email to the mailing list saying what we're going to do here, and see what kind of feedback we get? If we don't hear anything, then that's great, but maybe we're going to really piss off our community if we do this, and I'd like to hear about it in advance before we do it.

Okay. So hold on a minute, let's see. I think what we have here is: we're going to look at possibly doing some sort of post-fix on the version string — for example "release candidate one"; I'll talk to Jim about doing this for the protobuf spec and use that as a guinea pig. We'll look at adding some text somewhere that explains that the version string in the spec will only be the major and minor version, not the patch version number, and that we're never going to increment the minor version number, so it's always going to be zero. And I'll send an email to the mailing list saying that PR 713 is a bug, and get feedback. Anything else? What's going on in the chat? Simon: "Do you really want to go out with a 2.0 soon and really annoy people?" That's not a big concern. What do other people think about what Simon said there?

I can add something about the fact that we have a 1.0 branch — so the 1.0 spec points to that branch, but we never really cherry-picked bug fixes into it. So when, for example, in the SDKs we say we implement 1.0, in fact we are implementing master. That's something that
came to my mind just now. For example, all the fixes we did to the Kafka binding a bunch of months ago — we call that 1.0, but in fact we are implementing master. So doesn't it make sense?

That's an interesting point, yeah. I hadn't really thought about that. I apologize, I got a little lost in there — can you say that again?

When we implement a feature in the SDKs — at least that's what I do in all the SDKs I work on — I look at master; I don't look at the 1.0 spec. So when we fix something in master, we are in effect implementing master, not the 1.0 spec. For example, you probably recall that I found a bug — something that was not really clear — in the Kafka spec a bunch of months ago. When we fixed it in the spec, we fixed it in the CloudEvents SDKs too, for the 1.0 version. But in reality we fixed it against the master spec, not against the 1.0 spec, because the 1.0 spec was never changing, right?

So I'm trying to figure out the implication of what you're saying. You're basically saying that the master branch is in essence the de facto 1.0, even though it's not called 1.0.

Yes, and de facto PR 713 can be treated the same way: it's a bug fix toward 1.0, and we just fix it in the SDKs, and that's it.

Okay, but then the next step in that thought process would be: okay, that's great, the master branch is the de facto 1.0 — we should probably make it official at some point by creating a 0.0 — I'm sorry, a 1.0.1 — right?

Well, at some point I would say we should ship a 1.0.1, because these are bug fixes and we don't really want to change the 1.0 spec shape. But I do see the points around breaking APIs.

Okay, so hold on a sec here. What I basically think we're saying is: think about releasing — whoops — soonish. I don't know, it's something for us to think about, right? Oh, I'm sorry — there's a hand up. Anish?
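The two spec points debated above — a version string that carries only major.minor, and PR 713 making an explicit null data member acceptable — can be sketched in a few lines. This is a hedged illustration of the discussion, not code from any CloudEvents SDK; the helper names `isSupportedVersion` and `extractData` are made up for the example.

```javascript
// Sketch of the two spec points under discussion (hypothetical helpers,
// not actual CloudEvents SDK code).

// 1. The on-wire `specversion` carries only major.minor, so a 1.0.1
//    patch release still serializes as "1.0" — receivers never see a
//    patch digit and only ever branch on major.minor.
function isSupportedVersion(event) {
  return event.specversion === "1.0";
}

// 2. Per the PR 713 discussion, an explicit `"data": null` is treated
//    the same as an absent data member, rather than being rejected.
function extractData(event) {
  return event.data === null || event.data === undefined
    ? undefined // "no data" in both cases
    : event.data;
}

const ev = {
  specversion: "1.0", // unchanged by patch releases such as 1.0.1
  id: "abc-123",
  source: "/example/source",
  type: "com.example.sampletype",
  data: null, // previously an error for some receivers; now allowed
};

console.log(isSupportedVersion(ev)); // true
console.log(extractData(ev)); // undefined
```

Under this reading, shipping a 1.0.1 changes the spec document but nothing on the wire, which is exactly why the group can treat 713 as a bug fix rather than a 2.0.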
I'm actually questioning whether changes which go into the master branch should even be called 1.0, because officially we are past 1.0. Should we even call them, let's say, a 1.1 release candidate? 1.0 has been out for a year now, like everybody says, and the changes which go into master are technically not the stable version — they are still under development. So we should probably give them a different name in the specification, or even in the SDKs — like "experimental feature", I don't know.

Well, that's why everything we're releasing is 0 — I'm sorry, 1.0.1.

Yeah, but 1.0.1 still goes out as an official patch release; it's still not a development release. If we are following semantic versioning, that means 1.0.1 is a patch on top of 1.0 — it's still not a development version. So it doesn't give you the flexibility to introduce breaking changes into your master branch anymore.

Well, if we introduce breaking changes, that has to be a 2.0.

Yeah, but not in development, right? Because we are still speculating about the specification right now, we wouldn't know that they are breaking changes until we start playing around with it, so calling it 2.0 would be too soon.

I guess I'm not following, because my assumption is we have not introduced any breaking changes yet — ignore PR 713 for a second. My assumption is that all the changes we made are either bug fixes or clarifications or something like that, and there aren't any breaking changes, so 2.0 should not really be an option at this point.

So my proposal here is basically: should we introduce something like an experimental version, or a feature version, where we start getting our hands dirty, and then we release these patch versions? Then, once we see that these feature versions are stable, we upstream them to, let's say, I don't know, 1.0, 2.0, 3.0 and so on. So should we have a provision to play around in some
part of the specification like that?

So let me rephrase your proposal. What if we did this: what if we talk about releasing a 1.0.1-RC1, and then, once we have that tested, we can drop the RC1 from the name?

Yeah, I would say so.

Okay. What should you guys do here? Well, I don't think the SDKs need to change, do they? Because until we introduce a breaking change, master and this other branch should basically be the same.

Yeah, but how should we align with PR 713? Should we fix it, so that every CloudEvent we now receive is 713-aware in the SDKs?

I would say so, once we drop the RC1 from it. Simon, your hand's up — if you can get past your audio problems.

Yes. So, I wrote in the chat — again, sorry for that. I agree basically with Anish, and it would be nice, when you introduce a PR, to always classify it: this is just a patch change, this is a minor change, this is a new feature, or this is a breaking change. Depending on that, we might choose to merge it into master — because we expect master to be the next 1.1 release, we know we can merge it into master right away and save the hassle. And if we're unsure about that, I think maybe we need to create other branches, cherry-pick from them, and choose the time when to introduce changes. For example, if we know we have multiple breaking changes, it may make sense to collect them — not have many major releases, but put all of those changes together at a chosen time.

Yeah, I think that last part is where my head is at. I personally would prefer not to create a new branch until we're ready to release something. That means all changes go into the master branch, but with the caveat that we aren't going to consider breaking changes at this time. So, by implication, everything that gets merged into master is destined for a patch release.

Yeah, okay. If we understand it
like that, that would work, yeah.

Yeah, because I think everybody agrees that if we merge a PR that's going to require a major version bump, that's going to be a very, very big decision, and I don't think anybody's advocating breaking everybody out there at this point in time. And I would have people think about that — just either write it down, or have an implicit agreement, that at this point in time we are not considering breaking changes, period.

Thank you, Francesco. Anybody else? Okay, well, think about it — we don't have to decide on the call here. I think we have at least a little bit of a path forward relative to this issue that you opened, Klaus. Klaus, do you want to send out that email, or do you want me to?

I would be happy if you did.

Okay, I'll do it, that's fine, I don't have a problem with that. Hey, I'll take all the actions related to this, because a lot of it is possibly just some additional text someplace, or reaching out to Jim to see if he's willing to do this. Klaus, is there anything else you think we're forgetting to talk about relative to this issue? I mean, I think we're probably going to revisit it, but for today, do you think there's anything else?

No, but I liked the discussion. It's good that we talked about it.

Yes, it is, and thank you for bringing it up. Okay, I don't think we have time to jump on many of the other topics, but I don't think any of them are major, actually. Anish, it is your stuff, right? Or is that somebody else?

Sounds good. I think it's too late for the Primer pull request, so I think we can ignore that for now, because I think it might start a discussion. But we can definitely talk about the issue I raised — the second one.

Oh, that's right, I forgot. Yeah, outlining the difference with — okay. I mean, I don't know if this escalates quickly, so go ahead and introduce it, then.

Yeah, sorry for being abrupt. I just wanted to bring this point into the forum: I see a
property called subscriptionConfig in the discovery API which looks somewhat similar to the protocolSettings in the subscription API. So it would make sense that we start talking about the differences between the two, if there are any. And if there are no differences, then it's probably best to chuck one of them out and stay consistent across the APIs.

Yeah, I don't think that's the case, because what happens is that subscriptionConfig tells you what you are supposed to post in protocolSettings. When you call the subscription API, you need to have that information from the discovery API beforehand. That's why they are similar — basically, one is describing the other — but I don't think we should drop either of them.

Oh, Doug is just leaving because it's his fault.

No, no, no — now I'm just saying bye to the guys on the calls, Simon and Scott. I'll be here, don't worry, keep going. I won't abandon you guys.

I mean, this is something we also talked about last week. So this is really amusing, because you're literally now filing an issue on something that Doug has been arguing about for two weeks.

Okay, I was arguing to keep it and just explain the difference between the two — yes, because I do see the difference between config and protocolSettings.

I mean, if there are differences, then — yeah.

So, just to catch you up: there are the protocolSettings, which are about how the event gets delivered. This extra subscriptionConfig is really about what the subscription mechanism might require, if it has such a thing. So think about a thing you want to subscribe to that is not really passing messages through, but that is observing the state of a machine — where you say: I'm subscribing to the state of the machine, and I'm taking a sample every five seconds. Then the configuration that you would have to pass along with that
subscription needs to live somewhere. That is why I said this bucket might make sense when Doug presented the PR to add it.

So, from my point of view, there are a lot of things present in this subscriptionConfig dictionary which might be propagated into the protocolSettings dictionary down the line, right? So should we segregate these things somehow? Because I'm pretty sure there would be some things in the subscriptionConfig which would be propagated to the protocolSettings. I just don't see some sort of consistency — but it's just me, probably.

I think there is still a lot to define. At least when I did one type of implementation, there were a few inconsistencies that we need to fix here, and maybe this one is part of it, in the global picture.

Okay, so technically we're out of time, but personally I'd like to work on the PR related to this one — whether it's to remove config, or at least to explain the difference between config and protocolSettings. I'd like to work on a PR related to that, and then you can jump in — or, if you want to take the first pass at it, go for it, and then we can work on it together, I don't care. But I do agree with your issue: we need to clarify it or kill it, one of the two.

Yeah. I mean, I think then we should discuss it again next week, so that I have more points, and probably find a place where we can put this into the specification — at least the differences between the two.

Okay, cool. In that case, we're technically over time, so let me just do the quick gender check — I'm sorry, attendee check — and then we'll jump over to the SDK call. So, Manuel, are you still there? Yes. Okay. Ludang, are you there? I think no — I don't see him. Grant, are you there? Yeah. All right, cool. Anybody else that I missed? All right, cool. Thank you all for joining. If you're not interested in the SDK call, you are free to go, and we'll start the SDK call
in just a moment. Thank you, everybody, have a good weekend.

I have to jump, unfortunately. Doug, is the SDK call on the same recording?

It's right here, right now.

Yeah, I'll come back and watch it Saturday.

Cool, thank you. Yep, okay, ready.

And Anish — I hijacked your slot because I thought I had to jump into a meeting, but I can move.

No worries, I can move. But Slinky, please do me a favor and stay until I talk about my agenda item, if you can.

He said he wants you to stay on the call so he can hear your opinion on his topic.

Okay, okay, well then I can keep my topic. Funny — you guys crack me up. Okay, let's get started then. Slinky, you're up first.

Yeah, so two announcements. Last week I made the release of SDK Rust 0.3 — and, by the way, a lot of breaking changes, but we are still in a very, very early phase. Things are starting to get a little bit more stable, and there are some contributors working on it, so I'm not the only one, fortunately. In the next release we will get no_std support — so, CloudEvents on microcontrollers — and then MQTT. For SDK Java, I did a release this week, and I hope to do another one next week. We finally managed to solve the most outstanding issues, the most important questions. In particular, there was this question that took quite some time to solve, which was how we deal with the CloudEvent payload — how we represent the CloudEvent payload inside the SDK — and that's done, we solved it. Maybe it's not the best solution, but you guys check it out and let me know how it looks. I will try to rush for the final 2.0 GA release before Christmas if I manage to, but I definitely need some help with reviews. In particular, I would really, really love to see at least an implementation of protobuf or Avro, because I think they are really important. I heard that Jay — the
name is Jay Wright — was interested in protobuf inside SDK Java, and yeah, if he wants to step in, or somebody else, and help with that — because I really, really need help for that, and I'm already flooded with all the other issues of the SDK, like documentation, that kind of boring stuff. So, do you have any questions?

Congratulations, by the way — it's exciting. Any questions for Slinky? All right, cool. Thank you, Slinky. As I said, it's very exciting — always great to see a lot of forward progress there. All right, Remi, I think you're up.

Yeah, but Anish wants to go before me, so Slinky can answer.

Oh, okay — thank you.

Yeah, so basically I got a chance to play around with the Java SDK, and that got me thinking that the API models of the SDKs are really different — especially if you're coming from a Golang world, where you use these CloudEvent receivers, you start the CloudEvent receivers. Basically, the general developer interface for the SDKs is really, really different when you come from Go and then start playing around with Java. So I wanted to propose: should we think about a consistent API model for the end users, in order to give a consistent experience when they want to invoke these SDK APIs? Should we even think about it? Or — yeah, I'm probably thinking too much.

Slinky, your answer?

I have a very strong opinion about it: I think we should not even think about that. You weren't there eight months ago, but communities of different programming languages are so different — and the languages themselves are so different — that even thinking about a model that can work for everybody is just a complete waste of time. I contribute to three SDKs — SDK Go, SDK Java and SDK Rust — and I can tell you that I never, ever managed to find, even in some basic interfaces, a common pattern that we can use. Take, for example, the whole sender/receiver thing that we have in SDK Go. It works pretty well with Go, because
in Go, for example, you have a single way to manage blocking and non-blocking — the semantics of blocking and non-blocking are built straight into the language itself. This is, for example, not doable at all in Java, because in Java you have ten different ways to manage sync and async, to manage streams, to manage blocking and non-blocking. So the kind of interface that we have in SDK Go cannot work at all in SDK Java. Another example: in SDK Rust, the receiver model that we have in SDK Go doesn't make sense in some use cases, because maybe I'm integrating with a library like Actix Web, where you already have a very well-defined paradigm to handle requests and handle events — you just need to integrate with it, rather than trying to create your own interface that is consistent across programming languages and, of course, SDKs. So what I'm trying to say is that it's a huge pain, and it will get us nowhere.

Okay. I just wanted to bring up this discussion, because it was really, really a world of difference when I switched to the Java SDK.

Yeah, but probably the Java SDK has to be improved, for sure, and if you have any concrete topics on improvements to the API, I'll be really, really happy to discuss those, that's for sure. I mean, the Go SDK is just more developed. That said, again: a consistent API model across all these SDKs just doesn't sound right to me.

Okay, cool. Remi, your hand's up?

Yeah, I just agree with Slinky. I think, even when you look at TypeScript, it's too different from Go to be able to have something that looks alike.

Okay. I just wanted to say — I might be remembering incorrectly, but I think in the past, when this issue has come up, we landed in the same position that Slinky described: the answer was basically no. However, I do think people acknowledged that if there was a very particular feature or aspect of the SDKs that makes sense to be common, we could explore that, right? And bring it
up as separate issues in each SDK, to say: hey, look, what if you did this particular small thing — not because you want consistency per se, but because it makes sense to do in general for that language. If it still makes sense, then we get consistency for those things sort of indirectly, but only because it makes sense for all the languages to do the same thing, not because we're trying to push for consistency. I think we had a conversation like that, but I don't know — maybe I'm not remembering correctly. Does that make sense? So, Anish, if there is something you'd like to see consistent, then push forward in each individual repo, and if they all happen to agree to it, then hey, you get consistency — but it's not for consistency's sake.

Yeah, my major concern was with the sender concept, because the sender concept in Go is just so seamless, but with Java it was just crazy. So I would probably see if I can drill down into that particular concept for Java as well. That's it.

Okay. Slinky, your hand's up?

Yeah, just to close the discussion, I can even tell you that one of the things you might think is simple to do — representing the data field, the payload — we solved in four different ways in the four major SDKs we have. In SDK Rust we have a union type, and in this union type we are tied to the JSON library, because in Rust it's fine to tie to that particular library that everybody uses. In Java we didn't want to do that, because everybody wants to use their own library, so we had to create an interface that abstracts over data, and then everybody implements it. In SDK Go we just have a byte buffer, but the interface allows you to map directly to the data structure. In C# we just return an object — an untyped object. In JavaScript, I don't know how we do it. Just to tell you that even the simplest thing is hard to make consistent.

I get it, I completely get
it, for sure. For the sender in Java — well, just open issues. For which particular module did you find the issue? For which particular integration?

The message sender — basically, the MessageReader and MessageWriter interfaces. I was not able to digest those two interfaces very well, but I think this is something I'll just spawn a parallel conversation with you about.

Sure. Thanks, guys. Sorry, I went on talking.

All right, thank you, Anish. All right, Remi, you're up.

Yeah. So, it's just because I did an increment — part of the implementation of subscription and discovery — and while doing that, I noticed that the way we emit a message in the JavaScript SDK was not super pluggable. So I did a PR that I can present. I don't know if it's the right time — I understand that people who are not on the JavaScript SDK might not be super interested — but for Lance or Grant, we can discuss it.

Are you saying you want to talk about the PR right now?

Yeah.

You want to share your screen, then?

Yeah, I can go ahead.

I'll stop sharing, then — it's you talking, you go ahead.

So the thing is: when I was trying to implement discovery and subscription — basically, with subscription, you don't know who's going to subscribe before they subscribe. Compare that to the example of how we are supposed to emit today: the guy who created the event already knows who he wants to send it to. In fact, even if I use emitterFor, it's going to be the same — it means I already know where I need to send it. Whereas, in my opinion, when I do development, I would prefer not to know at all, as a developer, where I want to send, and just say: okay, I'm going to emit one event. Basically, I see it more like that in my code, and then anyone who wants to do something with these events can just do something with them. So you would say: when there is a new event, I will emit it to
either an endpoint or anything. With that type of paradigm, it allows me to create a subscription service, because the subscription service will know who has subscribed and will basically subscribe to that type of event, to be able to push to its own subscribers. As a developer, I don't know who is going to subscribe to my events — I don't know any of those — so I need a way to emit with an abstraction over the other parties.

To implement that, instead of deprecating the full Emitter class as it was, I just use it as a singleton, to be able to listen on that singleton for those specific events — with the EventEmitter, to abstract away who I'm sending to. This way, when I code a class, I just say: okay, I'm going to do five new events, I'm just generating those events, and I don't care who's listening. Because in the current implementation, if I have to do that — sorry, I should not be that opinionated — it seems to me really hard. If I have five classes that each emit five CloudEvents, I should not have to pass the emitter around, because at the point in time when I'm coding, I don't care who's going to listen to my events. That's why I came up with that idea.

So if you look — that's probably just mostly a formatting issue — but that's the thing. I've demonstrated how I envision it: I just removed the deprecation from the Emitter class and changed it into something like a single point where we emit all the events, and then you can listen. You basically decouple the transport from the emitting of the event itself inside an application. Was that clear, or did I take too long?

Lance, I think your hand went up there for a sec.

Yeah, I mean, I haven't had a chance to look at it yet. I'm wondering — are you using the — oh yeah, there you are — you're extending EventEmitter, you're using the built-in Node stuff. So in the code sample that you showed, you have, you know, the
singleton.on — can you show that again?

Yeah.

So that's using the Node built-in EventEmitter stuff, right? So any piece of code within that same process could call that same function and provide a different emitter, so a CloudEvent could be emitted multiple times for a single event — is that right?

Yeah, 100% correct.

Yeah, I like it.

Yeah. And the only question I raised when I finished my implementation was: I might remove the official EventEmitter and do the same kind of implementation internally, just because those functions are probably going to be asynchronous, and maybe when you emit, you want to make sure that it's completely emitted. Otherwise, in Node, it's going to sit in the event queue, and you cannot be 100% sure that the event has been received on the other side. So, depending on that, we might want to change it to have that ability: either you await the emit, or, if you don't care, you don't await it and it just sends. So I might still change that, but I really think it's useful.

And in the end, what I was able to generate was this demo that I probably showed some time ago. This is the gateway — you can add services — and when I subscribe, on the right panel we see the events arriving, and here is a discovery to detect how many services you have. When I did that, I really needed something like what I just showed you, to be able to plug into the SDK efficiently. So let's say I have Garfield as well — now I have a cat service and a ping service, so I can subscribe to both if I want, and then I'll see the events arriving there. And what I did — but that's slightly more out of scope — is a gateway such that, when you subscribe — so by default it has no services, and if I connect the gateway, it can aggregate several
services. Right now the gateway is aggregating the ping and the Garfield service, so I can just use the gateway to retrieve all the events from several other services. That's probably out of scope, but while I was doing that — that's basically why I came up with the emitter, because I think it's a nice way to completely decouple the transport part from emitting the event. So, if we agree on that, I think I will probably change it to be able to await the emit — to make it a promise, so you can decide if you want to wait for the promise or not. By doing that, I have to implement my own type of EventEmitter, but I'll keep exactly the same method names, if that's fine with you. And the other PR is basically simpler — it was just to add some types. I decoupled it from the rest, but I think we should have all the types coming from the discovery spec, and it doesn't impose anything. So that's the first one, unless anyone has other questions.

I guess I had a comment — with the naming, like with getSingleton. I feel like I don't really see that naming pattern — the singleton pattern — in Node and JavaScript. Could we maybe have it as, like, a static?

Yeah — in fact, it's good that you raise that, because I had to use a singleton; I was not able to use statics, because I wanted to extend the EventEmitter. But if we remove that EventEmitter part and we just go with statics with promises, which I think is more efficient, we can do Emitter.on, remove the `this`, and declare `on` as static, and then we will have something cleaner, I think. So I can update the PR to reflect that, if we all agree that it's better — if I understood you correctly.

Yeah, I think so. And I guess another thing: with the `on`, is the event name the first parameter, or something else? It's more because
it comes from the EventEmitter — normally you define the type of event. When I thought about it, I was thinking the emitter could emit an event when a new subscription arrives and when someone unsubscribes — basically, when you call `on`, it should send an event saying there's a new subscription. I'm not sure it's going to be useful, but that was my thought. That's why I put that parameter first — with EventEmitter it was mandatory — but even if we move away from EventEmitter, I think we should keep the same signature.

One concern I have is having all of the subscription stuff and discovery stuff in the current module. One of the things —

If I may, that's my next point. Basically, my goal is not to have them inside; that's why, for me, this is the only modification we need to do in the SDK. After discussing with Grant and you, like two weeks ago, I do agree that we should probably split it, and we should have what I would call cloudevents-service, which is basically the full discovery implementation and the full subscription implementation, but as a side module that you can either choose to pick or not — it's up to you. But to be able to implement this module efficiently, if we don't have this kind of stuff, it's not going to be possible.

Right. I guess what I was talking about is less the emitter stuff and more — let me just — the other interfaces that you added in 365. I'm just wondering if — well, I guess those are needed, but —

I could put them in the other module. I just thought it makes sense to make them available, because they're just interfaces. I don't see it as overloading the SDK to add all the definitions from the spec.

So what you're thinking is that some secondary module — some other module that is
like the CloudEvents discovery module in JavaScript, has a dependency on this module and just uses these interfaces from this module?

Yeah, correct. That's how I implemented it. And that's why, in fact, I don't really care whether we put them in the SDK or not; for now I put them on the side here, in the definitions, but I think they should be in the SDK. So instead I should import them from cloudevents, because then we know the SDK is about exporting the right interfaces based on the specification. That's why you wanted to keep the implementation out of the SDK, which I agree with; the service can then be opinionated on how to implement. But I don't think the service should have the definition of those extra classes, because then the service would be the definition as well as the implementation.

Yeah, that makes sense. Okay.

I think there'd be a lot of value in extracting especially some of the discovery implementation from the SDK module, because, for example, receivers don't want to keep updating the SDK if the SDK has a new major version for some discovery feature.

Yeah, and potentially for security reasons also: you might have more security issues with discovery, and specifically with subscription, I guess. But I do agree with you. So I'm going to split it and just contribute the discovery part. What I did here is, for now, not clean enough; it was just to make the example work. But I want to extract part of this code into cloudevents-services, unless you have another name.

I guess the practical question, if you split it out, is: should we do this as a monorepo under sdk-javascript? That's kind of what makes sense to me.

I'm sorry — monorepo?

Yeah, monorepo —
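For readers unfamiliar with the setup being discussed: a hypothetical sdk-javascript monorepo could hold, say, `packages/cloudevents` (the core spec and bindings), `packages/cloudevents-discovery`, and `packages/cloudevents-subscriptions`, with a `lerna.json` like the following (the layout and package names are illustrative, nothing here was decided on the call):

```json
{
  "packages": ["packages/*"],
  "version": "independent",
  "npmClient": "npm"
}
```

Lerna's `"version": "independent"` mode lets each package release at its own pace, which speaks to the different-pace concern raised in this discussion.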
getting my terms confused — where we publish two different modules from a single repo, as opposed to building a second repo, having another repo.

Yeah, I think we can do that. It's a little bit more work on the original repo; Lerna is well tooled to do that, but it's a big change.

Yeah, it does change the structure of the thing, but I'm okay with that. I guess the only reason why I feel like it should be that way — well, if there was a new repo, we could have it be discovery-javascript or something like that, so that we don't take up the namespace of discovery or subscription. I think there's a lot of value in having multiple modules in the same repo. For example, I could imagine folks that don't want to send CloudEvents, they just want to receive CloudEvents, can use just the receiving module; folks that only want to send events can use a sender package. Then we can split up dependencies, so dependencies that are used for just one part aren't required for everyone.

Slinky and Anish, you have your hands up. Okay, Slinky first. Sorry — no, Anish, go first. No, no, you go first. Okay.

So I think this was already discussed in the past, this question of supporting discovery and subscription in our SDKs. I have mixed feelings about it, because take, for example, sdk-java or sdk-go: we already have a lot of modules in the repo, and we have all versions aligned. That means that when we ship, for example, sdk-java core, we ship together sdk-java api, sdk-java jackson, Vert.x, RESTful web services, Kafka — all the modules together. When we do releases, all the releases are aligned, and that's the simplest way to release things. So what I'm hearing from this discussion is that we want to keep it in a separate
module, because we want a separate module — and that's correct, that's fine. But at the same time we think that the two things can evolve at different paces, and that's a problem. My feeling is the opposite: we should have a separate repository where we push the discovery stuff and the subscription stuff, and that repository depends on fixed versions of the SDK — if we think there is this difference in pace. If there isn't any difference in pace, and we think they will proceed the same, then, for example, when I go GA with sdk-java — when I do sdk-java v2 — I commit to not breaking APIs. And if we decide to add a new module for discovery, then once that module is released I have to commit to not breaking that module's APIs either, because I'm aligning the versions when I release. That's a problem, in particular for stable modules, and the same goes for sdk-go. I think in TypeScript it's easier, because it's pretty well tooled for monorepos.

You can even have separate versioning — that's what I use on most of my projects. The packages depend on each other, but you can still have separate release paces. So in our case, for TypeScript, I think the monorepo makes sense. I don't know enough about other technologies to have an opinion.

Well, maybe we can do it differently for every language, as we always do.

We can do that, I have no problem. For us, I'm aligning versions now, but it's not a problem for the tooling to keep versions unaligned — it is, however, a problem for sdk-go and for the others.

Yeah. And I really think, as Grant mentioned, that the discovery and the subscription API will evolve almost at a product pace, while the SDK is supposed to evolve more at the spec pace, which is usually slower. Because with discovery, it's the real world:
you really need to implement things, and that means you'll create some security issues, maybe, and you'll have to fix them more quickly. That's the way I see it; maybe I'm wrong.

Yeah — no, having it in a separate module is definitely the way to go. My question is more: separate module and separate repo, or separate module in the same repo? I mean, if that's not a problem for the TypeScript tooling...

Yeah. I mean, for me, I think we first need to answer whether it's even time to incorporate the discovery and the subscription API as part of our SDKs at all, right? Because currently the specification is really at a nascent stage, and making it part of the SDK would probably not be a good idea; otherwise we would end up with consecutive SDK releases — we would probably overwhelm our SDK releases with every update to these specifications, right? And another concern — we just came out of this discussion, and it's my personal opinion, of course — is that I really believe the subscription and discovery API should definitely be part of the core CloudEvents SDK, because when we ship an SDK, we're kind of saying: okay, this is the implementation for all of our specifications. So I don't really like this separate-repository idea.

Sure. Okay, but the separate module is just the way you build applications in TypeScript: you have different modules. This way, if you don't want to implement the service part — the discovery and the subscription — you can just take the SDK and build with that, and then add the other module if you really want those new features. I think it makes sense to separate, because the implementation of the API can be done through Express, or through other frameworks; basically, my demo is using a serverless framework
I developed, so it's not exactly the same type of implementation — you're not using the same stuff. It's the same when you implement an OpenAPI in Java: you can do it with Spring, you can do it old school, you can do it with JAX-RS, and it's not going to end up as the same kind of code. When you implement the API for real, you make decisions as you implement.

Yeah, but today, if we introduce a module for the discovery and the subscription API in, let's say, the JavaScript SDK, do we send a message to the community that, okay, we've now officially started supporting these APIs as part of the SDKs? Then they would start expecting it in all the other ones as well, right? So when we release it in one, ideally we should release it in all. I don't know if that's the standard process.

Different SDKs have different levels of support. In general, there's nothing in the spec that indicates an SDK needs to do anything more than support version 1.0 of the spec. But, as was discussed in the last meeting, there are different capabilities in different SDKs, and unless there's something in the specification that says the SDKs must support the subscription or discovery services, it seems like, at least for the time being, it would not be a problem for them to be out of sync on that. In my opinion, it's fine.

Yeah, I was just hoping that we don't raise expectations in that area — that suddenly JavaScript comes up with implementations and then Go and Java don't have one, right? Or we start work so that we also introduce these specs into the Go as well as the Java SDKs. But that's something we need to decide within the community, right? What do you think — should we start implementing the subscription and discovery API as part of the standard SDKs, for Go and Java at least?

Dude, he ghosted us.

I'm sorry, are you asking
me a question? I'm sorry, I know I wasn't paying attention. Sorry, I usually don't do much for the SDKs; that's why I tuned out a little. I was doing something else. What was the question?

The question on the table was: we now have a PR with one of the implementations of the subscription and discovery API spec as part of the SDK. So there's a PR which can go in, and officially we might start having support for the subscription and discovery API implementation in the JavaScript SDK, whereas on the other hand the Java and Go SDKs have not even thought about it yet. So we're in an inconsistent state at this level.

Keep in mind I am not a maintainer on any SDK — this is just my personal opinion, which means nothing. I personally thought it was a little bit weird to try to combine a CloudEvents SDK with the discovery and subscription stuff, because, to me, while they may use CloudEvents to some extent, I view them as separate projects. But I know in the past I've heard other people say no, it makes perfect sense to merge them. So I personally think it's weird, but I have no problem with people merging them if the SDK thinks it's the right thing to do.

Can I clarify something?

Yeah.

So, if I understand the conversation correctly so far, what we're talking about doing is having implementations of the discovery and subscription APIs in the GitHub repository called sdk-javascript, but each published as a separate npm module — subscriptions published as, say, cloudevents-subscriptions, and discovery published as cloudevents-discovery. And then the main bit that implements the CloudEvents specification, as well as the HTTP protocol binding and things like that, is published as its own npm module as well. So they are distinct; they version independently. Ideally, the top-level one, the cloudevents module, that
really is just the implementation of the spec, would provide the interfaces that the subscription and discovery implementations use — but there's no dependency in the other direction. So the CloudEvents SDK doesn't depend on the subscription API and doesn't depend on the discovery API; the discovery API and the subscription API depend on the interfaces defined in the CloudEvents SDK. Is that a correct summarization of the discussion so far — maybe I embellished a little bit — and if that's the case, does that seem as weird to you, Doug, or not?

Well, if there's no direct dependency between them, is the reason this is happening simply that we don't want a proliferation of GitHub repos?

I would say that's part of it, but there is, potentially, the dependency on the interfaces. That doesn't necessarily have to be there, but —

I mean, the fact that there are separate npm modules, sure, that lessens the weirdness a little — not enough for me to not call it weird. But like I said, does my opinion matter? If you guys think it makes sense to be part of the SDK, go for it. As long as the code is out there, I think that's the biggest thing; where it sits, in which repo — that's secondary, right?

So, just to give some context: my goal is to get something working in my company where people can just discover new events. That's why I'm always surprised, because for now, the SDK without discovery and without subscription means that everything is statically defined, and that's not the way I see my architecture. That's why I'm pushing, and I'm happy to see the discovery and the subscription becoming a little bit more real. I think it's logical, as a group, to try to push the full solution — to explain how it works and how it can all interact together — because the interaction works nicely only if
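The dependency direction summarized above can be sketched in TypeScript. The interface shape and all names here are hypothetical illustrations, not the actual spec or PR definitions:

```typescript
// --- hypothetically published as "cloudevents" (spec interfaces only) ---
// The core SDK exports interface definitions but no discovery logic.
export interface DiscoveredService {
  id: string;
  url: string;
  events: string[]; // event types this service can produce (illustrative)
}

// --- hypothetically published as "cloudevents-discovery" ---
// The discovery module depends on the core's interfaces, never the reverse.
export class InMemoryDiscovery {
  private services = new Map<string, DiscoveredService>();

  register(svc: DiscoveredService): void {
    this.services.set(svc.id, svc);
  }

  // Find every registered service that can produce a given event type.
  find(eventType: string): DiscoveredService[] {
    return [...this.services.values()].filter((s) =>
      s.events.includes(eventType)
    );
  }
}
```

Keeping only the interfaces in the core module means a receiver-only user never pulls in discovery code, while the discovery module can iterate at its own pace against a stable set of types.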
you have both discovery and subscriptions, right?

So have you reached out to the Golang SDK about this idea? Did you get resistance, or have they just not gotten into it yet?

I didn't get any resistance from anyone so far; I was just trying to — and I still think we need to work on the subscription API a little bit more.

I think we discussed this a few weeks back: first we would try to implement it as part of the interop call, and if the specification seems tangible at that point, we start bringing the specification into the Golang SDK as well as the Java SDK. That's what I remember.

Okay, so it sounds like you'll have done that path for the other ones as well; it's just a matter of timing.

Yeah, exactly. My major concern here was: should we synchronize the timing of when these implementations show up in the corresponding SDKs? Should it be all together, or is it fine that it lands in one of the SDKs first and then later in the others?

I think it's fine that they're staggered. I don't think they have to do it all at the same time.

Okay, that's perfect. Yeah, Slinky, your hand's up.

So let me understand: you're saying that you're willing to implement the discovery API and subscription API sooner for the Golang SDK and the Java SDK?

I mean, yeah, at some point in time — if not in the SDK, then in a different repository. We would probably write up the task for sdk-go.

I think we should discuss — maybe another time — where to place it, because, yeah, that's cool, we should have it, but where to place it is kind of a problem.

Yeah, but it has a dependency for us, as I mentioned, on the interop. So first we will figure out, as part of our interop working group, whether the specification currently even makes sense as it is. Either we have to
evolve it a bit, or we can use it as it is. We would decide that as part of the interop, and then we would start discussing how we want to incorporate it into the SDKs. That's the discussion I remember.

For me it needs to evolve, for sure. It's a work in progress, but I just wanted to put down the base implementation so we can iterate on it and make sure that we're all on the same page. At no point did I think that implementation is the final one; I'm sure we're going to iterate on it.

Okay. Anything else for the call? Good discussion.

I just wanted to be sure: Grant and Lance, do you think those PRs make sense, and that at a later date we'll be able to merge them?

Well, so we're splitting the modules first, right?

I think we could merge as-is and then do the Lerna split, because the two PRs are not about discovery — they're just some tweaks in the SDK — and then, yes, we can do the Lerna split. I suppose we'll have to synchronize for that.

Oh, sorry — the PR numbers again? All right, 365 and 366.

Yeah, I think they deserve an actual longer review, but conceptually and all that, it makes sense to me to land these as they are and then move towards a monorepo with the discovery and subscription APIs as separate modules.

Yep, sounds good. But the main concern I have is just that, for an SDK receiver, having any discovery functionality is not really wanted. So as long as things are split soon — or, I guess — I don't think that's what's being proposed: these two PRs don't have a discovery API implementation in them, other than the interfaces, correct?

Okay, yeah — not yet. But the emitter and —

I mean, interfaces don't really — I'd imagine even the interfaces for — sorry, I need to look at the PR more. Are these interfaces for discovery?

It's just the definition of the spec from discovery. So yes, but it's just interfaces —
it's actually just declarations.

Yeah. Are you planning to publish them in the main module?

Yeah, that was the discussion. For me it makes sense for them to be in the main module. If we decide to split them out and put them in another module, that's possible too — that's currently the case in my implementation — so it's not going to be the end of the world. The really important point is the emitter one. The emitter is something that, if we don't do it, I basically cannot implement subscription.

I really like the emitter one, and I think it brings some real Node-specific idioms to the SDK, like actually using the Node event emitter. That's nice.

Yeah — I'll move to the same interface as the event emitter, but asynchronous. So it's going to look like the event emitter, but it's not going to be the event emitter per se. I think that's enough, right?

Do you want to put that PR into work-in-progress status until you make those changes?

Last time I did that, it never got merged. But yeah, I can do that. I did add a comment to say that I'm going to implement it, and it's pretty quick to do, so I hope to do it today.

Yeah, whatever — it doesn't have to be work in progress. I just don't want it to sit there for a long time without changes, you know?

Yeah. I just converted it to a draft and I'll update it. I want to do it today — same thing, I don't want it to sit too long. And give a second thought to the interfaces, Grant, if you want, for the other PR.

Yeah, I'll give it a review, don't worry. I think we should definitely try to split the SDK sooner rather than later, but this is fine.

Yep. Okay, thank you, guys, for listening.

All right, glad you brought it up. Have a good rest of your day, then.
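To close the loop on the emitter discussion for readers: a minimal sketch of what "same interface as the event emitter, but asynchronous" could look like, with static methods and a promise-returning `emit`. Class and method names are illustrative only; this is not the merged design:

```typescript
// Hypothetical sketch: an EventEmitter-shaped API with static methods and
// a promise-returning emit(), so a caller may await delivery or ignore it.
type Handler = (...args: unknown[]) => unknown | Promise<unknown>;

class AsyncEmitter {
  private static handlers = new Map<string, Handler[]>();

  // Same shape as EventEmitter#on, but static, as discussed on the call.
  static on(event: string, handler: Handler): void {
    const list = AsyncEmitter.handlers.get(event) ?? [];
    list.push(handler);
    AsyncEmitter.handlers.set(event, list);
  }

  // Unlike EventEmitter#emit, this resolves once every registered
  // handler (sync or async) has settled.
  static async emit(event: string, ...args: unknown[]): Promise<void> {
    const list = AsyncEmitter.handlers.get(event) ?? [];
    await Promise.all(list.map((h) => Promise.resolve(h(...args))));
  }
}

// A caller can then either await delivery or fire and forget:
// AsyncEmitter.on("cloudevent", async (e) => { /* deliver e */ });
// await AsyncEmitter.emit("cloudevent", someEvent);
```

The design choice here is that the awaitable `emit` lets a subscription implementation confirm delivery before acknowledging, which is the reason given on the call that the emitter is a prerequisite for subscription support.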