Is there anybody I'm missing? Oh, Tapani got it. Okay, let's go ahead and get started, since it's three after the hour. Sorry, I didn't mean to type that all over the place — I lost my train of thought. Okay, here we go, three after, let's get started. One thing I forgot to mention ages ago: if you can't make the call because of a public holiday, let me know, because I treat those differently from just being a slacker and going on vacation. I count official holidays as still present, so you don't lose voting rights because of them. In fact, if you have a lot of vacation, like some of you in Europe do, you can actually end up gaining voting rights because of it. I didn't want you to be penalized for vacation or for holidays, so just let me know — for example, Vlad and Clemens are tagged with that today. Luckily I don't think it matters much, because we don't actually vote very often, and when we do they're usually landslides. Anyway, just letting you know that you should tell me.

All right, let's talk about some AIs. First of all, Thomas had an AI to talk about when we do or do not need the content type — the CloudEvents content type attribute. The spec now has text in it that, in my opinion, covers it, but I want to make sure you're okay with me closing this action item. I'll give you a second to read it. Is there anybody who disagrees that we can close this action item, and that this bit of text in the spec covers it? Okay, cool.

Now, Michael — last name Payne, I believe — hasn't been on the call in a very long time, and he had volunteered to do some investigation into OpenCensus. Unless somebody in the group wants to volunteer to pick up that work, I'm inclined to close this action item. Does anybody want to volunteer, or object to that? Okay, I'll do that then. Jules, I believe, was from Docker.
He was going to write a proposal for a benchmark framework, but he basically hasn't joined any of our calls in quite a long time. I'm going to try to reach out to him, but if I don't hear back from him, I'm just going to close this action item, because I think he's kind of vanished. And then finally — Scott, Rachel had an AI for adding some additional text around the subject PR. If you're so inclined, maybe you could ping her to find out what that was about, because I honestly can't remember. If we don't hear back from you or her, I'd like to just close this action item and assume we don't need it. Is that okay with you? That sounds great. Okay, cool, thank you. I think that's it in terms of action items we can deal with.

Next: is there any objection to canceling the call on July 4th? I know it's not a holiday for everybody, but I think enough US folks will be absent that we may not have quorum. So is it okay to cancel July 4th for everybody? All right, cool, thank you.

All right, community time. Is there anything from the community people would like to bring up? Oh, I'm sorry, Jude, your hand is up. What is the benchmark AI? I'm trying to remember. We were at a face-to-face and Jude showed up, and I believe Docker wanted some sort of framework to test whether serverless implementations adhere to something. The fact that it says "benchmark" leads me to believe it wasn't functional — it was more performance-related — but I honestly cannot remember. He was supposed to come back with a proposal to actually explain what he was looking for, and he never did. Wasn't this a benchmark against the various functions-as-a-service implementations? Maybe — I honestly don't remember. Okay, but I think it was at the face-to-face in San Francisco, at the Google office.
Yeah, that sounds right. But the fact that they basically vanished and haven't mentioned this since then leads me to believe it's not a high priority for them anymore. Oh, also, I don't think it was related to CloudEvents as much as it was to serverless. That may have been true, too. Yeah, maybe it's more for the serverless group next time. Yeah. So anyway, all right, last chance: anybody have a community issue you want to bring up?

No, but I'd like to comment on the OpenCensus one. Oh, okay, sorry. So, OpenCensus in itself is probably not very related to CloudEvents, as it's actual libraries for collecting metrics and traces. We have a distributed-tracing extension, but is there any guidance as to the metrics that the SDKs should support — or, I guess, whether the SDKs should support collecting metrics at all? I don't know — anybody else on the SDK team want to talk about that? That's the one part of OpenCensus that is not commented on anywhere in CloudEvents, I think. That is true. I don't know — let me pick on some people. Mark or Scott, any comments on that?

We use OpenCensus in the SDK for Go, but not over the data plane, so we don't make a choice about, like, injecting a trace ID into the header. Do you think this would be a CloudEvents thing, or is this more of an implementation detail of an SDK? It's not even the SDK. Potentially you could say, "I support, first class, this concept that I can inject a trace ID on the transport — I support the CloudEvents trace ID extension." But that's different from exposing that through OpenCensus. Does that answer your question, Tapani?
Yeah, I wasn't actually talking about tracing that much — as I said, it already has the extension, and I don't think we can do that generically. But at least the libraries I've worked with have usually taken the responsibility of collecting basic metrics about their functionality and exposing those to the application using the SDK. I would find it weird if the CloudEvents SDKs did not at some point include metrics that you could then expose, for example, via a Prometheus endpoint in your application. You can of course do it yourself — I've just never seen that as a pattern.

Yeah, so I'd agree that this is more of an SDK issue. It is completely an SDK issue in my mind. I just think that, since we have SDK guidelines, if some of the SDKs do support metrics for the events, then the metric tags — or names, or labels, whatever you want to call them — should probably be standardized in the SDK guidelines so that they aren't language-dependent. Would you like to open a pull request to add that text to the SDK doc? Oh my. That's what you get for speaking up. That's what I get. Maybe — we'll see. I think I should.

Well, hey Doug, was there an actual GitHub PR or issue for this action item, or was it just on a call? It was just on a call; we never opened an issue or anything around it. All right, I might ping Michael and see what he was thinking. Okay, that'd be great. Let me do this. Okay, so we'll see what happens. Yeah, sure, it's not critical, especially for 1.0, because as I said you can do it just as well yourself — it's not required for the SDK. I'd just find it weird if it wasn't even on the roadmap. Yeah, okay, cool.

All right, moving forward: SDK stuff. We haven't had a call, so there probably isn't anything to say there, other than we will have a call right after this one for 30 minutes, as best we can.
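Returning to the metrics point discussed a moment ago: the idea of standardizing metric names and labels across SDKs could be sketched like this. This is a minimal illustration only — the metric name, label set, and classes are all hypothetical, not taken from any actual CloudEvents SDK guideline.

```python
from collections import Counter

# Hypothetical label set an SDK guideline might standardize, so that the
# same metric names/labels appear across all language SDKs. Nothing here
# comes from an actual CloudEvents SDK document.
METRIC_LABELS = ("specversion", "type", "outcome")

class EventMetrics:
    """Minimal per-event counters keyed by the standardized labels."""

    def __init__(self):
        self.counts = Counter()

    def record(self, event, outcome):
        # Key the counter only by the agreed label values.
        key = (event.get("specversion", ""), event.get("type", ""), outcome)
        self.counts[key] += 1

def expose(metrics):
    """Render the counters in a Prometheus-style text format."""
    lines = []
    for (specversion, type_, outcome), n in sorted(metrics.counts.items()):
        lines.append(
            f'cloudevents_received_total{{specversion="{specversion}",'
            f'type="{type_}",outcome="{outcome}"}} {n}'
        )
    return lines
```

An SDK could serve `expose()`'s output on a metrics endpoint; the point of the proposal is only that the metric and label names would be fixed by the guidelines rather than varying per language.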
I know Clemens obviously can't make it — he's out — but Mark, Scott, or anybody else working on SDKs, we can meet then, because I know there were some topics; actually, Klaus, I think you had a topic you wanted to talk about too.

KubeCon is next week. The slides are available — if you have any comments, please let me know. And just a reminder for those of you who have endpoints: keep your demos for your endpoints up and running. I will be doing a demo, or trying to, on Tuesday afternoon China time, which is probably Monday evening over here in the States.

Let's see, going forward. All right, incubator. We started a vote last week. I think there was one person who questioned whether we should go for incubator status, so because it wasn't unanimous, we started a vote. So far, everybody who voted offline voted yes. Is there anybody on the call — I don't think anybody's going to object, so let me just ask it this way — is there anybody who objects to going forward with incubator status? Okay, so we'll just do that; much easier that way.

Okay, so the next step in the process, as far as I understand it, is to put together a formal proposal, which I think is just a PowerPoint deck for the TOC. I can take that action and start putting it together, and then you can obviously review it and help tweak it. The biggest ask I have of you is to start giving me end users we can use to satisfy the requirement of proving that we have at least three end users of our spec. So offline, drop me a note about who your end users are — who's willing to be claimed as an end user and is okay publicly saying that they use CloudEvents, or your product that uses CloudEvents. I'll let you know when the proposal is ready for you to review, and then I'll get on the TOC agenda once we have all the data. All right, anything about those topics? All right, moving forward. Sorry, go ahead.
I think it's been quite hard to find names for these three end users. Listening to the calls for the past few months, I think a big part of that is the fact that the spec is clearly changing a lot in v0.3, and maybe in 0.4. There have been a lot of changes, and for example we haven't implemented CloudEvents for that reason — we will be implementing them as soon as I'm fairly convinced that the spec isn't changing much anymore. Yeah, that makes sense. It will get a lot easier in a few weeks, once we get 0.3 out at least.

Yeah. To be honest, I actually suspect we may not have that many issues finding them, because I know there are some products out there that use it today. I mean, all you've got to do is find a user of eventing in Knative, and as long as they're willing to put their name out there publicly, that will count, because Knative uses CloudEvents. So I think we'll be able to do it; I just need people to give me names, basically. What exactly is an end user here? It's a user of a product that supports CloudEvents. So the fact that a product supports CloudEvents is not sufficient — it has to be a user of the product. Does that make sense? Yeah, okay.

All right, v1.0 discussion. Not a lot to talk about here, other than I was using this to sort of track how things are going. So just a reminder that there are five issues out there that need PRs associated with them, or a proposal to close them. These four or five people, I guess, are sort of the owners of those issues, or volunteered to do something. I know, for example, Christoph, you volunteered to help Scott out, and I was hesitant to put your name here, but since the PR isn't there yet, I thought I'd use this as a reminder for everybody that we are waiting for PRs or some sort of resolution of those issues. These are just the v1 issues, and there are two — sorry, go ahead. Do you have the issue about the headers and the map encoding?
Is it on the agenda? Sorry, I don't have it open. It might be — oh no, it's okay, I think it's down there. Yeah, it's down there. And obviously, if you just don't like the ordering, let me know and I can reorder things. I try to do v1 stuff first, and things that are basically ready to go before longer discussions that might rathole. Yeah, sure — that shouldn't be a long discussion, that's my point. It should be kind of dry. Okay, well, let's see how far we get; I want to see if we can close lots of things today.

Just a reminder — there, I did it again — Tapani and Clemens, you two have two PRs that need updates for the "try for v1.0" status, or bucket. And again, everybody agreed we're still trying, hopefully within about two weeks or so, to get to the point where we at least do our release candidates for 1.0.

All right, so with that, let's see if we can close this puppy today. Oh, before we get to the PRs: is there any other topic you want to bring up before we get to the pull requests? All right, cool.

So, Neil — just let me go here for a second. I have a few comments from yesterday about the content type. Yeah. So, in my opinion, between that and the fact that I think you're pulling in changes that are obviously not related to your stuff, I look at those as more just syntax or rebase issues that should not really affect whether we approve this or not, to be honest. So what I'd like to do is ask the group if they have any questions or concerns about this, because if not, I'd like to conditionally approve it, assuming the editorial-type changes get made, and then offline, if I can just get one or two LGTMs, we can merge this thing and never have to bring it up again. Yeah. So let me ask the question.
Is there anybody on the call who has looked at this who has any concerns or questions they want to bring up? Okay, is there any objection then to approving this conditionally, on the basis that we resolve some of these typographical-type things and the rebase issues that have been seen in there? Excellent. Thank you guys very much. I know that wasn't a v1.0 thing, but it's been out there for a long time and I didn't want to ignore it. So thank you, Neil, for all the hard work you put into that one.

All right. Now, this one from James is tagged v1.0. However, I know Clemens had a lot of opinions on it, and since he's not on the call, and I don't believe James is on the call either, I'm inclined to defer it — unless you have any really strong opinions you want to talk about right now. Okay, so let's defer that one. In that case, Klaus, you get to go next.

Okay, so this is the pull request following the other one I had a few weeks ago regarding terminology. What I wanted to express here is that we now have this term "intermediary", and I wanted to use it to make clear that intermediaries, if they don't intend to somehow modify or delete an attribute, are supposed to forward it. There was a nice discussion as a follow-up to this pull request in the comments about how to express this best. I decided to just use the wording that was already there, with "silently ignore", and describe that for an intermediary, silently ignoring an optional attribute means that it must forward it. But there have been some other proposals — for example, just stating that an intermediary should, or is strongly encouraged to, forward optional attributes, or that an intermediary that is not explicitly configured to do otherwise must forward attributes.
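One of the proposals just described — forward everything unless explicitly configured otherwise — could be sketched like this. A minimal illustration, not text from the PR; the attribute names and the `drop` configuration are illustrative only.

```python
# Sketch of the "forward unless explicitly configured otherwise" proposal:
# an intermediary passes every attribute through untouched, including
# optional or unknown ones, unless it was configured to drop or rewrite it.
# Attribute names here are illustrative only.

def forward(event, drop=frozenset()):
    """Forward a CloudEvent, keeping unknown/optional attributes intact."""
    out = {}
    for name, value in event.items():
        if name in drop:
            # Explicitly configured to remove this attribute.
            continue
        # Everything else, known to this intermediary or not, is
        # forwarded rather than silently stripped.
        out[name] = value
    return out
```

The design point under discussion is whether that pass-through behavior is a "must" or a "should"; the code only shows what "forwarding" would mean either way.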
So there are several proposals in the comments right now. The other part I modified, in the primer, was mostly to also introduce the terminology — producer, consumer, and intermediary. So I guess the biggest question is whether people are okay with the general direction of saying that intermediaries need to forward unknown or optional attributes. Right. Yes. So, what do people think — does that sound like the right general direction, so that it's just a matter of getting the right wording? Since some of the comments are from me: I agree with the general direction. Okay. Anybody on the call have any concerns with the direction?

And just to the best of your knowledge, is it consistent with other specifications, such as the HTTP spec? When it talks about proxies and such, it does talk about forwarding unknown stuff. Does it require forwarding — does it say "must"? No, it says "should". Yeah, that's what I'm thinking about here — that's pretty strong wording. We do have a definition for intermediary. Okay. So the "must" was only in the case of silently ignoring, or if it's not configured to do otherwise, because there might always be some use cases where an intermediary might somehow interfere with the attributes. But of course, we could just summarize it as a "should" — that's exactly the discussion. Yeah, I just don't quite get the meaning you're trying to convey with that. Anyway, is the general direction a fair summary for everybody?

Okay, so maybe, Klaus, what we could do is take the exact wordsmithing offline, and maybe approve it offline if we can get enough LGTMs; otherwise, worst-case scenario, we'll revisit it again next week with the exact wording. Does that sound fair? I could just take over the proposal you made in the comments — I think that was pretty much this "should". What did I say? Oh — that might have been Scott's; I didn't give you the exact wording. Ah, okay.
Yeah, this "should" — that's true. Okay. I mean, it could just be a matter of changing this "must" to "should", right? Yeah, and then taking out the silent ignoring and just saying intermediaries should forward optional attributes. Yeah, I don't think the silent ignoring adds anything at that point anymore. Yeah, that was just referring to the silent ignoring for consumers and producers — to say what that would mean for intermediaries — but I can just simplify it. So basically, I think what you're saying is that this sentence would come down to something along the lines of "intermediaries should forward basically all attributes". Yes. Yeah. Are people okay with that general direction, with it just being a matter of getting the exact words? Okay, is there any objection then to working that offline? I'll wait until we get one or two LGTMs to make sure we didn't go off the rails. No objection. Okay, cool. Thank you guys — and thank you, Klaus.

Let's see — next, the roadmap. We don't necessarily have to approve this today if there's any pushback on it; there's no real hurry, other than I thought it might be good to actually modify the roadmap to align with our current plans. Basically, what I said is that the next milestone will be the release candidate for 1.0, and our criteria for reaching that would be to complete all issues and PRs tagged v1.0, and to decide how long our verification and testing period is going to be — basically, how long between the RC and 1.0, and what we're going to do in that time period in terms of making sure we actually feel like we have a solid spec. And then, to reach 1.0, we'll have completed all the criteria we've defined up here, and completed as many of the "try for v1.0"-tagged issues and PRs as possible. There's no fixed amount — it's just whatever we can get done in the time period; we'll get those in there. And thanks to Christoph's suggestion,
I added some wording here to talk about how these changes are expected to be non-breaking in nature. They're not meant to really change the semantics; it's more clarification-type stuff. Now, that's not to say that, as we do our verification testing, we won't introduce breaking changes — obviously we can, because we haven't gone to 1.0 yet — but these "try for 1.0" issues and PRs are meant to be clarifications more than anything else, or additional binding specs; we have a couple of those in the pipeline. And then in 1.1 I put all the other stuff: things that are not required for 1.0, process-related issues, and definitely-post-1.0 items. Anyway, any questions or comments on this?

My question is about the 1.1 bucket. If I see process-related issues — or rather, the question is: if I have something on the wire that's 1.0, and there is no change for me because it's all process-related issues that don't really affect what goes over the wire, then I don't really want that version change, in that sense. Does what I'm saying make sense? Yeah. So, to me, I think what you're really saying is: let's say, for example, we don't have any issues that have to go into the spec, and all we ever do are process-related issues — I think you're saying we actually might not release a 1.1. Yeah, right. I would agree. Let me take that action item: switch this, pull it out of 1.1, and just say this is a post-1.0 item,
but not necessarily a version-number type of thing. Let me work on that, because you're exactly right. Yeah, I was thinking of the SemVer scheme, where you have the major, the minor, and the patch version, and what I'm thinking is that a patch version would never actually change something on the wire. So if you release a new patch, it's a new version of the spec that you read, but it doesn't have an effect on the wire — on the wire we just use the major and the minor. It does — and maybe, now that you're mentioning all this, maybe it's a mistake for me to label this as 1.1, and I should find some other way to describe it, as a TBD kind of thing. Because, obviously, based on which issues we resolve, we may never have to go to 1.1, or may not release anything for a long period of time, and everything could be process-related stuff. Right. So let me rework this to be more of a TBD-type section, and we'll see if you like the wording I come up with. How's that? Either "future" or "1.0-plus". Yep, that kind of thing would work too. Yes.

Just to make things more complicated: what about extensions, or well-known extensions — does adding those affect the version? I wouldn't think so, because they're not actually versioned. Oh, yes, that is true — they don't need the version. Transports are, though, so that's going to be kind of an interesting one, and maybe we need to discuss that as part of this type of discussion. But I don't want to rathole on this, because there are a lot of things. So let me do this: are there any objections to the general direction I've headed in here, in particular this part here? Okay. I'm not going to approve the PR; I'll rework this section down here, and maybe we can talk about it again next week. But I definitely don't want to rathole on this one, because it isn't worth it. Everybody knows where we're going conceptually;
we just have to find the right words to put it on paper. Okay, so let me work on that one.

Where was I? Okay, now, this one was an issue originally opened by the other Doug. I'm not going to ask for a vote on this, because it was just opened a couple of hours ago — or maybe an hour or so ago — but I want to draw your attention to it and get a sense of how people feel about the general direction. Basically, Doug was suggesting that we change the definition of schemaurl to allow it to possibly define constraints on other CloudEvent attributes besides just data. So, for example, you might have a schemaurl that points to a document that says, "oh, by the way, type has to look like this." It's just to make it clear that the schema you're referring to isn't just about data — it could technically be about other things, if it chooses to talk about those other CloudEvent attributes. It seemed like a very reasonable thing to me, and like I said, I don't want to vote on it today, but I wanted to get a sense from the group as to whether you're okay with that general direction or not. Because if not, we'll go back and rethink it; otherwise, we'll wait until next week to do a vote. Okay — James, I think your hand went up first.

Doesn't this break the spirit of the separation of the data from the context a bit? That would be my only comment. I'm not against it; I'm just concerned it's tying together a couple of things which I thought were meant to be somewhat independent. I don't know how to respond to that one — Doug, or anybody else, want to respond? Doug, you're coming off mute. I'll respond after — I think Christoph has his hand up. Okay, Christoph, you want to speak? Yeah, I also don't get how it would work in practice. Let's say I have an XML document — sorry — what I don't get is: let's say it's XML, so I get an XML schema.
So the XML schema would describe the XML document. How would that description refer to the type, when the type is not part of the document being described by the XML schema? Yeah, this was my main problem as well. It still says a link to the schema that the data attribute adheres to — how do you mix and match two different kinds of schemas for two different content types? Okay, Doug, do you want to address that concern? Am I off mute? Yes, you are.

Okay. Well, I see this as trying to utilize an existing attribute rather than defining another one. It's about utilizing CloudEvents as an umbrella format in which more domain-specific schemas can be accommodated. Those domain-specific schemas — I'll give an example: GS1, which has an event format called EPCIS (E-P-C-I-S) that's really focused on location tracking of products in the supply chain. They have a format that could be accommodated here, but it's more than just the data attribute that would have to conform to the EPCIS schema — it would also have to utilize the other contextual attributes of CloudEvents in a certain way, to ensure compliance for anybody producing or consuming those EPCIS events. For example, with EPCIS, the time is a required attribute, not optional. And type — if you look at the CloudEvents attribute descriptions, it's very general; it's intended to be that way, to accommodate a lot of different use cases, and there are examples under each of those attributes. If you're adhering to a more constraining format, you want to lock down what those attribute values can be, so that they can accommodate that specific format. So it's beyond just data.
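A hypothetical sketch of the kind of schema document being described — one that constrains context attributes (type pinned down, time promoted to required) in addition to data. The shape here is JSON-Schema-flavored but hand-checked, and every name in it is illustrative: nothing is taken from EPCIS or from the CloudEvents spec.

```python
# Hypothetical schema: constrains context attributes as well as data.
EPCIS_LIKE_SCHEMA = {
    "required": ["type", "time", "data"],  # time promoted from optional
    "attributes": {
        "type": {"const": "org.example.epcis.event"},  # type locked down
    },
}

def conforms(event, schema):
    """Check an event against the combined context+data constraints."""
    # Every attribute the schema requires must be present.
    for name in schema.get("required", []):
        if name not in event:
            return False
    # Any pinned-down attribute values must match exactly.
    for name, rule in schema.get("attributes", {}).items():
        if "const" in rule and event.get(name) != rule["const"]:
            return False
    return True
```

The open question in the discussion is whether a document like this should hang off the existing schemaurl attribute or live somewhere else; the sketch only shows what "a schema that goes beyond data" could mean mechanically.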
So by extending this existing schemaurl attribute to accommodate more than just data, it moves it out of those data-specific attribute categories — it moves it up to the level of the CloudEvent itself; it's at that level. Does that help those of you who were asking questions or raising concerns?

Okay. I get the intent, but I wonder if it doesn't sort of demand — much as I hate to say it — another context property to carry that. Otherwise, I'm not quite sure what I would do with it, or how I would interpret it.

My hand's up, and I have to admit I'm torn on this one as well, for a couple of reasons. One is that the URL we're pointing to here — obviously, in a lot of people's minds, I think they immediately jump to something like an XML schema doc, where it's strictly an XSD file and that's all that's there. But technically this could point to just about anything. In particular, it could technically point to, say, a Word doc or a web page, and as part of that Word doc there could be the XSD file, and that's what you're supposed to use to verify the shape of the data. Or, actually, even within an XSD file, you could have a comment that says, "oh, and by the way, if this payload is being sent as part of a CloudEvent, the CloudEvent type attribute should look like this," or "the time attribute is no longer optional — it's now required." I don't know how I feel about that, but I could see that being how this gets done, so you're not necessarily breaking existing tooling that assumes this only talks about the body — sorry, only talks about data — while other people understand that there are cases where it goes further, and know what to look for when it's part of a CloudEvent.
There's the extra bit of information they'd need to know. I could buy into that — it's a little bit of squinting, but I could buy into it.

The other part I really wanted to ask Doug about is whether the type itself could be used as this sort of schema indicator you're looking for. A lot of the CloudEvents we're sending — all the CloudEvents we send — have a type, and a lot of the time they're prefixed with the same set of words or identifiers. For example, the GitHub ones all start with com.github, and I'm wondering whether that alone should be sufficient for someone to say, "I know that GitHub CloudEvents have these additional requirements, because there's some document that says so." Therefore I don't actually need to modify this URL; just the prefix of the type would be sufficient for me to know those additional constraints. So those are the sorts of things that were in my head. I was wondering, Doug, if you had any comment on whether reusing the prefix in the type would be sufficient for you, possibly. I would just say that, in my envisioning of this, the type is one of the attributes that the schema would dictate — what would be in it and how it would be structured. Yeah, I guess I'm asking about kind of flipping the relationship around slightly: rather than the schema pointing at the type, it's more like the type points to the schema. Anyway, okay — I was just wondering. Okay, Tapani.

Yeah, just quickly. On the first thing you were talking about — just pointing to a Word doc, or putting comments inside a schema — I myself am not enthused about encouraging that kind of behavior, because I know I would miss 99 percent of those: either the whole schema, or the specific comments about other attributes in the CloudEvent, because they're not machine-readable. If someone adds a comment to their schema afterwards, I'm never going to notice it. Yeah. And the other
thing is that the type does already dictate, practically, what you will find in the CloudEvent. That's just out-of-band, side-channel communication: GitHub will most likely document their own extensions, what they use, and so on. That already happens; it's just not spelled out in the doc, so it doesn't require changes in that sense. Right — so right now it's assumed that you would read GitHub's documentation if you use their CloudEvents, instead of having a concrete schema. And the point is that your schema within the payload might change without the documentation for the actual CloudEvent usage changing. And I think this is only a problem because, for me, it's also a bit unclear how much domain info you should be including in the context attributes. How much of a strict schema should you have? Is it just contextual information about routing and such, or does it also have domain information? Because subject does, and source does. So, yeah. Okay — Doug, do you want to respond to that?

Okay, so tell you what: since it's a very, very new PR, I just wanted to bring it to people's attention. Please go ahead and comment in the PR itself — I think all the comments people have made have been really, really good. Let's see how much of an offline discussion we can have, and maybe we can figure out how we want to move forward on this one before next week's call. I just wanted to bring this one to your attention because I think it's the last PR tagged v1.0 as of right now, so I want to make sure people pay attention to it and think about it. Okay — otherwise, let's work it through the issue, or sorry, through the PR itself.

Moving forward, what I wanted to do is see if we can get rid of these — or at least get rid of this one PR that's out there, and this one issue that's out there — because it was tagged v1.0 a long time ago.
Thomas from Google suggested, or questioned, whether we need both binary and structured formats. I believe, based upon everything I've heard in the group, that there are enough people who like both. For example, Tim at AWS likes structured, in particular the JSON one. And I know there are other products out there, for example Knative, which uses the binary format, at least for the one bit of it that I play with a lot. So it seems to me that it would be a mistake for us to drop one of them entirely. And that's not to say that we couldn't change our specifications, because some of the specs right now, the HTTP one I think, require the receiver to support both. We could change that requirement if we wanted, but I think that's a different issue relative to this particular issue about the option of dropping one of the two. I think it would be a mistake for us to do so, so I'm proposing that we close this with no action and leave both in our specifications. Any comments on that? Tapani? Yeah, just a quick question. I have been thinking about this; I didn't even know about this issue. But why does Knative use the binary mode? Scott, would you like to comment on that one? Yeah, we use binary mode because we are strictly HTTP, and to make an HTTP request compatible with CloudEvents, it's just adding headers instead of changing the body. If you change the body of the request, you have to change how everything actually consumes the event. That is a great point, thank you. Yeah, I was going to say that is the exact reason I've been very enamored with the binary mode, right? If you have existing messages flowing around, I don't have to change my code that processes them anymore. All I have are extra headers that I may or may not want to deal with; nothing else changes. I love that. So yeah, I hadn't even thought about that. That's pretty great, actually.
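Scott's point above, that binary mode only adds headers while structured mode replaces the body, can be sketched roughly like this. The `ce-` header prefix and the `application/cloudevents+json` media type come from the CloudEvents HTTP binding; the helper functions themselves are hypothetical, not the official SDK API:

```python
import json

def to_binary(event, data):
    """Binary mode: context attributes become ce-* headers; the body is untouched."""
    headers = {"ce-" + k: str(v) for k, v in event.items() if k != "datacontenttype"}
    headers["content-type"] = event.get("datacontenttype", "application/octet-stream")
    return headers, data

def to_structured(event, data):
    """Structured mode: the whole envelope, attributes plus data, goes in the body."""
    envelope = dict(event, data=json.loads(data))
    return ({"content-type": "application/cloudevents+json"},
            json.dumps(envelope).encode())

event = {"specversion": "0.3", "type": "com.github.pull.created",
         "source": "https://github.com/cloudevents", "id": "A234-1234",
         "datacontenttype": "application/json"}
body = b'{"action": "opened"}'

bin_headers, bin_body = to_binary(event, body)
str_headers, str_body = to_structured(event, body)
```

Note that `bin_body` is byte-for-byte the original payload, which is exactly why existing consumers keep working, while `str_body` is a brand-new JSON document that every consumer has to understand.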
Yeah. So, anyway, the proposal from the group is to close this issue with no action. Any concerns or questions on that? Is there any objection to closing it with no action? Okay. Like I said, if you guys want to revisit the decision to require receivers to support both, in particular for HTTP, feel free to open an issue, because I still think that's a valid question. Whether I agree or not is a different topic, but I think it's a valid question. Okay, this next one is Christoph's; hopefully he was wanting to talk about that one. Okay, Clemens isn't here, but let me see if this one is in a state where we could quickly approve it, because I think he addressed everybody's concerns. Let's see. Yeah, I don't think he made any material changes since last week; maybe just some minor syntactical, editorial-type changes. I'll give you guys a second to review and refresh your memory about what he said here. Okay, are there any questions on this, or concerns with it? And keep in mind, this is just the primer, so it's non-normative. Are there any objections to approving this? Cool, thank you guys. No, sorry. I'm so sorry, go ahead. Doesn't that conflict a bit with the transport binding and the event format encoding sections? Because the event format encoding section should define how the information model of the base specification, together with the chosen extensions, is encoded for mapping. But actually, for example in the case of the binary encoding... oh, that's a different encoding, I guess. Okay, maybe I'm wrong. So the HTTP binary encoding actually does the event format encoding part in the transport binding, so it doesn't match this architecture. Say, wait, okay. You're asking which part doesn't match? So the HTTP binary encoding is, as far as I remember, defining the HTTP transport binding, right?
That being there doesn't conform to this architecture description, because the event format encoding is a separate thing here. I'm not quite sure I follow. The binary encoding decides how the information model is encoded, instead of a different event format. Oh, are you suggesting that somewhere in here it should say "for the structured format"? Yeah, I guess so, because the binary encoding doesn't follow this architecture; it's transport-binding-specific. Do me a favor: make a comment to that effect somewhere here, and then we'll hold off. Yeah, it's just a nit; I don't think it's that major, it's not important. Well, okay. Well, let me put it this way: would you prefer to get that resolved before merging this, or would you like to have a follow-on PR? I think it's fine to have a follow-up. Okay. Do me a favor then and open up a pull request to do that. Yeah, okay. Cool, thank you. And actually, just a reminder to everybody, especially when it comes to things like the primer: feel free to make edits and additional PRs as you see fit. None of this stuff is probably perfect as of right now, so we're always looking for PRs to fix wording and things like that. All right, let's see. All right, this one's been out there for a little while. Somebody opened an issue saying it was a little unclear what the type attribute documents, what it relates to. So what I did is open up a pull request here to make it clear that it's relating to the original occurrence; it's not necessarily the type of the CloudEvent itself, it's the type of the original occurrence, the event thingy. That's what I was just trying to convey here, and during that discussion
We had, I think two weeks ago, talked about this. I originally had some text that talked about how a single occurrence could result in more than one event, and I actually had that text down here. It was a little confusing because it kind of implied that type varies only when you have multiple occurrences, and that wasn't the intention. So I moved that little sentence up here, into the definition of "event", to make it clear that a single occurrence may result in more than one event; that way it's not tied to the type stuff anymore. So I don't think I actually changed anything from the last time we looked at this; I just moved the sentence around to not imply a linkage between the two. Are there any questions on this? Any objection to approving it? This is the only normative change, technically, and this is just explanatory text. Any objection? I'm good. All right, cool, thank you guys. Whoops. All right, Eric, do you think this one's ready to go? Are you and Clemens going back and forth still? I haven't heard from Clemens since my last comment, so I don't know. Yeah, I'm a little nervous about doing it without him here, because he may have just been on vacation or something recently. So are you okay with deferring this until next week? That seems entirely reasonable. Okay, I'll fix that. All right, next one. So Thomas from Google, a long time ago, opened up this PR originally, and then it kind of lingered. And since he's off doing some other exciting stuff, I decided to follow through on it for him. Basically, what he's talking about here is trying to add some text to the primer that makes it clear that transport-level information is not meant to go inside of the CloudEvent itself, in particular the transport-level routing information. So there's a little paragraph here. What section was this one? Doo-doo-doo... okay, that's just part of the design goals.
Okay. So basically he introduces the notion of transport-level metadata not being part of the CloudEvent spec, and then he points you down to the non-goals section, which gives a much more detailed explanation. I'll give you guys a second to look that over, in case you haven't read it recently. Okay. Again, keep in mind this is just in the primer. Are there any questions or concerns about this general direction, or with the text here? Okay. Is there any objection to approving it then? And I'll pick on Tapani. Oh god, done again. Well, you're the one that keeps speaking up. I know, I've taken Clemens's role now that he isn't here. Exactly. Oh my gosh. I think this is a pretty good rewording; it addresses all the concerns I raised with the PR. Okay, anybody else have any concerns or questions? Okay, last chance: any objection? No objection. All right, thank you guys. There's just a typo, I posted it in chat, where 38 148 is "transport transport". Okay, I will fix that, thank you very much. Proof the typo, got it. Okay. Fabio is not on the call, but have people had a chance to look at his Avro spec? I think there are some minor syntax things, it should be 0.4 with the version, but aside from that it looked good to me. But I know nothing about this protocol, so I was looking at it purely from a CloudEvents perspective. Anybody have any comments or questions? Okay. Do people want more time, or should I ask for a vote? Heinz? Yes, can you hear me? Yes, I can. Yeah, I'm just wondering: do we really need to do all these mappings for the spec? Because it might open a can of worms. Avro, yes, is very popular, but there are actually quite a few industry-standard mechanisms for serialization and deserialization, so we might be opening a can of worms here. Is that a good or a bad thing, though? That's a bad thing, where, you know, you'll start getting hundreds of requests for "well, what's the spec mapping for...?"
I don't know, you know, there's a whole litany of them, and you just may not want to open that door, right? So let me poke on that a little: why would that necessarily be a bad thing? Uh, it'll potentially be dozens and dozens of requests for mappings to different serialization capabilities, everything from Google Protocol Buffers to Kryo to... you know, the list just goes on and on and on. But isn't that a good thing? Isn't that an indication that people want to use it in other bindings? But if it's a binary binding, you're already going to be doing that serialization as part of your application, and then attaching it as a binary attachment. So if it's a binary attachment, that's really outside of the CloudEvents spec. That would be, again, back to: I would bind it in somewhere in the header, to indicate perhaps that this is an Avro binding. But I may want to actually put all kinds of additional things into it, and I probably wouldn't be worrying about serialization of things such as headers, right? So this would be purely application-specific, at least I believe it would be. Tapani? Um, I think that's more a question of whether the SDKs or the user should do the mapping into the format. As it's currently stated, I think the... How do the SDKs do it now? If you want to send JSON, do they construct the JSON for you?
You have to do it yourself, I believe. I think the Java SDK serializes the event for you. Yeah, I'm pretty sure the Go one does too. And if that's the goal, then I think we do need to have the event formats, because the SDKs will need the specification. I think it's part of the question whether we should allow someone to just insert... or whether the SDK should probably just allow someone to put whatever they want there, if the event format they need isn't supported. Heinz, does that answer your question? Not exactly, but I'm going to have to think about this one a bit more. Again, I was always under the impression, and again, JSON is completely different, that if I need to serialize, it's all going to be very simple string serialization, because that's all there is for the JSON, right? But if you are actually creating some custom payload, you will be defining that serialization yourself, right? So if I have the payload serialized as, say, a Kryo serialization, do I then want to wrap that again into, you know, an Avro serialization before I actually send all this stuff? This wouldn't affect the payload, right? This is about wrapping the actual CloudEvents. But this is the point: when I'm wrapping those CloudEvents, are we going to specify how you wrap those for serialization? Does it really matter? Because if I'm doing a binary transfer, the CloudEvent information is in the header; it's not in the payload, which is what I'm sending, right? So, okay. My reading of where we're headed here is: your concern isn't about this particular proposal itself, it's a higher-level issue. Right. And so can I ask you to open up an issue to have that discussion in the issue itself? Because I don't think it's fair to pick on Fabio's PR to have this discussion without him on the call. Yep.
Yeah, because I think, given the current path that we're on, we should accept this pull request on its merits and then keep the higher-order discussion that you want to have separate. Because if, for example, you're correct, and we should do something different rather than just open up the floodgates and get tons of bindings into our domain, then I think that's going to radically change what we do with all these specs, right? For example, we may decide to kill them all and do something different. But I think that's a bigger discussion we should have separate from the current path that we're on, so I'd like people to evaluate this particular PR as a standalone decision. Yeah, but this is actually part of, I think, where the confusion comes back into all of this: a lack of examples other than HTTP for the transport binding in the actual SDKs. So for example, in the Java SDK, well, at least they've done a good job of saying "here, I have an object that represents all the parts I need to form a CloudEvent". But if I'm doing a binary transport binding to, for example, AMQP, I still have to form that AMQP message, which is separate from all the header properties that I put in according to the specification. The header properties are going to be serialized based on the libraries for AMQP, not CloudEvents or any CloudEvent definition, and the payload is just going to be the actual event itself. I don't see that you would actually take the entire event object and send that, which would include all the stuff that's in the header as well as the payload. Or is that what we really are expecting? Because that is not my understanding of the spec. Maybe I'm missing a fundamental part here, but when I do a non-HTTP transport binding, all the CloudEvent stuff is in the header, the message payload is just the payload, which is the event, and the header describes what that event is.
Right. Okay, so unfortunately I'm going to have to give this more time later; let's try to end the call on time. Can I... so the Go SDK shows examples of doing the binary and structured encoding for AMQP. It also does structured encoding for NATS, and it does binary and structured encoding for Pub/Sub, as well as HTTP. Okay, has that been added recently? Because it's been a while since I looked at it, and it was only HTTP. Well, it is completely rewritten now. Beautiful, beautiful. I'll go back and revisit it before I open the issue, because that may clear up my concerns. Okay, cool. Okay, so Dan, I got you, thank you. Okay, so we are at the top of the hour. Jem, you put a comment into the chat asking for more time; I assume it was related to this issue or this PR. So, okay, we won't push on this PR, but please do take a look at it. I would like to get this one in next week if possible, because I don't think there's anything that controversial in there, aside from Heinz's higher-order issue, which you said you'd open an issue about. So with that, let me go back and do a final roll call. Let's see: Javier, are you still there? Yes. Okay, and Jem, I got you. William, are you there still? No, I lost William. What about Glenio? I don't see him on the call, or Gilbert. Okay, is there anybody I missed for roll call? Okay, in that case, thank you guys very much, and if you're on the SDK team, please stick on the call, because that call started a minute ago. Everybody else, you're free to go. Thank you guys very much for a very productive call. Later. Early? Yeah, I don't know, you're both... or we're all both, all right. Of course. That's okay, Jem, you can even unmute yourself if you want to. All right, I'm going to lurk a bit, but I'm probably not going to unmute myself anymore. You're so funny. Unfortunately, we were supposed to talk about Clemens's PR, and without him on the call that's going to be a little harder,
so we may have to skip that one. All right, let's see. Klaus, you're still there? Good. So Klaus, do you want to talk about your comment here? Um, yes, I have to recall exactly what it was about, but I think I was doing some recap. When we were preparing the demo for KubeCon, we were in a rush, and I was now looking into the issues I ran into during that preparation. And, yeah, exactly. So I just saw in the Go SDK... I mean, we just had this switch from this string representation, I think. For JSON... it wasn't exactly that, it was something like: the strings, and the HTTP headers, were supposed to be JSON-encoded before, and we switched that to a string representation. That was the types discussion, I think. And I just stumbled over it because in the Go SDK I saw some lines of code where, for HTTP headers, at least for extensions, the json.Marshal routine was called. And I saw a few more things where I wondered whether that was implemented the same way in all SDKs, and whether it wouldn't make sense to have something like common test cases, test events, to check, maybe in unit tests in the corresponding SDKs, whether they all work the same. Just to make sure I understand the issue you're talking about: do we say anything about encoding or escaping things, or do we just say "take the value, it's a string, just stick it in there"? I think... it was about double quotes, also. There was one issue I opened for the Go SDK. Yeah, so we had this discussion that if we represent types correctly in HTTP headers, then for strings we would have double quotes to indicate that it's a string, I think. And we changed it so that we now just say that everything has its canonical string representation. Yeah, so that's what I said in that section.
I just wonder if those recent changes are already reflected well in all the SDKs, because I saw that in the Go SDK it wasn't done yet; I mean, that change was very fresh. And I raised it because I think it might be handy to have some common test events to send and receive with those SDKs, and see if they all work the same. Okay, so I think I understand where you're headed with this. Because originally, when you pasted this issue to me in Slack or wherever it was, I thought maybe you were wondering whether there was a problem with the spec. But really you're just concerned that perhaps all the SDKs are not adhering to the changes we made in 0.3, and so you just want test cases or some sort of verification, right? So that was one example. I just realized, I mean, you can discuss these things in the abstract all you want, but once you start doing the actual implementation you run into a lot of subtleties, and I wonder whether they're handled the same way in all SDKs. Another thing might be the discussions around the map type. In the Go SDK, I think I saw some unit tests that had some nested types, and I was just wondering if that's really consistently handled in all SDKs. One note about the Go SDK and map types: I have chosen to not continue to nest the map types, because I think the spec is wrong, so I only go one level down. Okay, yeah, that's what I realized, so that's intentional, okay. Scott, I'm sorry, can you elaborate? What do you mean by "you only go one level down"? If you give me a map that's a nested map of maps of maps, I only pop out one map level. Oh, does the spec say you should do multiple? The spec says you should go as far down as the maps go, and I say that's wrong. Oh, yeah, I don't like that. I think we do in general? That's even worse.
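The double-quotes issue Klaus describes above can be made concrete with a tiny sketch (the helper names are hypothetical): JSON-encoding a string attribute value into an HTTP header wraps it in quotes, whereas the 0.3-style canonical string representation does not.

```python
import json

def header_value_json(value):
    """The old behavior being discussed: JSON-encode the attribute value.
    String values come out wrapped in literal double quotes."""
    return json.dumps(value)

def header_value_canonical(value):
    """The 0.3-style behavior: every value uses its canonical string form."""
    if isinstance(value, bool):
        return "true" if value else "false"
    return str(value)

print(header_value_json("release-1.2"))       # '"release-1.2"' (quoted)
print(header_value_canonical("release-1.2"))  # 'release-1.2'  (bare)
```

An SDK still calling `json.Marshal` on header values, as Klaus spotted in the Go SDK, would produce the quoted form on the left, which is exactly the kind of divergence common test events would catch.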
Okay, those are exactly the cases I wanted to bring up, or maybe encourage that we have a collection of bad examples somewhere, I mean of really tough events with edge cases. So Scott, do you think the current set of issues that we have open covers that particular case, or do you think, for example, James is focused on a different aspect? No, it's covered in the canonical CloudEvents encodings issue, the one that's posted in the comments from Tapani. Okay, because Evan opened up a pull request, but he didn't make any change to the spec; he just wanted to list some gotchas to watch out for. Yeah, which shows that the spec has some holes. Okay, because I'm trying to figure out how we go about getting those changes made to the spec. Can one of you guys open up an issue or a pull request or something to make sure we cover this topic? Because this sounds like kind of a big one. Are you talking about the maps in binary encoding? Yes. The issue for me, the one in the call that we didn't get to, has basically morphed into that issue. Doug, you even made a comment about changing the spec. Is it this one? The one below. Oh, gotcha, okay. Doug, you even just made a comment about how to fix this issue there. You expect me to remember that? Okay. Five hours ago. That's way too long ago. Okay. Okay, so we do have an issue out there; that's good. So we will at least talk about it at some point, and it is tagged with 1.0, so that's good. Okay, so we will talk about this at some point, cool. Okay, so there are two aspects to this one, then. One is that issue we were talking about, but then there's: how do we do testing to make sure everybody's correct? What do you guys want to do about testing? Because I know this has come up before, like in this 381 issue. How do you guys want to address that?
Okay, I think that actually might be part of this testing and verification period that we're talking about doing before we go 1.0. Could someone take the action item to come up with some sort of test harness, or maybe a test client, or something like that? So at one point I actually had part of one. The trouble is, unless you leverage, like, a golden SDK, you have to rewrite an entire SDK. We possibly could do this with pre-canned responses that are hand-edited: just send one and then expect a certain response, or something. But I started writing one and realized it was exactly the code I was writing for the SDK, and I stopped. I was thinking exactly about that too, yes. So could we perhaps start by just collecting... I mean, if we are working on SDKs and then doing debugging, maybe we create some test events to track down some issues. Yeah, I think the right answer is to make a small framework, specific to a transport, where you can record or create new encodings on the wire, and then you send that to the SDK configured with a certain transport. So it's just kind of like pre-canned messages; I think this would probably get us pretty far. So how do you verify that the receiver, the receiving SDK, actually parsed it correctly?
We would have to have a canonical form of the event on the other side. Hmm, or of course a way to spit it out in XML and make sure the XML matches. Oh... well, now I need a new laptop, because I... But I mean, if you spit out JSON, for example, because every transport supports JSON, like, show me the JSON representation of the event that you received; that's easy to compare. Yeah. The funny thing, and I know it was a half joke, but the reason I picked on XML was because I'm pretty sure no one supports it today, and the last thing I want is for someone to say, "oh, I'm passing the test because I support structured mode and I just echoed what I got. Yo, I support JSON, or sorry, XML." "Do you?" "I do." Excellent. That's wonderful. Okay, I even have tests. So how do you guys want to move forward on this? Should we make a task force to build a client library that you can direct at an SDK or a running service and say "verify that you get all these messages", where what exactly that means is TBD? So in other words, every SDK needs to have a client echo server that will parse it and then retransmit it back out? I would say more that every SDK has to provide an application that you can point at and ask, "did you get this result?" It doesn't have to be part of the SDK, but it has to be a thing that's implemented by the SDK that you're referring to. Right, but it would be up to the test framework to see what it sent and validate it against what it received, right?
That's right, the test framework would expect, like, a file to be written to the local disk or something. Or just, yeah: if you want your SDK to participate in the test, you've got to provide us with two URLs. One is the URL to send the event to, and the other URL is to do a GET to see what the result is, in whatever format that may take, XML as an example, right? That way the tester can do a POST followed by a GET, and you should be able to verify the results. Yeah, that makes sense. Yeah, and that's easy enough that anybody could participate, and you could host it anywhere on the web, so people can test whenever they want. Also, to combat the echo problem: you send the event in one format and reply in another, so for every single event format you sent, you verify the conversion into every other event format that the SDK supports, and check the responses each way. Yeah, you'd have to. It gets tricky with other transports, though. So like, if you want to verify that you received something on AMQP, you have to know both its AMQP address and its give-me-the-result URL. Isn't this about the codecs, the binary encoders, right? But is that really their problem, though, Scott? I mean, if I give you a URL that starts with amqp: something, and then I give you an HTTP URL for the check-the-result thing, is that really an issue? But it's subtle; there are a lot of details there.
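The two-URL flow described above, POST a pre-canned event in, GET the canonical rendering back, compare, can be sketched with an in-memory stand-in. All names here are hypothetical; a real harness would hit two HTTP endpoints provided by the SDK under test:

```python
import json

class FakeSDK:
    """In-memory stand-in for the SDK under test; real ones would expose two URLs."""
    def __init__(self):
        self.last = None

    def post(self, wire):
        # Plays the role of the "send the event here" URL: parse the wire form.
        self.last = json.loads(wire)

    def get(self):
        # Plays the role of the "GET the result" URL: return the canonical form.
        return self.last

def run_case(send, fetch, wire_event, expected_canonical):
    """Send a pre-canned wire-level event, then compare the echoed canonical form."""
    send(wire_event)
    return fetch() == expected_canonical

sdk = FakeSDK()
ok = run_case(sdk.post, sdk.get,
              b'{"id": "1", "type": "com.example.test"}',
              {"id": "1", "type": "com.example.test"})
```

Asking the SDK to return a different format than the one it received, as suggested above, would defeat a trivial echo, since the conversion forces a real parse.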
Okay, credentials and things like that. Oh, well, yeah, all right, but it's non-trivial. Yeah, okay. Well, it sounds like this isn't necessarily that hard to do; it's just a matter of the first step: someone needs to actually write down the structure of this entire testing framework, and then, if we all agree with it, we can go off and implement whatever is necessary. Is there somebody who wants to take the first pass at writing down what the framework looks like? I have half of one, if no one else wants to volunteer. I just heard a volunteer. Okay. Or I guess I could stage it on my personal... We could create a test repo. I think... give me a name and I'll create a repo for you. Something like "sdk-test"? A lot of it is compliance. Something shorter... compliance? Compliance, I'll do that. Okay. Okay, I'll do that, cool. Verification, or yeah, some word that means "test me". Just call it "test"? No, it's not really... oops. Okay, we'll do "compliance" for now; we can always rename it. We should... let's be more internationally friendly: what is the German word for compliance? That's a good question. A direct translation, I don't know; we're so used to using the word compliance. I don't know. Let's keep compliance. Yeah, I have no idea. Yes, yeah, okay, thank you. Although if you really want to confuse people, let's go for, where is it, Chinese? Yeah, well, that's exactly what I was thinking: if we do Chinese, no one will ever know what it's about except the Chinese folks. Okay, anyway. Okay, so I think we've addressed those. Is there anything else in your issue that we need to talk about? Um, well, maybe that we can, in the same repository, probably just collect a first list of sample events, good candidates for testing, I don't know. Yep, makes sense. And that's where I think Evan's PR is going to come into play. Yep. How about "conformance"? That's better,
I think. Anybody object to "conformance"? Because then we could write a framework and then we could list examples. That's fine. Any objections to "conformance"? Done, cool. So the same... oh, I'm sorry, I forgot. Jem, you wanted to say something? Yeah, just a general comment. How are the SDKs meant to be aligning with the SDK primer, or spec, that I guess Clemens put together? And also, I understand obviously that JSON needs to be supported out of the box, but I'm curious how some of these SDKs are expecting to work with something like protobuf, where the entire event format is defined in proto. So I'm a little bit curious about what the thought process is there. I can see in Clemens's thing he's got, you know, providing marshalers for the data payload. I assume you also need marshalers for the extensions as well, so that they can go into headers. So I'm just trying to understand where we're going with that stuff, because I think as a company we're on the verge of starting to actually code to CloudEvents, but from what I can gather our engineers are not looking at the SDKs at the moment. I'm not entirely sure why, but I wanted to understand your direction. Very open-ended, I'm afraid. Yeah, I haven't attempted proto yet, but it's not impossible. You can technically provide your own transport, and you can provide your own content... I call them codecs, where, by inspecting the media type of the incoming HTTP request, or the data content encoding for the CloudEvent, you can see what the data encoder should be, so you can select a decoder that matches. Right, that was Scott.
Yeah, I guess what I'm saying is that in the spec it sort of says you should define a CloudEvent object, so the SDKs have gone and created a CloudEvent object representation in some cases. But when you talk about something like proto, all you really want to do is layer an accessor or something over the top of the generated proto model. Yeah, your SDK is not going to own the proto definition. So, yeah, it's just a struggle for me to understand how these would work, especially when you throw in Avro, which is coming, and I think we were talking about other binary schemes as well. Just trying to get a sense of direction: is the SDK spec meant to be adhered to from a pattern perspective, or is it just a best-practices sort of document? I've been ignoring the SDK document that Clemens wrote. Is that because you think it's wrong, or have you just not had time to see what's in it? I think it's an opinion that does not apply to how every language is composed. Hmm. Yeah, and that, I guess, is the interesting point. But I mean, presuming you follow a similar pattern... Yeah, or are you sort of saying no, it's not idiomatic, so it doesn't make sense in my situation? It's not idiomatic. The general thought process I've been using for the Golang SDK is that you deal in the CloudEvent structure, that higher-level object, and you never ever think about the transport. So you interact with this client, and then under the covers the transport does its job, but you never ever see it. Like, the only time you actually see it is when you create the original client that you're using to send and receive. I think Clemens takes a very different approach, right?
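The client style Scott describes above, where you only ever touch the event and the transport stays under the covers, might look roughly like this. All of the names here are hypothetical sketches, not the real Go SDK API:

```python
class RecordingTransport:
    """Stand-in transport; a real one would encode and deliver over HTTP, AMQP, etc."""
    def __init__(self):
        self.sent = []

    def deliver(self, event):
        self.sent.append(event)

class Client:
    """The caller deals only in CloudEvent dicts; the transport is chosen once,
    at construction, and never seen again."""
    def __init__(self, transport):
        self._transport = transport

    def send(self, event):
        for attr in ("id", "source", "type", "specversion"):
            if attr not in event:
                raise ValueError("missing required attribute: " + attr)
        self._transport.deliver(event)

transport = RecordingTransport()
client = Client(transport)
client.send({"id": "1", "source": "/demo", "type": "com.example.test",
             "specversion": "0.3"})
```

The design choice being debated is exactly this: the transport appears only in the constructor, so swapping HTTP for AMQP would not change any of the sending code.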
Yes. And what's interesting is, from my perspective, I always thought both approaches were valid. But I have to be honest, I haven't checked the SDK doc to see whether it sort of forces you down the Clemens path or not. It kind of does, which is why I've been ignoring it. Interesting. Well, that's something we should probably get fixed then, because if we have at least one SDK author who disagrees with the doc, and you're doing stuff that isn't consistent with it, then, in my mind, that kind of makes the SDK doc pointless, and we should either make the SDK doc align with what you guys are doing, with what's actually being implemented, or kill the doc because it doesn't provide value. I think each SDK should be the domain expert on the language that they're using, and every language would do something very different. So, if I remember correctly, one of the reasons we came up with the SDK doc was because someone at some point said, gee, wouldn't it be nice if we could have some level of consistency across the various languages, across the SDKs. Do you think that's a pipe dream, that it actually can't be done because there are so many differences, and you want things to look more native, so trying to get consistency would actually hurt that? I think, if you're trying to get consistency around how you compose each language... I don't think that's a goal; at least it's not a goal for the Go SDK. Okay. Interesting. Because it doesn't really make any sense, and in fact, the SDK document says that basically any object that represents a CloudEvent should be valid in any state in memory, which is completely untrue for Go, because it's not possible. What does that mean? Not that it's not possible... what is Clemens looking for? I can't remember. I don't know exactly.
I think he's assuming some very big structure around factories and conversions and marshaling, so that you always get a valid event. But I don't understand how you would assemble an event, because as you're assembling it, you need something like a builder pattern: at the final step you say build and you get a valid event, and you're only interacting with the builder. But that's not how Go works.

Hmm. Okay, maybe I need to go back and reread the document. That's interesting, but it sounds like a good topic for us to have on the next call, with Clemens on.

Okay, but we actually only have four minutes left on this call, and I wanted to be able to touch on your subject, Scott. You said responses to events; what do you want to talk about there?

So I've been working with Alan Conway. I don't know if he's on the call. Let's see. No, he's not. He's trying to push some very big changes into the Golang SDK to make it kind of unidirectional by default. All right, okay, sorry, let me step back. His general concern is that there's been some misunderstanding about the CloudEvents SDK in regards to an event showing up and it being expected, potentially, that you can have a response. This is very true in HTTP, and it's actually a very useful property, because you don't have to have the next stage wired up: you can have an invoker that goes and invokes a function with a CloudEvent, and that potentially produces a new event. It doesn't have to be a response to that event in the sense that the sender of the original event intends to receive this response.
It's more that in Knative we use this mechanism to do filtering. If an event goes into a function, the function can act as a filter: the response then gets forwarded on to the next stage if there is one, and if there isn't one, that event chain gets filtered out. Or that filter has the chance to mutate the event, or potentially produce multiple events. But this is only a feature of HTTP, so there's probably some guidance required from the CloudEvents SDK, sorry, the CloudEvents spec, about how you can leverage this feature of HTTP but also have it apply to every other transport. This is not possible in AMQP unless you give it a response topic. Anyway... go ahead, Jim.

No, I was going to say, you're sort of drifting into ESB territory at that point, aren't you, where you're more trying to define interactions than just transport encodings. Do you see what I mean? You're trying to put behavior over the top. Is that a higher-order function than an SDK? I guess that's the question.

So I have a client, and some transports support this mechanism of having a response and some don't. So I made the choice that the top-level interface has this optional piece: you can say I accept responses, or I produce responses. Most of the transports in the Golang SDK ignore this field because it's not actually supported by those transports. So the question comes up: can you decouple behavior from the transport if you're talking about an SDK?

I apologize, I'm not quite sure I completely get it in terms of whether this is a spec issue or an SDK issue.

Well, I think it becomes a pattern issue, doesn't it? I mean, much as one of us has disregarded Clemens's SDK document, that's the sort of stuff that would go in there. Yeah, to define the expected contracts.

So Scott, it sounds like you're saying: okay, I demarshal something off the wire,
I give it to somebody to do something with, but is it my job to send a response back, or is it somebody else's job?

Well, because I own the HTTP request that's opened, I'm invoking you in a callback, and the SDK is responsible for responding to that. That callback has the opportunity to return an HTTP response, because that's where the request is open.

I apologize, I actually need to jump off to another call. You guys can stay on, since I don't own the Zoom; you can continue, just take notes in there. But I need to jump off. Somebody's pinging me. Okay, bye guys.

Yeah, I need to drop off too. Thanks.

So Scott, I see the point, but as I said, I've not spent a lot of time with the SDKs yet, and I'd sort of viewed them as something that would be wrapped by something else. So, I don't know, I'd have a JAX-RS endpoint that I would code, and it would go: oh, I've hit this endpoint, therefore I'm going to use this SDK function to demarshal my HTTP request. Rather than expecting the SDK to actually own that JAX-RS endpoint, or the actual endpoint protocol, it's literally just a transport binding. Do you see what I mean? Maybe I'm not explaining myself very well.

Sorry to interject, but that also depends a lot on the language.

True. Yeah, no, I get it.

For example, for JavaScript, for Node.js specifically, there isn't a single format for an HTTP request you could give the SDK. If it doesn't own the client, then it doesn't know the format of the headers, for example.

Right.

Then you would have to have separate marshalers for different HTTP clients and stuff, and that becomes very problematic, I think.

But aren't you going to have that problem anyway?
I mean, this is the double-edged sword with SDKs. Yeah, I don't know. We as a company may have said no, we're not allowed to use that particular construct for vulnerability purposes or whatever, and yet the SDK actually uses it, and that's tricky.

Yeah, I don't know that there's any way around that, but that is a good point: you're running into dependency problems, where the low-level SDKs are starting to drag in other dependent libraries.

That's true. That's true. So the way I've been writing this thing, the client wraps the transports, but you can use the transports directly, or you can choose to use just the codecs that the transports use. If you want to do all the work of marshaling yourself, you can use the HTTP codec and you get the same result.

Right. So maybe this is a bigger question then: as an SDK writer, am I required to be unopinionated and make it more complicated so that people can plug in their own bindings or whatever, or am I allowed to be extremely opinionated and say nope.
This is the way that I marshal stuff, and it's on me.

I do think it would be good for the official SDKs to provide an opinionated approach, as Scott has argued for, which I think is correct. But they should also provide the option of doing the transport yourself, because of exactly those things, for example company policy issues that you might not be able to affect in any way.

Yeah.

And that's also something that Scott talked about, so I think both are better. But for someone just writing a random CloudEvents-supporting SDK, I don't think it makes sense to require both; they can do only the opinionated stuff. There's just no reason, in my opinion, to make it harder to start making your own SDK, or your own CloudEvents-supporting client.

Yeah, I get it. So my goal has been kind of edging towards function as a framework, using the Go SDK library, so that it's very easy to become a producer and a consumer of CloudEvents, and you potentially don't even understand the transport underneath.

Yeah, I actually have a similar thing for Kafka that came into existence before CloudEvents in our company. It will be ported to CloudEvents and support multiple transports once the spec is 1.0, and I do think that's actually a great approach. But I also think it would be pretty weird, if it's considered an official SDK, not to have the more low-level codec endpoint that you were talking about.

Yeah, exactly. I have three layers: there's the client, there's the transport, and then there's the codec, and the codecs are transport-dependent. Why? Because I've made an abstraction between the codec and the transport, where the codec and the transport communicate with a transport message, which has the necessary required fields to produce the message on the wire. For HTTP, it's headers and body; for AMQP, it's headers, content type, and body; for Pub/Sub, it's attributes and data.
Right. So the codec handles both the event encoding and the transport marshaling.

Yeah. Okay, so you just put them into one, and that's why you need to have transport-specific codecs. That's fine, I guess. Now that there's the PR for the architecture section in the primer, I was actually thinking more along those lines: having a separate JSON codec and a transport marshaler, so you don't need to have fifteen different JSON codecs, just one, but fifteen different transport-level marshalers.

Right. Yeah. So I share marshalers: the JSON structured encoding is shared among all the transports, but each has an independent binary transport encoding.

Oh yeah, sure. For header values and things like that, and how you pull them out of the transport message. Sure, sure.

Yeah, that's what I was thinking about. That's the issue I tried to bring up on the architecture PR, where the binary encodings just don't conform to that PR's architecture. They're fairly noisy, especially when you start adding extensions. I find it hard... I've been thinking a little bit about the combination of canonical string encodings, the need for binary encodings, and the fact that the binary encoding would require a codec per transport. That makes a pretty hard combination to crack.

It's true, but for HTTP I think you have to; otherwise you have to rewrite every client library that consumes the POST.

Yeah, I get it. The Knative example, that was a really good example. There's no reason not to have that when that's the case.

All right, I also have to run.

Yeah, me too. But yeah, Jen, if you would like to play with the Golang SDK, please do let me know what you think.

I will try and have a look. Thank you.

Awesome. All right, thanks guys. See you next time. Bye.

Bye.