Hey everybody, good morning. Hello, Greg. Clemens, you there? Yes! So excited. Mr. Mitchell. Good morning. Good morning, Christian. Oh, hi. Hello, Dustin. Hello, hello. Hello, Heinz. I'm here. Hello, David. Hello. Hello, Eric. Hello. Sorry. Hello. Yep. Ginger. Yes, I'm here. Hello, Javier. Yes, here. All right. Lance. Yep, I'm here. All right. Manuel. Yes, I'm here. Next one. And Nick. Hi. Hello, Scott. Yeah. Thomas. I'm here. I'm going to completely butcher your last name. I think I'm getting it. Maybe. There we go. Okay. Ryan. Hello. Hello. Tommy. Hello. Let's see, there was one other. Timor. Timor. There you go. I got you. Hello. Who was that? Who was that? That mic's not too happy. Oh, there he is. Oh, it's Slinky. It's Slinky. Okay. Francisco. Oh, let's see. Serge, are you there? Uh-huh. All right. Klaus. Yes, I'm here. Wow, we got a really full house today. This is cool. All right. I know I'm missing somebody here. Hey, Doug, this is Vlad. Hey, Vlad. My window wasn't big enough; I saw you at the bottom there. All right, did I get everybody so far? This is Vinay here. Okay, Vinay. Okay, I thought someone else was trying to speak in there. Yeah, that was me, Christoph. Christoph. All right. Jim, are you there? Yes, I am. Wow. How's everybody doing? Thanks. How are you? Good. Good. Ian. Um, I can't remember your last name, Ian. If this is not your first time in, I apologize. Are you there, Ian? Looks like you're trying so hard. Do me a favor, Ian: if you can get to the meeting minutes, I'll paste the link into the chat. If you can just add your last name, and your company if you want to be associated with a company. I apologize if you've been here before; I just can't remember. Once you've done it you actually don't need to do that each time. I appreciate the thought, though. All right, three after. Why don't I go ahead and get started? Let's see, how many people do we have? 27. All right. Um, community time. Okay.
Anything from the community that people would like to bring up that is not on the agenda? All right, not hearing any. No, we have no updates from the SIG discussions. I really need to go back and poke Liz, the chair of the TOC, to find out what she wants to do next in terms of next steps. I'll let you guys know if anything changes. We do have an SDK call today right after this one. We do. I know we have at least one topic on the agenda, so please join that. All right. Timor. Oh, yeah. Quick, quick. Yeah, quick question. So you mentioned the conversation with the TOC chair, and, sorry, can you just remind us about the context on that one? That's about what we do with our working group: do we turn our working group into a SIG, or do we become a working group under SIG App Delivery? And the current thought is to make us a working group under SIG App Delivery. Okay. Ryan, your hand's up. Sorry, I don't know if we did community time yet; I got distracted for a minute there. As I mentioned in chat, I think last week, there's interest, at least within my company, in defining a binding for Amazon Kinesis for CloudEvents. And I know you pinged Tim, Doug; I haven't heard from him at all. I don't know if there's anybody from Amazon on the call, but I actually took a stab at it. I don't know where to put it, since the proprietary bindings live in separate repositories, typically owned by the organization that owns the protocol. So I'm happy to throw it in a gist or open up an issue, but I'm just looking for some guidance there. So just out of curiosity, is that more like the adapters that we have, or is that more like an actual transport mapping? It's pretty straightforward, but I felt like it might be useful to formally specify it. They use HTTP, don't they? They do. They have some weird things, like the actual structure of the JSON object encloses the data and everything needs to be base64 encoded. So, just formally specifying that.
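A rough sketch of the wrapping Ryan describes might look like the following. The record field names and envelope shape here are illustrative assumptions, not the actual Kinesis binding; pinning down the real mapping is exactly what the proposed spec would do.

```python
import base64
import json

def wrap_for_kinesis(cloudevent: dict) -> dict:
    """Illustrative only: enclose a structured-mode CloudEvent as a
    base64-encoded data blob inside a Kinesis-style record. The record
    shape is an assumption for this sketch, not a defined binding."""
    payload = json.dumps(cloudevent).encode("utf-8")
    return {
        # Kinesis-style records carry an opaque, base64-encoded data blob
        "Data": base64.b64encode(payload).decode("ascii"),
        # a partition key could map naturally to the CloudEvents
        # 'partitionkey' extension, similar to the Kafka binding
        "PartitionKey": cloudevent.get("partitionkey", cloudevent["id"]),
    }

def unwrap_from_kinesis(record: dict) -> dict:
    """Reverse direction: decode the blob back into the CloudEvent."""
    return json.loads(base64.b64decode(record["Data"]))
```

A round trip through the two functions should give back the original event unchanged.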
And also, I know in the Kafka binding spec we have a section that specifies that, if you're using the, I'm forgetting the extension name, but essentially the partition key extension, that could map to properties in Kinesis as well. So, I don't know, I just figured it might be useful to spec it and get people's thoughts. So tell you what: let me take the action item to send a note to Tim from AWS and put you on it, and that way try to help force the conversation. Would that be helpful? Sure. Okay. Thanks for doing that, by the way. Okay, anything else for community time? All right. Thank you, folks. Okay, workflow update. Timor? Yeah, we're still waiting for SIG App Delivery to make the decision since our presentation. We also have worked on comparing the specification with Argo Workflows, and we're currently working on the Tekton pipeline comparison from their examples; so far that's going pretty well. I do have a quick question, which can be taken offline to not take up too much of the meeting time, but regarding the GitHub repository: we wanted to add some API and SPI, so there'll be Java code, but currently we're in the workflow subdirectory and I can't really set up CI and GitHub hooks and stuff like that, kind of like what CloudEvents is doing for the SDK stuff. And I just would like to get some ideas and help, possibly getting a separate GitHub repository where we have all the rights to do that stuff, or whatever you guys would suggest. But again, this can be taken offline. Yeah, let's take it offline, because remember, in the past when we tried to get you guys your own repo, I got some resistance, and they said no, make them a real working group first, or a separate project first. And that's why you ended up down this path of becoming a sandbox project. Okay, so we'll wait for the decision then. Unfortunately, I think we kind of have to. Okay.
Okay, any questions about the workflow stuff? Okay, in that case, Clemens, would you like to give everybody a very short reminder of your proposal, with a pointer to it? Yes. So, the idea is to create a very simple HTTP-based, or primarily HTTP-based, schema registry, which is really a repository for any kind of serialization or validation schema; so whether you want to store JSON Schema or Avro schema or whatever, you can store it with a very simple mechanism. There is a notion of effectively three levels: there's a notion of schema groups, and schema groups are there to allow grouping of schemas, because that's usually a concern for applications; inside of a schema group live schemas; and each schema may have multiple versions. And the mechanism here is that you can go and create a new schema, and the seed version if you will, by just doing a PUT against the schemas collection with the schema name, and that will create the first version. You can do the same again if there are existing versions, and that creates a new version; or you can also go and target the schema's versions collection directly with a POST. And then the schema collection can optionally have smarts, and that's something that's up to the application and how it wants to do this, if it has a particular idea about a policy for the schemas. Like, for instance, if you are storing schemas for a particular schema group, you might want to have a policy that makes sure that all the Avro schemas must be backwards compatible. So there's that policy, which means if you're now adding a new schema to the version collection.
For that particular schema, if it is not backwards compatible, it gets rejected; that's the mechanism I've effectively described here as the 409 Conflict. So that's a super, super simple mechanism, and it is a starting point for discussion. We're not wedded to this particular proposal, but we believe that a schema registry that works with CloudEvents overall is useful and necessary, because we have a dataschema field, and we're talking about all kinds of schematized serialization formats, and having something that is supported broadly in the community is, I think, generally useful, not only for us on CloudEvents but more broadly for the messaging community and for the eventing community, and that's something where we, as the CloudEvents project or as the serverless working group, can do good work. So this is an initial proposal, but really it's an invitation for an interested group of people to come together and hash out what the right structure for that thing would be, and then arrive at, effectively, a REST API that we can then jointly implement and write some code for. Any questions or comments for Clemens? Okay. In that case, I think from a process perspective, since this was just opened yesterday, it makes sense to at least let it sit out there until next week's call. And I think the next step in the process might be to take a vote, or unanimous consent if no one objects, on saying yes or no to having this be a formal work stream. I assume you'd have this under CloudEvents, not serverless, right, Clemens? I haven't decided on the right place for that yet. We certainly are interested in making that happen.
And, you know, we're seeking a venue here, and we're coming here as the first candidate venue because we believe that this is the right place to do this; whether we do this under CloudEvents or whether we do this in the serverless working group is something that I'm not particular about. Yeah. But we believe that this is urgently needed as a convention, because there are now several vendor-specific interfaces, and there are non-free implementations that people are using that seem to be very popular. And I would like to get us to a point where we can have a free implementation that everybody can go and use. And for that we need an interface that everybody can go and support. And since there is none of that simplicity, it requires a simplicity that everybody can agree on; we're just proposing one here, so that's the point of it. And there's a question in the chat: have I shown this to the JSON Schema, OpenAPI, and AsyncAPI communities? The answer is no, I did not, because this here is the community that I wanted to show it to first. Okay. I feel like there are a couple of other process questions that we need to resolve, for example which GitHub repo the proposed spec would go into. But I guess we could answer that question when we get past the higher-order question, which is: should this broader group work on it at all, period? And I think we should probably look to have a vote. We should give it a week, unless a whole flurry of activity comes up and it gets into a big discussion; but if there is no major discussion, then we should push for some sort of resolution next week. And I will send out a note drawing people's attention to this. Yeah, that's fair. And if that ends up being a no vote, then we're going to take that elsewhere. Yep, that makes perfect sense. Okay, any other questions or comments for Clemens?
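To make the semantics Clemens describes concrete, here is a minimal in-memory sketch of the group/schema/version model with the 409-on-policy-violation behavior. All names, the method shapes, and the toy compatibility check are assumptions for illustration, not the proposal's actual API.

```python
from typing import Callable, Optional

class SchemaRegistry:
    """Toy model of the proposed three-level registry:
    groups contain schemas, schemas contain ordered versions."""

    def __init__(self, policy: Optional[Callable[[str, str], bool]] = None):
        # group name -> schema name -> list of version documents
        self._groups: dict = {}
        # optional compatibility policy (e.g. Avro backward compatibility)
        self._policy = policy

    def put_schema(self, group: str, name: str, document: str):
        """Models a PUT against the schemas collection: creates the schema
        (and its seed version) if new, otherwise appends a new version.
        Returns (http_status, version_number)."""
        versions = self._groups.setdefault(group, {}).setdefault(name, [])
        if self._policy and versions and not self._policy(versions[-1], document):
            # violates the group's policy -> 409 Conflict, version unchanged
            return (409, len(versions))
        versions.append(document)
        return (201 if len(versions) == 1 else 200, len(versions))

    def get_version(self, group: str, name: str, version: int) -> str:
        return self._groups[group][name][version - 1]

# A trivial stand-in "backward compatible" check for the sketch:
# the new document must keep every token of the previous one.
def toy_policy(old: str, new: str) -> bool:
    return set(old.split()) <= set(new.split())
```

For example, a registry created with `toy_policy` would accept `"id"` then `"id amount"`, but reject `"amount"` with a 409 because it drops a previously present field.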
Okay, cool. Thank you, Clemens. Okay, PR reviews. So, hopefully an easy one here. Actually, I guess I should ask, before we jump into PRs and issues: is there anything else people think I'm skipping that we need to talk about? Okay, cool. I mentioned this last week, but it was too soon to approve it. Basically, this is just adding the new specs to the checkers that we have. And I fixed a couple of bad hrefs and tried to modify the RFC 2119 wording to make sure all the uses are actually aligned with that. I believe Mike found one case where the href was maybe wrong, and I fixed that for him. Anybody have any questions about this, or need more time to review? Obviously, this isn't the final version of the spec, so PRs are always welcome later as well, but I just want to be able to make sure that the checkers work. Okay, any objection to approving them? All right. Thank you. All right, Slinky, would you like to talk about this one? So I've reworded the distributed tracing extension, as you explained to me last week, or the previous week, if you remember, during this call. So, yeah, people, tell me what you think about it. Okay. There was one question from Ian. The question for me is more: this change somehow restricts the behavior of this extension, am I right? Is he not here? Yeah. So at least the way I read the change, it's not just a rewording; it kind of semantically changes the meaning of the extension and makes it less useful as a mechanism for trace propagation, so that's the reason for my comment. And I'm also not quite sure what the motivation behind the change is. But maybe I'm not understanding the change. Maybe clarify a little bit the difference, and the use case for the changed extension. So, Francisco, could you elaborate on why you thought it needed to change? The reality is that I merely wrote up what you guys told me last week, so I think this is a discussion.
It's something that the original authors of the distributed tracing extension should weigh in on, because I really merely reworded what you told me last time. No, I think what Ian was asking was why this even popped up on your radar in the first place, and what was wrong with the original spec. Because, personally, it wasn't clear to me what this distributed tracing extension is about. Really, he is working with me on implementing this extension for a specific use case in another community project. And I found it was not really clear whether the way we were using this extension was the intended way. Like, for example: should middleware modify the trace context? Should the middleware modify the traceparent, or is that something that should be done by the source? I mean, these kinds of questions popped up while we were implementing this spec. And that's the reason why, in the first instance, I said to myself: this doesn't look very clear. So, Ian, your question here, especially talking about using it for non-HTTP protocols, I think is a good one. Because I think implicit in your question is that the tracing value may change from hop to hop. I'm sorry, the CE value will change, or might change, as it goes from hop to hop, the same way the HTTP header would, right? And I don't think that's the intent. So the way I read the current version of the spec is that it's meant to be a trace propagation mechanism, essentially the same as the HTTP trace context headers are, but for non-HTTP protocols and formats. So, yeah, so I guess are you saying that's not what you believe the purpose of the extension is? Yes. Jim, I'll get to you in a second; let me try to answer. The understanding was that this value is meant to represent the original value from the original sender, and that the real trace header actually may change as it goes through all the various hops.
The CE value is not necessarily going to stay in sync with the real header value, which is why, can I rephrase it as: the original header, the original value, not necessarily the true complete value. So that's why, when you start talking about, oh, it's supposed to be used for non-HTTP protocols: well, I don't think it is, because that implies that when you are on HTTP they're always in sync as they go from hop to hop, and I don't think that's true. At least that's my understanding of it. But to be honest, this one's always confused me. So, Jim, you want to jump in? Same thing, actually; my understanding exactly lines up with Doug's, in that this is meant to be the trace from the sender to the receiver, and in their context it's not meant to mutate as it travels through intermediaries, because, you know, it's not there to do the derivative tracing of all of the processes that the intermediaries use. Yeah. So, for instance, if I have a sensor that emits an event, and maybe I pass it through Clemens's Event Grid before it ends up in one of my processes, I wouldn't expect to understand all the component tracing within Event Grid. Yeah. I'm only interested in the trace between the source and my business processes. That was my understanding. Okay, so I guess my comment is more: what is the case for having two separate, different traces, one that includes some middleware and one that doesn't? You know, for example, if you do update the tracing extension, it's always possible to recover the spans that you're interested in by filtering out those intermediate spans at a later stage. And it seems like tracing systems in general are just not well suited for dealing with more than one trace context at a given time; in fact, I don't know of a system that does that. So I'm not sure what the motivation for that, or the use case, would be. I think the only place where it maybe got confusing.
And, guys, back me up on this: originally, when that extension was written, there was no reference to the HTTP transport aspect. But as that was being finalized, I think somebody connected the dots to the W3C thing, wherever that other spec is coming from, and sort of proposed it as a mapping for that particular transport, because there was somewhere to put it. Because there were similar questions coming up about, well, you know, this other thing exists, it's specified, so should I put it in there? And I think that's where some of this confusion came from. Okay, so, my hand's up first. Quick question, and I know almost nothing about the distributed tracing spec, so I'm asking this just because this question keeps coming up many times: does it make sense to even have this extension at all, I guess is my question. What would be lost if we just dropped it? Because I'm wondering whether it's causing more confusion than actually helping, because when it's there, people are going to say, hey, it's there, I should implement it, but we can't explain a use case that is actually useful. Because I've heard people say you don't need this, because the real tracing header has the original value embedded someplace, I think I heard that. So why do we need a separate field just for the original? Let me just put that question out there: do we actually need this at all? And I think, Clemens, your hand was up next after mine. So, in support of what Jim said in that discussion: those two fields exist because they are exclusive to the cloud event, which, with what we're doing here, we are effectively tunneling through different transports. The relationship to the HTTP trace is only that the HTTP trace gets seeded in the context established for the cloud event; the cloud event is being generated somewhere.
And from that application, you effectively create a trace context; that trace context is now effectively inherited by the HTTP pipeline, if we want to call it that, where you're starting an HTTP request, you run the HTTP request through a proxy, that goes through a reverse proxy, that goes to an application server. And those are already three elements of HTTP processing that probably should be traced for debugging, but that's only relevant to the HTTP processing. That should be able to link up to the original cause of this, which is, you know, the original request; that HTTP processing, however, really has no relationship to the end-to-end handling of that event. So you effectively have two different graphs that are both rooted in the same cause, and that is the original trace context. So there's an end-to-end relationship which is expressed in the cloud event, which is an immutable value once created, because that value basically ends up at the consumer, and then the consumer can again root its own further tracing in that, which is effectively based on what the original application that emitted that event has created; but that is independent of the tracing that happens at the respective transport path. So the only reason why we need this extension is that we want to be able to manifest the original trace context for propagation in the event, independent of what happens at the transport layer. Next. No, Francisco, did you really want to drop out of the queue? Can you hear me? Yeah, I can. No, first I want to echo what you said: if you want to keep the spec as it is now, so that the traceparent represents the original trace, it really makes sense to have this spec, because tracing systems in theory are designed so that from the child span ID I can go back up to the original span.
So that's the first point. And the second point, which Ian is also saying in the chat, is that we should use this distributed tracing extension to transport trace information in non-HTTP protocols. And my worry is that maybe this is not the right place where that should happen. Also, from an implementation point of view, we still have the structured encoding that needs to be decoded to read the traceparent and so on. So I'm not sure CloudEvents is really the right place to do that, also because OpenTelemetry, for example, already has some integration with Kafka, where they send things using Kafka headers carrying the traceparent, which is similar to HTTP. Okay, Scott, your hand's up next. Hey, so one thing that's weird about this particular extension is: if this event goes into something like Kafka, we could potentially have these replay events out of that queue, and now we have traces that branch out very widely. So it feels like what we're trying to do is trace an event from the source to wherever it's going to land, but the replay event feels like a different event that needs to be traced in a different way, right? So I wonder what people think about that problem. You mean the delivery needs to be traced separately? I think so. Yeah, because it's a different request that caused it to be resent than the original production of that event. Yes, and that is different; your retrieval is actually about that event, but it's not the same context. The retrieval originates in you starting the retrieval; that's the origin context for the retrieval.
And then what you're fetching ends up being one or many events, which then have their own trace context, which is the one of the producer; but your retrieval is an operation that, without knowing what you're going to get, you're starting out of your own retrieval context. But that's not how we're using this, because that would mean that even today the traces drop at each queue boundary. The whole point of this is to try to link an event that goes through multiple queues as a single line. Once you have the event in hand, then you can go and continue with that context; but the retrieval itself you can't anchor in that context, because you don't know what you're going to get when you start it. Like, when you just say queue receive, you have no idea what kind of message you're going to get, which means you need to root this in a new context that is for that particular retrieval operation. Well, okay, so in Knative that's not what we're doing, but maybe we're doing it wrong. Well, no, but really, how? Like, you say queue receive; well, we don't, because in Knative everything is push-based, and every box looks like it's push-based, so that single traceparent propagates throughout the entire push. So with Kafka, and with any sort of queuing system, it doesn't work, because you are actively soliciting messages, and you're often actively soliciting messages for good reason, because you're just done with the work that you've done, which means now the cause, the reason why you're fetching the work, is motivated by a different thread of execution; you're actually measuring the performance of your task dispatcher. So I'm going to move on to the next person in the queue, unless, Scott, you want to jump in there. Okay, thank you. Okay, Ian, your hand's up.
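The reading Doug, Jim, and Clemens describe, where the extension pins the original end-to-end context while the transport-level header is replaced hop by hop, can be sketched like this. The event and header handling below is illustrative, not any SDK's actual API; `traceparent` values are W3C-format placeholders.

```python
import copy

def forward_hop(event: dict, http_headers: dict, hop_traceparent: str):
    """Illustrative intermediary: forwards a CloudEvent. The transport
    trace header is replaced at every hop, while, under the reading
    discussed above, the event's 'traceparent' extension attribute is
    the original sender's context and is never mutated."""
    out_event = copy.deepcopy(event)       # extension attribute untouched
    out_headers = dict(http_headers)
    out_headers["traceparent"] = hop_traceparent  # per-hop transport tracing
    return out_event, out_headers
```

After any number of hops, the event-level value still identifies the original sender's context, while the header reflects only the most recent hop.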
Yeah, so I think we've already kind of talked about whether there's utility in having the original span ID. The reason why I think that having an extension for propagation actually is worthwhile is because, you know, while it is redundant when you have a protocol that orchestrates propagation, such as HTTP or maybe Kafka, many protocols don't. You know, if we don't have an extension where those protocols put the trace context, then they each kind of have to figure out their own way to do it, or we create, you know, a new extension per protocol or something. And the same thing is true for when events need to be persisted at some layer. So, you know, it feels useful to have a single way of doing this; it shouldn't be every protocol middleware coming up with their own way to persist trace context. So, yeah, for that reason, I think it is useful to have something like this spec. So my comment here is kind of: if this isn't the mechanism for trace propagation, then maybe we should have another extension for trace propagation. Just hearing this conversation, I mean, I am in danger of conflicting with one of my previous comments, but I also wonder, and maybe this is what Ian was saying, I apologize if I'm repeating that, about moving this into the transport specs. So you say explicitly in the HTTP transport spec, this is how we propagate tracing, and you do that in Kafka and AMQP and all the others. And I'm not sure if that's feasible, but then it becomes a very transport-specific thing. I'm still arguing with myself about the whole, you know, is it end-to-end and I don't care about all the intermediaries in the middle. But that would seem to be a bit of an out, you know, that you could remove it as an extension and just make it part of the transport specs.
If it's part of the transport specs, then, if we think about that, we should stay away from this, because then we have a very obvious overlap with the W3C work. Yeah, and that's my concern. Yeah, because I think that's the reason this extension got added in the first place: to try and be agnostic to the transport specs. Yeah. Correct. And then I argued myself into a corner. Yeah. And because it really is, we're doing this because there is something that the W3C spec doesn't do, and that is the end-to-end relationship. Yes, exactly. So the logic of even having it is that we want to stay out of the things that the W3C specs do. I feel like we need to call time on this one so we don't ramble too much, but I'm not quite sure what the next step is. Is it just to encourage people to keep commenting on the issue? Or did anybody feel like we were circling around an answer? I'm not sure we were, but I may not have understood the flow. Yeah, I think. I'm not sure we were. Correct. Yeah. So I'm not hearing any huge things; let's take it back to the issue and see if we can make some more progress there. The only thing I would ask is that we don't wait until a day or two before next week's call to try to get a little more conversation going, and in particular, if you spoke up on this call today, please comment on the issue itself to try to get some conversation going there. I think that'd be appreciated. Okay. All right, Slinky, another one of yours. Do you want to talk about this one today, or would you rather wait? Yeah, let's wait. I want to refactor all of them, the JSON schema one too, also because I looked at the comments but I didn't reply. Okay, fair enough. That's good. Okay, in that case, we'll hold off. Okay, next one. So this is not a PR, it's an issue, but Grant was proposing that we have a PHP SDK.
And before I just went off and created the repo, because it sounded like a great idea to me, I figured, process-wise, we need to make sure the group doesn't see any potential problem with doing so. Anybody have any questions or concerns with this? It seemed like a no-brainer. Okay, cool. I will make it so. Thank you, Grant. Quick, I have a question on the previous one. Which one, this one? PHP. Yeah. It could just be me, but do we see, you know, when you talk about cloud and event sources and those kinds of things, that PHP would be or is more prevalent than Rust, for example? Because Rust is being mentioned in this issue. I would have thought that languages that are more server-side, like, you know, Rust and Go and Python and others, are more prevalent than PHP. I'm just curious what everyone thinks. Anybody have a comment on that? So, at commercetools, we're coming from the commerce side of things, and PHP was really strong there back since the 90s, and that keeps on, and the PHP community is also migrating to the more cloud-native, modern things. So yeah, it's not the most popular language, but there is stuff going on there, I think, and I'm the PHP developer. That's cool. Thank you. That's fair. Okay. I think someone else was trying to speak when Vinay was talking; was someone else trying to say something? Yes, there was a bit of delay, so I started talking over him. Sorry. So what I wanted to say is that we now have a formal conformance process with Cucumber; we already integrated it into the Go SDK. It's a combined announcement and suggestion for the PHP SDK. So, I'm not sure who will be working on the PHP SDK, but if you want, we can also help you integrate conformance testing, so that you stick to the same set of tests we have for Go and Java, for example. There you go. I think Grant is on the call; at least I thought I saw him earlier. Oh, maybe he dropped.
Okay, so maybe he'll see the recording. But yeah, actually, if you want, you can put a comment in here to draw attention to it. That'd be great. Thank you. Thank you. Ian, is your hand up old, just left over, or is that new? Okay. Okay. All right. Cool. Moving forward then. Okay. Both of these were from Mike, and he is not on the call. I'm sorry, no, the first one is not from Mike; it's about Mike. Thomas, you are on the call, right? Would you like to talk to this one, Thomas? Thank you for giving me the time to talk about this one. So actually, for the next one, considering GraphQL, as you sent out the notification that we should have a look at it: I first tried to understand what the discovery and subscription APIs are about. It felt like hours; it might have been one or two to read through discovery and subscriptions. I literally put both side by side, and I figured out that there is slightly different terminology used, especially around producer and source. And, yeah, I gave some examples in the text. And then I also tried to understand the discovery spec. Mike is not on the call, I heard. Yeah, I don't think he is; I think he has a conflict, but he might be able to comment later during the week. So I tried to figure out what the model behind it is, or the relationship between the different entities, and I tried to make a little drawing below: the relationship between event, provider, type, and producer. And by the way, "type" is a very misleading or confusing entity name for me. And the type has an attribute "type", which I think should rather have been named "name". It was really not clear to me what the relationships in between those were; that was just my guess here. So maybe Mike needs to comment on that one. And also, the source was mentioned in the terminology, so the context in which the occurrence happened; it's somehow not really linked to the information model. Then the subscription, which I think is interrelated with the discovery there.
Thank you, Clemens, for providing the YAML file, which actually helps to understand the API. I also tried to make a little model there. We really just have a subscription object and the settings, and somewhere on the outside the filters — that was at least my understanding. And that's why I brought it up. So I would love to also file a PR with suggestions, but if I don't have the real understanding of what the intent was, it's really difficult to find the right direction for where it should go so that it's — at least for me — better understandable. Maybe it's just me. I guess these two documents were created kind of separately. They were, since there are two authors. But really no surprise. Nevertheless, I think it would be great to align the terminology a bit and give some more background: how is a user supposed to use those APIs? That would actually help a lot. So, when did you join the group? Not so long ago, actually — a couple weeks ago. Okay, so that explains it. We had formed two subgroups, which worked on discovery and on subscriptions. And there were some requirements thrown over the fence. But those two documents basically were put into the repo as working drafts for people to look at and do what you're doing right now: first of all, looking into the individual specs and seeing whether they make sense as they are, and then also working on reconciling them. So, thank you very much for a fresh look. I think this is ideal, because you just come at this uninitiated. And so I think what you're doing here is great work, because ultimately the documents need to stand for themselves and need to make sense. And then, in conjunction — the idea behind this is that you have a common way to subscribe to events.
And to be able to subscribe to events, you need a subscription specification and a way to find the subscription managers, as we call them. You obviously also need to have a way to discover those subscription managers, and that is what the discovery spec is for. So the discovery spec basically gives you a catalog of events and the subscription managers where you can go and find those. Effectively, the endpoints are being described in discovery. And then subscription gives you the mechanics of how you can set up a relationship between your endpoints and the subscription manager, so the subscription manager can then facilitate giving you the events. So that's the relationship between the two. We have done some work on making those specs, and then we checked them in, and I think we're taking a breather right now before we kind of resume working on them. But also the idea was to have the specs there and then for people to find cycles to start proving those out, because those specifications certainly need implementations for them to make sense. So you're coming exactly at the right time, and with exactly the right level of context — which is none, beyond having read the documents. So if you want to go and start filing PRs to help reconcile them, I think that would be super welcome. Yeah, I think exactly this relationship you just explained — how they interact with each other — that's the missing piece, and that's what I probably cannot really provide, maybe on a detailed level. Yeah, but you should be fine. You should be bold and go and propose. Okay. If you think there's a link missing, make it. The greatest thing I learned working where I'm working is nothing happens until you do it. So just go ahead. That's absolutely true.
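To make the discovery/subscription relationship just described more concrete, here is a hypothetical discovery catalog entry. The field names are illustrative, not necessarily those of the draft specs; the point is only the shape of the relationship — the catalog describes the events a service produces and tells you where its subscription manager lives:

```yaml
# Hypothetical discovery entry -- field names are illustrative only.
services:
  - name: orders-service
    events:
      - type: com.example.order.created
        specversion: "1.0"
    # endpoint of the subscription manager you would use to subscribe:
    subscriptionurl: https://example.com/orders/subscriptions
```

A consumer would read this catalog, pick an event type, and then use the subscription API against the advertised endpoint to set up delivery.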
And if you don't have an idea, please file an issue anyway, because at least that will force the discussion, and maybe someone else will have an idea of how to solve it. But Klaus, your hand's up. Yeah, so I just want to say I'm really happy that someone has started to draw those diagrams. I'm missing this thinking about the model behind all of this all the time. I actually also started drawing similar diagrams on my own — and actually the result was different. That's not good. That's great. Yes, so as we are in the same time zone, don't hesitate to contact me; we can have discussions. Maybe we can have something comparable to the primer that would also explain the concepts behind subscription and discovery a bit more. I think it makes a lot of sense. And this is not just for you, Thomas, but everybody: I would actually love it if people just started opening up — not even PRs, that's the dream — but even just a whole boatload of issues about things that just don't make sense, just to force some of these discussions. I'm trying to think back to when CloudEvents first got started: we had a whole bunch of issues, and some of them made no sense whatsoever — they were just, you know, a couple of words, short little sentences — but at least it helped force a review and a discussion about some of these topics in the spec. And I know Clemens said we're taking a little bit of a breather right now, but I don't think it's an intentional breather. I think maybe it's because of everything going on in the world right now; people are just sort of relaxing or have other things on their mind. But I think we need something to help force some of these discussions to happen, especially in an asynchronous way, and sometimes just random silly issues are the answer to get the ball rolling again. So please don't hesitate, open them up — we can always close them. What's the harm? Okay. Thomas, any other questions or comments about this issue? Okay.
So, Thomas, I'm assuming that you will eventually close this issue once it has opened up more precise issues or PRs — is that correct? Yeah. I actually wanted to also hear from Mike, because he created the bigger discovery document, so some comment from him would be appreciated. Okay, well, I won't close this one. I just want to make sure — because this issue itself is so broad, right, I don't think it's going to result in just a single PR; it's expected to be a whole bunch of PRs. So at some point, though, we'll close it. Okay, cool. All right, in that case — Mike's not on the call, but GraphQL: I think there were a couple of comments on this one within the last couple of hours. Christoph, would you like to — and it was mine again. Yeah, do you two want to talk to your thoughts on this one? We have a couple minutes left — very short, though. I agree it should be just an alternative to REST APIs, because REST APIs are so common and so used in the whole community. But I added some thoughts that it might not be so easy to use REST, especially for discovery, when you see the different resources which we need to call, and the matching and the search and so on. So it's something to read through — just some thoughts put together on what the alternatives would be and the advantages of GraphQL. And in the end, it's trade-offs. I haven't had a chance to read the comment yet, but it looks like a good one. And Christoph, do you want to chime in, or save it for the issue? So, basically what I said on the call a couple of weeks ago: we're continuing to see adoption of it, and it's working pretty well for us. One thought I added is that there's a pattern — I don't have first-hand experience with it — where you have microservices, or just services, that only offer REST, and then you have a GraphQL gateway or API server, whatever you want to call it, in front.
That takes the GraphQL requests and forwards them, resolving the GraphQL fields by making REST (or whatever) calls to the individual services. That would be one way where a service — like Clemens said — that is too constrained to offer GraphQL itself could still offer it, by having that in front of it. So for clients it doesn't really make a difference, if latency is not a big concern, which I don't think it is here. Okay. Yeah, maybe I'll just chime in there; otherwise we'll just keep talking about it in the issue itself. Okay. In that case, are there other topics people would like to bring up? Okay, before we adjourn — actually, even before I do the roll call again: we do have an SDK call right after this one. And actually, after the SDK call, Clemens, Scott and I are going to discuss KubeCon. The reason it's initially just us three is because we were the three that had volunteered to run one of the sessions that we had at KubeCon. So we were going to discuss what to do about that now that it's going virtual. Anybody else is free to join. Unfortunately, in order to do that, you'd have to hang on through the SDK call, because we don't know when the brainstorming session will happen. But I wanted to give anybody who wanted to an opportunity to join. We will be talking about what to do about KubeCon after the SDK call. Okay. And with that, let me quickly do the final roll call, and then you guys can go. Norman, are you there? No, I don't see him on the call. Fran, are you there? Yes, I'm here. Cool. Okay. Grant, I think, dropped. Oh, I've got Norman twice. Oleg, are you there? I'm here. All right. Scott, are you there? Thank you. I'm sorry. I'm here. Don't worry. Okay. And did I miss anybody? Okay. For the last two folks, if you want to be associated with a company, just do me a favor and put your name or your company name here next to your names, and then I'll add that into the roster. Okay. Like, can we edit this, or —?
Yeah, you can edit the doc. Just go ahead and add it in there. And let me paste the link into the Zoom chat just in case. Thank you. There you go. Yep. Just feel free to add it. All right. Anything else before we adjourn? All right. Cool. Thank you, everybody. The SDK call will start in just a couple of minutes. Have a good one. A lot of people sticking on. What else are we going to do? Well, I'd like to eat lunch, personally. My day started very early today, so I had a really early breakfast and I'm getting grumpy. Where are you located? I'm sorry, say it again? Where are you located? North Carolina. We had a really good storm this morning. It was really cool. I love a storm. The one downside was I had to do some demos this morning for a Webex call, and I had this fear that the power was going to go out because of the storm, but luckily everything worked out. So it was a good day so far. Yeah, but that was because one of the controllers for our window blinds had a massive short. The thought of your window blinds causing your power to go out is just hilarious. Yes, it's sophisticated until it's not. A little too dependent on technology. All right. Do we remember where this first issue came up? Oh yeah. I think it all started with "add support for cloud events in Spring Cloud Function." There should be a link associated in this pull request. And then we continued with the pull request, and more conversation happened in the request. The first one — because it was by — his name started with a B, wasn't it? B. Yeah, bsideup. That's the one I was looking for. Oh yeah. Okay, there you go. Sorry. So where do you want me to scroll to in here, or do you just want to start talking? I'll start talking. This is Oleg. Okay. Am I ready? Yeah, let's go ahead and start. Yep. Go for it. Cool. So basically — first of all, about me: I represent, I guess, the Spring team. Lead for the Spring Cloud Function and Spring Cloud Stream projects. Enough about me.
So basically, on the Spring Cloud Function side, we got a request from one of the Google dev advocates, James Ward, to provide support for cloud events within Spring Cloud Function. So naturally, we had several internal discussions as well as external debates as to what that even means. The cloud event is a very clear, JSON-friendly specification, and as such — at least in Spring — it could be easily dealt with using existing Spring abstractions even right now; perhaps, you know, we'd create a few more if needed. On the other hand, this discussion naturally led us to the fact that there is a Java SDK, and certain abstractions defined by the SDK may be of value to us, at least for the purposes of avoiding defining our own types representing similar things, like a cloud event. Right. So in other words, it really became about the difference between supporting cloud events, or supporting cloud events through the Java SDK. But for the sake of this discussion, we're going to assume that support for cloud events is through the Java SDK. So we started looking at some artifacts and approaches, and that eventually led us to this discussion. So after a lot of internal discussions, yesterday I submitted a PR that simplifies and streamlines the CloudEvent interface, by effectively bringing it back to a state similar to what it was in version one — to ensure it stays clean and clear of any assumptions about, you know, implementation, transmission, storage, binding, and all those things. Right. And with that, it greatly simplifies — which Sergey will talk about later on — or actually provides a path for gradual migration from version one to version two. And there seems to be, yes, resistance in accepting it. And I believe it is perhaps due to some misunderstandings on both sides. So thank you, Doug, for facilitating this discussion so quickly; we're hoping to come to some type of resolution.
So, with your permission, I have a few slides that allow me to summarize this entire PR, because as you can see, this PR has just an enormous amount of comments. Go for it. I can share the screen. Basically, what's in question — and this is probably the most important slide — the question is, I want to make sure that we clearly keep this part of the discussion within scope, and the scope is the structure of the CloudEvent interface, where we basically moved several components out of it and removed one operation altogether. So let's see what those are. Basically, the motivation is, you know, some of the best principles of software design — single responsibility, interface segregation — so: do one thing, but do it well, and don't do something I don't need to. So with that sort of motto, we removed, for example, getAttributes in favor of the individual getters defined by the spec. Why? Because it forces one to implement another interface which is not really mentioned in the spec, while the individual attributes are. And, you know, with JSR-305 we can also distinguish, from just the definition of the interface, very easily which attributes are required and which are optional. So it makes the interface very clear and very concise. And if I'm the one who wants to implement the interface, I don't have to worry about learning how to implement the CloudEvent interface plus how to implement, for example, an Attributes interface, and then trying to correlate it to the specification.
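The lean interface shape being described — one getter per context attribute, no separate Attributes interface — might be sketched like this. All names here are illustrative, not the actual PR code; in the PR, required versus optional is signalled with JSR-305 `@Nonnull`/`@Nullable` annotations, which are omitted to keep the sketch dependency-free:

```java
import java.net.URI;
import java.time.OffsetDateTime;

// Sketch of a lean CloudEvent interface: individual getters that map
// one-to-one to the spec's context attributes. Illustrative names only.
interface LeanCloudEvent {
    // required context attributes
    String getId();
    URI getSource();
    String getType();
    String getSpecVersion();
    // optional context attributes -- may return null
    String getDataContentType();
    String getSubject();
    OffsetDateTime getTime();
    // data is optional too
    byte[] getData();
}

// A minimal immutable implementation, to show that implementing the
// interface requires no knowledge of any SDK internals.
final class SimpleLeanEvent implements LeanCloudEvent {
    private final String id, type;
    private final URI source;
    SimpleLeanEvent(String id, URI source, String type) {
        this.id = id;
        this.source = source;
        this.type = type;
    }
    public String getId() { return id; }
    public URI getSource() { return source; }
    public String getType() { return type; }
    public String getSpecVersion() { return "1.0"; }
    public String getDataContentType() { return null; }
    public String getSubject() { return null; }
    public OffsetDateTime getTime() { return null; }
    public byte[] getData() { return null; }
}
```

The argument in the transcript is that this is the entire surface an implementer has to understand — everything else (conversion, building, versioning) lives elsewhere.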
The asBinaryMessage and asStructuredMessage methods were moved out of the interface into a utility class for now — whether the place they were moved to is permanent or temporary is a matter of separate discussion, because we actually need to assign proper responsibility: whether those are builders, as we discussed, or adapters or converters and so on and so forth. I classify these methods as, for example, adapters, because they adapt a cloud event for various binding, transmission, and storage purposes. And, you know, you can read the rest. But the main point — and we discussed this at length — is that these operations come from the optional part of the spec. In other words, the cloud event can exist without ever being converted to a binary or any other message, or vice versa. Right. So if, in my world, that's the life cycle of the cloud event, why should I be forced, for example, to implement something I have no intention of using? Or, in reverse, why should I have an implementation that provides something I have no intention of using? So again, this is all with the desire to keep the interface as lean and as clean as possible. Then the toV03/toV1 conversions: those, again, I look at as converters, and clearly those operations originated due to specification changes — they're fully optional by default. In other words, those are almost accidental operations, because if there was no change, there would be nothing to govern that type of conversion in the first place. So again, in the new world I may choose to not support version 0.3 — that's my right — so why should I be forced to even question why I have that method on my interface?
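The "move conversion out to a utility" idea can be sketched as follows. `MinimalEvent` and `EventFormats` are hypothetical stand-ins, not SDK types; the `ce-` header names follow the CloudEvents HTTP binary-mode convention:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a lean cloud event type.
record MinimalEvent(String id, String source, String type, String specVersion) {}

// Sketch: conversion lives in a utility, so the event interface itself
// carries no knowledge of bindings, transports, or message formats.
final class EventFormats {
    private EventFormats() {}

    // Adapt an event's context attributes to binary-mode transport headers.
    static Map<String, String> toBinaryHeaders(MinimalEvent event) {
        Map<String, String> headers = new HashMap<>();
        headers.put("ce-id", event.id());
        headers.put("ce-source", event.source());
        headers.put("ce-type", event.type());
        headers.put("ce-specversion", event.specVersion());
        return headers;
    }
}
```

Any implementation of the event type can be passed to the same utility, which is the reuse argument made above: the shared functionality lives once, outside the interface.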
Right — I may have a utility class that provides me with that method, and that's fine; I may choose to use it or not. But on the interface, we just don't believe that's the appropriate place to have those operations. And two more methods: build version one and build version 0.3. Again, the same kind of arguments. Those are factories — builders, whatever — I mean, they're not really builders, because in my world "builder" has a very clear, distinct definition. So those are factories, but again, the same point. It's even more interesting because, well, in the Java world there are instances — quite a few, actually — where we have a factory method on a class that effectively creates itself, like getInstance; that's a very good example for singletons, right? However, having a similar operation on the interface raises the question: which implementation is it going to return, this one or that one? So, in other words, when a class returns from getInstance, it is clear which implementation is going to be returned; five implementations of the same interface — five classes — will each have the getInstance method, and we can call any one of them and get a particular implementation; there's no ambiguity there. However, when it comes to having something like getInstance, or build version one / build version 0.3, on the interface — well, I can have multiple implementations of version one or version 0.3. It's just the wrong place to have it. So, okay, I'm going to turn it over to you now for the remaining three slides. They'll be quick. Could you start presenting? I mean, hit the play button so that it's a little bit bigger. Oh — would you like me to just start presenting? If you can. In the top bar, there's a button that says play, in the very top area. I stopped sharing. Oh, you stopped sharing. Okay, I'll just share my screen. Yeah, sorry for the inconvenience. Okay, can you see my screen? Yeah. Yes, we can see. Okay.
Sorry, I'm just quickly changing it. Okay. So, when we started looking at the CloudEvents SDK — and here, when I say "we", I mean both the Spring set of projects and also Liiklus, which is an event gateway for Kafka, Pulsar and others where the cloud event is a first-class citizen now, which I'm really happy about — when we started comparing version one and version two, we realized that maybe there was too much of a difference between the two. There are some good ones, but also some questionable ones. So we decided to take a step back and look at what we had in version 1.2, the current latest release. The interface was very, very small — just four methods. And it was good enough. There were some issues, but at least from the implementation perspective it was good enough, and we were able to provide our own implementation of the interface, for performance reasons but also for making it easier to adapt our internal representation onto a cloud event. It is very well aligned with the specification: you'll find data, extensions. There is some attributes object, which is not something you'll find in the specification — at least, when you look at it, you would usually expect the attributes next to the other fields like data and extensions. But anyway, that's version one. And there are some issues with allocations; for example, in Java we have to pay the price of allocations where we deal with the Optional class — it allocates to return a wrapper, or attributes. But it was lean, and it was great because it was so easy to implement. Then, after the community started working on version 2, the interface changed. And now we have a lot more. Most of it was already highlighted, but I just want to reiterate. We still have the attributes object. We no longer deal with Optionals — that's the JSR-305 change he was talking about.
But now we also have toV03, toV1, asBinaryMessage, asStructuredMessage — some static methods that are new concepts to implement. And you, as an implementer, have to know about them. And they bring some issues: asStructuredMessage, for example, requires an allocation because it has to capture the parameter, and some other things. And lastly, it's not something you will find in the specification — binary message and structured message are formats, not a part of the cloud event itself; it's just how you transform it into a wire representation. Okay, I see the chat now, so if you want to share something — yep, I'll read the chat. So it's not lean anymore, and it became much harder to implement. What Oleg proposed in his pull request is to take a look at version one of the SDK, inline the attributes, but keep the rest basically the same. So you will find some similarities with the version one interface. And the benefits of this approach are that it maps one-to-one to the specification, it's easier to implement, and it does not contain any implementation details of the SDK. And while the messages abstraction is a great one — and it really helps to implement the bindings — I don't think that the interface that represents a cloud event, the one that we potentially may use in frameworks, not only in user code but in frameworks, should expose any implementation detail of how messages work and how conversion works. And I think that's really important, and allocation-friendly, because you don't need to allocate anything to represent the cloud event except the cloud event itself. And when we compare version one and version two — where by version two I mean Oleg's pull request — then it becomes clear that after the change, version one and version two will differ mostly by the inlining of the attributes, and that we got rid of Optional, which is something we already did in master anyway. I just wanted to explain the difference.
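The allocation point about Optional can be shown in a small sketch. Both classes here are illustrative stand-ins, not SDK code; the contrast is between returning an `Optional` wrapper (allocated per call for non-null values) and a plain nullable getter, which is the JSR-305-annotation style the PR moves toward:

```java
import java.util.Optional;

// Optional-returning style: each call to getSubject() allocates a
// wrapper object when the value is non-null (Optional.empty() is cached).
class WithOptional {
    private final String subject;
    WithOptional(String subject) { this.subject = subject; }
    Optional<String> getSubject() {
        return Optional.ofNullable(subject);
    }
}

// Nullable-getter style: the field is returned directly, no wrapper.
// In the SDK this would be documented with a @Nullable annotation.
class WithNullable {
    private final String subject;  // may be null
    WithNullable(String subject) { this.subject = subject; }
    String getSubject() { return subject; }
}
```

For a type that frameworks may create and read in hot paths, the second style avoids a per-access allocation at the cost of null-handling discipline.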
So, long story short: we think that during the version two effort we were focusing on the messages abstraction too much, and we brought this abstraction into the main interface. In fact, it's not necessary — we can keep the old lean interface and do the conversion next to the interface. The main interface of the whole SDK, the one that should be used by other integrators, should be as small as possible and as close to the specification as possible. Okay, go ahead. No, I'm saying I think that concludes the presentation, right, Sergey? Yes, that's the presentation — because what I really wanted to accomplish here is that if we didn't do this, then just passing through the pull request itself with all the comments would take more than an hour, just dealing with the individual comments. So I just wanted to say, okay, let's forget the PR for a second; here's what was done, and here's where we're heading, and why. And yeah, we can still get back to the PR offline for more comments, but we'd like to gauge what the rest of the people on the call think about this change, and about the view of the interface that you can see right now on the right-hand side, which is what this PR is all about. This is the gist of the PR; the rest is just to make sure that this works and the tests are passing — no breaking changes, everything compiles, everything builds — but this is the CloudEvent interface which identifies a cloud event and, as Sergey pointed out, clearly matches the core portion of the spec. Okay. And just to let you guys know, one of the reasons I specifically pushed to have this on today's call is because — I could be wrong here, but I got the impression from reading some of the comments in the PR — that there may be a higher-order question here for the group.
In particular, Francesco, you said something about the direction they're headed being an anti-pattern, that kind of stuff. And the reason I thought this was important to bring up on this call, as opposed to just leaving it as a Java SDK issue, is because in the past we've talked about trying to have consistency, as best we can anyway, across the SDKs. And so I wanted to see if there was a consistent thought process here across the SDK authors to say whether, yes, this is an anti-pattern and Java should not do it, or no, for Java it makes sense for them to do it, or something along those lines. So — but backing off, that was related to the comment about whether application code should ever use the CloudEvent type or not. Right. Oh, yeah, yes. Yeah, that's the one that jumped out at me as a higher-order question. I'm neutral to the Java particularities, but looking at the broader scope, I just pasted the C# class of CloudEvent into the chat. Actually, I cannot see where you pasted it. It worked in the chat — I'm in a chat. Yeah, we cannot see it in the chat. Which chat — the Zoom chat? Yeah. Oh, it's up a little. There you go. Oh, did you see it? Yeah. Oh, my. I'm sorry. Yeah. So that effectively tracks your proposal, with the exception that, for the model: first of all, I have a method that can give you all the attributes as a dictionary, so you get raw access to everything without knowing what's there. And then there is a particular extension pattern that I'm using in the C# SDK, where you effectively have strongly typed extensions. So there's this notion of an extension, which I'm leaving fairly undefined here, and then if you go one level back, there are these extensions which implement ICloudEventExtension.
And they effectively allow you to plug a strongly typed extension interface into the cloud event. There are some examples in the readme where, as you're parsing an event, you basically give the parser the extensions that you want your application to understand. And then, if those attributes are present, it will go and slot them into the cloud event, so that you can have a strongly typed interface for them. And the way you get at those is — like, if you had the distributed tracing extension, you would walk up to the cloud event and you would say cloudEvent dot Extension of DistributedTracingExtension, and it would give you the strongly typed interface for it. But otherwise, if you want to get an extension attribute, you simply tap into the attributes collection, and that's how you get them. But everything else is a strongly typed property, as they are in C#. So the general shape, with your getExtensions, is really my getAttributes. And your property getters are my property getters and setters. So that's symmetric to what I have; I only have this extra layer of, you know, a strongly typed extension model that I put into that SDK, and that is also supported with parsing. Scott, your hand's up. I've considered making the CloudEvent interface be getData, getAttributes and getExtensions, and then have basically everything that's an attribute be the Attributes interface, and CloudEvent extends Attributes and returns itself if you access getAttributes. That's how it was before, right? It's not exactly how it was before, but that's the workaround I applied in my own cloud event implementation based on version one of the SDK. So getAttributes was just returning "this". Okay, I'm sorry. Yes.
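The layout Scott describes — the context attributes living on their own interface, with CloudEvent extending it and returning itself from getAttributes — might look like this sketch. All names are illustrative, not SDK code:

```java
// The attributes view: a trimmed-down set of getters for illustration.
interface EventAttributes {
    String getId();
    String getType();
    String getSpecVersion();
}

// CloudEvent extends the attributes view; the "workaround" mentioned
// above is that the event simply returns itself as its attributes.
interface AttrCloudEvent extends EventAttributes {
    byte[] getData();
    default EventAttributes getAttributes() {
        return this;  // the event is its own attributes view
    }
}

// Minimal implementation to show the shape end to end.
final class AttrEvent implements AttrCloudEvent {
    private final String id, type;
    AttrEvent(String id, String type) { this.id = id; this.type = type; }
    public String getId() { return id; }
    public String getType() { return type; }
    public String getSpecVersion() { return "1.0"; }
    public byte[] getData() { return new byte[0]; }
}
```

This keeps a separate Attributes type available for code that only wants the context attributes, without forcing implementers to build two objects.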
I think one thing that — so in Go, we've done a lot of learning around the different layers of integration. And we found that the CloudEvent interface — this, what we're showing here — is not enough. It's really only useful for the end-consumer functions. The UX around how to build up these events, and how middleware deals with events and transforms and shuffles them between protocols, is a much more telling SDK feature. So I wonder if we could — I mean, I really don't see a problem with this simplified interface if you don't want to have attributes. I think it was kind of nice to be able to iterate over the attributes, but it's okay. The mechanics of how this thing turns from an active HTTP request into an object that's accessible — and possibly being able to leave it in its wire-encoded form so you can shuffle it between HTTP and Kafka — is the bigger test of "does this SDK work". And I agree with you. We are not questioning that. What we're trying to say is: yes, this is a responsibility that has to be handled, but there are utilities, there are builders, there are converters. I mean, even in Spring we deal with those types of issues on kind of a daily basis; we have frameworks and extensions around those issues, and certain abstractions that have been around for a decade now. So again, it's not a question of whether we agree or disagree with what you're saying — we definitely agree. It's whether, for example, the builder operations should be exposed through this CloudEvent interface, or whether it is a sort of utility functionality that — as one of the slides pointed out — is in fact shared, so many people can reuse that functionality with their own implementation of CloudEvent, right?
So that's another benefit: all events can dip into this reusable functionality provided by the SDK, and that's the value I can get behind, for example. Yeah, I agree with that too — in Go we do the same thing, where there's a getter interface and there's a writer interface. So totally, I think that sounds reasonable, and now we can start picking these things apart, where we can have message interfaces and cloud event writer interfaces and things like that. And if you really just want to integrate on the CloudEvent interface, with this thing that you're showing, that seems like a reasonable approach. Here's an example from my biased view within Spring Cloud Function — again, forget Spring Cloud Function, just look at it as pure functions: I have Function of CloudEvent to CloudEvent, and another Function of CloudEvent to CloudEvent. The reason I have two is because I broke down my complexity and implemented it as two different isolated pieces of functionality; now I want to reassemble it. How do I do that? I do function A and then function B, so I compose two functions into one: one outputs a cloud event and the other inputs a cloud event. Regardless, it still has to pass one internally — even though it's composed, these types have to be passed, so it's simple by-reference passing — but this is exactly what I want to deal with, and nothing else, right? And then if I want to start sending it off to RabbitMQ or whatever, with bindings, without bindings, regardless of how I'm going to do it, I have layers — like in Spring Cloud Stream we have binders that will do that. So again, I'm not questioning whether those things should be done. It's really more about who should do them: should it be a jack of all trades, or should it be, you know, everyone has their own responsibility and I just delegate to that one guy who does one thing but does it very well? Francesco, your hand's up.
Oh wait, did he leave? His hand was up there. So what's interesting is I'm hearing... wait, no. Linky, you're still there?

No, I don't want to talk.

Okay, sorry. So, since I'm the one that had the biggest pushback on this PR, I want to make a couple of things clear. First of all, I think there is a clash in what the goal of the SDK should be, and this clearly underlies the discussion we had inside the document that I created. Can you open the document, please?

Wait a second. What's the clash? Let's finish one discussion first. I want to make sure that...

So, my opinion is that the goal of the SDK is to provide a core module, which contains the API and the basic implementations, and which can eventually be split into different modules. Then there is a range of sub-modules that the SDK provides to integrate with existing tooling out there, so the user can just download the SDK and use it with Kafka, say.

Okay. Well, this really comes down to this interface, because I fully agree with you.

First, this PR is really not in a working state. I frankly admit that I made some mistakes just to make the code compile; the build method, for one, should not be there, and I completely agree with you about that. But the conversions to messages, I think, should remain, for a couple of reasons. First: in sdk-go, we externalized the conversion between event and message, and the result is a huge elephant called the versions sub-module, where we need to handle all the differences between the various spec versions. I want to avoid that, because in Java we can just use inheritance: the event itself knows how to provide a binary and a structured view of itself, which is really what asBinaryMessage is. asBinaryMessage is nothing other than a map that returns the actual values...
...only it's not a map, because you're not allocating; it's something that you visit with a visitor. That's what asBinaryMessage really is.

What elephant are we talking about? I simply moved the method, which I believe should not be inside the interface, to a utility class, and right now you pass that same event to that same method and get the same thing.

Because you need to handle the different specification versions if you move it outside, while if you have it inside the class implementation, which is specific to the CloudEvents version... in that case... we have attributes, but forget about attributes. Let's assume we have CloudEvents v1.0, CloudEvents v0.3...

Okay, one second. First of all, let me ask a different question: are you envisioning a version 2, version 3, version 4, version 5? How many? I'm operating under the assumption that CloudEvents is pretty much the lay of the land now. The fact that we had version 0.3 and now version 1.0 was kind of expected: the early adopters learned something, and we created version 1.0. Sure, I expect maybe a few additional amendments over the years, but that's it; how much more complex can it get? With that in mind, if my assumption is correct, then we're talking about a very narrow edge case, which again I can handle in a utility class with a simple interrogation of the actual CloudEvent that was passed in: oh, this is version 0.3, so I'll parse it differently than version 1.0. It's not like I'm going to have 20 or 30 different versions of it. And if I do, then maybe we should have a whole different discussion.
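The "simple interrogation in a utility class" idea above can be sketched like this. The interface shape and method names are hypothetical; the one spec-grounded detail is that v0.3 named the schema attribute `schemaurl`, while v1.0 renamed it to `dataschema`.

```java
class VersionDispatch {
    // Minimal stand-in for a CloudEvent view; not the real sdk-java type.
    interface CloudEvent {
        String getSpecVersion();
        String getAttribute(String name);
    }

    // With only two spec versions in the wild, a utility can branch on
    // getSpecVersion() instead of shipping a whole per-version sub-module.
    static String schemaOf(CloudEvent event) {
        switch (event.getSpecVersion()) {
            case "0.3": return event.getAttribute("schemaurl");   // v0.3 name
            case "1.0": return event.getAttribute("dataschema");  // v1.0 name
            default:
                throw new IllegalArgumentException(
                    "unsupported spec version: " + event.getSpecVersion());
        }
    }

    public static void main(String[] args) {
        CloudEvent v1 = new CloudEvent() {
            public String getSpecVersion() { return "1.0"; }
            public String getAttribute(String name) {
                return "dataschema".equals(name) ? "https://example.com/schema" : null;
            }
        };
        System.out.println(schemaOf(v1));
    }
}
```

The counter-argument in the discussion is that this switch multiplies across every attribute that differs between versions, which is what inheritance (the event knowing its own version) avoids.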
So really, I think we can make the assumption that things won't change. In my opinion, we can do that.

And yet all the SDKs are designed around the idea that there could be another spec version which drastically changes some things. And second, moving asBinaryMessage out creates unnecessary allocation, which is exactly what I want to avoid in the message APIs.

If I may comment on this one: ideally, these converters from CloudEvent into messages should be stateless transformers; there shouldn't be any allocation, and they can even be singletons if we don't aim at extending them. And what I also wanted us to ask ourselves, as developers in particular: version 1 of the SDK was capable of supporting the whole spec with this interface. The question is, why can't we do the same in version 2? Why do we strictly need this asBinaryMessage and asStructuredMessage? Why can't we keep them as implementation details while still keeping the same lean interface?

Because the usability of version 1 was awful. That's exactly the reason; that's exactly why. We had the same exact problem as sdk-go, and we resolved it by creating the abstractions around the message.

But we could add additional interfaces; it doesn't have to be locked up in this one CloudEvent interface. The implementation can say it implements this, that, and the other, where if you're only a reading implementation you just provide the plain CloudEvent interface, and if you're middleware, you take an object that's a CloudEvent and also a CloudEvent-to-message interface, or something like that.

Well... I'm sorry, go ahead.

I'm just saying that's one of the options. We can do it, and it's definitely worth discussing a hierarchy like a pure CloudEvent and then a decorated CloudEvent with additional functionality.

Yeah. And then here come two questions.
First: how do you handle the conversion from a CloudEvent to a CloudEvent message? Call it what you want, but how do we handle that conversion? And second: why, in the first instance, would you use CloudEvent at all if in the end you don't write it to the wire or read it from the wire? Why would you use a serialization format to type your business logic if you never write it to the wire?

I think I can answer this one. It's a good question, and I went to the SDK documentation, and it says that CloudEvents in the SDK should be easily transformable as an in-memory representation. It should be immutable, but it should be easy to transform one CloudEvent into another. And that's exactly the case we have in Spring Cloud Function, for example. Sometimes we want to accept one CloudEvent, return another one, and then eventually maybe send it over the wire, or maybe just log it. We don't see any serialization or deserialization at all. Or maybe we want to represent an internal structure, the Spring Cloud Function message that carries a CloudEvent, as a more widely adopted interface, which is CloudEvent, and I believe CloudEvent will become a very widely adopted interface.

I personally think that on this particular point you could be wrong, for the really simple reason that you are tying your framework's business logic to something that is a serialization format. As I said, to me it really looks like saying "I tie my framework's business logic to a SOAP envelope", while a SOAP envelope is really just a serialization format.

Yeah, but that's not what a CloudEvent is. An event is a first-class programming construct, and serialization is something that we do separately.
You choose JSON, you choose Avro, and that's your serialization format, but an event is a construct that you keep as an element of your architecture. It's called event-driven architecture because you're moving events around; the events are driving the logic of your application. So it's not something that's just for the wire; it's a thing that you handle inside your application.

So, for example, how does it make sense to have a spec version for a CloudEvent that lives only inside your code without ever being serialized or deserialized? Or to have a subject?

That's very easy, because you may simply have a fairly complex in-memory application that is made of multiple modules, and those modules are put together using, say, reactive extensions. They need to exchange information in a way that is also useful for exchange across the wire. CloudEvents is a perfect model for that: first, you get a way to standardize events across all of the modules you have without having to invent a new one, and then you can also effectively scale that system out across process boundaries. So it's ideal for that.

And I also wanted to mention some use cases. It's kind of the same as what Clemens said, but there is a real use case, for example Debezium, where they needed some format to represent events generated by databases in a common form. They used to have their own format; they just added integration for CloudEvents, using the binary representation, so consumers have to parse it. And I've been asking them: why didn't you use the SDKs? They decided that they don't need the SDKs there, I guess partially because the SDK wasn't easy to use.
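The in-memory, never-serialized usage described above amounts to a pure event-to-event transform between modules. A minimal sketch, with a made-up `Event` value type standing in for a CloudEvent (not SDK code):

```java
import java.util.HashMap;
import java.util.Map;

class InMemoryPipeline {
    // Minimal immutable event value; hypothetical shape, for illustration only.
    static class Event {
        final Map<String, String> attributes;
        final byte[] data;
        Event(Map<String, String> attributes, byte[] data) {
            this.attributes = Map.copyOf(attributes);
            this.data = data;
        }
    }

    // One module's step: derive a new event from an old one. Nothing here
    // touches a wire format; the event is just a value flowing between
    // modules, which is the argument made above.
    static Event enrich(Event in) {
        Map<String, String> next = new HashMap<>(in.attributes);
        next.put("type", in.attributes.get("type") + ".processed");
        next.put("processedby", "module-b"); // hypothetical extension attribute
        return new Event(next, in.data);
    }

    public static void main(String[] args) {
        Event e = new Event(Map.of("id", "1", "type", "com.example.order"), new byte[0]);
        System.out.println(enrich(e).attributes.get("type"));
    }
}
```

If the system later scales across process boundaries, the same values can be handed to a protocol binding; nothing about the in-memory step has to change.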
But I clearly see how Debezium generating CloudEvents, and then the end user in the same process, the same job, the same JVM, consuming them and doing some business logic without ever storing the CloudEvent, can happen.

If that's the goal, I kind of question the lack of generics on getData, because it seems like it would be much simpler to interact with this interface if you actually had typed data.

I can try to answer that. I thought about this, and the analogy comes from a very similar interaction model on the Spring Cloud Stream and Spring Cloud Function side, where a Message, at least in the context of what I'm saying now, looks like a CloudEvent. It also comes in with its payload as a byte array, because the adapters for Kafka or Rabbit or whatever messaging system you're using will translate the headers and everything else, but not the actual type, because we do type conversion: we can convert the byte array to Foo, to Bar, to whatever. And we know the type only at the time of function invocation. So I need to know the type: okay, you want a Person; I have a byte array with content type application/json, so I'm going to send it to the JSON converter and it's going to create a Person. Fine. The point is that we really don't know the type at event-creation time. JSON is a good example: you may have the same JSON representation, and if the method you want to invoke takes a String, you just pass the entire JSON string, but if it takes a Person, you attempt to convert it to a Person. So we have that flexibility: you don't have to create it twice; you just pass around the binary. But to your point, that could very well be just the internal representation.
Yeah, there could very well be a T. If you're trying to do this event-based architecture like Clemens was talking about, it might make sense not to have to drop everything down to just bytes. Unless that's easy; it's been a long time since I've looked at Java. Maybe it's easy to just put the raw bytes of the object in getData.

It is, but like you just said, and I almost realized it as I was talking: I may read bytes, but by the time I'm passing the CloudEvent itself around, I may have a CloudEvent with the actual type.

Yeah, a strong type.

So why should I convert it back to a byte array if I'm only passing it to another method or something like that?

In Go there's a concept of a raw type, so maybe you need an interface that says it's a CloudEvent implementing the raw type, which just gets base64, or bytes, or whatever; the type is bytes.

In C# I made that an object, which means you can set and get raw bytes, but you can also set an object graph, and then, depending on what you set there, the serializer will pick it up if it can.

Yeah, like I said, object or T, and people can define whatever they want. Sergey, what do you think?

I think this deserves a separate discussion, because I think we all agree on this topic: byte[] as getData may not be as efficient and as widely useful as it could be. But I believe this is a bit off topic from the main conversation. And I just wanted to mention quickly that we also have James Ward, who started the topic, basically, in the Java world, of supporting CloudEvents in Spring Cloud Function. So in case you have some input, James, I would love to hear it as well. Thanks.
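The "late conversion" model described above, the event carrying raw bytes plus a content type, with the target type known only when the user function is invoked, can be sketched as follows. The `convertData` helper is hypothetical; a real converter would dispatch on the content type (e.g. application/json through a JSON mapper to a POJO).

```java
import java.nio.charset.StandardCharsets;

class LateConversion {
    // Sketch only: converts the raw event payload to the type requested at
    // invocation time. Handles just byte[] and String targets here; a real
    // converter registry would consult contentType for richer targets.
    static <T> T convertData(byte[] data, String contentType, Class<T> target) {
        if (target == byte[].class) {
            return target.cast(data);
        }
        if (target == String.class) {
            return target.cast(new String(data, StandardCharsets.UTF_8));
        }
        throw new IllegalArgumentException(
            "no converter for " + target + " with content type " + contentType);
    }

    public static void main(String[] args) {
        byte[] payload = "{\"name\":\"ada\"}".getBytes(StandardCharsets.UTF_8);
        // Same bytes, two target types, chosen only at invocation time:
        String asString = convertData(payload, "application/json", String.class);
        byte[] asBytes = convertData(payload, "application/json", byte[].class);
        System.out.println(asString.length() == asBytes.length);
    }
}
```

This is the trade-off being debated: keeping `byte[]` preserves the flexibility to convert once, late, per target signature, while a generic `CloudEvent<T>` would carry the already-typed data through in-memory hops.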
Yeah, for me the big question around this is: do we imagine that people are going to build APIs that use CloudEvent as a construct that gets passed around and shared across different libraries? And I definitely see that as being something useful: to be able to share CloudEvent across libraries and make it a foundation that we build APIs on top of. I think there are a lot of really compelling uses for that, and for interoperability between Spring and Kafka clients, and between Spring and other frameworks and libraries that build on top of it. So that's something that I would certainly like to see happen.

Yes. So with regard to data: the real reason it returns a byte array is just that we don't have a data-codec-like thing such as we have in sdk-go. Without one, you can't effectively do the conversion like we do in sdk-go, where we have a method like DataAs that converts the binary representation into the structured one. That's the point about data. About the interface: I still think we have a big problem when we start saying... I mean, from the serializer point of view, how do I handle a CloudEvent that is not serializable? Should I accept only a CloudEvent-serializable interface? How does that really work? Because that's the point, in the end.

Well, I think this can be solved by looking at those personas and thinking about where each interface makes sense. This interface looks right for a very simple consumer, but it's probably inappropriate for middleware, because it's going to be too cumbersome, and you don't get the ability to let the object itself understand how to turn itself into the structured version for a protocol. Right.
So I think the simple way to agree here is to say we're going to look at an interface hierarchy, with certain interfaces that implementations can implement optionally. And if Spring would like to just use the plain CloudEvent interface and internally do a bunch of magic, that seems okay, because it doesn't have to use the message interfaces.

What I really propose here is that we can also keep asBinaryMessage and asStructuredMessage and provide a base implementation that implements them by just calling getSpecVersion, getId, getType, and so on.

To that point, the binary side of it is optional for a lot of protocols. See, asBinary is weird, because what binary really does is project a CloudEvent onto a particular transport message. HTTP binary is a specific encoding of a CloudEvent using the HTTP message as its carrier. The same with AMQP binary: it uses the AMQP message definition as the carrier for a CloudEvent. The CloudEvent gets completely exploded on top of those carrier messages, and then it also gets pulled back from them. So there can't be an implementation of getBinary that is not specific to the transport, because those representations are different. Similarly, there can't be a format-independent implementation of getStructured, because the JSON internal structure differs from that of Avro.

I think in this case maybe the names misled you, because asStructuredMessage is really a view of the event serialized with the provided format. And asBinaryMessage is an unstructured view of the message, but one avoiding any allocation of a map or a stream. That's really what it is, if you look at the interface of a binary message.
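The "projection onto a carrier" point has a concrete, spec-defined shape for HTTP binary mode: context attributes map to `ce-`-prefixed headers, `datacontenttype` maps to `Content-Type`, and the data travels as the body. A minimal sketch, with plain maps standing in for real HTTP machinery:

```java
import java.util.HashMap;
import java.util.Map;

class HttpBinaryProjection {
    // HTTP binary mode per the CloudEvents HTTP protocol binding:
    // each attribute becomes a ce-<name> header, except datacontenttype,
    // which becomes the native Content-Type header. The event data (not
    // shown) would be the HTTP body.
    static Map<String, String> toHeaders(Map<String, String> attributes) {
        Map<String, String> headers = new HashMap<>();
        for (Map.Entry<String, String> a : attributes.entrySet()) {
            if (a.getKey().equals("datacontenttype")) {
                headers.put("Content-Type", a.getValue());
            } else {
                headers.put("ce-" + a.getKey(), a.getValue());
            }
        }
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of(
            "specversion", "1.0",
            "id", "abc-123",
            "datacontenttype", "application/json");
        System.out.println(toHeaders(attrs).get("ce-id"));
    }
}
```

An AMQP or Kafka binding does the analogous projection onto its own message properties, which is why, as argued above, a single transport-agnostic `getBinary` cannot exist; only the per-binding projections can.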
What you see is that it's just an unstructured view that the serializers then use for writing the attributes, the extensions, and the payload into the messages.

Yes, but the thing is: the only thing you need for serializing events, because we treat all the attributes alike, they're all the same from a serialization perspective, is a collection of attributes. You need to maintain one anyway, because you have extensions, and as you read attributes you're not reasoning about them individually; you're reasoning about them generically. So there has to be an attribute collection, and that's the attribute collection you give to the serializer. And there are two kinds of serializers. There are serializers which turn the event into a standalone payload: those are all the structured ones. And there is a second kind of serializer which maps the CloudEvent onto a message of a particular chosen transport. But both can feed from the same thing, which is a list of the attributes plus access to the data.

I'm sorry for interrupting, but since we have other topics besides this one, and this one is still important, perhaps we can conclude it with something. I would really like to ask a question and hope we can get a concrete answer: is CloudEvent a type that should be consumed and seen by the end user, or not?

Yes, it should be. That's the purpose of it. It's not a SOAP envelope.

Okay. This conversation has at least eased some of the heated text debates that have been happening. It seems like it has, because I think that was the high-order question, right? Is CloudEvent a first-class entity a user should see? And it sounds like the answer is yes.
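The "one attribute collection, two serializer kinds" idea above can be sketched as two stateless functions fed from the same map: a structured serializer emitting a standalone payload, and a binding serializer mapping the same attributes onto transport headers. The hand-rolled JSON below is purely illustrative (no escaping, no real format negotiation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

class TwoSerializers {
    // Structured kind: the whole event becomes one standalone payload.
    static String toStructured(Map<String, String> attributes, String dataJson) {
        StringBuilder json = new StringBuilder("{");
        for (Map.Entry<String, String> a : new TreeMap<>(attributes).entrySet()) {
            json.append('"').append(a.getKey()).append("\":\"")
                .append(a.getValue()).append("\",");
        }
        json.append("\"data\":").append(dataJson).append("}");
        return json.toString();
    }

    // Binding kind: the same attribute collection is exploded onto a
    // transport's header space (prefix is transport-specific).
    static Map<String, String> toBinding(Map<String, String> attributes, String prefix) {
        Map<String, String> headers = new HashMap<>();
        attributes.forEach((k, v) -> headers.put(prefix + k, v));
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of("specversion", "1.0", "id", "1");
        System.out.println(toStructured(attrs, "{}"));
        System.out.println(toBinding(attrs, "ce-"));
    }
}
```

Extensions need no special handling in either path, which is the argument being made: the serializers treat all attributes, required or extension, generically.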
Well, maybe not unanimously, but I'm hearing more yeses than noes, so that's the way it leans.

I think it's yes for certain personas. And the existence of a persona that only cares about this particular interface means it probably should be its own independent interface.

Okay. Then I guess we can move the conversation toward whether this interface should also cover serialization and deserialization, or whether there should be more concrete interfaces for the wire question.

I think that's where we don't agree. That's the whole point. Because for the rest, I agree with an interface like that, except that in my opinion you should have the view of the event ready for being serialized. And again, I don't see how a base class can solve this problem. Maybe we should take it offline and cooperate on solving it all together.

Okay, yes, for sure. Then, along those lines, Sergey: obviously any meeting is going to end up with a lot of open questions. What should we do with this PR? Should this other discussion happen as part of this current PR, or should it be incremental, as we also discussed, in small, manageable PRs? If at some point we'd like to add those operations back into the interface for whatever reason, that can be a separate PR, and so on. So I would say first, let's get rid of the v1.0 and v0.3 builder methods, which we can easily do in a separate PR. The build methods belong in a CloudEventBuilder interface, which should eventually expose the static methods to create v1.0 and v0.3 events. That's a beginning, and definitely an uncontroversial one.
We're saying get rid of them because those spec-version write methods didn't even exist before; they were added recently without actually having the discussion we're having right now. So maybe we should really say: okay, the real change between version 1 and what's proposed right now is the attributes. We can take a vote on that, accept it or not, and then say, okay, maybe next week we'll have the discussion about the other methods and whether they should be there. Francesco, you can take the floor and present a bigger argument there. I don't know, because unfairly we've sort of dominated this discussion, but maybe next time. I just want to make sure that within this effort there's no PR that sits there being discussed and changing, changing, changing, to the point where we don't remember where we started.

Right. So can we take this back to the PR? Or honestly, you guys can have another call if you want to discuss this, because I do feel like we made some good progress. You can use the Zoom channel if you want; there's no password, and you can start it up anytime you want for another face-to-face chat. But there are other topics on the agenda, and some of us have to stop at the top of the hour. Do you want to discuss the Java v2 item, or is that part of this discussion already?

It is part of this discussion already. And for v2, we only have eight minutes; that document of Francesco's is pretty extensive, and it would probably take another hour to discuss. But I would really like to bring this discussion to some type of conclusion, and something tells me a lot of people on this call would too. Because again, we're not discussing a release that will happen tomorrow.
We're discussing: this is what we agree on, let's merge that; this is what we didn't agree on, let's discuss it next time. But let's take the starting point as where we were in version 1, not some intermediate state. Otherwise it becomes a different way of looking at it: are we adding, or are we removing? At this point in time, I'm saying we added something that I believe we shouldn't have. So let's discuss it, and if at some point we do agree that we need to add it, then we'll add it; we have plenty of time to do that. But right now, I just don't think we've had enough discussion, enough debate, to come to any kind of conclusion about any of those methods other than the ones that represent the attributes. So "it's already there, keep it" is not the right way of looking at it. We want to do a clean-sheet approach, where we add things based on technical requirements, not throw everything in and then start removing things we don't really need.

Okay, so how would you guys like to proceed? Do you want to stay on this call and keep going, or set up another call, or go back to the PR? How would you like to move forward? You can stay on this call, too, if you want.

I mean, that document is a starting point, not a proposal. It's an invite to come and collaborate on what the future of this SDK should look like.

Yeah, definitely. I just dumped, more or less, what I've already done: a dump of ideas of what I've already done and what I would love to have. But yes, please collaborate on the document. I'm going to lock some parts of it.

Yeah, easy. So, what's the answer to my question?

Take it offline.
Well, my fear is that the PR will eventually become unmergeable and we'll just keep debating things. I think taking it offline is a good idea, because we just got some new information, whether a CloudEvent is a user-facing type or not, and some other things. So maybe we can re-evaluate the pull request, make some new assumptions, maybe drop some previous ones, and make progress on the pull request. And I really like the idea of having a map of attributes, from the C# version for example; once we evaluate it, maybe Francesco will find it useful for encoding the event, so that perhaps we can get rid of toStructured and toBinary. So let's just take it offline and look at the code again with the new information we received.

Okay, cool. I apologize, this isn't an SDK topic, but we talked on the previous CloudEvents call about discussing KubeCon. Unfortunately, I cannot stay past the top of the hour, which is in four minutes. Clemens, are you able to talk tomorrow morning? Sorry, not tomorrow morning, tomorrow afternoon your time: 7:30 Pacific, which is what, 4:30 your time?

No, because we have a holiday tomorrow.

Monday, then? What time Monday would work for you?

I get up pretty early, so 7am or 8am kind of time.

That would be great, because that doesn't conflict with my Seattle people. As long as we can end by 10:30 my time, I think I can do it.

Time-zone math is too hard. That's 4:30 your time, Clemens.

4:30 my time works great.

No, no, I'd have to end by 4:30 your time.

Oh, so let's do four. We don't need more than half an hour.

Okay. So four my time; what is that for you?

That's 10 for me, and that's 7 for you.

Yes, okay. All right.
Anybody else on the call who stuck around for that conversation: any major objection to going with 10am Eastern on Monday?

No.

Okay, cool. I'll send out a note to people who want to join.

I'm also going to leave now, then.

Okay. We'll take the other topics either to the mailing list or to the next call. Thank you, everybody.

Thank you, guys. It was really helpful. Thanks.